The aim of this presentation is to give an overview of a recent project that uses AI, specifically machine learning, to support the Police in searching for missing people. Searching is typically performed by a helicopter and officers on the ground. Recently, the Police have begun using a Remotely Piloted Aircraft System (RPAS) to search for vulnerable people in rural environments. This presentation describes the outputs of a successful collaboration between Police Scotland, Thales, CENSIS and the University of the West of Scotland, whose aim was to build a real-time object detection system. The AI system is fully integrated within a mobile platform compatible with a number of RPAS platforms.
For line-of-sight pointing and stabilisation of electro-optic (EO) systems operating under motion disturbances, it is desirable to measure the inertial orientation of different parts of the system, not just the line of sight; this allows additional information to be added to the control loop. To achieve this, a framework for fusing the multiple inertial sensors of an EO system is considered, and an example is implemented. Within the proposed framework, higher-performance sensors located at the line of sight are fused to improve the orientation estimate at the location of a lower-performance sensor. The fusion framework uses cascaded Multiplicative Extended Kalman Filters (MEKFs) that estimate the multiplicative error of the quaternion orientation estimate.
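The multiplicative error at the heart of an MEKF is the small rotation relating the true orientation to the estimate, δq = q ⊗ q̂⁻¹, whose vector part (doubled) approximates the small-angle error. A minimal sketch of that computation, assuming Hamilton-convention [w, x, y, z] quaternions (the function names are illustrative, not taken from the paper):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Conjugate (inverse for a unit quaternion)."""
    return np.asarray(q) * np.array([1.0, -1.0, -1.0, -1.0])

def multiplicative_error(q_true, q_est):
    """Error quaternion dq such that q_true = dq (x) q_est."""
    return quat_mul(q_true, quat_conj(q_est))
```

For small errors, twice the vector part of `dq` gives the rotation-vector error that the cascaded filters would estimate and feed back.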
Deep neural networks achieve state-of-the-art performance on object detection tasks with RGB data. However, multi-modal imagery offers many advantages for defence and security operations. For example, the infrared (IR) modality offers persistent surveillance and is essential in poor lighting conditions and 24-hour operation. It is therefore crucial to create an object detection system that can use IR imagery. Collecting and labelling large volumes of thermal imagery is expensive and time-consuming. Consequently, we propose to mobilise labelled RGB data to achieve detection in the IR modality. In this paper, we present a method for multi-modal object detection using unsupervised transfer learning and adaptation techniques. We train Faster R-CNN on RGB imagery and test with a thermal imager. The images contain two object classes, people and land vehicles, and represent real-life scenes that include clutter and occlusions. We improve the baseline F1-score by up to 20% by training with an additional loss function that reduces the difference between RGB and IR feature maps. This work shows that unsupervised modality adaptation is possible, offering the opportunity to maximise the use of labelled RGB imagery for detection in multiple modalities. The novelty of this work includes the use of IR imagery, modality adaptation from RGB to IR for object detection, and the ability to use real-life imagery in uncontrolled environments. The practical impact of this work for the defence and security community is an increase in performance and savings in time and money for data collection and annotation.
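The abstract does not specify the form of the additional loss; a common choice for this kind of modality alignment is a mean-squared penalty on the discrepancy between feature maps, added to the detection loss with a weighting term. A hedged sketch of that idea (the function names and the `weight` parameter are assumptions, not the paper's implementation):

```python
import numpy as np

def alignment_loss(feat_rgb, feat_ir):
    """Mean-squared discrepancy between RGB- and IR-derived feature maps.
    Illustrative stand-in for an additional modality-alignment loss term."""
    feat_rgb = np.asarray(feat_rgb, dtype=float)
    feat_ir = np.asarray(feat_ir, dtype=float)
    assert feat_rgb.shape == feat_ir.shape, "feature maps must match in shape"
    return float(np.mean((feat_rgb - feat_ir) ** 2))

def total_loss(detection_loss, feat_rgb, feat_ir, weight=1.0):
    """Detection loss plus the weighted alignment penalty."""
    return detection_loss + weight * alignment_loss(feat_rgb, feat_ir)
```

Minimising the alignment term pushes the backbone toward features that look the same regardless of input modality, which is what allows an RGB-trained detector to transfer to IR.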
The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid- to long-range detection system utilising a long-wave infrared (LWIR) sensor. By exploiting high-quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained, and the classifier achieves an overall accuracy of over 95% for six object classes related to land defence. Although the CNN struggles to recognise long-range target classes due to low signal quality, robust target discrimination is achieved for challenging candidates. The overall performance of the methodology is assessed against human ground-truth information, generating classifier evaluation metrics for thermal image sequences.
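The classifier evaluation metrics mentioned above (overall accuracy, per-class F1) can be computed directly from predicted and ground-truth labels. A minimal sketch, assuming label arrays rather than the paper's actual evaluation pipeline:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the human ground truth."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def f1_score(y_true, y_pred, positive):
    """F1 for one class, treated one-vs-rest."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```

Averaging the per-class F1 over all six classes gives a single figure that, unlike raw accuracy, is not inflated when one class dominates the sequences.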
A modular vehicle detection system, using a two-stage hypothesis generation (HG) and hypothesis combination (HC) approach, is presented. The HG stage consists of a set of simple algorithms that parse multi-modal data and provide a set of possible vehicle locations. These hypotheses are subsequently fused in a combination stage. This modular design allows the system to utilise additional modalities where available, and the combination of multiple information sources is shown to reduce false positive detections. The system uses Thales' high-resolution long-wave infrared polarimeter and a four-band visible/near-infrared multispectral system. Vehicle cues are taken from motion flow vectors, thermal intensity hot spots, and regions with a locally high degree of linear polarisation. Results using image sequences gathered from a moving vehicle are shown, and the performance of the system is assessed with Receiver Operating Characteristics.
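A Receiver Operating Characteristic traces the true-positive rate against the false-positive rate as the detection threshold is swept. A minimal sketch of computing those curve points from detection scores and binary ground-truth labels (an illustrative evaluation helper, not the paper's code):

```python
import numpy as np

def roc_points(scores, labels):
    """TPR and FPR at each detection score, swept from high to low threshold.
    `labels` are 1 for true vehicle hypotheses, 0 for false ones."""
    order = np.argsort(scores)[::-1]          # highest-confidence first
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels)                   # true positives accepted so far
    fps = np.cumsum(1 - labels)               # false positives accepted so far
    tpr = tps / max(tps[-1], 1)
    fpr = fps / max(fps[-1], 1)
    return fpr, tpr
```

A fused system that suppresses false positives shifts this curve toward the top-left corner, which is how the reduction from hypothesis combination would show up.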
The aim of this paper is to describe the progress and results of an imaging system designed to optimise the performance of human operator tasks through exploitation of multimodal sensors and scene context. The performance of tasks such as surveillance, target detection and situational awareness is dependent on the scene content, the sensors available and the algorithms deployed. Intelligent analysis of the scene into contextual regions allows specific algorithms to be optimised and appropriate sensors to be selected, thereby increasing the performance of the operator's tasks. Context-specific algorithms, which will adapt as the scene changes, are required. In the case discussed in this paper, the contextual regions include road, sky and vegetation, and the dynamic detection of each region utilises different sensor modalities. The paper will describe the overall system concept and a real-time imaging demonstrator using GPUs, which will be used for future demonstrations of the context-specific processing. Simulations of the context-specific scene analysis will be described using sensor data from a vehicle in a rural environment. The performance of a motion detection system with and without context will also be illustrated using measured image data.
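The context-specific processing described above amounts to a dispatch: each labelled region is routed to the algorithm and sensor best suited to it. A hypothetical sketch of that routing (the region labels match the abstract; the algorithm names are illustrative assumptions):

```python
# Hypothetical context-to-algorithm routing: each contextual region label is
# mapped to the processing best suited to it, with a fallback for unlabelled
# regions. The routine names are illustrative, not taken from the paper.
def select_algorithm(region_label):
    routing = {
        "road": "vehicle/motion detection",
        "sky": "small-target point detection",
        "vegetation": "clutter-suppressed change detection",
    }
    return routing.get(region_label, "generic anomaly detection")
```

As the scene changes and regions are re-labelled, the same frame location can be handed to a different algorithm, which is what makes the processing adaptive.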
The aim of this paper is to describe the results of various trials involving a high-resolution thermal imager that has been designed to be sensitive to polarised radiation. Polarisation has the potential to discriminate man-made objects and disturbed earth from background clutter. Polarisation combined with conventional thermal imaging within one camera offers the potential to significantly reduce false alarms in surveillance and detection applications. The camera used during the trials is a technology demonstrator developed by Thales Optronics, UK. The camera operates in the long-wave infrared and has a QWIP polarisation-sensitive detector. The results presented in this paper include recent trials in the UK and USA. The aim of the trials was to assess the utility of a LWIR polarimeter for detecting difficult objects against background clutter. Thermal and polarised images were captured and processed in order to detect anomalies. Several polarisation-based discriminative imaging techniques are applied to trials imagery. The effect of the diurnal cycle on the effectiveness of polarisation for object discrimination will be assessed.
The phenomenon of polarisation causes smooth man-made objects, such as metal and glass, to have a different polarisation signature to that of natural vegetation. Therefore, polarisation has the potential to discriminate man-made objects from background clutter. Polarimetric information, combined with conventional thermal imaging, provides a powerful means of reducing false alarms in applications such as situational awareness, detection of low signature targets and disturbed earth. The paper presents results of discriminative imaging algorithms that were designed to augment polarimetric signatures. Recent results from a LWIR polarimetric imager are presented and these show the merit of discriminative imaging techniques when applied to polarimetric thermal imagers.
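The polarimetric quantities underlying these signatures are conventionally derived from the Stokes parameters: the degree of linear polarisation (DoLP) is √(S₁² + S₂²)/S₀ and the angle of linear polarisation (AoLP) is ½·atan2(S₂, S₁). A minimal per-pixel sketch of those standard formulas (the guard against division by zero is an added assumption for dark pixels):

```python
import numpy as np

def dolp(s0, s1, s2):
    """Degree of linear polarisation from the first three Stokes parameters.
    Smooth man-made surfaces typically give higher DoLP than vegetation."""
    s0 = np.maximum(np.asarray(s0, dtype=float), 1e-12)  # avoid divide-by-zero
    return np.hypot(np.asarray(s1, dtype=float), np.asarray(s2, dtype=float)) / s0

def aolp(s1, s2):
    """Angle of linear polarisation, in radians."""
    return 0.5 * np.arctan2(np.asarray(s2, dtype=float), np.asarray(s1, dtype=float))
```

Thresholding or contrast-stretching the DoLP image is one simple discriminative-imaging step: natural vegetation stays near zero while polished metal and glass stand out.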