The ongoing development of increasingly sensitive low-light detector technologies in the VNIR/SWIR regions shows great promise for future night vision applications, including digital image fusion. By combining spectral bands from the reflective and thermally emissive domains, which provide complementary band-specific cues and advantages, a fused representation is anticipated to increase situational awareness and target discrimination performance. Performance assessment of image fusion nevertheless remains an open problem, as suitable procedures, models and image quality metrics are still largely missing. A night-time data collection was performed on a side-aspect two-hand object identification task over several ranges in a rural/woodland area, using a common line-of-sight VNIR/LWIR system. Perception experiments based on an 8-alternative forced-choice (8AFC) object identification task were carried out on the two individual bands as well as on several common pixel-based fusion algorithms (including maximum, subtraction and averaging). As image fusion is highly task- and scene-dependent, it is difficult to draw general conclusions from a single experiment; for the particular task/scene combination investigated, however, most of the fusion algorithms are shown to perform better than the VNIR channel, although most fail to perform as well as the LWIR channel. This is thought to result from the VNIR channel being contrast-limited for the task/scene studied, together with the low dynamic range of the low-light EBCMOS camera used in the fusion setup.
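The pixel-based fusion operators named above (maximum, subtraction, averaging) can be sketched as simple per-pixel combinations of two co-registered band images. The function below is a minimal illustration under assumed conventions (inputs normalized to [0, 1]); it is not the exact implementation evaluated in the experiment.

```python
import numpy as np

def fuse(vnir, lwir, method="average"):
    """Pixel-based fusion of two co-registered, normalized band images.

    vnir, lwir : 2-D float arrays scaled to [0, 1].
    method     : 'maximum', 'subtraction', or 'average' (illustrative
                 variants, not necessarily those used in the study).
    """
    if method == "maximum":
        # Keep the brighter band at each pixel.
        return np.maximum(vnir, lwir)
    if method == "subtraction":
        # Signed band difference, re-centred into [0, 1].
        return 0.5 * (vnir - lwir) + 0.5
    if method == "average":
        return 0.5 * (vnir + lwir)
    raise ValueError(f"unknown method: {method}")
```

Each operator trades off the bands differently: averaging halves band-specific contrast, while maximum preserves whichever channel dominates locally, which matters when one channel (here the VNIR) is contrast-limited.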
We propose a novel deep learning approach using autoencoders to map spectral bands to a space of lower dimensionality while preserving the information that makes it possible to discriminate between different materials. Deep learning is a relatively new pattern recognition approach that has given promising results in many applications. In deep learning, a hierarchical feature representation of increasing levels of abstraction is learned. The autoencoder is an important unsupervised technique frequently used in deep learning to extract important properties of the data. The learned latent representation is a non-linear mapping of the original data that potentially preserves its discrimination capacity.
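The idea of an autoencoder bottleneck can be illustrated with a minimal numpy sketch: a small network compresses synthetic "spectral" vectors through a low-dimensional hidden layer and is trained to reconstruct its input. All sizes, data and the single-hidden-layer architecture are toy assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectral data: 500 pixels x 8 bands generated from 2 latent factors,
# so an 8 -> 2 -> 8 autoencoder can compress it with little loss.
latent_true = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
X = np.tanh(latent_true @ mixing)

n_in, n_hid = 8, 2
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hid, n_in))   # decoder weights
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1)      # latent (bottleneck) representation
    Xhat = H @ W2            # linear decoder
    return H, Xhat

loss_before = np.mean((forward(X)[1] - X) ** 2)
for _ in range(2000):        # plain gradient descent on the MSE
    H, Xhat = forward(X)
    err = Xhat - X
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H ** 2)   # backprop through tanh
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
loss_after = np.mean((forward(X)[1] - X) ** 2)
```

After training, `forward(X)[0]` is the 2-dimensional latent representation; in the application described above this compressed space would be what downstream material discrimination operates on.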
The use of Improvised Explosive Devices (IEDs) has increased significantly and is now a globally widespread phenomenon. Although measures can be taken to anticipate and prevent an opponent's ability to deploy IEDs, detection of IEDs will always be a central activity. A wide range of sensors is useful for this purpose, but simple means, such as a pair of binoculars, can also be crucial for detecting IEDs in time.
Disturbed earth (disturbed soil), such as freshly dug areas, dumps of clay on top of smooth sand, or depressions in the ground, can be an indication of a buried IED. This paper briefly describes how a field trial was set up to provide a realistic data set on a road section containing areas of soil disturbed by buried IEDs. The road section was imaged using a forward-looking land-based sensor platform consisting of visual imaging sensors together with long-, mid-, and shortwave infrared imaging sensors.
The paper investigates the presence of discriminatory information in surface texture by comparing areas of disturbed and undisturbed soil. The investigation is conducted for each of the available wavelength bands. To extract features that describe texture, image processing tools such as Histogram of Oriented Gradients, Local Binary Patterns, lacunarity, Gabor filtering and co-occurrence matrices are used. We find that texture, as characterized here, may provide discriminatory information for detecting disturbed soil, but the signatures we found are weak and cannot be used alone in, for example, a detector system.
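Of the texture descriptors listed, Local Binary Patterns is the simplest to sketch: each pixel is encoded by thresholding its eight neighbours against it, and the image is summarized by the histogram of the resulting codes. The function below is a minimal basic-LBP variant, without the rotation invariance or uniform-pattern grouping often used in practice.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram of a 2-D image."""
    c = img[1:-1, 1:-1]                      # centre pixels (interior only)
    # Neighbour offsets, clockwise from top-left; each contributes one bit.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()                 # normalized 256-bin histogram
```

Such per-patch histograms, computed for disturbed and undisturbed areas in each wavelength band, are the kind of feature vectors that a texture-based comparison can operate on.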
We present algorithm evaluations for automatic target recognition (ATR) of small sea vessels. The targets are at kilometre distance from the sensors, which means that the algorithms have to deal with images affected by turbulence and mirage phenomena. We evaluate previously developed algorithms for registration of 3D-generating laser radar data. The evaluations indicate that our probabilistic registration method provides some robustness to turbulence- and mirage-induced uncertainties. We also assess methods for target classification and target recognition on these new 3D data.
An algorithm for detecting moving vessels in infrared image sequences is presented; it is based on optical flow estimation. Detection of a moving target with an unknown spectral signature in a maritime environment is a challenging problem due to camera motion, background clutter, turbulence and the presence of mirage. First, the optical flow caused by the camera motion is eliminated by estimating the global flow in the image. Second, connected regions containing significant motion that differs from the camera motion are extracted. It is assumed that motion caused by a moving vessel is more temporally stable than motion caused by mirage or turbulence. It is further assumed that the motion caused by the vessel is more homogeneous, with respect to both magnitude and orientation, than motion caused by mirage and turbulence. Sufficiently large connected regions with a flow of acceptable magnitude and orientation are considered target regions. The method is evaluated on newly collected sequences of SWIR and MWIR images with varying targets, target ranges and background clutter.
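The two steps above (removing the global camera-induced flow, then keeping coherent residual motion) can be sketched as follows. Here the global flow is approximated by the median flow vector, and a local standard deviation of the residual magnitude stands in for the homogeneity assumption; the estimator, window size and thresholds are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def motion_candidates(flow, mag_thresh=1.0, coher_thresh=0.5):
    """Flag pixels whose motion differs from the global (camera) motion.

    flow : (H, W, 2) optical-flow field, one (dx, dy) vector per pixel.
    Returns a boolean mask of candidate target pixels.
    """
    # Step 1: estimate and remove the camera-induced global flow.
    global_flow = np.median(flow.reshape(-1, 2), axis=0)
    residual = flow - global_flow
    mag = np.linalg.norm(residual, axis=2)
    moving = mag > mag_thresh

    # Step 2: keep only locally homogeneous residual motion; mirage and
    # turbulence tend to produce spatially erratic flow.
    pad = np.pad(mag, 1, mode="edge")
    win = np.stack([pad[dy:dy + mag.shape[0], dx:dx + mag.shape[1]]
                    for dy in range(3) for dx in range(3)], axis=0)
    coherent = win.std(axis=0) < coher_thresh
    return moving & coherent
```

A full detector would additionally group the surviving pixels into connected regions and apply the size, magnitude and orientation criteria described above.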
Finally, we discuss a concept for combining passive and active imaging in an ATR process. The main steps are passive imaging for target detection, active imaging for target/background segmentation, and fusion of passive and active imaging for target recognition.
This paper briefly describes a field trial designed to provide a realistic data set on a road section containing areas with disturbed soil due to buried IEDs. Over a time-span of a couple of weeks, the road was repeatedly imaged using a multi-band sensor system with spectral coverage from the visual band to LWIR. The field trial was conducted to support a long-term research initiative aimed at using EO sensors and sensor fusion to detect areas of disturbed soil.
Samples from the collected data set are presented in the paper, together with an investigation of basic statistical properties of the data. We conclude that, upon visual inspection, it is fully possible to discover areas that have been disturbed, using visual and/or IR sensors. Reviewing the statistical analysis, we also conclude that samples taken from both disturbed and undisturbed soil have well-definable statistical distributions in all spectral bands. We explore statistical tests to discriminate between different samples, with positive indications that discrimination between disturbed and undisturbed soil is potentially possible using statistical methods.
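As one simple example of the kind of statistical test referred to, a two-sample statistic can compare pixel intensities drawn from a disturbed patch against an undisturbed one. The sketch below uses Welch's t statistic on synthetic, made-up reflectance samples; the actual distributions and tests in the paper are not reproduced here.

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic for samples with unequal variances."""
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

rng = np.random.default_rng(1)
# Synthetic intensity samples: disturbed soil assumed slightly brighter
# and noisier than undisturbed (illustrative values only).
undisturbed = rng.normal(0.40, 0.05, size=400)
disturbed = rng.normal(0.45, 0.07, size=400)
t = welch_t(disturbed, undisturbed)
```

A large |t| indicates the two patches are unlikely to share the same mean intensity; in practice such per-band statistics could be combined across the spectral bands of the sensor system.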
Conference Committee Involvement (3)
Artificial Intelligence and Machine Learning in Defense Applications III
13 September 2021 | Madrid, Spain
Artificial Intelligence and Machine Learning in Defense Applications II
22 September 2020 | Online Only, United Kingdom
Artificial Intelligence and Machine Learning in Defense Applications