Performance assessment of image processing systems is typically carried out using large volumes of data with known ground truth. Unfortunately such data can be challenging to source for many problems of interest. In particular, trials data collection for performance assessment may require the acquisition of imagery for a range of sensing, target and environmental conditions. This can be time consuming and expensive to achieve. We might also wish to assess the performance of systems which are yet to be built, or over target objects to which access is, at best, limited. These problems might be addressed through the use of synthetically generated imagery, which may be produced in volume for the sensor, environment and target configurations required. If sufficiently representative, these images may then be used within the performance assessment process for system characterization. A scheme for the generation of synthetic imagery has previously been published. This is deemed fit-for-purpose for algorithm and systems performance assessment of image processing for Automatic Target Detection and Recognition (ATDR) tasks in Synthetic Aperture Radar (SAR) imagery. The approach is effective up to the intermediate resolutions which are typically used for ATDR applications. The input models used for synthesis are comprised of three-dimensional triangulations representing the geometric structure of the scene content, with each triangle having a parameterized scattering response based on SAR distributional models. The modelling process is reliant on a skilled analyst to identify and encode the salient detail of the target object: the geometry of the principal structural features is captured in the triangulation, and the surface textures are encoded within the scattering response. The synthesis process generates a collection of two-dimensional arrays of distributional parameters of the same size as the image to be produced.
It is straightforward to use these to generate representations of expected scattering response, or realistic-looking simulated SAR images with speckle. This paper examines the development of target models for use within the simulator, along with a selection of performance assessment results produced for a range of sensor characteristics.
Performance assessment of image processing systems may be carried out using large volumes of data with known ground truth. Unfortunately such data, collected in sensor trials, can be challenging to source for many problems of interest. In particular, trials collection may require the acquisition of imagery in a range of scenario settings, imaging geometries and environmental conditions. An alternative to trials data collection uses synthetically generated imagery of objects and environments configured into realistic scenarios. For performance assessment of image processing chains, large volumes of synthetic imagery may be required in order to characterise individual algorithmic steps or for complete system assessment. In order to generate sufficiently large volumes for such characterisations the simulation approach must also be fast to execute. This paper presents a process for the generation of simulated Synthetic Aperture Radar (SAR) imagery which is fit-for-purpose for the task of algorithm and systems performance assessment of image processing for Automatic Target Detection, Recognition and Identification (ATDRI) tasks. The approach taken is based on the exploitation of computational geometry primitives. It uses a simplified imaging model and correctly treats layover effects and shadowing, both on the target object and within the background region. For speed and simplicity the simulation process synthesises single-bounce reflections only. This means that the simulation is effective only up to the intermediate resolutions which are typically used for ATDRI applications. The input models are comprised of three-dimensional triangulations representing the geometric structure of the scene content, with each triangle having a parameterised scattering response based on distributional models often used for SAR imagery.
The synthesis process generates a collection of two-dimensional arrays of distributional parameters of the same size as the image to be produced. It is straightforward to use these to generate representations of, for example, mean scattering response, or realistic-looking simulated SAR images with speckle ‘noise’. Results are presented for different scene content and sensor configurations, including target aspect and sensor depression angles.
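As a minimal sketch of how such per-pixel parameter arrays can be turned into imagery, the following assumes the simplest case of a single mean-intensity array (a hypothetical bright square on darker clutter) and draws speckle from the standard exponential and gamma intensity models; it illustrates the principle only, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel mean-intensity map standing in for the synthesis
# output: a bright square 'target' on darker clutter.
mean_rcs = np.full((64, 64), 0.1)
mean_rcs[24:40, 24:40] = 1.0

# Single-look intensity speckle: each pixel is exponentially distributed
# with mean equal to the underlying scattering response.
speckled = rng.exponential(mean_rcs)

# L-look imagery averages independent looks, equivalent to gamma-distributed
# intensity with shape L and the same per-pixel mean.
L = 4
multilook = rng.gamma(shape=L, scale=mean_rcs / L)
```

The mean of each simulated pixel matches the underlying parameter array, while the multi-look image shows the reduced speckle variance expected from averaging looks.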
Situational understanding requires an ability to assess the current situation and anticipate future situations, requiring both pattern recognition and inference. A coalition involves multiple agencies sharing information and analytics. This paper considers how to harness distributed information sources, including multimodal sensors, together with machine learning and reasoning services, to perform situational understanding in a coalition context. To exemplify the approach we focus on a technology integration experiment in which multimodal data, including video and still imagery, geospatial and weather data, is processed and fused in a service-oriented architecture by heterogeneous pattern recognition and inference components. We show how the architecture: (i) provides awareness of the current situation and prediction of future states, (ii) is robust to individual service failure, (iii) supports the generation of ‘why’ explanations for human analysts (including from components based on ‘black box’ deep neural networks which pose particular challenges to explanation generation), and (iv) allows for the imposition of information sharing constraints in a coalition context where there are varying levels of trust between partner agencies.
Performance assessment is carried out for a simple target delineation process based on thresholding and shape fitting. The method uses the information contained in Receiver Operating Characteristic curves together with basic observations regarding target sizes and shapes. Performance is gauged by considering the delineations that might result from having particular arrangements of detected pixels within the vicinity of a hypothesized target. In particular, the method considers the qualities of delineations generated when having various combinations of detected pixels at a number of locations around the inner and outer boundaries of the underlying object. Three distinct types of arrangement for pixels on the inner target boundary are considered. Each has the potential to lead to a good quality delineation in a thresholding and shape fitting scheme. The deleterious effect of false alarms within the surrounding local region is also taken into account. The resulting ensembles of detected pixels are treated using familiar rules for combination to form probabilities for the delineations as a whole. Example results are produced for simple target prototypes in cluttered SAR imagery.
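The combination rules for ensembles of detected pixels can be illustrated with a small sketch. Assuming independent pixel detections at a single ROC operating point (all counts and probabilities below are hypothetical, not taken from the paper), the probability of a usable delineation might combine a binomial tail over inner-boundary pixels with the chance of the surrounding region remaining free of false alarms:

```python
from math import comb

def at_least_k_detected(n, k, pd):
    """Probability that at least k of n independent boundary pixels are
    detected, each with per-pixel detection probability pd."""
    return sum(comb(n, j) * pd**j * (1 - pd)**(n - j) for j in range(k, n + 1))

def no_false_alarms(m, pfa):
    """Probability that none of m surrounding clutter pixels false-alarms."""
    return (1 - pfa) ** m

# Hypothetical operating point read off an ROC curve: a 'good' delineation
# needs at least 8 of 12 inner-boundary pixels detected and a clean local
# background of 50 clutter pixels.
p_good = at_least_k_detected(12, 8, 0.9) * no_false_alarms(50, 0.01)
```

Moving along the ROC curve trades the boundary-detection term against the false-alarm term, which is the balance the assessment method explores.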
This paper examines target models that might be used in simulations of Synthetic Aperture Radar imagery. We
examine the basis for scattering phenomena in SAR, and briefly review the Swerling target model set, before
considering extensions to this set discussed in the literature. Methods for simulating and extracting parameters
for the extended Swerling models are presented. It is shown that in many cases the more elaborate extended
Swerling models can be represented, to a high degree of fidelity, by simpler members of the model set. Further, it
is shown that it is quite unlikely that these extended models would be selected when fitting models to typical measured data.
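As background to the fluctuation models discussed, the Swerling cases correspond to gamma-distributed radar cross-section; the sketch below simulates such samples and recovers the shape parameter by the method of moments. It is a simple illustration of the model family, not the parameter-extraction method of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def swerling_rcs(mean_rcs, k, size):
    """Gamma-fluctuating radar cross-section: shape k=1 gives the
    exponential (Swerling I/II) case, k=2 the Swerling III/IV case, and
    non-integer k generalised ('extended') fluctuation models."""
    return rng.gamma(shape=k, scale=mean_rcs / k, size=size)

def fit_shape(samples):
    """Method-of-moments estimate of the gamma shape: mean^2 / variance."""
    return samples.mean() ** 2 / samples.var()

samples = swerling_rcs(mean_rcs=5.0, k=2.0, size=100_000)
k_hat = fit_shape(samples)
```

When the fitted shape lands close to 1 or 2, the simpler Swerling member is an adequate stand-in for a more elaborate extended model, which is the effect the abstract describes.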
The modelling of Automatic Target Detection, Recognition and Identification performance in systems of multiple sensors and/or platforms is important in many respects: for example, in the selection of sensors or sensor combinations of sufficient effectiveness to achieve operational requirements, or for understanding how the system might be best exploited. It is possible that a sensing system may be comprised of sensors of several different types, including active and passive approaches in the radio frequency and optical portions of the spectrum. Some may have well-understood performance, whereas others may be only poorly characterised. A simulation framework has been developed examining sensor options across different sensor types, parameterisations, search strategies, and applications. The framework is based around Bayesian Decision Theoretic principles along with simple sensor models and a simple search environment. It uses Monte-Carlo simulation to derive statistical measures of performance for systems. The framework has been designed to encompass detection, recognition and identification problems and also to treat sensor characterisation. The modelling framework has been applied to a number of illustrative problems. These range from simple target detection scenarios using sensors of differing performance or of different regional search schemes, through to examinations of: the number of measurements required to reach threshold performance; the effects of sensor measurement cost; issues relating to the poor characterisation of sensors within the system; and the performance of combined detection and recognition sensor systems. Results are presented illustrating these effects. These generally show that the method is able to quantify qualitative expectations of performance, and is sufficiently powerful to highlight some unexpected aspects of operation.
The modelling of the Automatic Target Detection, Recognition and Identification performance in systems of multiple
sensors and/or platforms is important in several respects: for example, in the selection of sensors or sensor combinations of sufficient performance to achieve operational requirements, or for understanding how the system might be best exploited. To this end a simulation framework has been developed examining sensor options across different sensor types, parameterisations, search strategies, and applications. It uses Bayesian Decision Theoretic principles, along with simple sensor models and Monte-Carlo simulation, to derive the expected performance of single deployed sensors and of sensor combinations. The basic framework has been significantly extended to include recognition and identification problems along with the detection problem for which it was originally designed. The framework has also been expanded to treat cases in which the sensors are poorly characterised, and recommendations for parameterisation in this mode are made. The sensor system modelling framework has been applied to a number of illustrative problems. These range from simple target detection problems using sensors of differing performance or of different regional search schemes, through to examinations of: the number of measurements required to reach threshold performance; the effects of sensor measurement cost; issues relating to the poor characterisation of sensors within the system; and the performance of a more elaborate combined detection and recognition sensor system. Generally, these results tend to show that the method is able to quantify qualitative expectations of performance, and is sufficiently powerful to highlight some unexpected aspects of operation.
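A minimal sketch of the Bayesian update at the heart of such a framework is given below: a sensor is reduced to a (Pd, Pfa) pair, binary reports sequentially update the posterior probability that a target is present, and Monte-Carlo trials estimate how many measurements are needed to reach a declaration threshold. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def measurements_to_threshold(prior, pd, pfa, threshold, target_present, max_meas=100):
    """Sequentially update the posterior probability of 'target present'
    from independent binary sensor reports; return the number of
    measurements needed to exceed the declaration threshold."""
    p = prior
    for n in range(1, max_meas + 1):
        hit = rng.random() < (pd if target_present else pfa)
        like_t = pd if hit else 1 - pd        # likelihood under 'target'
        like_c = pfa if hit else 1 - pfa      # likelihood under 'clutter'
        p = p * like_t / (p * like_t + (1 - p) * like_c)
        if p > threshold:
            return n
    return max_meas

# Monte-Carlo estimate of the mean number of looks a hypothetical sensor
# (Pd=0.8, Pfa=0.1) needs to declare a real target at 95% confidence.
counts = [measurements_to_threshold(0.5, 0.8, 0.1, 0.95, True) for _ in range(2000)]
mean_looks = np.mean(counts)
```

Varying Pd, Pfa, the prior or a per-measurement cost in this loop reproduces, in miniature, the kinds of trade studies the abstract lists.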
Change detection provides a powerful means for the initial detection of small target objects of interest. However, speckle
effects mean this type of approach can be difficult to apply to Synthetic Aperture Radar (SAR) imagery. This paper
examines methods for object detection using change between a registered pair of SAR images.
The techniques discussed are designed to detect change over small areas ranging in size from a few to perhaps a few
hundred pixels. The techniques considered include the ratio of pixels and the ratio of variances covering small regions.
The former is a straightforward approach and can provide a good performance baseline. The latter exploits the
observation that many man-made objects have a somewhat spiky scattering response; the variance tends to capture this
type of response, and the ratio of variances enables comparison between the two images.
Ideally any test statistic should be characterized by a known statistical distribution such that formal tests of a null
hypothesis might be carried out. Here the null hypothesis corresponds to no change, and knowledge of the distribution of
the test statistic enables the implementation of a Constant False-Alarm Rate (CFAR) detection process. The analysis
carried out herein considers the distribution of the ratio statistics under realistic operating parameterisations for target
detection in SAR imagery. Results are presented for a registered image pair in the form of detection maps. The simple
ratio is found to be considerably more sensitive to image speckle than techniques covering small regions in the imagery.
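A sketch of the ratio-of-variances detector follows, using local windowed variances and a two-sided threshold from the F-distribution. The F-threshold is indicative only: it holds for Gaussian data, whereas speckled imagery has heavier tails, which is why the distribution of the statistic under realistic conditions matters. The test images and inserted target are synthetic and hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import f

def variance_ratio_map(img_a, img_b, win=7):
    """Per-pixel ratio of local variances over win x win windows,
    oriented so the statistic is always >= 1."""
    def local_var(img):
        m = uniform_filter(img, win)
        m2 = uniform_filter(img * img, win)
        return np.maximum(m2 - m * m, 1e-12)
    va, vb = local_var(img_a), local_var(img_b)
    return np.maximum(va, vb) / np.minimum(va, vb)

rng = np.random.default_rng(3)
a = rng.exponential(1.0, (128, 128))               # speckled clutter, pass 1
b = rng.exponential(1.0, (128, 128))               # pass 2, no change...
b[60:68, 60:68] = rng.exponential(8.0, (8, 8))     # ...except a bright hypothetical target

ratio = variance_ratio_map(a, b)

# Two-sided CFAR threshold from the F-distribution (Gaussian approximation;
# only indicative for speckled data).
n = 7 * 7
threshold = f.ppf(0.9999, n - 1, n - 1)
detections = ratio > threshold
```

The spiky target region drives the local variance up in one image only, so its variance ratio stands far above the clutter values.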
Change detection provides a powerful means for the initial detection of small target objects. However, speckle effects
mean this type of approach can be difficult to apply to Synthetic Aperture Radar (SAR) imagery. This paper examines
one method for target detection using change between a registered pair of SAR images. The technique may be
parameterized to detect small target objects ranging in size from a few to perhaps a few hundred pixels. The approach
considered here exploits the observation that the scattering response of many target types of interest is dominated by a
small number of bright scatterers, whilst natural clutter regions tend not to display this property. The variance provides a
useful statistic summarizing this effect; consequently, the detection method considered here is based on the ratio of the
variances of corresponding patches in the pair of images. Ideally any test statistic should be characterized by a known
statistical distribution; this will allow formal tests of a null hypothesis to be carried out. Here the null hypothesis
corresponds to no change, and knowledge of the distribution of the test statistic enables the implementation of a Constant
False-Alarm Rate (CFAR) detection process. The analysis carried out herein considers the distribution of the variance
ratio under realistic operating parameterisations for target detection in SAR imagery. Synthetic data is used to
characterize this distribution, and Monte Carlo techniques are applied to derive empirical formulae for use in an online
application. Results are presented for synthetic data and for a registered image pair, in the form of detection maps.
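The Monte-Carlo characterisation step can be sketched as follows: simulate pairs of unchanged speckled patches, compute the variance ratio, and read off an empirical quantile as the CFAR threshold. In an online application such values would be tabulated or fitted as empirical formulae across window sizes; the single-look exponential speckle model and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def empirical_cfar_threshold(win, pfa, n_trials=50_000):
    """Monte-Carlo estimate of the no-change detection threshold for the
    ratio of patch variances under single-look exponential speckle."""
    n = win * win
    a = rng.exponential(1.0, (n_trials, n))
    b = rng.exponential(1.0, (n_trials, n))
    va, vb = a.var(axis=1), b.var(axis=1)
    ratio = np.maximum(va, vb) / np.minimum(va, vb)
    return np.quantile(ratio, 1.0 - pfa)

# Offline characterisation for a hypothetical 7x7 window at Pfa = 1e-3;
# an online detector would look such values up or use a fitted formula.
threshold = empirical_cfar_threshold(7, 1e-3)
```

Repeating this over a grid of window sizes and false-alarm rates gives the data from which empirical threshold formulae can be fitted.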
One promising approach to target detection in hyperspectral imagery exploits a statistical mixture model to represent
scene content at a pixel level. The process then goes on to look for pixels which are rare when judged against the model,
and to mark them as anomalies. It is assumed that military targets will themselves be rare and therefore likely to be
detected amongst these anomalies. For the typical assumption of multivariate Gaussianity for the mixture components,
the presence of the anomalous pixels within the training data will have a deleterious effect on the quality of the model. In
particular, the derivation process itself is adversely affected by the attempt to accommodate the anomalies within the
mixture components. This will bias the statistics of at least some of the components away from their true values and
towards the anomalies. In many cases this will result in a reduction in the detection performance and an increased false
alarm rate. This paper considers the use of heavy-tailed statistical distributions within the mixture model. Such
distributions are better able to account for anomalies in the training data within their tails, and for the
balance of the pixels within their central masses. This means that an improved model of the majority of the pixels in the
scene may be produced, ultimately leading to a better anomaly detection result. The anomaly detection techniques are
examined using both synthetic data and hyperspectral imagery with injected anomalous pixels. A range of results is
presented for the baseline Gaussian mixture model and for models accommodating heavy-tailed distributions, for
different parameterizations of the algorithms. These include scene understanding results, anomalous pixel maps at given
significance levels and Receiver Operating Characteristic curves.
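As a baseline sketch of the mixture-model anomaly detection idea (the Gaussian case only; the heavy-tailed variant discussed above would replace the component densities with, for example, multivariate-t ones so that anomalies bias the fit less), the following fits a two-component Gaussian mixture to hypothetical three-band pixel data with a few injected anomalies and flags the lowest-likelihood pixels. scikit-learn's GaussianMixture stands in for whatever fitting scheme the paper uses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Hypothetical 3-band 'scene': two background materials plus a handful
# of injected anomalous pixels appended at the end.
bg1 = rng.normal([0.2, 0.5, 0.3], 0.02, (3000, 3))
bg2 = rng.normal([0.6, 0.4, 0.1], 0.02, (3000, 3))
anoms = rng.normal([0.9, 0.9, 0.9], 0.02, (10, 3))
pixels = np.vstack([bg1, bg2, anoms])

# Baseline Gaussian mixture scene model, trained on all pixels including
# the anomalies (the contamination effect the paper addresses).
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
scores = gmm.score_samples(pixels)            # per-pixel log-likelihood
threshold = np.quantile(scores, 0.005)        # flag the rarest 0.5 %
anomaly_mask = scores < threshold
```

With heavy-tailed components, the same thresholding step would operate on a model whose central masses better describe the background pixels.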
The foremost approach to the detection of militarily significant targets in hyperspectral imagery is through the use of anomaly detection processes. These may be applied to imagery in order to identify those pixels that contain materials uncommon in the scene, on the assumption that military targets will match this criterion. The most common approach to anomaly detection for hyperspectral data is through the use of local-area anomaly detection techniques. These extract statistics of the scene in the near-locality of the pixel of interest and then use hypothesis test methods to decide whether the test pixel is anomalous to the training area. Alternative and potentially superior approaches are also available which first attempt to understand the composition of the whole scene in terms of ground cover types. These methods go on to use the extracted scene understanding model to find pixels containing materials that are rare or unseen in the imagery, and mark these as anomalies. This paper compares three anomaly detection approaches, one based on the local area paradigm and two using the scene understanding (global anomaly detection) approach. The latter pair of methods exploit different ways of extracting the scene model. The anomaly detection techniques are examined using real hyperspectral imagery with inserted anomaly pixels. A range of results is presented for different parameterisations of the algorithms. These include anomalous pixel maps at given detection rates and receiver operating characteristic curves.
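The local-area paradigm mentioned above is commonly realised as an RX-style detector; the sketch below scores a pixel by its squared Mahalanobis distance to statistics estimated from a surrounding window with a guard band excluded. Window sizes and the test scene are hypothetical.

```python
import numpy as np

def rx_local(cube, i, j, guard=2, outer=7):
    """Local-area anomaly score for pixel (i, j): squared Mahalanobis
    distance to statistics of the surrounding window, with a guard band
    excluded so the target itself does not pollute the training area."""
    h, w, bands = cube.shape
    ys, xs = np.mgrid[max(i - outer, 0):min(i + outer + 1, h),
                      max(j - outer, 0):min(j + outer + 1, w)]
    keep = (np.abs(ys - i) > guard) | (np.abs(xs - j) > guard)
    train = cube[ys[keep], xs[keep]]
    mu = train.mean(axis=0)
    cov = np.cov(train.T) + 1e-6 * np.eye(bands)
    d = cube[i, j] - mu
    return float(d @ np.linalg.solve(cov, d))

# Hypothetical 4-band scene with one injected anomalous pixel.
rng = np.random.default_rng(6)
cube = rng.normal(0.5, 0.05, (32, 32, 4))
cube[16, 16] = 1.5
```

The global (scene-understanding) alternatives replace the local annulus statistics with a model of the whole scene, as the abstract describes.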
The majority of anomaly detection processes used for hyperspectral image data are based on pixel-by-pixel whitening and thresholding operations using local area statistics. This paper discusses an alternative approach to anomaly detection in which a mixture model is fitted to the whole of the image. This mixture model may be used to segment the image into component memberships and these may, in turn, be used for anomaly detection.
In this study the mixture model is generated for the whole scene using the stochastic expectation maximization (SEM) algorithm. This is parameterized such that mixture components consisting of small numbers of pixels are eliminated. The maximum a posteriori (MAP) mixture component for each pixel is then determined. The pixel may then be examined using a conventional statistical hypothesis test to see whether it is plausible that it was drawn from the distribution of the identified component, at a given significance level.
This anomaly detection process has been examined using both synthetic and real hyperspectral imagery and results are presented here for real data containing no known military targets and for synthesized imagery which includes military target pixels. A range of results is presented for different parameterizations of the SEM algorithm and significance test. These results include the component map of the imagery and anomalous pixel maps at given significance levels.
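A sketch of this pipeline, with plain EM standing in for the stochastic EM (SEM) fit and the small-component elimination omitted: fit a mixture, take the MAP component per pixel, and test each pixel's squared Mahalanobis distance against a chi-square threshold at the chosen significance level. The scene data are synthetic and hypothetical.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Hypothetical two-material, three-band scene plus one anomalous pixel.
pixels = np.vstack([
    rng.normal([0.2, 0.5, 0.3], 0.02, (2000, 3)),
    rng.normal([0.6, 0.4, 0.1], 0.02, (2000, 3)),
    [[0.9, 0.9, 0.9]],
])

# Plain EM stands in for the SEM fit used in the paper.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)                  # MAP component per pixel

# Hypothesis test: is each pixel plausible under its MAP component?
# The squared Mahalanobis distance is chi-square with d = 3 degrees
# of freedom under the null (pixel drawn from that component).
d2 = np.empty(len(pixels))
for k in range(2):
    mask = labels == k
    diff = pixels[mask] - gmm.means_[k]
    prec = np.linalg.inv(gmm.covariances_[k])
    d2[mask] = np.einsum('ij,jk,ik->i', diff, prec, diff)
anomalous = d2 > chi2.ppf(0.999, df=3)
```

Tightening or relaxing the significance level trades the anomalous-pixel maps off against the false-alarm rate, as in the reported results.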
The classification of pixels in hyperspectral imagery is often made more challenging by the availability of only small numbers of samples within training sets. Indeed, it is often the case that the number of training samples per class is smaller, sometimes considerably smaller, than the dimensionality of the problem. Various techniques may be used to mitigate this problem, with regularized discriminant analysis being one method, and schemes which select subspaces of the original problem being another. This paper is concerned with the latter class of approaches, which effectively make the dimensionality of the problem sufficiently small that conventional statistical pattern recognition techniques may be applied. The paper compares classification results produced using three schemes that can tolerate very small training sets. The first is a conventional feature subset selection method using information from scatter matrices to choose suitable features. The second approach uses the random subspace method (RSM), an ensemble classification technique. This method builds many 'basis' classifiers, each using a different randomly selected subspace of the original problem. The classifications produced by the basis classifiers are merged through voting to generate the final output. The final method also builds an ensemble of classifiers, but uses a smaller number to span the feature space in a deterministic way. Again voting is used to merge the individual classifier outputs. In this paper the three feature selection methods are used in conjunction with a variant of the piecewise quadratic classifier. This classifier type is known to produce good results for hyperspectral pixel classification when the training sample sizes are large. The data examined in the paper is the well-known AVIRIS Indian Pines image, a largely agricultural scene containing some difficult to separate classes. Removal of absorption bands has reduced the dimensionality of the data to 200. 
A two-class classification problem is examined in detail to determine the characteristic performance of the classifiers. In addition, more realistic 7, 13 and 17-class problems are also studied. Results are calculated for a range of training set sizes and a range of feature subset sizes for each classifier type. Where the training set sizes are large, results produced using the selected feature set and a single classifier outperform the ensemble approaches, and tend to continue to improve as the number of features is increased. For the critical per-class sample size, of the order of the dimensionality of the problem, results produced using the selected feature set outperform the random subspace method for all but the largest subspace sizes attempted. For the smaller training samples the best performance is returned by the random subspace method, with the alternative ensemble approach producing competitive results for a smaller range of subspace sizes. The limited performance of the standard feature selection approach for very small samples is a consequence of the poor estimation of the scatter matrices. This, in turn, causes the best features to be missed from the selection. The ensemble approaches used here do not rely on these estimates, and the high degree of correlation between neighboring features in hyperspectral data allow a large number of 'reasonable' classifiers to be produced. The combination of these classifiers is capable of producing a robust output even in very small sample cases.
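The random subspace method itself is compact enough to sketch. Below, a nearest-class-mean rule stands in for the piecewise quadratic classifier used in the paper (full covariance estimates are exactly what very small samples cannot support), each basis classifier sees a random subset of the features, and outputs are merged by voting. Problem sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

def rsm_fit_predict(X_train, y_train, X_test, n_classifiers=25, subdim=5):
    """Random subspace method: train many simple classifiers, each on a
    random subset of the features, and combine them by majority vote."""
    classes = np.unique(y_train)
    votes = np.zeros((len(X_test), len(classes)), dtype=int)
    for _ in range(n_classifiers):
        feats = rng.choice(X_train.shape[1], subdim, replace=False)
        means = np.array([X_train[y_train == c][:, feats].mean(axis=0)
                          for c in classes])
        dists = ((X_test[:, feats][:, None, :] - means[None]) ** 2).sum(axis=2)
        votes[np.arange(len(X_test)), dists.argmin(axis=1)] += 1
    return classes[votes.argmax(axis=1)]

# Tiny high-dimensional, small-sample two-class problem (hypothetical).
dim, n_per_class = 50, 10
X_train = np.vstack([rng.normal(0.0, 1.0, (n_per_class, dim)),
                     rng.normal(0.8, 1.0, (n_per_class, dim))])
y_train = np.array([0] * n_per_class + [1] * n_per_class)
X_test = np.vstack([rng.normal(0.0, 1.0, (50, dim)),
                    rng.normal(0.8, 1.0, (50, dim))])
y_test = np.array([0] * 50 + [1] * 50)

accuracy = (rsm_fit_predict(X_train, y_train, X_test) == y_test).mean()
```

Because each basis classifier only needs estimates in a low-dimensional subspace, the ensemble stays usable when the per-class sample size is far below the problem dimensionality.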
The classification of aircraft into types is an important aspect of the problem of air picture compilation and is required if good situation awareness is to be maintained. If this can be achieved when the aircraft are at long range (significantly beyond visual range) then these processes may be significantly enhanced.
This paper examines methods for exploiting high-resolution radar range profiles of aircraft using statistical pattern recognition techniques to produce classifications into types. The paper describes the data available, and covers pre-processing steps and the development of a range of classifiers of increasing complexity. The classifiers applied in the target recognition process include simple parametric and non-parametric methods based on single range profile samples, approaches which fuse classifications from a temporal sequence of measurements, and methods based on sub-classing. The latter technique uses multiclassifier system methods that cope well with small training set sizes. As the assumptions in the model and the complexity of the classifiers increase, so does the performance of the target recognition system, with error rates as low as 6% being achieved for a problem with three aircraft types. One issue with the available experimental data is that only a limited number of samples of each aircraft type are available. Care is taken to ensure the results produced using this limited data are achievable in an equivalent real-world application.
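The temporal-fusion step can be sketched simply: assuming conditionally independent profiles, per-measurement class log-likelihoods are summed (equivalently, likelihoods are multiplied) before the class decision. The numbers below are hypothetical.

```python
import numpy as np

def fuse_sequence(log_likelihoods):
    """Fuse per-profile class log-likelihoods from a temporal sequence of
    measurements by summation (a product of likelihoods under conditional
    independence), then return the index of the best class."""
    total = np.asarray(log_likelihoods).sum(axis=0)
    return int(total.argmax())

# Hypothetical log-likelihoods for 3 aircraft types over 4 successive
# profiles: individually ambiguous, jointly favouring type 1.
seq = [
    [-2.0, -1.9, -2.2],
    [-2.1, -1.8, -2.0],
    [-1.9, -2.0, -2.1],
    [-2.2, -1.7, -2.3],
]
best = fuse_sequence(seq)
```

Each profile alone gives only a weak preference, but the accumulated evidence across the sequence produces a much more confident decision, which is the benefit the fusion classifiers exploit.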
The availability of only small samples of training data presents a problem when using statistical pattern recognition techniques. Recently new methods for pattern recognition, designed specifically for use with small training data samples, have begun to appear in the literature. These methods sample the training data many times to assemble a range of different classifiers. The classifiers produced may then be collected into an ensemble and, when presented with an unseen sample, use a voting scheme to determine class membership. A particular example of this ensemble classification technique, the random subspace method, is examined here and tested using both synthetic data having known properties, and with data from the AVIRIS hyperspectral-imaging sensor. The paper discusses the application of the method to problems that are not linearly separable; the selection of parameters for the method, and examines the performance envelope for different problems and parameterizations. Good results are produced for both datasets, even where the training samples are too small for conventional classification techniques to be used. Specifically, error rates of only twice those calculated for a large training sample may be achieved using training sets with as few as 20 examples per class, for a thirteen-class classification problem, using the 200-dimensional AVIRIS "Indian Pines" hyperspectral image.