The Motion Imagery Standards Board (MISB) has created the Video National Imagery Interpretability Rating Scale (VNIIRS). VNIIRS extends NIIRS scene characterization to streaming video, rating the recognizability of objects of a given size. To apply VNIIRS to target tracking, the operating conditions of sensor type, environmental phenomenon, and target behavior (SET) must be understood. In this paper, we explore VNIIRS for target tracking given the sensor resolution to support relative tracking performance measured by track success. The relative assessment can be used in relation to the absolute target size associated with the VNIIRS rating. In a notional analysis, we determine the issues and capabilities of using VNIIRS video quality ratings to predict track success. The outcome of the trade study is an experiment to understand how VNIIRS can support target tracking evaluation.
A layered sensing approach helps to mitigate sensor, target, and environmental operating conditions affecting target tracking and recognition performance. Radar sensors provide standoff sensing capabilities over a range of weather conditions; however, operating conditions such as obscuration can hinder radar target tracking. By using other sensing modalities such as electro-optical (EO) building cameras or eyewitness reports, continuous target tracking and recognition may be achieved when radar data is unavailable. Information fusion is necessary to associate independent multisource data to ensure that accurate target track and identification are maintained. Exploiting the unique information obtained from multiple sensor modalities together with non-sensor sources will enhance vehicle track and recognition performance and increase confidence in the reported results by confirming target tracks when multiple sources have overlapping coverage of the vehicle of interest. The author uses a fusion performance model in conjunction with a tracking and recognition performance model to assess which combination of information sources produces the greatest gains, in both urban and rural environments, for a typically sized ground vehicle.
Spectral remote sensing provides solutions to a wide range of commercial, civil,
agricultural, atmospheric, security, and defense problems. Technological advances have
expanded multispectral (MSI) and hyperspectral (HSI) sensing capabilities from airborne and spaceborne
sensors. The greater spectral and spatial sensitivity has vastly increased the content available
for analysis. The amount of information in the data cubes obtained from today’s sensors
enables material identification via complex processing techniques. With sufficient sensor resolution,
multiple pixels on target are obtained and by exploiting the key spectral features of a material
signature among a group of target pixels and associating the features with neighboring pixels,
object identification is possible. The authors propose a novel automated approach to object
classification with HSI data by focusing on the key components of an HSI signature and the
relevant areas of the spectrum (bands) of surrounding pixels to identify an object. The proposed
technique may be applied to spectral data from any region of the spectrum to provide object
identification. The effort will focus on HSI data from the visible, near-infrared and short-wave
infrared to prove the algorithm concept.
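The pixel-level matching described above can be illustrated with a minimal spectral-angle comparison, a standard HSI similarity measure; this sketch is not the authors' algorithm, and the signatures, band counts, and threshold are hypothetical.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a library signature.
    Smaller angles indicate a closer spectral match."""
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def classify_pixels(cube, library, threshold=0.1):
    """Label each pixel of an (rows, cols, bands) cube with the index of the
    best-matching library signature, or -1 if no signature is close enough."""
    rows, cols, _ = cube.shape
    labels = np.full((rows, cols), -1, dtype=int)
    for r in range(rows):
        for c in range(cols):
            angles = [spectral_angle(cube[r, c], sig) for sig in library]
            best = int(np.argmin(angles))
            if angles[best] < threshold:
                labels[r, c] = best
    return labels
```

A full object-level classifier would then group labeled pixels with their neighbors, as the abstract describes; the sketch stops at the per-pixel step.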
High value target tracking and identification (ID) performance is impacted by sensor, target,
and environmental conditions. Radar sensors are preferred since they provide sensor capabilities over a
wide range of weather conditions. Sensor management provides some control, such as adjustment of the
collection geometry. However, ground target dynamics and the collection environment cannot be controlled,
and both can degrade tracking and identification performance, for example when the target maneuvers
into dense traffic, stops at an intersection, or travels through a cluttered environment and is obscured by
vegetation or buildings. Target identification algorithms using high range resolution (HRR) profiles formed
from moving target data and range profiles formed from synthetic aperture radar (SAR) data have been
demonstrated. Feature aided tracking (FAT) exploits the features derived from HRR data to improve
target tracking. Identifying the dominant features that can be reliably exploited whether a target is
moving or stationary, and then using them to maintain track and identify the target, is expected to enhance
algorithm performance in realistic scenarios. A simultaneous tracking and recognition (STAR)
performance model is developed and applied to realistic scenarios to provide performance gain estimates
based on the number of exploited features and operating conditions. This paper presents performance
results for simultaneous target tracking and identification using HRR and SAR sensor data.
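Feature-aided tracking as described above blends kinematic gating with a feature-space distance when associating detections to tracks. The following toy scorer is an illustrative sketch of that idea, not the STAR model; the weight and distance values are hypothetical.

```python
import numpy as np

def fat_score(kin_dist, feat_dist, w=0.5):
    """Blend a kinematic gating distance with an HRR feature distance;
    smaller scores mean a more likely track-to-detection association."""
    return w * kin_dist + (1.0 - w) * feat_dist

def associate(detections, w=0.5):
    """Return the index of the (kinematic, feature) distance pair with the
    lowest blended score for the track under consideration."""
    return int(np.argmin([fat_score(k, f, w) for k, f in detections]))
```

Setting `w=1.0` reduces the scorer to purely kinematic association, which makes the contribution of the HRR features easy to isolate in a study.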
This paper describes the automatic target recognition (ATR) challenge problem which
includes source code for a baseline ATR algorithm, display utilities for the results, and a high
range resolution (HRR) data set consisting of 10 civilian vehicles. The Ku-band data in this data
set has been processed into 1-dimensional range profiles of vehicles in the open, moving in a
straight line. It is being released to the ATR community to facilitate the development of new and
improved HRR identification algorithms which can provide greater confidence and very high
identification performance. The intent of the baseline algorithm included with this challenge
problem is to provide an ATR performance comparison to newly developed algorithms. Single-look
identification performance results using the baseline algorithm and the data set are provided
as a starting point for algorithm developers. Both the algorithm and the data set support single-look
and multi-look target identification.
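The keyword list below mentions Mahalanobis distance, a common choice for nearest-template identification over 1-D range profiles. The following is a minimal sketch of that general approach, not the released baseline algorithm; the class templates and profiles are synthetic.

```python
import numpy as np

def mahalanobis(profile, mean, cov_inv):
    """Mahalanobis distance from a range profile to a class template
    given the template mean and inverse covariance."""
    d = profile - mean
    return float(np.sqrt(d @ cov_inv @ d))

def identify(profile, templates):
    """Single-look ID: return the class label whose (mean, inverse-covariance)
    template is nearest to the profile in Mahalanobis distance."""
    return min(templates, key=lambda k: mahalanobis(profile, *templates[k]))
```

Multi-look identification would accumulate these distances (or the implied log-likelihoods) across several profiles before deciding.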
KEYWORDS: Automatic target recognition, Performance modeling, Data modeling, Data fusion, Sensors, Mahalanobis distance, Machine learning, Detection and tracking algorithms, Systems modeling, Analytical research
The US Air Force Research Laboratory (AFRL) is exploring the decision-level fusion (DLF) trade space in the Fusion
for Identifying Targets Experiment (FITE) program. FITE is surveying past DLF approaches and experiments. This
paper reports preliminary findings from that survey, which ultimately plans to place the various studies in a common
framework, identify trends, and make recommendations on the additional studies that would best inform the trade space
of how to fuse ATR products and how ATR products should be improved to support fusion. We tentatively conclude
that DLF is better at rejecting incorrect decisions than at adding correct ones, a larger ATR library is better (for a
constant Pid), a better source ATR has many mild attractors rather than a few large attractors, and fusion will be more
beneficial when there are no dominant sources. Dependencies between the sources diminish performance, even when
that dependency is well modeled. However, poor models of dependencies do not significantly further diminish
performance. Distributed fusion is not driven by performance, so centralized fusion is an appropriate focus for FITE.
For multi-ATR fusion, the degree of improvement may depend on the participating ATRs having different OC
sensitivities. The machine learning literature is an especially rich source for the impact of imperfect (learned in their
case) models. Finally and perhaps most significantly, even with perfect models and independence, the DLF gain may be
quite modest and it may be fairly easy to check whether the best possible performance is good enough for a given
application.
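Under the source-independence assumption discussed above, decision-level fusion of ATR outputs can be sketched as a naive-Bayes product rule over per-source class posteriors. This is an illustrative model with made-up numbers, not the FITE implementation.

```python
import numpy as np

def fuse_posteriors(posteriors):
    """Fuse per-source class posteriors P(class | source_i), assuming
    conditionally independent sources and a uniform prior: the fused
    posterior is proportional to the product of the source posteriors."""
    fused = np.prod(np.asarray(posteriors), axis=0)
    return fused / fused.sum()
```

Thresholding the fused maximum posterior gives the rejection behavior the survey highlights: a confident dissent from one source drags down an incorrect call from another.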
The identification of a target from an electro-optical or thermal imaging sensor requires accurate sensor
registration, quality sensor data, and an exploitation algorithm. Combining the sensor data and exploitation,
we are concerned with developing an electro-optical or infrared (EO/IR) performance model. To combat the
registration issue, we need a detailed list of operating conditions (e.g., collection position) so that the sensor
exploitation results can be evaluated with sensitivities to these operating conditions or collection parameters.
The focus of this paper builds on the NVESD ACQUIRE model [2]. We are also concerned with developing an
EO/IR model that affords comparable operating condition parameters to a synthetic aperture radar (SAR)
performance model. The EO/IR modeling additions focus on areas where fusion gain might
be realized through an experimental tradeoff between multiple EO/IR looks for ATR exploitation fusion. The
two additions to known EO/IR models discussed in the paper are (1) adjacency and (2) obscuration. The
methods to account for these new operating conditions and the corresponding results on the modeled
performance are presented in this paper.
The processing of airborne multi-channel radar data to cancel the clutter near moving ground targets can be
accomplished through Doppler filtering, with displaced phase center antenna (DPCA) techniques, or by space-time
adaptive processing (STAP). Typical clutter suppression algorithms recently developed for moving ground targets were
designed to function with two-channel displaced phase center radar data. This paper reviews the implementation of a
two-channel clutter cancellation approach used in the past (baseline technique), discusses the development of an
improved two-channel clutter cancellation algorithm, and extends this technique to three-channel airborne radar data.
The enhanced performance of the improved dual channel method is expanded upon by exploiting the extra information
gained from a third channel. A significant improvement between the moving target signature level and the surrounding
clutter level was obtained with the multi-channel signal subspace (MSS) algorithm when comparing results from
dual-channel and three-channel clutter suppression to the baseline two-channel technique.
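The baseline two-channel DPCA idea can be sketched as subtracting one channel from a pulse-shifted copy of the other: stationary clutter cancels while a mover's Doppler phase progression survives. This is a highly idealized sketch with hypothetical parameters, not the baseline or MSS algorithms.

```python
import numpy as np

def dpca_cancel(fore, aft, shift):
    """Idealized two-channel DPCA: after `shift` pulses the aft phase centre
    re-occupies the fore channel's positions, so stationary clutter echoes
    are identical and subtract to zero, while a moving target's Doppler
    phase advances over those pulses and leaves a residue."""
    return fore[:-shift] - aft[shift:]
```

Real data adds channel mismatch, internal clutter motion, and calibration errors, which is why adaptive methods such as STAP and the improved algorithms in the paper are needed.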
Real world Operating Conditions (OCs) influence sensor data that in turn affects the performance of target detection
and identification systems utilizing the collected information. The impact of operating conditions on collected data is
widely accepted, but not fully characterized. OCs that affect data depend on sensor wavelength and associated scenario
phenomenology, and can vary significantly between electro-optical (EO), infrared (IR), and radar sensors. This paper
will discuss what operating conditions might be modeled for each sensor type and how they could affect automatic target
recognition (ATR) systems designed to exploit their respective sensory data. The OCs are broken out into four
categories: sensor, environment, target, and ATR algorithm training. These main categories further contain
subcategories with varying levels of influence. The purpose of this work is to develop an OC distribution model for the
"real world" that can be used to realistically represent the performance of multiple ATR systems, and ultimately the
decision made from the fused ATR results. An accurate OC model will greatly enhance the performance assessment of
ATR and fusion systems by affording Bayesian conditioning in fusion performance analysis and aiding in the sensitivity
analysis of fusion performance over different operational conditions. Accurate OC models will also be useful in the
fusion algorithm operation.
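The Bayesian conditioning mentioned above amounts to marginalizing OC-conditional ATR performance over an OC distribution. A toy version, with entirely hypothetical numbers, might look like:

```python
def expected_pid(pid_given_oc, p_oc):
    """Bayesian conditioning on operating conditions: the marginal
    probability of correct identification is the OC-conditional Pid
    weighted by the probability of each operating condition."""
    assert abs(sum(p_oc.values()) - 1.0) < 1e-9, "OC distribution must sum to 1"
    return sum(pid_given_oc[oc] * p for oc, p in p_oc.items())
```

Sensitivity analysis then follows by perturbing the OC distribution and observing how the marginal Pid moves.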
High-Range Resolution (HRR) radar modes have become increasingly important in the past few years due to the ability to form focused range profiles of moving targets with enhanced target-to-clutter ratios via Doppler filtering and/or clutter cancellation. To date, much research has been performed on using HRR radar profiles of both moving and stationary ground targets for Automatic Target Recognition (ATR) and Feature-Aided Tracking (FAT) applications. However, little work evaluating the correlation between moving versus stationary HRR profiles has been reported. This paper presents analytical comparisons between HRR profiles generated from a moving vehicle and profiles formed from Synthetic Aperture Radar (SAR) images of the identical stationary vehicle. The moving target HRR profiles are formed by integrating range-Doppler target images detected from clutter suppressed phase history data. The stationary target HRR profiles are formed from SAR imagery target chips by segmenting the target from clutter and reversing the image formation process. The purpose of this research is to identify which features, such as profile peaks, peak intensity, electrical length, among others, are common to profiles of the same target type and class and at the same imaging geometry.
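To first order, collapsing a segmented SAR target chip along cross-range approximates a stationary-target range profile. The sketch below is an illustrative simplification of the paper's procedure (which reverses the full image formation process); the chip contents are synthetic.

```python
import numpy as np

def sar_chip_to_profile(chip):
    """Collapse a complex SAR target chip (range x cross-range) into a 1-D
    range profile by non-coherently integrating energy across cross-range."""
    return np.sqrt(np.sum(np.abs(chip) ** 2, axis=1))
```

Profiles produced this way can then be compared, peak by peak, with moving-target HRR profiles formed at the same imaging geometry.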