Performance-driven sensing is a promising new concept that relies on sensing, processing, and exploiting only
the most "decision-relevant" sets of target data to reduce requirements on data collection,
processing, and communications. An example of a device supporting such a concept is a MEMS-based single-pixel
Fabry-Perot spectrometer being developed at the Rochester Institute of Technology, which can record selected
wavelengths on a per-pixel basis throughout an image. This paper presents an autonomous target-dependent
waveband selection approach for performance-driven sensing with an adaptive hyperspectral imaging sensor.
Given a target to be tracked, a subset of wavebands is estimated from locally recorded hyperspectral data
that provides optimal target detectability against the local background. The waveband selection algorithm
finds the subset of bands that maximizes the separation between a target histogram and a local background
histogram constructed from the respective bands. To illustrate the concept, we perform a simulation study for
vehicle tracking in a set of synthetic DIRSIG rendered HSI images. The simulations demonstrate improved
vehicle tracking accuracy when histogram-matching tracking uses the adaptively selected subset of
wavebands instead of regular (fixed) color bands. We extend
the framework to a dynamic concept where the waveband subset is updated over time as a function of position
estimation accuracy and discuss the full integration of the Feature-Aided Tracking (FAT) component derived
from the selected wavebands within a Multiple Hypothesis Tracking (MHT) framework.
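The band-selection idea above can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: it assumes reflectance values normalized to [0, 1], uses the Bhattacharyya distance as the histogram-separation measure, and ranks bands individually rather than evaluating joint subsets as the paper's method does. All function names are hypothetical.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Separation between two normalized histograms (larger = more separable)."""
    bc = np.sum(np.sqrt(p * q))
    return -np.log(max(bc, 1e-12))

def band_histogram(cube, mask, band, bins=16):
    """Normalized histogram of one spectral band over the pixels selected by mask."""
    values = cube[mask, band]
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def select_wavebands(cube, target_mask, background_mask, k=3, bins=16):
    """Pick the k bands that best separate target from local background.

    cube: (rows, cols, bands) array, reflectance assumed in [0, 1]
    target_mask / background_mask: boolean (rows, cols) pixel masks
    """
    n_bands = cube.shape[2]
    scores = []
    for b in range(n_bands):
        p = band_histogram(cube, target_mask, b, bins)
        q = band_histogram(cube, background_mask, b, bins)
        scores.append(bhattacharyya_distance(p, q))
    # Rank bands by per-band target/background separability, keep the top k.
    return sorted(np.argsort(scores)[::-1][:k].tolist())
```

A joint search over band subsets (as the abstract describes) would score candidate subsets with multi-dimensional histograms instead of this per-band ranking, at correspondingly higher computational cost.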
A common problem in video-based tracking of urban targets is occlusion due to buildings and vehicles. Fortunately,
when multiple video sensors are present with enough geometric diversity, track breaks due to temporary
occlusion can be substantially reduced by correlating and fusing source-level track data into system-level tracks.
Furthermore, when operating in a communication-constrained environment, it is preferable to transmit track
data rather than either raw video data or detection measurements. To avoid statistical correlation due to
common prior information, tracklets can be formed from the source tracks prior to transmission to a central
command node, which is then responsible for system track maintenance via correlation and fusion. To maximize
the operational benefit of the system-level track picture, it should be distributed in an efficient manner to all
platforms, especially the local trackers at the sensors. In this paper, we describe a centralized architecture for
multi-sensor video tracking that uses tracklet-based feedback to maintain an accurate and complete track picture
at all platforms. We also use challenging synthetic video data to demonstrate that our architecture improves
track completeness, enhances track continuity (in the presence of occlusions), and reduces track initiation time
at the local trackers.
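One common way to form the decorrelated tracklets described above can be sketched as follows. This is an illustrative sketch of the "equivalent measurement" construction in information (inverse-covariance) form, assuming linear-Gaussian tracks; it is not necessarily the exact method used in the paper, and the function name is hypothetical.

```python
import numpy as np

def tracklet_equivalent_measurement(x_upd, P_upd, x_pred, P_pred):
    """Form an equivalent-measurement tracklet in information form.

    Subtracting the predicted (prior) information from the updated track
    information leaves only the information contributed by measurements
    received since the last transmission, so the central node can fuse
    tracklets from several sensors without double-counting common priors.
    (Assumes new measurements were actually received, so the information
    difference is nonsingular.)
    """
    info_upd = np.linalg.inv(P_upd)
    info_pred = np.linalg.inv(P_pred)
    info_trk = info_upd - info_pred              # new information only
    P_trk = np.linalg.inv(info_trk)
    x_trk = P_trk @ (info_upd @ x_upd - info_pred @ x_pred)
    return x_trk, P_trk
```

In a scalar example, a prior with variance 4 updated by a measurement z = 2 with unit noise yields a tracklet that recovers exactly that measurement (mean 2, variance 1), confirming the prior information has been removed.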
A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large
UAVs are typically state owned, whereas small UAVs (SUAVs) may take the form of remote-controlled aircraft that are
widely available. The potential threat of these SUAVs to both the military and civilian populace has led to research
efforts to counter these assets via track, ID, and attack. The small size and low radar cross section of these targets
make them difficult to detect and track with a single sensor such as a radar or a video camera. In addition, clutter
objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support
this function. This paper presents a multi-sensor architecture that exploits EO/IR cameras and an acoustic array,
with future inclusion of a radar. A sensor resource management concept is presented along with
preliminary results from three of the sensors.