This paper discusses the concept and hardware development of an all-fiber, solid-state, coherent-array directional sensor that can locate and track bright objects against a darker background. The sensor is not an imager; instead, it relies on the inherent structure of its global fiber distribution. Methods for characterizing and calibrating hardware embodiments are also presented.
A human perception test was conducted to determine the correlation between observer response and the number of spatial cues (those without thermal attributes), thermal cues, and total cues in an image. The experiment used the NVESD 12-target LWIR tracked-vehicle image set. Various levels of Gaussian blur were applied to twelve aspects of the twelve targets in order to reduce both the number of resolvable cycles and the number of observable thermal and spatial cues. The author then counted every observable thermal and spatial cue in each of the processed images. A thermal cue was defined as either a hot spot or a cool spot. Typically, hot spots are produced by a vehicle's engine or exhaust; cool spots are features such as air intakes and trim vanes. Spatial cues included characteristics such as barrel length, turret size, and number of wheels. The results of a 12-alternative forced-choice identification perception test were analyzed to determine the correlation coefficients between probability of identification and the number of thermal, spatial, and total cues. The results show that the number of spatial cues in an image was strongly correlated with observer performance.
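The correlation analysis described above can be sketched in a few lines. The cue counts and identification probabilities below are invented for illustration only; they are not NVESD data.

```python
import numpy as np

# Hypothetical per-image cue counts and observed probabilities of
# identification (all values invented for this sketch).
spatial_cues = np.array([2, 3, 5, 6, 8, 9, 11, 12])
thermal_cues = np.array([1, 1, 2, 2, 3, 2, 4, 3])
prob_id      = np.array([0.18, 0.25, 0.41, 0.50, 0.66, 0.71, 0.85, 0.90])

total_cues = spatial_cues + thermal_cues

def corr(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

r_spatial = corr(spatial_cues, prob_id)
r_thermal = corr(thermal_cues, prob_id)
r_total   = corr(total_cues, prob_id)

print(f"r(spatial, Pid) = {r_spatial:.3f}")
print(f"r(thermal, Pid) = {r_thermal:.3f}")
print(f"r(total,   Pid) = {r_total:.3f}")
```

With data shaped like this, the spatial-cue correlation dominates, mirroring the paper's finding that spatial cues drive observer performance.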
This paper describes research on the measurement of the 50% probability of identification cycle criteria (N50, V50) for a set of hand-held objects normally held or used in a single hand. These cycle criteria are used to calibrate the Night Vision and Electronic Sensors Directorate (NVESD) target acquisition models. The target set consists of 12 objects, ranging from innocuous to potentially lethal. Objects are imaged in the visible, midwave infrared (MWIR), and long-wave infrared (LWIR) spectra at 12 different aspects. Two human perception experiments are performed. The first experiment simulates an incremental constriction of the imaging system's modulation transfer function (MTF). The N50 and V50 calibration criteria are measured from this perception experiment. The second experiment not only simulates an incremental constriction of the system MTF but also downsamples the imagery to simulate the objects at various ranges. The N50 and V50 values are used in NVTherm 2002 and NVThermIP, respectively, to generate range prediction curves for both the LWIR and MWIR sensors. The range predictions from both NVTherm versions are then compared with the observer results from the second perception experiment. The comparison between the results of the second experiment and the model predictions provides a verification of the measured cycle criteria.
This paper describes research on the determination of the fifty-percent probability of identification cycle criterion (N50) for two sets of handheld objects. The first set consists of 12 objects which are commonly held in a single hand. The second set consists of 10 objects commonly held in both hands. These sets consist of not only typical civilian handheld objects but also objects that are potentially lethal. A pistol, a cell phone, a rocket propelled grenade (RPG) launcher, and a broom are examples of the objects in these sets. The discrimination of these objects is an inherent part of homeland security, force protection, and also general population security.
Objects from each set were imaged in the visible and mid-wave infrared (MWIR) spectra. Various levels of blur were then applied to these images, and the blurred images were used in a forced-choice perception experiment. Results were analyzed as a function of blur level and target size to give identification probability as a function of resolvable cycles on target. These results are applicable to handheld-object target acquisition estimates for visible imaging systems and MWIR systems. This research provides guidance in the design and analysis of electro-optical systems and forward-looking infrared (FLIR) systems for use in homeland security, force protection, and general population security.
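The mapping from resolvable cycles on target to identification probability is conventionally expressed with the empirical target transfer probability function (TTPF) used alongside NVESD acquisition models. The exponent form below is the widely published one; the N50 value is purely illustrative, not a result measured in this study.

```python
# Target transfer probability function (TTPF): probability of
# identification as a function of resolvable cycles N and the measured
# 50% criterion N50.
def ttpf(n, n50):
    e = 2.7 + 0.7 * (n / n50)       # empirical exponent
    ratio = (n / n50) ** e
    return ratio / (1.0 + ratio)

n50 = 8.0  # assumed cycle criterion, for illustration only
# By construction, P = 0.5 exactly when N = N50.
print(round(ttpf(n50, n50), 3))
```

Fitting N50 to forced-choice results like those described above is what anchors this curve to a particular object set and spectral band.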
IR synthetic scene fidelity improves with each leap ahead in computing capability. Military training, in particular, is reaping the benefits of each improvement in rendering fidelity and speed. However, for these synthetic scenes to be useful for signature virtual prototyping or laboratory observer trials, a particularly challenging aspect still needs to be addressed: synthetic scenes need the ability to include robust, physically reasonable active-source prediction models for vehicles and physically reasonable interaction of vehicles with the terrain. Ground heating from exhaust, radiative heating and reflections between the vehicle and terrain, and tracks left on the terrain are just some examples of desired capabilities. To determine the performance of signature treatments, these effects must be more than artistic renderings of vehicle-terrain interaction; they must be physically representative enough to support engineering determinations. This paper explores the results of a first-phase study to include MuSES targets in an existing IR synthetic scene program and the inclusion of exhaust impingement on the terrain.
In preceding work, it was shown that the relative error in the predicted intensity of an individual pixel in any broadband MWIR image simulation that employs some form of band-average emissivity and/or average detector responsivity approximation in its models will be approximately equal to the fractional standard deviation of the MWIR emissivity of the corresponding material. This relationship between the error in simulated integrated image intensity and the variation in spectral emissivity over the MWIR band, for a set of 27 commonly encountered scenery materials, was shown to behave like a simple power curve, with the power inversely proportional to scene temperature. However, what matters more than the error in a single simulated pixel is how a multi-pixel simulated image is affected. In this follow-on paper, the errors associated with band-average emissivity approximations are quantified with respect to errors in synthetic images. Comparisons of image contrast and image correlation using band-average emissivity and spectral emissivity are performed. The impact of band-average emissivity in a simple synthetic image on a perception experiment is presented as an example of an application-dependent effect.
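The band-average-emissivity error discussed above can be illustrated numerically: compare the exact in-band radiance, the integral of ε(λ)·B(λ,T) over 3–5 µm, with the approximation ε̄ times the integral of B(λ,T). The sloped linear emissivity here is invented for the sketch; it is not one of the 27 measured materials.

```python
import numpy as np

H  = 6.626e-34   # Planck constant [J s]
C  = 2.998e8     # speed of light [m/s]
KB = 1.381e-23   # Boltzmann constant [J/K]

def planck(lam, T):
    """Spectral radiance B(lambda, T) [W / (sr m^3)]."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

lam = np.linspace(3e-6, 5e-6, 2000)        # MWIR band [m]
eps = np.linspace(0.95, 0.75, lam.size)    # assumed sloped spectral emissivity
T = 300.0                                  # scene temperature [K]

dlam = lam[1] - lam[0]
exact  = np.sum(eps * planck(lam, T)) * dlam          # integral of eps * B
approx = eps.mean() * np.sum(planck(lam, T)) * dlam   # band-average emissivity
rel_err = float((approx - exact) / exact)
print(f"relative error of band-average approximation: {rel_err:+.4f}")
```

Because the 300 K Planck curve weights the long-wavelength end of the MWIR band heavily, where this assumed emissivity is lowest, the band-average approximation overestimates the in-band radiance.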
A summary of the development and impact of the Night Vision and Electronic Sensors Directorate (NVESD) Time-Limited Search (TLS) Model for target detection is presented. This model was developed to better represent observer search behavior under time-constrained conditions. The three primary components of the search process methodology are (1) the average detection time (based on characteristics of the image), (2) the occurrence of, and time delay associated with, false alarms, and (3) the time spent searching a field of view (FOV) before moving on to another FOV. The results of four independent perception experiments served as the basis for this methodology. The experiments, which were conducted by NVESD, portrayed time-limited search conditions for different sensor resolution and background clutter levels. The results of the experiments showed that these factors influence the search process, and their impacts are represented within the components of the TLS methodology. The discussion presents the problems with the current model and details the constraints that must be understood to correctly apply the new model.
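The first component listed above, an average detection time driven by image characteristics, is commonly embedded in an exponential time-dependent search relation: the probability that an observer who would eventually find the target (asymptotic probability p_inf) has found it by time t. The functional form is the classic one used in search modeling; the parameter values below are illustrative only, not TLS calibration values.

```python
import math

def p_detect(t, p_inf=0.9, tau=5.0):
    """Probability of detection within t seconds, given asymptotic
    probability p_inf and mean detection time tau (illustrative values)."""
    return p_inf * (1.0 - math.exp(-t / tau))

# Detection probability grows with the allotted search time and
# saturates at p_inf, which is why short time limits depress performance.
for t in (3.0, 10.0, 30.0):
    print(f"t = {t:5.1f} s  ->  P = {p_detect(t):.3f}")
```

A time-limited variant simply evaluates this curve at the imposed limit, which is how a 3-second constraint translates into a reduced probability of detection.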
There is a push in the Army to develop lighter vehicles that can reach remote parts of the world quickly. This Objective Force is not simply a new vehicle but a whole new way of fighting wars. The Future Combat System (FCS), as it is called, has an extremely aggressive timeline and must rely on modeling and simulation to aid in defining the goals, optimizing the design and materials, and testing the performance of the various FCS system concepts. While virtual prototyping for vehicles (both military and commercial) has been around as a concept for well over a decade and its use is promoted heavily in tours and in boardrooms, its actual application is often limited and, when successful, has been confined to specific physical engineering areas such as weight, space, stress, mobility, and ergonomics. If FCS is to succeed in its acquisition schedule, virtual prototyping will have to be relied on heavily and its application expanded. Signature management is an example of an area that would benefit greatly from virtual prototyping tools. However, there are several obstacles to achieving this goal. Rigorously analyzing a vehicle's IR and visual signatures in several different environments over different weather and seasonal conditions could result in millions of potentially unique signatures to evaluate. In addition, there is no real agreement on what 'evaluate' means or even what value should represent a signature: ΔT (°C)? Probability of detection? What the user really wants to know is: how do I make my system survivable? This paper attempts to describe and then bound the problem, and to describe how the Army is dealing with some of these issues in a holistic manner using SMART (Simulation and Modeling for Acquisition, Requirements, and Training) principles.
Recent experiments performed at the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) provide significant insight into the validation of synthetic imagery for use in human perception experiments. This paper documents the procedures and results of target identification (ID) experiments using real and synthetic thermal imagery. Real imagery representing notional first generation and advanced scanning sensor systems was obtained. Parameters derived from the sensor data were used to generate synthetic imagery using the NVESD Paint the Night simulation. Both image sets were then used in a target identification experiment with trained human observers. Perception test results were analyzed and compared with metrics derived from the imagery. Several parameters missing from the original truth data were found to correlate with differences in the perception data. Synthetic data were regenerated using these additional parameters. A subsequent perception experiment confirmed the importance of these parameters, and a good match was obtained between real and synthetic imagery. While the techniques used in this series of experiments do not constitute a definitive method for validating synthetic imagery, they point to some important observations on validation. The main observation is that both target and local background characteristics must be sufficiently specified in the truth data in order to obtain good agreement between synthetic and real data. The paper concludes with suggestions as to the level of detail necessary for truth data when using synthetic imagery in perception experiments.
Human perception tests have been performed by the Night Vision and Electronic Sensors Directorate (NVESD) addressing the process of searching an image with the intent of detecting a target of military importance. Experiments were performed using both real thermal imagery and synthetic imagery generated with the 'Paint the Night' simulation. It was demonstrated that trained observers acquire targets much more quickly than previously expected. This insight was gained by changing the instructions given to the observers: rather than being told that they were being timed, the observers were given a time limit, as short as 3 seconds. When time limited, the observers found targets more quickly. Although false alarms per second increased with the shorter time limits, the ratio of false alarms to detected targets did not increase. A modification to the traditional NVESD search model has been developed and incorporated into Army war games and simulations.
Perception tests have been performed by the Night Vision and Electronic Sensors Directorate (NVESD) addressing the process of searching an image with the intent of detecting a target of military importance. The imagery used in the experiments was generated using NVESD's Paint-the-Night (PTN) thermal image simulation. The use of PTN simulation permits the same scene and target to be viewed with different sensor characteristics (such as resolution, noise, and sampling). This allows the isolation of single variables in an experimental environment and the evaluation of their effect on probability of detection. Typical first- and second-generation FLIR sensor effects were applied to each of 100 synthetic images, resulting in an experimental data set with identical thermal signatures and sensor fields of view. Experimental results are presented, and the advantages of using synthetic imagery to evaluate differences in sensor resolution, noise, and other characteristics are discussed.
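Applying notional sensor effects to a pristine synthetic frame, as described above, typically chains an optical blur, coarser spatial sampling, and temporal noise. The sketch below does this in pure NumPy; the kernel width, sample factor, and noise level are invented illustrative values, not the FLIR parameters used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma=1.5):
    """Separable Gaussian blur applied row-wise then column-wise."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img

def sample(img, factor=2):
    """Decimate to model a coarser detector pitch."""
    return img[::factor, ::factor]

def add_noise(img, sigma=0.05):
    """Add zero-mean Gaussian temporal noise."""
    return img + rng.normal(0.0, sigma, img.shape)

scene = np.zeros((64, 64))
scene[28:36, 28:36] = 1.0                    # hot square target on cold background
degraded = add_noise(sample(blur(scene), factor=2))
print(degraded.shape)
```

Because the scene and thermal signature stay fixed while only these operators change, any difference in observer performance can be attributed to the sensor characteristics alone.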
Dynamic measurement of minimum resolvable temperature difference (MRTD) has been shown to avoid the problems of phase optimization and beat-frequency disruption associated with static MRTD testing of undersampled systems. In order to predict field performance, the relationship between static and dynamic MRTD (DMRTD) must be quantified. In this paper, DMRTD measurement of a sampled system is performed using both laboratory measurements and a simulation. After a review of the principles of static and dynamic MRTD, the design of a sensor simulator is described. A comparison between real and simulated DMRTD is shown, and measurement procedures are documented for both the static and dynamic MRTD. Conclusions are given regarding the utility of the simulator for performing comparative experiments between static and dynamic MRTD.
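As background for the comparison above, MRTD is commonly modeled to first order as rising inversely with the system MTF: the finer the bar pattern, the larger the temperature difference an observer needs. The Gaussian MTF and the constants below are illustrative stand-ins, not values from the sensor simulator described in the paper.

```python
import numpy as np

def mtf(f, f0=1.0):
    """Assumed Gaussian system MTF, f in cycles/mrad (illustrative)."""
    return np.exp(-(f / f0) ** 2)

def mrtd(f, netd=0.05, k=0.7, f0=1.0):
    """First-order MRTD model: MRTD(f) = k * NETD / MTF(f), in kelvin."""
    return k * netd / mtf(f, f0)

# MRTD climbs steeply as spatial frequency approaches the MTF cutoff.
freqs = np.array([0.2, 0.5, 1.0, 1.5])
print(np.round(mrtd(freqs), 3))
```

Static and dynamic procedures differ in how the bar target is presented to an undersampled system, but both ultimately trace out a curve of this shape.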
Virtual minimum resolvable temperature difference (MRTD) measurements have been performed on an infrared sensor simulation based on FLIR92 input parameters. Using this simulation, it is possible to perform virtual laboratory experiments on simulated sensors. As part of the validation of this simulation, a series of MRTD experiments was conducted on simulated and real sensors. This paper describes the methodology for the sensor simulation. The experimental procedures for both real and simulated MRTD are presented, followed by a comparison and analysis of the results. The utility of the simulation in assessing the performance of current and notional sensors is discussed.
An experiment has been performed at the U.S. Army Night Vision and Electronic Sensors Directorate to fully test these models. The experiment imagery is intended to probe the bounds within which various levels of blur and sampling remain representative of the sensor in the task of target identification. The perception experiment results are compared with the performance estimates given by the various models, and the model results are then compared and contrasted.
A multispectral, multiaperture, nonimaging sensor was simulated and constructed to show that the relative location of a robot arm and a specified target can be determined through neural-network processing when the arm and target produce different spectral signatures. Data acquired from both computer simulation and actual hardware implementation were used to train an artificial neural network to yield the relative position, in two dimensions, of a robot arm and a target. The arm and target contained optical sources with different spectral characteristics, which allowed the sensor to discriminate between them. Simulation of the sensor gave an error distribution with a mean of zero and a standard deviation of 0.3 inches in each dimension across a work area of 6 by 10 inches. The actual sensor produced a standard deviation of approximately 0.8 inches using a limited number of training and test sets. No significant difference in system performance was found whether 9 or 18 apertures were used, indicating that the minimum number of apertures required is nine or fewer.
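The approach above can be illustrated with a toy end-to-end simulation: several apertures record intensities that depend on source position, and a small network learns to regress the 2-D position. The aperture layout, the inverse-square-like response, and the network sizes are all invented for this sketch; no real hardware data is used.

```python
import numpy as np

rng = np.random.default_rng(1)

# nine apertures on a 3 x 3 grid over the 6 x 10 inch work area (assumed layout)
apertures = np.array([(x, y) for x in (0.0, 3.0, 6.0) for y in (0.0, 5.0, 10.0)])

def response(pos):
    """Intensity seen at each aperture from a point source at pos."""
    d2 = ((apertures - pos) ** 2).sum(axis=1)
    return 1.0 / (d2 + 1.0)          # assumed inverse-square-like falloff

# synthetic training set: random source positions in the work area
scale = np.array([6.0, 10.0])
positions = rng.uniform(0.0, 1.0, size=(500, 2)) * scale
inputs = np.array([response(p) for p in positions])
targets = positions / scale           # normalize targets to [0, 1] for stability

# one-hidden-layer network trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (9, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.5
for _ in range(2000):
    h, pred = forward(inputs)
    err = pred - targets                              # mean-squared-error gradient
    gW2 = h.T @ err / len(inputs); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)                # backprop through tanh
    gW1 = inputs.T @ dh / len(inputs); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(inputs)
rmse = float(np.sqrt((((pred - targets) * scale) ** 2).mean()))
print(f"training RMSE: {rmse:.2f} inches")
```

This mirrors the paper's finding qualitatively: position is recoverable from a handful of nonimaging intensity channels, with accuracy limited by the sensor response and the amount of training data.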