KEYWORDS: Black bodies, Sensors, Calibration, Radiometry, Infrared radiation, Target detection, Nonuniformity corrections, Infrared sensors, Infrared search and track, Signal to noise ratio
The intensity of objects in the infrared is an important quantity for a number of applications. Intensity in watts per steradian is the parameter used to describe either small targets or targets that are far away. Intensity is used because these cases are usually presented to a detection sensor where the object is smaller than the sensor detector angular subtense, a situation known as an “unresolved target.” In the military, unresolved targets can be rocket-propelled grenades, man-portable air defense threats, enemy aircraft at long range, or even ground vehicles that are being engaged by ground-to-ground or air-to-ground missiles. Typical “resolved target” metrics such as root-sum-squared differential temperature do not work well for unresolved targets. In addition, a given target intensity coupled with range, atmospheric transmission, and sensor noise equivalent irradiance can provide a quick signal-to-noise estimate of a particular sensor against a particular target. Target intensity can even be a measure of how visible one's platform is to other sensors and can be used to reduce platform signatures. Measurement of intensity is always a difficult procedure, because a single measurement sensor typically does not cover all measurement parameters. For example, there are very few radiometers that combine high-resolution spatial measurements with high-resolution spectral and temporal measurements, not to mention polarization. For the few systems that can provide a simultaneous measurement of most of these parameters, the cost is prohibitive. Usually, a spectral radiometer provides high spectral resolution with no spatial information and a slow temporal rate. These measurements are common. In the case we describe here, the intensity measurement is taken broadband in the midwave or longwave infrared regions with good spatial resolution. This measurement provides a band-integrated intensity measurement. We describe an approach for sensor calibration and object intensity measurement that can be used for broadband sensor applications.
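As an illustration of the quick signal-to-noise estimate mentioned above, the following minimal sketch assumes intensity in W/sr, range in m, a band-averaged path transmission, and a noise equivalent irradiance in W/cm²; the function name and example values are illustrative, not from the measurements described here.

```python
def quick_snr(intensity_w_per_sr, range_m, atm_transmission, nei_w_per_cm2):
    """Rough signal-to-noise estimate for an unresolved target.

    Irradiance at the aperture falls off as 1/R^2 and is attenuated by the
    band-averaged atmospheric transmission; dividing by the sensor noise
    equivalent irradiance (NEI) gives a first-order SNR.
    """
    range_cm = range_m * 100.0  # keep units consistent with NEI in W/cm^2
    irradiance = intensity_w_per_sr * atm_transmission / range_cm**2  # W/cm^2 at the aperture
    return irradiance / nei_w_per_cm2

# Example (illustrative numbers): 10 W/sr target at 5 km, 60% transmission, NEI of 1e-12 W/cm^2
print(quick_snr(10.0, 5000.0, 0.6, 1e-12))  # ~24
```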
Recent developments in infrared focal plane array technology have led to the wide use of staring sensors in many tactical
scenarios. Despite these advancements, however, several noise sources remain that degrade imagery and impede
performance. Fixed pattern noise, arising from detector nonuniformities in focal plane arrays, is a noise source that can
severely limit infrared imaging system performance. In addition, temporal noise, arising from frame-to-frame
nonuniformities, can further hinder the observer from perceiving the target within the tactical scene and performing a
target acquisition task.
In this paper, we present a new method of simulating realistic spatial and temporal noise effects, derived from focal
plane array statistics, on infrared imagery, and study their effect on the tasks of search and identification of a tank
vehicle. In addition, we assess the utility of bad pixel substitution as a possible correction algorithm to mitigate these
effects. First, tank images are processed with varying levels of fixed-pattern and temporal noise distributions and
differing percentages of highly noisy detectors lying outside the operability specification. Then, a series of controlled
human perception experiments is performed using trained observers tasked, respectively, to identify and to search for tank
targets through the combinations of noise. Our results show promise for a relaxation of the operability
specification in focal plane array development without severe degradation in task performance.
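As a minimal sketch of the kind of processing described above, the code below adds a frozen per-detector (fixed-pattern) offset, per-frame temporal noise, and a fraction of highly noisy "bad" detectors, then substitutes bad pixels with a local median. The noise levels, bad-pixel fraction, and substitution rule are illustrative assumptions, not the parameters or algorithm used in the paper.

```python
import numpy as np

def add_fpa_noise(frame, sigma_fpn=2.0, sigma_temporal=1.0, bad_fraction=0.01, rng=None):
    """Add fixed-pattern and temporal noise drawn from per-detector statistics."""
    rng = np.random.default_rng() if rng is None else rng
    fpn = rng.normal(0.0, sigma_fpn, frame.shape)             # frozen per-detector offset (fixed pattern)
    temporal = rng.normal(0.0, sigma_temporal, frame.shape)   # redrawn every frame (temporal noise)
    bad = rng.random(frame.shape) < bad_fraction              # detectors outside the operability spec
    noisy = frame + fpn + temporal
    noisy[bad] += rng.normal(0.0, 10.0 * sigma_fpn, bad.sum())  # make flagged pixels strongly noisy
    return noisy, bad

def substitute_bad_pixels(frame, bad_mask):
    """Replace flagged pixels with the median of their 3x3 neighborhood."""
    out = frame.copy()
    for r, c in zip(*np.nonzero(bad_mask)):
        r0, r1 = max(r - 1, 0), min(r + 2, frame.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, frame.shape[1])
        out[r, c] = np.median(frame[r0:r1, c0:c1])
    return out
```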
As the number of fielded sensors proliferates, sensors are being implemented in sensor networks with wired or wireless exchange of information. To handle the expanding load of data with limited network bandwidth resources, both still and moving imagery can be highly compressed. However, high levels of compression are not error-free, and the resulting images contain artifacts that may adversely affect the ability of observers to detect or identify targets of interest. This paper attempts to quantify the effect of image compression on observer tasks such as target identification. We address two typical compression algorithms, at two levels of compression, in a series of controlled perception experiments to isolate and quantify the effects on observer task performance. We find that the performance loss caused by image compression is well modeled by an effective per-pixel blur, and we report those blurs for the cases studied.
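A sketch of how such a compression loss might be folded into an image-chain simulation as an effective per-pixel blur is shown below; the Gaussian form and the blur value are assumptions for illustration, not the measured blurs reported in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_effective_compression_blur(image, blur_sigma_pixels):
    """Model compression-induced performance loss as a Gaussian per-pixel blur."""
    return gaussian_filter(image.astype(float), sigma=blur_sigma_pixels)

# Illustrative value only; actual effective blurs depend on the codec and compression level.
degraded = apply_effective_compression_blur(np.random.rand(256, 256), blur_sigma_pixels=0.7)
```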
Superresolution processing is currently being used to improve the performance of infrared imagers through an increase in sampling, the removal of aliasing, and the reduction of fixed-pattern noise. The performance improvement of superresolution has not been previously tested on military targets. This paper presents the results of human perception experiments to determine field performance on the NVESD standard military eight (8)-target set using a prototype LWIR camera. These experiments test and compare human performance with both still images and movie clips, each generated with and without superresolution processing. Lockheed Martin's XR® algorithm is tested as a specific example of a modern combined superresolution and image processing algorithm. Basic superresolution with no additional processing is tested to help determine the benefit of separate processes. The superresolution processing is modeled in NVThermIP for comparison to the perception test. The measured range to 70% probability of identification using XR® is increased by approximately 34%, while the 50% range is increased by approximately 19% for this camera. A comparison case is modeled using a more undersampled commercial MWIR sensor, for which the model predicts a 45% increase in range performance from superresolution.
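The XR® algorithm itself is proprietary; as a minimal sketch of "basic superresolution with no additional processing," the code below performs simple shift-and-add onto a finer grid, assuming the per-frame subpixel shifts are already known (e.g., from frame registration) and are multiples of 1/factor. All names and the known-shift assumption are illustrative.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Basic superresolution: place shifted low-resolution frames onto a finer grid and average.

    frames: list of 2-D arrays; shifts: per-frame (dy, dx) in low-res pixels,
    assumed known and quantized to multiples of 1/factor.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        r = int(round(dy * factor)) % factor
        c = int(round(dx * factor)) % factor
        acc[r::factor, c::factor] += frame
        cnt[r::factor, c::factor] += 1
    cnt[cnt == 0] = 1                 # avoid division by zero where no frame landed
    return acc / cnt
```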
The US Army Night Vision and Electronic Sensors Directorate (NVESD) Modeling and
Simulation Division develops sensor models (FLIR 92, NV Therm, NV Therm IP) that
predict the comparative performance of electro-optical sensors. The NVESD modeling
branch developed a 12-vehicle, 12-aspect target signature set in 1998 with known cycle
criteria. It will be referred to as the 12-target set. This 12-target set has been, and will continue
to be, the modeling "gold standard" for laboratory human perception experiments
supporting sensor performance modeling, and has been employed in dozens of published
experiments. The 12-target set is, however, too costly for most acquisition field tests and
evaluations. The authors developed an 8-vehicle, 3-aspect target set, referred to as the 8-
target set, and measured its discrimination task difficulty (N50 and V50). Target
identification (ID) range performance predictions for several sensors were made based on
those V50/N50 values. A field collection of the 8-target set using those sensors provided
imagery for a human perception study. The human perception study found excellent
agreement between predicted and measured range performance. The goal of this
development is to create a "silver standard" target set that is as dependable in measuring
sensor performance as the "gold standard", and is affordable for Milestone A and other
field trials.
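For context on how N50/V50 values are used in range predictions, the sketch below implements the commonly cited empirical target transfer probability function; it is a generic illustration of that published form, not the specific calculation performed in this study.

```python
def prob_id(n_resolved, n50):
    """Target transfer probability function: probability of identification
    given the resolved cycles (or resolved V) on target and the 50% criterion."""
    ratio = n_resolved / n50
    e = 1.51 + 0.24 * ratio          # commonly cited empirical exponent
    return ratio**e / (1.0 + ratio**e)

# Example: twice the 50% criterion resolved on target gives roughly 80% probability of ID
print(prob_id(8.0, 4.0))
```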
KEYWORDS: Modulation transfer functions, Sensors, Infrared sensors, Performance modeling, Electro optical modeling, Visual process modeling, NVThermIP, Minimum resolvable temperature difference, Point spread functions, Imaging systems
New methods of measuring the modulation transfer function (MTF) of electro-optical sensor systems are investigated.
These methods are designed to allow the separation and extraction of presampling and postsampling components from the
total system MTF. The presampling MTF includes all the effects prior to the sampling stage of the imaging process, such as
optical blur and detector shape. The postsampling MTF includes all the effects after sampling, such as interpolation filters
and display characteristics. Simulation and laboratory measurements are used to assess the utility of these techniques.
Knowledge of these components, and their inclusion in sensor models such as the U.S. Army RDECOM CERDEC Night
Vision and Electronic Sensors Directorate's NVThermIP, will allow more accurate modeling and complete characterization
of sensor performance.
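As a minimal sketch related to the measurements above, the code below estimates an MTF as the Fourier magnitude of a measured line spread function. It does not implement the presampling/postsampling separation technique investigated in the paper; the normalization and naming are assumptions for illustration.

```python
import numpy as np

def mtf_from_lsf(lsf, sample_pitch):
    """Estimate MTF as the normalized Fourier magnitude of a line spread function."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf - lsf.min()
    lsf /= lsf.sum()                                      # normalize so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)     # cycles per unit of sample_pitch
    return freqs, mtf
```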
As the number of fielded sensors increases, together with growing sensor format sizes and additional spectral bands, the
amount of sensor information available is rapidly multiplying. Additionally, sensors are increasingly being implemented
in sensor networks with wired or wireless exchange of information. To handle the increasing load of data, often with
limited network bandwidth resources, both still and moving imagery can be highly compressed, resulting in a 50 to 100
fold (or more) decrease in required network bandwidth. However, such high levels of compression are not error-free,
and the resulting images contain artifacts that may adversely affect the ability of observers to detect or identify targets
of interest.
This paper attempts to quantify the impact of image compression on observer tasks such as target
identification. We address multiple commonly used compression algorithms, at varying degrees of high compression,
in a series of controlled perception experiments to isolate the effects and quantify the impact on observer task performance.
Recommendations will be made on how to incorporate performance degradation caused by image compression with
other sensor design factors in designing a remote sensor with compressed imagery.
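One way to fold a compression-induced degradation into a sensor design trade is to cascade an equivalent blur MTF with the other component MTFs; this is a hedged illustration of that general approach, not the specific recommendation developed in the paper.

```python
import numpy as np

def gaussian_mtf(freqs, sigma_pixels):
    """MTF of a Gaussian blur with the given standard deviation in pixels."""
    return np.exp(-2.0 * (np.pi * sigma_pixels * freqs) ** 2)

def cascade_mtf(freqs, component_mtfs):
    """System MTF as the product of independent component MTFs
    (e.g., optics, detector, display, and an effective compression blur)."""
    system = np.ones_like(freqs, dtype=float)
    for mtf in component_mtfs:
        system *= mtf
    return system
```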
The two most important characteristics of every infrared imaging system are its resolution and its sensitivity. The resolution is limited by the system's Modulation Transfer Function (MTF), which is typically measurable. System sensitivity is limited by noise, which for infrared systems is usually thought of as a Noise Equivalent Temperature Difference (NETD). However, complete characterization of system noise in modern systems requires the 3D-Noise methodology (developed at NVESD), which separates the system noise into 7 orthogonal components including both temporally varying and fixed-pattern noises. This separation of noise components is particularly relevant and important in characterizing Focal Plane Arrays (FPA), where fixed-pattern noise can dominate. Since fixed-pattern noise cannot be integrated out by post-processing or by the eye, it is more damaging to range performance than temporally varying noise. While the 3D-Noise methodology is straightforward, there are several important practical considerations that must be accounted for in accurately measuring 3D Noise in the laboratory. This paper describes these practical considerations, the measurement procedures used in the Advanced Sensor Evaluation Facility (ASEF) at NVESD, and their application to characterizing modern and future infrared imaging systems.
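As a minimal sketch of the 3D-Noise decomposition of a recorded frame cube (temporal, vertical, horizontal) into its seven components, the code below uses a straightforward directional-averaging (ANOVA-style) decomposition; it omits the finite-sample bias corrections and practical considerations that the paper addresses.

```python
import numpy as np

def three_d_noise(cube):
    """Decompose a frame cube with axes (t, v, h) into the seven 3D-Noise components.

    Returns standard deviations: sigma_t, sigma_v, sigma_h, sigma_tv, sigma_th,
    sigma_vh, sigma_tvh. Sketch only: no bias corrections are applied.
    """
    d = np.asarray(cube, dtype=float)
    d = d - d.mean()                           # remove the global mean S
    m_t = d.mean(axis=(1, 2))                  # temporal average pattern
    m_v = d.mean(axis=(0, 2))                  # vertical (row) average pattern
    m_h = d.mean(axis=(0, 1))                  # horizontal (column) average pattern
    m_tv = d.mean(axis=2) - m_t[:, None] - m_v[None, :]
    m_th = d.mean(axis=1) - m_t[:, None] - m_h[None, :]
    m_vh = d.mean(axis=0) - m_v[:, None] - m_h[None, :]
    resid = (d - m_t[:, None, None] - m_v[None, :, None] - m_h[None, None, :]
               - m_tv[:, :, None] - m_th[:, None, :] - m_vh[None, :, :])
    return {
        "sigma_t": m_t.std(), "sigma_v": m_v.std(), "sigma_h": m_h.std(),
        "sigma_tv": m_tv.std(), "sigma_th": m_th.std(), "sigma_vh": m_vh.std(),
        "sigma_tvh": resid.std(),
    }
```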
KEYWORDS: Sensors, 3D metrology, Thermography, Nonuniformity corrections, Analog electronics, Cameras, Image sensors, 3D modeling, 3D acquisition, Infrared sensors
Uncooled staring thermal imagers have noise characteristics that are different from cooled thermal imagers (photon detector sensors). For uncooled sensors, typical measurements of some noise components can vary by as much as 3 to 5 times the original noise value. Additionally, the detector response often drifts to the point that non-uniformity correction is only good for a short time period. Because the noise can vary so dramatically with time, it can prove difficult to measure the noise associated with uncooled systems. However, it is critical that laboratory measurements provide repeatable and reliable characterization of uncooled thermal imagers as constructed. In light of the above difficulties, a primary objective of this research has been to develop a satisfactory noise measurement for uncooled staring thermal imagers. In this research effort, three-dimensional noise (3D Noise) data vs. time were collected for several uncooled sensors after nonuniformity correction. Digital and analog noise data vs. time were collected nearly simultaneously. Also, multiple 3D Noise vs. time runs were made to allow the examination of variability. Measurement techniques are being developed to provide meaningful and repeatable test procedures to characterize uncooled systems.
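A minimal sketch of one way to track post-correction drift is shown below: the spatial standard deviation of a temporally averaged frame is computed in successive time windows, so growth over time reflects returning fixed-pattern noise. The window length and the use of a simple temporal mean are illustrative assumptions, not the measurement procedure developed in this work.

```python
import numpy as np

def spatial_noise_vs_time(cube, frames_per_window=100):
    """Track post-NUC spatial noise drift: spatial std of the temporal mean
    frame within each successive window of a recorded cube with axes (t, v, h)."""
    noise = []
    for start in range(0, cube.shape[0] - frames_per_window + 1, frames_per_window):
        window = cube[start:start + frames_per_window]
        mean_frame = window.mean(axis=0)   # temporal averaging suppresses temporal noise
        noise.append(mean_frame.std())     # the remainder is dominated by fixed-pattern noise
    return np.array(noise)
```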
The Minimum Resolvable Temperature Difference (MRTD or MRT) is the most widely accepted and inclusive figure of merit for describing a thermal imaging system's performance. It is the product of analytic mathematical models and traditional man-in-the-loop system hardware performance measurements that describe IR systems. MRT is a basis for thermal field performance model predictions and is commonly used in specification of thermal imagers. The MRT test is subjective because it requires human observers to just discern progressively smaller 4-bar patterns as a function of the temperature difference between bars and background. When performed by trained observers, the MRT test is an accurate measure of sensitivity as a function of spatial resolution. The ability to resolve 4-bar patterns varies between observers. Furthermore, MRT is a psychophysical task, for which biases are unavoidable. In this paper, uncertainties in MRT measurements are reported for individual trained observers and between observers as functions of some biases, such as random and fixed-pattern noise. For this paper, virtual MRTs were performed on a new, custom visual acuity test simulator, developed for NVESD, that allows precise control over significant sensor and display parameters, and these results are compared. Through a process of eliminating sources of MRT variability, we have been able to quantify the observer variability.
Minimum Resolvable Temperature Difference (MRTD) is the primary measurement of performance for infrared imaging systems. Whereas Modulation Transfer Function (MTF) is a measurement of resolution and three-dimensional noise (or noise equivalent temperature difference) is a measurement of sensitivity, MRTD combines both measurements into a test of observer visual acuity through the imager. MRTD has been incorrectly applied to undersampled thermal imagers as a means for assessing the overall performance of the imager. The incorrect application of the MRTD (or just MRT) test to undersampled imagers includes testing to the half-sample rate (Nyquist frequency) of the sensor and calling the MRT unresolvable beyond this frequency. This approach is known to give poor predictions of overall system performance. Also, measurements at frequencies below the half-sample rate are strongly dependent on the phase between the sampling geometry and the four-bar target. The result is that very little information in the MRT measurement of an undersampled thermal imager is useful. There are a number of alternatives including Dynamic MRT (DMRT), Minimum Temperature Difference Perceived (MTDP), Triangle Orientation Discrimination (TOD), and objective MRT tests. The NVESD approach is to measure the MTF and system noise and to use these measurements in the MRT calculation to give good sensor performance predictions. This paper describes the problems with MRT for undersampled imagers, describes the alternative measurements, and presents the NVESD approach to MRT measurements.
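To illustrate the idea of computing an MRT-style curve from measured MTF and sensitivity rather than from a subjective bar test, the heavily simplified sketch below divides a noise figure by the system MTF; the single lumped constant is an assumption standing in for the threshold SNR and eye/temporal integration terms of the full NVESD model, and this is not the calculation used by NVESD.

```python
import numpy as np

def simplified_mrt(freqs, mtf, netd, k=0.7):
    """Very simplified MRT-style curve: the required temperature difference grows
    as the system MTF rolls off. k lumps threshold SNR and integration factors
    into a single illustrative constant."""
    mtf = np.clip(np.asarray(mtf, dtype=float), 1e-6, None)  # avoid divide-by-zero at high frequency
    return k * netd / mtf
```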
The Minimum Resolvable Temperature Difference (MRT or MRTD) of an IR imaging sensor provides a measure of system performance in terms of sensitivity as a function of resolution. It is expressed as the temperature difference (ΔT) between a target and a background at which target features are just discernible as a function of spatial frequency. Traditionally, MRT has been measured in the laboratory by imaging a flat plate “blackbody” at slightly elevated and depressed temperatures through a sequence of 4-bar slot patterns (“targets”) in high-emissivity disks (“backgrounds”) at ambient temperature. In this traditional, purely emissive MRT approach, the luminance modulation in the images of the smaller targets rides upon luminance pedestals against the ambient background that make the image modulation hard to discern. Consequently, in the laboratory, sensor gain and offset adjustments sometimes must be performed in order to see the modulation on the smaller, i.e., higher spatial frequency, MRT targets. Quite frequently this part of the procedure does not correspond to how the sensor is operated in the field. An alternative approach, called reflective MRT, uses a disk in which the target region consists of slots and highly reflective, low-emissivity spaces and is surrounded by a high-emissivity background. Two flat plate “blackbodies” are used, one in transmission through the slots and one in reflection from the spaces between the slots. Both are at slightly different temperatures controlled and regulated above and below ambient. This results in target luminance modulation that does not ride upon ambient luminance pedestals, thus allowing MRT to be measured at the same sensor gain and offset for all target spatial frequencies. The intent of this approach is to improve the accuracy of laboratory MRT measurements as predictors of field performance. This paper describes this problem with emissive MRT, reflective MRT as a possible solution, and the experimental research planned for calendar year 2003 to compare emissive and reflective MRT measurements of well sampled imaging sensors.