Current development of optical sensors has led to their increased utility and potential. Applications for these imagers
encompass not just single regions of the electromagnetic spectrum but the full thermal radiation spectrum, ultraviolet
through long-wave infrared, as found, for instance, in Earth's atmosphere. Accordingly, these multispectral
imagers mandate the development of entirely new test methods and test hardware to measure and calibrate the
benchmarks of their performance, such as SNR, uniformity, sensitivity, linearity, and dynamic range. The test
hardware must therefore provide not only high-resolution, uniform, and stable output but also multispectral
output, both to minimize the amount of measurement equipment required and to demonstrate the imagers' full functionality.
Multispectral imagers require that test hardware be capable of producing an output that matches high daylight down
through low light/starlight irradiance levels. This paper explores the characterization, testing, and advantages and
drawbacks of various types of multispectral sources spanning UV through SWIR over a high dynamic range of output.
The infrared test equipment industry has matured over the past half century and has historically offered test equipment
that met and often exceeded the capabilities of the units under test. This may seem to be a moot or trivial point.
However, in the past decade infrared imagers have begun to press the limits of infrared test equipment. Today, infrared
imagers incorporate focal plane arrays that offer a significantly higher resolution and sensitivity than their predecessors.
Additionally, current infrared imagers are expanding their role in the field and are being developed for a wide variety of
applications. These applications demand that optical infrared test equipment expand its capabilities to include
larger emitting surface areas, temperature ranges from cryogenic to sunlight, wide ambient temperature ranges,
vacuum ambient conditions, vehicle installation, field portability, computer interface compatibility, applications-level
software integration, and high off-axis uniformity and emissivity. How, then, does infrared test equipment meet
these demands while maintaining excellent uniformity and stability, two of the traditionally most scrutinized
specifications? This paper will present methods for meeting the rigorous demands for test equipment outlined above;
it will outline the development and technology trends of blackbody/infrared test equipment over the past 50
years; and finally it will discuss the expected development of blackbody/infrared test equipment for the years to come.
Continued innovation with optical components and optical materials not only increases the utility and value of optical
sensors and devices, it also mandates the development of new test methods and test hardware. Thus, in order to evaluate
the enhanced performance of these new optical components and systems - SWIR imagers, silicon-based photodetectors,
and single-photon detectors; as well as detectors that may utilize novel materials for highly specific spectral regions -
equally enhanced test and measurement equipment must be used. A task such as this is greatly simplified for these
detectors when a minimal amount of hardware can be used to test, measure, and calibrate the benchmarks of their
performance: benchmarks such as SNR, uniformity, sensitivity, linearity, and dynamic range. The role of the test
hardware is driven by its ability to provide high-resolution, uniform, and stable broadband output: a source
encompassing the UV through the SWIR region of the electromagnetic spectrum, capable of producing irradiance
levels from high daylight down to low starlight. All of this functionality is integrated into one calibrated test set
that is robust but does not sacrifice precision and accuracy. This paper will explore characterization testing and
the advantages and drawbacks of various types of broadband sources spanning UV through SWIR over a high dynamic
range of output. This paper will further suggest standardization of test methods and presentation of results (for example,
SNR) such that results from various detectors can be compared directly.
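The call for standardized SNR reporting can be illustrated with a minimal sketch. The frame-stack approach and the median summary below are assumptions chosen for illustration, not a method prescribed in the text:

```python
import numpy as np

def temporal_snr_map(frames):
    """Per-pixel temporal SNR: mean over frames divided by std over frames.

    frames: array of shape (n_frames, rows, cols) in digital numbers (DN).
    """
    frames = np.asarray(frames, dtype=np.float64)
    mean = frames.mean(axis=0)
    std = frames.std(axis=0, ddof=1)
    # Guard against dead (zero-variance) pixels
    return np.divide(mean, std, out=np.full_like(mean, np.inf), where=std > 0)

# Synthetic check: uniform 1000 DN signal with 10 DN Gaussian read noise
rng = np.random.default_rng(0)
stack = 1000.0 + rng.normal(0.0, 10.0, size=(64, 32, 32))
snr = float(np.median(temporal_snr_map(stack)))
print(f"median temporal SNR ~ {snr:.0f}")  # about 100 for this synthetic stack
```

Reporting a single robust summary statistic (here the median of the per-pixel map) alongside the map itself is one way to make results from different detectors directly comparable.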
How to quantify something that is typically subjective in nature can be a daunting task. Image quality is no exception
and the pursuit of quantifiable results has thus led to an exhaustive battery of tests, methodology, and reporting formats.
How many specifications are really required of a camera to establish its imaging performance? Of these, which are
actually pertinent, and which are truly unique? Almost all design decisions can eventually be reduced to a
simple tradeoff. Whether it is weight versus strength or cost versus reliability, there is always a struggle to
be had at some point during the design process. For sensor makers this tradeoff typically manifests as resolution versus
sensitivity. It is irrelevant to have a 100 megapixel sensor if the pixels are not sensitive enough to respond to reasonable
illumination. On the other hand, it is also irrelevant if you can image with virtually no light but do not have enough pixels
to resolve your subject. Resolution and sensitivity are essential to ascertaining imager performance. This paper will
discuss how these two specifications are more than just a megapixel count or simply an ISO film speed equivalent.
Resolution of a sensor is best reported as its modulation transfer function (MTF) and sensitivity is more informative
when described in terms of a photon transfer curve (PTC). This paper will show how to create and interpret these two
curves and finally how to translate the results back into the qualitative realm.
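The photon transfer idea can be sketched as follows (the gain value and sample sizes below are invented for illustration): in the shot-noise-limited regime, signal variance in DN grows linearly with mean signal, the slope of that line is the camera gain in DN per electron, and its reciprocal is the conversion gain K in electrons per DN:

```python
import numpy as np

rng = np.random.default_rng(1)
gain_dn_per_e = 0.5  # assumed camera gain (DN per electron), for illustration
levels_e = [200, 500, 1000, 2000, 5000, 10000]  # mean electrons per pixel

means, variances = [], []
for mu in levels_e:
    # Poisson shot noise in electrons, converted to digital numbers
    frame = gain_dn_per_e * rng.poisson(mu, size=100_000)
    means.append(frame.mean())
    variances.append(frame.var(ddof=1))

# PTC in the shot-noise regime: var = gain * mean, so the fitted slope is the gain
slope, intercept = np.polyfit(means, variances, 1)
conversion_gain = 1.0 / slope  # electrons per DN
print(f"fitted gain: {slope:.3f} DN/e-, K = {conversion_gain:.2f} e-/DN")
```

The fit recovers the assumed gain of 0.5 DN/e- from the simulated data, which is exactly the translation from curve back to a quantitative sensitivity figure that the abstract describes.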
With the continuing innovation in night vision and multispectral imaging technologies, the requirements for more
sophisticated test systems continue to increase. Various manufacturers of Visible and Near Infrared (V-NIR) cameras
and detection systems need to verify the lowest detectable light level and check system performance at very low light
levels as well as recovery from exposure to typical daytime light levels. Typical low level requirements are in the range
of 10⁻⁴ to 10⁻⁶ foot-lamberts, equivalent to starlight radiance levels; typical daytime light levels are 10³ foot-lamberts.
There is a relatively straightforward approach to producing low light level output: using "neutral" density filters to reduce
the light to the proper level. However, neutral density filters are not spectrally neutral over the entire V-NIR wavelength
range, and for some test applications that loss of spectral fidelity is unacceptable for tests of sensor response. The challenge
was to create an adjustable output V-NIR source that maintains the color temperature setting over the entire output
range. This paper explains how the requirement for a True Color Temperature Low Light Level source is met, the
benefits compared to prior methods, and how the daylight level is also met with the same source. Once the high
and low light levels are achieved in a stable, repeatable, and calibrated manner, the unique tests that can be
performed with this source are discussed.
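The nine-decade span quoted above (10³ down to 10⁻⁶ foot-lamberts) maps directly onto optical density, since an ND filter of density OD transmits a fraction 10^(-OD) of the incident light. A minimal sketch (function names are illustrative):

```python
import math

def optical_density_for(high_level_fl, low_level_fl):
    """OD needed to attenuate from high to low luminance (foot-lamberts)."""
    return math.log10(high_level_fl / low_level_fl)

def transmitted(level_fl, od):
    """Luminance after passing through a filter of the given optical density."""
    return level_fl * 10 ** (-od)

daylight = 1e3    # typical daytime level, foot-lamberts
starlight = 1e-6  # low end of the starlight range
od = optical_density_for(daylight, starlight)
print(od)                          # 9.0 (nine decades of attenuation)
print(transmitted(daylight, od))   # back at the starlight level
```

The arithmetic is trivial, but it makes the engineering problem concrete: a single source must either carry roughly nine decades of calibrated attenuation or adjust its own output over that range without shifting its spectrum.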
The performance of imaging systems continues to increase and diversify as a result of the ability to measure, analyze,
and improve the limiting aspects of imaging systems. Bloom is one such limiting aspect: in an image, bright regions
noticeably bleed into darker regions, causing the phenomenon referred to as "bloom". The occurrence of
bloom is theoretically a direct consequence of the diffraction pattern of an aperture. In practice, bloom is caused both
optically by non-ideal lenses and electronically by the bleeding of overly saturated pixels. In analyzing optical
instruments, circular apertures are of particular interest since their theoretical diffraction patterns are well known,
consisting of an Airy Disk and alternating concentric dark and bright rings. In the image formed by a circular aperture,
relative intensity can be observed by dividing all pixel intensity values by the peak pixel intensity. Bloom cutoff
percentages may then be analyzed from the distances at which the relative intensity crosses a chosen threshold. Instrument performance may
thus be measured against theoretical Airy function values or by comparing different images produced by the same
instrument under similar conditions. Additionally, polynomials of single-digit order may be accurately fit to the pixel
array data. By approximating the data with polynomials, pertinent information on derivatives, local slopes, and integrals
may be analytically as well as numerically obtained.
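The relative-intensity analysis above can be sketched against the theoretical Airy pattern, I/I₀ = (2·J₁(x)/x)², where x is a normalized radial coordinate. The 1% bloom threshold below is an arbitrary choice for illustration, and J₁ is evaluated from its integral representation to keep the sketch numpy-only:

```python
import numpy as np

def bessel_j1(x):
    """J1(x) via its integral form: (1/pi) * int_0^pi cos(tau - x sin(tau)) dtau."""
    tau = np.linspace(0.0, np.pi, 2001)
    x = np.atleast_1d(np.asarray(x, dtype=np.float64))
    vals = np.cos(tau[None, :] - np.outer(x, np.sin(tau)))
    dtau = tau[1] - tau[0]
    # Trapezoidal rule along the tau axis
    return dtau * (vals[:, 0] / 2 + vals[:, 1:-1].sum(axis=1) + vals[:, -1] / 2) / np.pi

def airy_relative_intensity(x):
    """I/I0 = (2 J1(x) / x)^2; the x -> 0 limit is 1."""
    x = np.atleast_1d(np.asarray(x, dtype=np.float64))
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2.0 * bessel_j1(x[nz]) / x[nz]) ** 2
    return out

# Normalized radius where relative intensity first drops below a 1% threshold
x = np.linspace(0.01, 4.0, 4000)
cutoff = x[np.argmax(airy_relative_intensity(x) < 0.01)]
print(f"1% cutoff at x ~ {cutoff:.2f}")  # inside the first dark ring at x ~ 3.83
```

Comparing a measured cutoff radius against this theoretical one, as the abstract describes, separates diffraction-limited behavior from excess bloom caused by the lens or by pixel saturation.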
The objective of any imaging system is to optimize the amount of pertinent information collected from a scene. Whether
it is used for artistic reproduction, scientific research, or camouflage detection, a camera has the same ultimate
requirement. In the era of broadband, multispectral, hyperspectral, and fused sensor systems, spectral and spatial
data continue to compete for dominance in determining how well an imaging system meets its definitive
objective. Typically, sensor testing requires hardware and software exclusively designed for the spectral region of
interest. Thus an imaging system with ultraviolet through infrared imaging capabilities could require three or more
separate test benches for sensor characterization. This not only increases the complexity, and consequently the
cost, of testing but, more importantly, tends to produce discontinuous results. This paper will outline the hardware
and software developed by the authors that employ identical test methods and shared optics to complete infrared, visible,
and ultraviolet sensor performance analysis. Challenges encompassing multiple emitting source switching, splitting, and
combining will be addressed along with new single fused type source designs. Decisions related to specifying optics and
targets of sufficient quality and construction to provide coverage of the full spectral region will be discussed along with
sample performance specifications and data. Test methodology controlled by a single automated software suite will be
summarized including modulation transfer function, signal to noise ratio, uniformity, focus, distortion, intrascene
dynamic range, and sensitivity. Selected examples of results obtained by this test set will be presented.
This paper starts with a back-to-basics review of the definition of blackbody emissivity, how it is measured, and how it is
specified. Infrared source vendors provide emissivity specifications for their blackbodies and source plates, but there is
fine print associated with their declarations. While there is an industry agreement concerning the definition of emissivity,
the data sheets for blackbodies and source plates are not consistent in how they base their claims. Generally, there are
two types of emissivity specifications published in data sheets; one based on design properties of the source and
thermometric calibration, and another based on an equivalent radiometric calibrated emissivity. The paper details how
the source properties including geometry, surface treatment, and coatings are characterized and result in an emissivity
value by design. The other approach is that the emissivity can be claimed to be essentially 100% when measured directly
with a radiometer. An argument is derived to show that the more the optical parameters of the unit under test and the
radiometer diverge, the less useful an equivalent radiometric emissivity claim becomes. Also discussed is under what test
conditions the absolute emissivity does not matter. Further suggestions on how to achieve the clearest comparative
emissivity specifications are presented.
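A radiometric "equivalent emissivity" of the kind described can be sketched as the ratio of measured radiance to the ideal Planck radiance at the setpoint temperature. The 10 µm wavelength, 50 °C setpoint, and 0.97 surface value below are illustrative assumptions, not vendor data:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of an ideal blackbody, W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.expm1(H * C / (wavelength_m * KB * temp_k))
    return a / b

def radiometric_emissivity(measured_radiance, setpoint_k, wavelength_m):
    """Equivalent emissivity: measured radiance over ideal radiance at setpoint."""
    return measured_radiance / planck_radiance(wavelength_m, setpoint_k)

# Example: a 323.15 K (50 C) source viewed at 10 um whose surface emissivity is
# 0.97 -- the radiometer sees 0.97 times the ideal Planck radiance
wl, t = 10e-6, 323.15
measured = 0.97 * planck_radiance(wl, t)
print(round(radiometric_emissivity(measured, t, wl), 3))  # 0.97
```

Note what this ratio hides: a radiometer whose spectral band, field of view, and geometry differ from the unit under test will integrate a different portion of the radiance, which is exactly why the paper argues the claim loses usefulness as those optical parameters diverge.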
The human eye has the ability to distinguish millions of colors. Employing this attribute along with cognitive spatial cues, a human being can differentiate between even the slightest color variations. The goal of any imager is to collect the maximum amount of information from a scene, both spatially and spectrally. Whether it is used for artistic reproduction or camouflage detection, a camera has the same ultimate specifications. While much sensor research and development has been conducted to improve both spatial and intensity resolution, less effort has been directed to color contrast delineation. This specification is not only difficult to define but also complex to test. Most color testing is confined to print or display technology and is supported by a myriad of test equipment and standards. Typical camera color calibration may rely on color standards with defined illuminants but is ineffective for defining contrast resolution. This paper will discuss hardware and software developed by the authors that are used to project precise dual-color controlled images to determine the color contrast resolution of an imager. Algorithmic challenges related to human-perceived versus machine-created color, in conjunction with real-time color feedback loops, will be addressed. Design issues including system stability, color resolution, channel matching, and target registration will also be discussed. Calibration routines and verification will be presented along with example results of the complete system.
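One way to make "slightest color variations" quantitative is a color-difference metric. The abstract does not name one, so the CIE76 ΔE*ab below (Euclidean distance in CIELAB) is an illustrative stand-in, with invented Lab values:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors."""
    return math.dist(lab1, lab2)

# A just-noticeable difference is commonly taken to be delta E around 2.3
pair_close = ((50.0, 10.0, 10.0), (50.0, 11.0, 10.0))  # 1 unit apart in a*
pair_far = ((50.0, 10.0, 10.0), (60.0, -5.0, 25.0))
print(delta_e_76(*pair_close))           # 1.0 -- below the visibility threshold
print(round(delta_e_76(*pair_far), 2))   # well above it
```

A dual-color projector of the kind described could sweep one patch toward the other in ΔE steps and record the smallest difference the imager under test can still resolve.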
Complex systems ranging from unmanned vehicles to night vision goggles rely on various spectral regions to achieve
the demanding imaging performance they require. The lines between infrared, visible, and ultraviolet are quickly
blurring as multi-sensor systems become more sophisticated and image fusion becomes commonplace. Typically sensor
testing requires hardware and software exclusively designed for the spectral region of interest. Thus a system with
ultraviolet through infrared imaging capabilities could require up to three separate test benches for sensor
characterization. This not only drives up the cost of testing but also leads to a discontinuity of methods and possibly
skewed results. This paper will discuss hardware and software developed by the authors that utilize identical test
methods and shared optics to complete infrared, visible, and ultraviolet sensor performance analysis. Challenges
encompassing multiple source switching and combining will be addressed along with design choices related to
specifying optics and targets of sufficient quality and construction to provide coverage of the full spectral
region. Test methodology controlled by a single software suite will be summarized including modulation transfer
function, signal to noise ratio, uniformity, focus, distortion, intrascene dynamic range, and sensitivity. Examples of
results obtained by these test systems will be presented.