Signal and Image Processing,
Computer Vision,
Pattern Recognition and Deep Learning,
Infrared, Polarimetric, and Hyperspectral Imaging,
Scene-based Algorithm Development,
Remote Sensing
Publications (32)
Our group previously presented an empirical approach for measuring the polarimetric bidirectional reflectance distribution function (pBRDF) of 3D painted geometric objects with well-characterized surface facets using a visible linear imaging polarimeter. The initial results obtained from this approach were validated against physics-based models and demonstrated good agreement with data collected under outdoor, full-sun conditions. In this work, we conduct similar measurements on the same faceted objects in a laboratory environment. The Applied Sensing Lab at the University of Dayton has constructed a solar simulation laboratory that allows for highly accurate and repeatable positioning of light sources, sensors, and objects. The laboratory contains both collimated (direct sun) and diffuse (downwelling) light sources that we have spectrally tuned to match expected solar irradiance under a range of outdoor conditions.
KEYWORDS: RGB color model, Pose estimation, Polarimetry, Polarization, Light sources and illumination, Network architectures, Data modeling, Feature extraction, Deep learning
Object pose estimation is an important problem in the field of remote sensing that provides valuable information for target identification tasks. Polarization is a fundamental property of light that contains useful information about the physical properties of an object, such as shape and surface material properties. Polarization imaging has been shown to have advantages over conventional imaging techniques for object detection and feature extraction in a variety of challenging scenarios, including low light, high background clutter, and low visibility conditions. In this work, we investigate using polarimetric imaging to improve the performance of deep learning approaches to object pose estimation on a range of model target vehicles. We collect polarimetric imaging data and labeled ground truth pose data on the target vehicles in a controlled solar simulation laboratory environment under precise sensor, object, and solar source geometries. We first establish baseline performance of our approach by training our network using conventional visible RGB s0 images under favorable lighting conditions. We then make use of the full linear Stokes images for each color channel in various configurations, retrain our network, and compare performance. We furthermore propose an ensemble method to combine features obtained from convolutional neural networks trained on both conventional RGB and Stokes-vector images. These ensemble features are then used to train a multi-layer perceptron. Experimental results demonstrate that combining polarization imaging with conventional imaging can improve feature extraction and the accuracy of deep learning-based approaches to pose estimation.
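The ensemble strategy described above can be sketched at a high level. The feature dimensions and the two-layer perceptron head below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def ensemble_features(rgb_feats, stokes_feats):
    """Concatenate backbone features from the RGB and Stokes branches.

    Shapes are hypothetical; the paper's CNN backbones are not specified here.
    """
    return np.concatenate([rgb_feats, stokes_feats], axis=-1)

def mlp_forward(x, w1, b1, w2, b2):
    """Minimal two-layer perceptron head mapping fused features to a pose output."""
    h = np.maximum(x @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2                # e.g. regressed pose parameters
```

In practice the fused vector would feed a trained MLP; here the forward pass only illustrates the data flow.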
The Applied Sensing Laboratory at the University of Dayton has constructed a solar simulation laboratory to support polarimetric remote sensing research. The laboratory contains a number of highly accurate and repeatable motion stages that allow for automated positioning and control of imaging sensors, light sources, and object geometries. The laboratory contains both collimated (direct sun) and diffuse (downwelling) light sources that we have spectrally tuned to match expected solar irradiance under a range of outdoor conditions. In this work we describe the capabilities of the laboratory and the measures that have been taken to date for calibrating the laboratory environment to mimic outdoor solar conditions. The laboratory can support complex, automated experiments that can precisely control the dominant parameters of interest in polarimetric remote sensing. We demonstrate example data for laboratory use cases of interest to the polarimetric imaging community that include polarimetric bidirectional reflectance distribution function (pBRDF) measurements and generation of curated datasets to support polarimetric phenomenology studies and deep learning algorithm training for a host of polarimetric remote sensing applications.
Over the past decade, a large body of work has demonstrated improved designs over the conventional 2×2 modulation scheme (based upon polarizer orientation angles of {0, 90, 45, 135}°) used in the manufacture of DoFP polarimeters. These designs show better usage of bandwidth and reduction of crosstalk across the various linear Stokes vector channels in the frequency domain. While much focus has been on the development of optimal modulation schemes for these devices, little attention has been given to the development of corresponding demosaicing strategies for these modulators. In this work, we adapt a recent demosaicing strategy based upon a conditional generative adversarial network (cGAN) developed for conventional 2×2 DoFP sensors to accommodate alternative modulation schemes. We collect full-resolution polarized intensity data at non-conventional polarizer angles using a division-of-time (DoT) visible imaging polarimeter that we use to simulate DoFP data from alternative modulators for training and testing purposes. We then assess performance across these alternative modulation strategies and compare results against the conventional modulation scheme.
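For reference, the conventional scheme recovers the linear Stokes parameters from the four polarizer channels via I(θ) = ½(s0 + s1·cos 2θ + s2·sin 2θ); a minimal sketch assuming ideal polarizers:

```python
def stokes_from_microgrid(i0, i45, i90, i135):
    """Linear Stokes parameters from the four conventional polarizer channels,
    assuming ideal polarizers and I(theta) = 0.5*(s0 + s1*cos(2*theta)
    + s2*sin(2*theta)); works on scalars or numpy arrays alike."""
    s0 = i0 + i90      # total intensity (equals i45 + i135 for ideal optics)
    s1 = i0 - i90      # 0 deg vs. 90 deg preference
    s2 = i45 - i135    # 45 deg vs. 135 deg preference
    return s0, s1, s2
```

Alternative modulation schemes change which angles are sampled and hence how these sums and differences are formed.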
We recently presented a deep learning approach to demosaic division of focal plane (DoFP) imaging polarimeter data based upon a conditional generative adversarial network (cGAN). The approach was developed and demonstrated using visible DoFP polarimeter data and showed a notable ability to reduce false edge artifacts, aliasing, and temporal noise. Here we retrain and apply this algorithm to emissive-band polarimetric data acquired with a LWIR DoFP imaging polarimeter to investigate performance. We then adapt the baseline cGAN architecture to perform simultaneous demosaicing and resolution enhancement of LWIR DoFP data. We collect full-resolution polarized intensity data using a division-of-time (DoT) LWIR imaging polarimeter that we use to simulate decimated DoFP data for training and testing purposes. We then apply the algorithm to data obtained from simulated LWIR DoFP polarimeter data and assess performance.
Polarization phenomenology is particularly complex at electro-optical and infrared wavelengths. The observed polarization state is dependent upon material surface characteristics, shape, chemistry, sensor geometry, relative position of any illumination sources, relative temperatures of scene and background objects, atmospheric conditions, and the presence and temperature of objects within the sensor field of view. First principles physics-based models often require full material characterization in the form of the polarimetric bidirectional reflectance distribution function (pBRDF). While pBRDF measurements can be reliably obtained for small target samples under controlled laboratory conditions, they are more challenging to obtain for many remote sensing targets of interest. Furthermore, in outdoor settings it is difficult to control pBRDF parameters such as atmospheric conditions, solar illumination position, and polarization state. The result is that pBRDF measurements are simply not available for many materials of interest at the level of fidelity required by physics-based models. Moreover, the level of accuracy achievable with physics-based modeling tools is often not necessary for many tasks. For experiment or mission planning purposes, it is desirable to have a rough idea of expected polarization signatures for a given material class, time of day, and sensor look angle. Having simplified models that are capable of predicting expected polarimetric signatures, even at a low fidelity, is of high utility for many applications. Here we present the initial framework of such a model based upon empirical data measurements. Results are generated for several material classes with corresponding validation against physics-based models. We show that our measured Stokes vector and DoLP values are within expected physical bounds for 96% of the measured data and generally agree with truth results.
Division of focal plane (DoFP), or integrated microgrid imaging polarimeters, typically consist of a 2×2 mosaic of linear polarization filters overlaid upon a focal plane array sensor and obtain temporally synchronized polarized intensity measurements across a scene, similar in concept to a Bayer color filter array camera. However, the resulting estimated polarimetric images suffer a loss in resolution and can be plagued by aliasing due to the modulated microgrid measurement strategy. Demosaicing strategies have been proposed that attempt to minimize these effects, but result in some level of residual artifacts. In this work, we present a conditional and guided generative adversarial network (GAN) strategy for demosaicing integrated microgrid polarimeter imagery. The GAN is trained using high resolution polarized intensity measurements that contain minimal spatial aliasing artifacts obtained from a division-of-time polarimeter. We apply the algorithm to test data collected from real visible microgrid imagery and compare the results with other state-of-the-art microgrid demosaicing strategies.
KEYWORDS: Target detection, Sensors, Calibration, Detection and tracking algorithms, Latex, Image sensors, Staring arrays, Cameras, Signal to noise ratio, Hyperspectral imaging, Scene based nonuniformity corrections
Hyperspectral imaging sensors suffer from pixel-to-pixel response nonuniformity that manifests as fixed pattern noise (FPN) in collected data. FPN is typically removed by application of flat-field calibration procedures and nonuniformity correction algorithms. Despite application of these techniques, some amount of residual fixed pattern noise (RFPN) may persist in the data, negatively impacting target detection performance. In this paper we examine the conditions under which RFPN can impact detection performance using data collected in the SWIR across a range of target materials. We examine the application of scene-based nonuniformity correction (SBNUC) algorithms and assess their ability to remove RFPN. Moreover, we examine the effect of RFPN after application of these techniques to assess detection performance on a number of target materials that range in inherent separability from the background.
The display of polarimetric imaging data has been a subject of considerable debate. Display strategies range from direct display of the Stokes vector images (or their derivatives) to false color representations. In many cases, direct interpretation of polarimetric image data using traditional display strategies is not intuitive and can at times result in confusion as to what benefit polarimetric information is actually providing. Here we investigate approaches that attempt to augment the s0 image with polarimetric information, rather than directly display it, as a means of enhancing the baseband s0 image. The benefit is that the polarization-enhanced visible or infrared image maintains a familiar look without the need for complex interpretation of the meaning of the polarimetric data, thus keeping the incorporation of polarimetric information transparent to the end user. The method can be applied to monochromatic or multi-band data, which allows color to be used for representing spectral data in multi- or hyper-spectropolarimetric applications. We take a more subjective approach to image enhancement than current techniques employ by simply seeking to improve contrast and shape information for polarized objects within a scene. We find that such approaches provide clear enhancement to the imagery when polarized objects are contained within the scene without the need for complex interpretation of polarization phenomenology.
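One simple way to augment the s0 image with polarimetric information, in the spirit described above, is a local DoLP-driven contrast boost. This fusion rule and its gain parameter are illustrative assumptions, not the paper's method:

```python
import numpy as np

def polarization_enhanced(s0, dolp, alpha=0.5):
    """Boost s0 locally where DoLP is high, so polarized objects gain contrast
    while the image keeps its familiar look.

    Illustrative fusion only; alpha is a hypothetical display gain.
    """
    out = s0 * (1.0 + alpha * dolp)
    return out / out.max()  # renormalize to [0, 1] for display
```

Because the output remains an intensity-like image, color channels stay free for spectral information in multi-band applications.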
The advantage of division of focal plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so due to their spatially modulated arrangement of the pixel-to-pixel polarizers and often result in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELMs) that attempt to recover missing high-frequency and low-frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high- and low-frequency content from demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.
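An ELM of the kind the method builds on trains a random nonlinear hidden layer with a closed-form least-squares readout; a generic sketch (the patch/feature construction used in the paper is left abstract):

```python
import numpy as np

def train_elm(x, y, n_hidden=64, seed=0):
    """Extreme learning machine: fixed random hidden layer, least-squares readout."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[1], n_hidden))  # random input weights (not trained)
    b = rng.standard_normal(n_hidden)                # random biases (not trained)
    h = np.tanh(x @ w + b)                           # nonlinear random features
    beta, *_ = np.linalg.lstsq(h, y, rcond=None)     # closed-form output weights
    return w, b, beta

def elm_predict(x, w, b, beta):
    return np.tanh(x @ w + b) @ beta
```

Training reduces to a single linear solve, which is why the approach is computationally fast relative to iteratively trained networks.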
Hyperspectral image data suffer from pixel-to-pixel response nonuniformity that degrades the imagery in the form of columnated striping noise. This nonuniformity, or fixed pattern noise (FPN), is typically compensated for through flat-field calibration procedures. FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be periodically recalibrated. Both the rate and severity of the drift depend on a host of factors that result in varying levels of residual calibration error being present within the data at all times. Scene-based nonuniformity correction (SBNUC) algorithms estimate and remove FPN by exploiting content within the scene data and are often necessary to acceptably remove sensor artifacts for subpixel target detection applications. We present results from two SBNUC techniques that reduce residual FPN and improve target signal-to-clutter ratio. We make the observation that temporally reordering the data in conjunction with the use of spatial ratios or differentials results in algorithms that require a low number of temporal data samples to reliably correct for FPN with minimal introduction of image artifacts. Additionally, application of the algorithms within the principal components domain can further improve their correction ability.
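A minimal illustration of scene-based offset estimation, using the classic constant-statistics assumption as a simplified stand-in for the reordering/ratio algorithms described above:

```python
import numpy as np

def constant_statistics_nuc(frames):
    """Offset-only scene-based NUC under the constant-statistics assumption.

    Assumes every pixel observes the same scene statistics over the sequence,
    so deviations of each pixel's temporal mean from the global mean estimate
    its fixed-pattern offset. A simplified stand-in, not the paper's algorithm.
    """
    temporal_mean = frames.mean(axis=0)          # per-pixel mean over time
    bias = temporal_mean - temporal_mean.mean()  # deviation = offset FPN estimate
    return frames - bias, bias
```

Real SBNUC algorithms relax the statistics assumption, which is where the temporal reordering and spatial ratio machinery comes in.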
Uncorrected or poorly corrected bad pixels reduce the effectiveness of polarimetric clutter suppression. In conventional microgrid processing, bad pixel correction is accomplished as a separate step from Stokes image reconstruction. Here, these two steps are combined to speed processing and provide better estimates of the entire image, including missing samples. A variation on the bilateral filter enables both edge preservation in the Stokes imagery and bad pixel suppression. Understanding the newly presented filter requires two key insights. First, the adaptive nature of the bilateral filter is extended to correct for bad pixels by simply incorporating a bad pixel mask. Second, the bilateral filter for Stokes estimation is the sum of the normalized bilateral filters for estimating each analyzer channel individually. This paper describes the new approach and compares it to our legacy method using simulated imagery.
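The combined idea, a bilateral filter whose weights also zero out bad pixels via a mask, can be sketched for a single channel as follows; this is an illustrative single-image version, not the paper's exact Stokes-domain filter:

```python
import numpy as np

def masked_bilateral(img, mask, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Edge-preserving smoothing that simultaneously fills bad pixels.

    mask is True for good pixels; bad pixels get zero weight, so they are
    replaced from good neighbours in the same pass. Assumes every window
    contains at least one good pixel.
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode='reflect')
    mpad = np.pad(mask.astype(float), radius, mode='reflect')
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            mwin = mpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            if mask[y, x]:
                # range weights preserve edges around a trusted centre value
                rng = np.exp(-(win - img[y, x])**2 / (2 * sigma_r**2))
            else:
                rng = 1.0  # bad centre: fall back to spatial-only weighting
            wts = spatial * rng * mwin
            out[y, x] = (wts * win).sum() / wts.sum()
    return out
```

Folding the mask into the weights is what lets bad pixel correction and image estimation happen in a single step.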
Previous work with the Bobcat 2013 data set1 showed that spatial-spectral feature extraction on visible to near infrared (VNIR) hyperspectral imagery (HSI) led to better target detection and discrimination than spectral-only techniques; however, the aforementioned study could not consider the possible benefits of the shortwave infrared (SWIR) portion of the spectrum due to data limitations. In addition, the spatial resolution of the Bobcat 2013 imagery was fixed at 8 cm without exploring lower spatial resolutions. In this work, we evaluate the tradeoffs in spatial resolution, spectral resolution, and spectral coverage for a common set of targets in terms of their effects on spatial-spectral target detection performance. We show that for our spatial-spectral target detection scheme and data sets, the adaptive cosine estimator (ACE) applied to S-DAISY and pseudo Zernike moment (PZM) spatial-spectral features can distinguish between targets better than ACE applied only to the spectral imagery. In particular, S-DAISY operating on bands uniformly selected from the SWIR portion of ProSpecTIR-VS sensor imagery in conjunction with bands closely corresponding to the Airborne Real-time Cueing Hyperspectral Reconnaissance (ARCHER) sensor's VNIR bands (80 total) led to the best overall average performance in both target detection and discrimination.
Nighttime active SWIR imaging has resolution, size, weight, and power consumption advantages over passive MWIR and LWIR imagers for applications involving target identification. We propose that the target discrimination capability of active SWIR systems can be extended further by exerting polarization control over the illumination source and imager, i.e. through active polarization imaging. In this work, we construct a partial Mueller matrix imager and use laboratory derived signatures to uniquely identify target materials in outdoor scenes. This paper includes a description of the camera and laser systems as well as discussion of the reduction and analysis techniques used for material identification.
Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.
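Why microgrid sensors are especially sensitive to FPN can be seen in two lines: the spatial differencing used to form s1 does not cancel per-pixel offset errors, so an unpolarized flat scene acquires a false signature (the numbers below are made up for illustration):

```python
# Unpolarized flat scene: every pixel should read the same intensity,
# so s1 = i0 - i90 ought to be zero. Residual per-pixel offset errors
# survive the subtraction and masquerade as polarization.
flat = 100.0
i0 = flat + 0.5    # hypothetical +0.5-count residual calibration error
i90 = flat - 0.5   # hypothetical -0.5-count residual calibration error
false_s1 = i0 - i90                    # nonzero despite an unpolarized scene
false_dolp = abs(false_s1) / (i0 + i90)
```

Here a half-count offset error per pixel already produces a 0.5% false DoLP, which is why residual FPN dominates polarimetric error budgets.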
The LWIR microgrid Polarized InfraRed Advanced Tactical Experiment (PIRATE) sensor was used to image several types of RC model aircraft at varying ranges and speeds under different background conditions. The data were calibrated and preprocessed using recently developed microgrid processing algorithms prior to estimation of the thermal (s0) and polarimetric (s1 and s2) Stokes vector images. The data were then analyzed to assess the utility of polarimetric information when the thermal s0 data is augmented with s1 and s2 information for several model aircraft detection and tracking scenarios. Multi-variate analysis tools were applied in conjunction with multi-hypothesis detection schemes to assess detection performance of the aircraft under different background clutter conditions. We find that polarization is able to improve detection performance when compared with the corresponding thermal data in nearly all cases. A tracking algorithm was applied to a sequence of s0 and corresponding degree of linear polarization (DoLP) images. An initial assessment was performed to determine whether polarization information can provide additional utility in these tracking scenarios.
For the past several years we have been working on strategies to mitigate the effects of IFOV errors on LWIR microgrid polarimeters. In this paper we present a detailed theoretical analysis of the source of IFOV error in the frequency domain, and show a frequency-domain strategy to mitigate those effects.
Microgrid polarimeters are a type of division of focal plane (DoFP) imaging polarimeter that contains a mosaic of pixel-wise micropolarizing elements superimposed upon an FPA sensor. Such a device measures a slightly different polarized state at each pixel. These measurements are combined to estimate the Stokes vector at each pixel in the image. DoFP devices have the advantage that they can obtain Stokes vector image estimates for an entire scene from a single frame capture. However, they suffer from the disadvantage that the neighboring measurements that are used to estimate the Stokes vector images are acquired at differing instantaneous fields of view (IFOV). This IFOV issue leads to false polarization signatures that significantly degrade the Stokes vector images. Interpolation and other image processing strategies can be employed to reduce IFOV artifacts; however, these techniques have a limit to the amount of enhancement they can provide on a single microgrid image. Here we investigate algorithms that use multiple microgrid images that contain frame-to-frame global motion to further enhance the Stokes vector image estimates. Motion-based imagery provides additional redundancy that can be exploited to recover information that is "missing" from a single microgrid frame capture. We have found that IFOV and aliasing artifacts can be defeated entirely when these types of algorithms are applied to the data prior to Stokes vector estimation. We demonstrate results on real LWIR microgrid data using a particular resolution enhancement technique from the literature.
Recent developments for long-wave infrared (LWIR) imaging polarimeters include incorporating a microgrid polarizer array onto the focal plane array. Inherent advantages over other classes of polarimeters include rugged packaging, inherent alignment of the optomechanical system, and temporal synchronization that facilitates instantaneous acquisition of both thermal and polarimetric information. On the other hand, the pixel-to-pixel instantaneous field-of-view error that is inherent in the microgrid strategy leads to false polarization signatures. Because of this error, residual pixel-to-pixel variations in the gain-corrected responsivity, the noise-equivalent input, and variations in the pixel-to-pixel micropolarizer performance are extremely important. The degree of linear polarization is highly sensitive to these parameters and is consequently used as a metric to explore instrument sensitivities. We explore the unpolarized calibration issues associated with this class of LWIR polarimeters and discuss the resulting false polarization signature for thermally flat test scenes.
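The DoLP metric used throughout these sensitivity studies is computed from the linear Stokes parameters as:

```python
import numpy as np

def dolp(s0, s1, s2, eps=1e-12):
    """Degree of linear polarization, sqrt(s1^2 + s2^2) / s0, clipped to the
    physical range [0, 1]; eps guards against division by zero."""
    return np.clip(np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps), 0.0, 1.0)
```

Because the numerator is a difference of nearly equal intensities, DoLP amplifies any residual calibration error, which is what makes it a sensitive diagnostic.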
Division of focal plane (DoFP) polarimeters are a particular class of imaging device that consists of an array of micropolarizers integrated upon a focal plane array sensor (FPA). Such devices are also called microgrid polarimeters and have been studied over the past decade with systems being designed and built in all regions of the optical spectrum. These systems are advantageous due to their rugged, compact design and ability to obtain a complete set of polarimetric measurements during a single frame capture. One inherent disadvantage of DoFP systems is that each pixel of the FPA sensor makes a polarized intensity measurement of a different scene point. These spatial measurements are then used to estimate the Stokes vectors across the scene. Since each polarized intensity measurement has a different instantaneous field-of-view (IFOV), artifacts are introduced that can degrade the quality of estimated polarization imagery. Here we develop and demonstrate a visual enhancement technique that is able to reduce false polarization caused by IFOV error while preserving true polarization content within the Stokes parameter images. The technique is straightforward conceptually and is computationally efficient. All results are presented using data acquired from an actual LWIR microgrid sensor.
Microgrid polarimeters, also known as division of focal plane (DoFP) polarimeters, are composed of an integrated array of micropolarizing elements that immediately precedes the FPA. The result of the DoFP device is that neighboring pixels sense different polarization states. The measurements made at each pixel can be combined to estimate the Stokes vector at every reconstruction point in a scene. DoFP devices have the advantage that they are mechanically rugged and inherently optically aligned. However, they suffer from the severe disadvantage that the neighboring pixels that make up the Stokes vector estimates have different instantaneous fields of view (IFOV). This IFOV error leads to spatial differencing that causes false polarization signatures, especially in regions of the image where the scene changes rapidly in space. Furthermore, when the polarimeter is operating in the LWIR, the FPA has inherent response problems such as nonuniformity and dead pixels that make the false polarization problem that much worse. In this paper, we present methods that use spatial information from the scene to mitigate two of the biggest problems that confront DoFP devices. The first is a polarimetric dead pixel replacement (DPR) scheme, and the second is a reconstruction method that chooses the most appropriate polarimetric interpolation scheme for each particular pixel in the image based on the scene properties. We have found that these two methods can greatly improve both the visual appearance of polarization products as well as the accuracy of the polarization estimates, and can be implemented with minimal computational cost.
Division of focal plane (DoFP) polarimeters operate by integrating an array of micropolarizer elements with a focal plane array. These devices have been investigated for over a decade, and example systems have been built in all regions of the optical spectrum. DoFP devices have the distinct advantage that they are mechanically rugged, inherently temporally synchronized, and optically aligned. They have the concomitant disadvantage that each pixel in the FPA has a different instantaneous field of view (IFOV), meaning that the polarization component measurements that go into estimating the Stokes vector across the image come from four different points in the field. In addition to IFOV errors, microgrid camera systems operating in the LWIR have the additional problem that FPA nonuniformity (NU) noise can be quite severe. The spatial differencing nature of a DoFP system exacerbates the residual NU noise that remains after calibration, which is often the largest source of false polarization signatures away from regions where IFOV error dominates. We have recently presented a scene-based algorithm that uses frame-to-frame motion to compensate for NU noise in unpolarized IR imagers. In this paper, we extend that algorithm so that it can be used to compensate for NU noise on a DoFP polarimeter. Furthermore, the additional information provided by the scene motion can be used to significantly reduce the IFOV error. We have found a reduction of IFOV error by a factor of 10 if the scene motion is known exactly. Performance is reduced when the motion must be estimated from the scene, but still shows a marked improvement over static DoFP images.
Recent developments for long-wave infrared (LWIR) imaging polarimeters include incorporating a microgrid polarizer array onto the focal plane array (FPA). Inherent advantages over typical polarimeters include packaging and instantaneous acquisition of thermal and polarimetric information. This allows for real-time video of thermal and polarimetric products. The microgrid approach has inherent polarization measurement error due to the spatial sampling of a non-uniform scene, residual pixel-to-pixel variations in the gain-corrected responsivity and in the noise-equivalent input (NEI), and variations in the pixel-to-pixel micro-polarizer performance. The degree of linear polarization (DoLP) is highly sensitive to these parameters and is consequently used as a metric to explore instrument sensitivities. Image processing and fusion techniques are used to take advantage of the inherent thermal and polarimetric sensing capability of this FPA, providing additional scene information in real time. Optimal operating conditions are employed to improve FPA uniformity and sensitivity. Data from two DRS Infrared Technologies, L.P. (DRS) microgrid polarizer HgCdTe FPAs are presented. One FPA resides in a liquid nitrogen (LN2) pour-filled dewar with a nominal operating temperature of 80 K. The other FPA resides in a cryogenic dewar with a nominal operating temperature of 60 K.
Long-wave infrared imaging Stokes vector polarimeters are used in many remote sensing applications. Imaging polarimeters require that several measurements be made under optically different conditions in order to estimate the polarization signature at a given scene point. This multiple-measurement requirement introduces error in the signature estimates, and the errors differ depending upon the type of measurement scheme used. Here, we investigate a LWIR linear microgrid polarimeter. This type of instrument consists of a mosaic of micropolarizers at different orientations that are masked directly onto a focal plane array sensor. In this scheme, each polarization measurement is acquired spatially and hence each is made at a different point in the scene. This is a significant source of error, as it violates the requirement that each polarization measurement have the same instantaneous field-of-view (IFOV). In this paper, we first study the amount of error introduced by the IFOV handicap in microgrid instruments. We then proceed to investigate means for mitigating the effects of these errors to improve the quality of polarimetric imagery. In particular, we examine different interpolation schemes and gauge their performance. These studies are completed through the use of both real instrumental and modeled data.
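The starting point for any of the interpolation schemes compared above is separating the raw mosaic into its four sparse channels and resampling each back to full resolution; a sketch assuming the common [[0°, 45°], [135°, 90°]] super-pixel layout (actual layouts vary by sensor):

```python
import numpy as np

def split_microgrid(raw):
    """Split a raw microgrid frame into its four sparse polarizer channels.

    Assumes the 2x2 super-pixel layout [[0, 45], [135, 90]] degrees, which
    is an assumption; consult the sensor's datasheet for the real mapping.
    """
    i0   = raw[0::2, 0::2]
    i45  = raw[0::2, 1::2]
    i135 = raw[1::2, 0::2]
    i90  = raw[1::2, 1::2]
    return i0, i45, i90, i135

def upsample_nearest(ch):
    """Crudest interpolation back to full resolution (nearest neighbour);
    bilinear or edge-adaptive schemes reduce IFOV artifacts further."""
    return np.repeat(np.repeat(ch, 2, axis=0), 2, axis=1)
```

Every interpolation scheme is, in effect, a different rule for filling the three missing samples per channel at each pixel.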
One of the most significant challenges in performing infrared (IR) polarimetry is the focal plane array (FPA) nonuniformity (NU) noise that is inherent in virtually all IR photodetector technologies that operate in the midwave IR (MWIR) or long-wave IR (LWIR). NU noise results from pixel-to-pixel variations in the responsivity of the photodetectors. This problem is especially severe in micro-engineered IR FPA materials like HgCdTe and InSb, as well as in uncooled IR microbolometer sensors. Such problems are largely absent from Si-based visible-spectrum FPAs. The pixel response is usually a variable nonlinear response function, and even when the response is linearized over some range of temperatures, the gain and offset of the resulting response is usually highly variable. NU noise is normally corrected by applying a linear calibration to the data, but the resulting imagery still retains residual nonuniformity due to the nonlinearity of the photodetector responses. This residual nonuniformity is particularly troublesome for polarimeters because of the addition and subtraction operations that must be performed on the images in order to construct the Stokes parameters or other polarization products. In this paper we explore the impact of NU noise on full-Stokes and linear-polarization-only IR polarimeters. We compare the performance of division-of-time, division-of-amplitude, and division-of-array polarimeters in the presence of both NU and temporal noise, and assess the ability of calibration-based NU correction schemes to clean up the data.
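The linear (two-point) calibration referred to above fits a per-pixel gain and offset from two flat blackbody views; a minimal sketch:

```python
import numpy as np

def two_point_nuc(frame, cold, hot, t_cold, t_hot):
    """Two-point nonuniformity correction from two flat blackbody frames.

    Fits a per-pixel linear response from the cold/hot reference frames and
    maps raw counts back to the radiometric scale defined by the two
    reference levels. Residual error remains where the true response is
    nonlinear, which is the residual NU discussed above.
    """
    gain = (t_hot - t_cold) / (hot - cold)  # per-pixel gain
    offset = t_cold - gain * cold           # per-pixel offset
    return gain * frame + offset
```

For a perfectly linear detector this removes the FPN exactly; any curvature in the response leaves the residual nonuniformity that drives false polarization.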
Long-wave infrared (LWIR) imaging is a prominent and useful technique for remote sensing applications. Moreover, polarization imaging has been shown to provide additional information about the imaged scene. However, polarization estimation requires that multiple measurements be made of each observed scene point under optically different conditions. This challenging measurement strategy makes the polarization estimates prone to error. The sources of this error differ depending upon the type of measurement scheme used. In this paper, we examine one particular measurement scheme, namely, a simultaneous multiple-measurement imaging polarimeter (SIP) using a microgrid polarizer array. The imager is composed of a microgrid polarizer masking a LWIR HgCdTe focal plane array (operating at 8.3-9.3 μm), and is able to make simultaneous modulated scene measurements. In this paper we present an analytical model that is used to predict the performance of the system in order to help interpret real results. This model is radiometrically accurate and accounts for the temperature of the camera system optics, spatial nonuniformity and drift, optical resolution, and other sources of noise. The model is then validated in simulation against laboratory measurements. The precision and accuracy of the SIP instrument is then studied.
This paper presents an overview of three recently developed scene-based nonuniformity correction techniques: the algebraic scene-based algorithm (ASBA), the extended radiometrically accurate scene-based algorithm (RASBA), and the generalized algebraic scene-based algorithm (GASBA). The ASBA uses pairs of image frames that exhibit one-dimensional sub-pixel motion to algebraically extract estimates of bias nonuniformity. The RASBA incorporates arbitrary sub- and super-pixel two-dimensional motion in conjunction with limited perimeter-only absolute calibration to obtain radiometrically accurate estimates of the bias nonuniformity. The RASBA provides the advantage of being able to maintain radiometry in the interior photodetectors without interrupting their operation. The GASBA is a generalized non-radiometric form of the algorithm that uses image pairs with arbitrary two-dimensional motion and encompasses both the ASBA and RASBA algorithms. This generalization is accomplished by initially guaranteeing bias uniformity in the perimeter detectors, which can be achieved by first applying the ASBA estimates. The generalized algorithm is then able to automatically maintain perimeter uniformity without the need for re-application of the ASBA. Thus, the GASBA is able to operate completely in a non-radiometric mode, alleviating the need for the perimeter calibration system if desired. The generalized algorithm is applied to real infrared imagery obtained from both cooled and uncooled infrared cameras. A hardware implementation of the algorithm is also discussed, along with several ongoing commercial applications of the technology.
The inherent nonuniformity in the photoresponse and readout circuitry of the individual detectors in infrared focal-plane-array imagers results in the notorious fixed-pattern noise (FPN). FPN generally degrades the performance of infrared imagers and is particularly problematic in the midwave and long-wave infrared regimes. In many applications, employing signal-processing techniques to combat FPN may be preferred over hard calibration (e.g., two-point calibration), as they are less expensive and, more importantly, do not require halting the operation of the camera. In this paper, a new technique is introduced that uses knowledge of global motion in a video sequence to restore the true scene in the presence of FPN. In the proposed setting, the entire video sequence is regarded as the output of a motion-dependent linear transformation, which acts collectively on the true scene and the unknown bias elements (which represent the FPN) in each detector. The true scene is then estimated from the video sequence according to a minimum mean-square-error criterion. Two modes of operation are considered. First, we consider non-radiometric restoration, in which case the true scene is estimated by performing a regularized minimization, since the problem is ill-posed. The other mode of operation is radiometric, in which case we assume that only the perimeter detectors have been calibrated. This latter mode does not require regularization and therefore avoids compromising the radiometric accuracy of the restored scene. The algorithm is demonstrated through preliminary results from simulated and real infrared imagery.
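A toy one-dimensional version of the motion-dependent linear model, and of the regularized (non-radiometric) solution, might look like the following. All sizes, the integer shift pattern, the scene and bias values, and the regularization weight are invented for illustration and are not the paper's actual formulation:

```python
import numpy as np

# Frame k at integer shift k satisfies y_k[i] = x[i + k] + b[i], where x is
# the true scene and b the fixed per-detector bias. Stack all frames into
# y = H @ theta with theta = [x; b]. Illustrative sizes and values:
N, M, K = 6, 4, 3  # scene length, detector length, number of frames
x_true = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0])
b_true = np.array([0.5, -1.0, 2.0, -0.5])

rows, y = [], []
for k in range(K):
    for i in range(M):
        r = np.zeros(N + M)
        r[i + k] = 1.0      # scene sample seen by detector i at shift k
        r[N + i] = 1.0      # fixed bias of detector i
        rows.append(r)
        y.append(x_true[i + k] + b_true[i])
H, y = np.array(rows), np.array(y)

# The problem is ill-posed: adding a constant to x and subtracting it from b
# leaves y unchanged, so we solve a Tikhonov-regularized least-squares problem.
lam = 1e-6
theta = np.linalg.solve(H.T @ H + lam * np.eye(N + M), H.T @ y)
x_hat, b_hat = theta[:N], theta[N:]

# The scene is recovered up to that additive constant (constant-free error):
print(np.std(x_hat - x_true) < 1e-3)
```

The radiometric mode described in the abstract removes this constant ambiguity by pinning the perimeter (here, one would fix some entries of b from calibration), which is why it needs no regularization.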
Accurately identifying and bounding error sources in imaging spectro-polarimeters is a challenging task. Here we present an error evaluation methodology intended as an organizational tool for both itemizing and quantifying sources of error in polarimetric instruments. Associated with each source of error are a metric and a test by which that error may be quantified. Using this procedure, we examine the accuracy and precision of a particular imaging Stokes-vector hyperspectral polarimeter. A subset of the identified error sources is selected and propagated through the system. These measured error quantities are then used to place absolute error bounds on the data acquired by our instrument, and are documented and presented in the form of an error evaluation sheet.
This paper describes a major generalization of a recently reported radiometrically accurate algebraic nonuniformity correction (NUC) algorithm. The original technique was capable of accurately estimating the bias nonuniformity from a sequence of pairs of images exhibiting strictly one-dimensional (1D) subpixel shifts. The new technique relaxes the subpixel 1D shift constraint to arbitrary two-dimensional (2D) motion, which can be either sub-pixel or super-pixel. The 2D technique relies on calibrating only rows and columns on the perimeter of the array, which, in turn, provides the algorithm with the necessary initial conditions to recursively estimate the bias values in the entire array. In this way, radiometric NUC can be achieved non-disruptively, as needed, without disturbing the functionality of the interior array elements. The 2D algorithm is highly localized in time and space, lending itself to near real-time implementation. Radiometric NUC can be achieved with a relatively low number of frames (typically about 10 frame pairs). Moreover, as in its earlier 1D version, the performance of the 2D algorithm is shown to be insensitive to spatial diversity in the scene. This paper will address the performance of the 2D technique using real infrared data.
This paper describes how a limited form of black-body-based calibration can be integrated into a recently developed algebraic scene-based algorithm for nonuniformity correction (NUC) in focal-plane arrays. The result of this integration is a scene-based NUC algorithm that is radiometrically accurate. By calibrating only those detectors that are on the array perimeter, and relying on the scene-based algorithm to calibrate the interior detectors with the perimeter detectors as a reference, radiometric accuracy can be achieved without disturbing the functionality of interior array elements. What makes this possible is the fact that the scene-based NUC algorithm used here is algebraic in nature and does not rely on any statistical assumptions about the scene irradiance in the image sequence. The algorithm utilizes knowledge of inter-frame motion to 'lock' the biases of the interior array elements to those on the boundary. Notably, this can be achieved regardless of the spatial diversity in the scene and, typically, with a minimal number of frames in an image sequence. The performance of the technique is demonstrated using real infrared data.
An algorithm is developed to compensate for the spatial fixed-pattern (nonuniformity) noise in focal-plane arrays, a pressing problem particularly for mid- to far-infrared imaging systems. The proposed algorithm uses pairs of frames from an image sequence exhibiting pure horizontal and vertical sub-pixel shifts. The algorithm assumes a linear irradiance-voltage model in which the nonuniformity is attributed only to variation in the offset of the individual detectors in the array. Using a modified gradient-based shift estimator, pairs of frames exhibiting the required shifts can be identified and used to generate a correction matrix that compensates for the offset nonuniformity in all frames. The efficacy of this nonuniformity correction technique is demonstrated by applying it to real and simulated infrared data. The strength of this technique lies in its simplicity, requiring relatively few frames to generate an acceptable correction matrix.
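The core algebraic idea can be sketched with a pure one-pixel vertical shift. The paper itself works with sub-pixel shifts and a gradient-based shift estimator; this whole-pixel, noise-free toy (with invented scene and bias values) only shows why the scene term cancels between the two frames, leaving equations in the offsets alone:

```python
import numpy as np

rng = np.random.default_rng(1)
rows, cols = 6, 5

# Hypothetical ground truth (illustrative, not data from the paper).
scene_big = rng.uniform(0.0, 10.0, (rows + 1, cols))  # scene with one extra row
bias = rng.uniform(-2.0, 2.0, (rows, cols))           # per-pixel offset NU

# Two frames of the same scene, shifted vertically by exactly one pixel:
# f1[i] = x[i] + b[i] and f2[i] = x[i + 1] + b[i].
f1 = scene_big[:rows] + bias
f2 = scene_big[1:rows + 1] + bias

# Differencing corresponding scene rows cancels the scene exactly:
# f2[i] - f1[i + 1] = b[i] - b[i + 1].
# Given a known (e.g., calibrated) first row, recurse down the array.
b_est = np.empty_like(bias)
b_est[0] = bias[0]  # assume row 0 of the bias is known
for i in range(rows - 1):
    b_est[i + 1] = b_est[i] - (f2[i] - f1[i + 1])

print(np.allclose(b_est, bias))  # True in this noise-free sketch
```

With noise and sub-pixel motion, the differences no longer cancel exactly, which is why the actual algorithm averages over many qualifying frame pairs to build an acceptable correction matrix.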