Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696601 (2008) https://doi.org/10.1117/12.801998
This PDF file contains the front matter associated with SPIE
Proceedings Volume 6966, including the Title Page, Copyright
information, Table of Contents, Introduction (if any), and the
Conference Committee listing.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696602 (2008) https://doi.org/10.1117/12.776252
Subspace methods for hyperspectral imagery enable detection and identification of targets under unknown
environmental conditions (e.g., atmospheric state, illumination, surface temperature) by specifying a subspace of possible
target spectral signatures (and, optionally, a background subspace) and identifying closely fitting spectra in the image.
The subspaces, defined from a set of exemplar spectra, are compactly expanded in singular value decomposition basis
vectors or, less commonly, endmember basis spectra, linear combinations of which are used to fit the image data. In the
present study we compared detection performance in the thermal infrared using several different constrained and
unconstrained basis set expansions of low-dimensional subspaces, including a method based on the Sequential
Maximum Angle Convex Cone (SMACC) endmember algorithm. Constrained expansions were found to provide a
modest improvement in algorithm robustness in our test cases.
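As a rough illustration of the unconstrained variant of this idea, the following Python sketch (hypothetical inputs, not the authors' implementation) builds a compact basis from the top singular vectors of a set of exemplar spectra and scores each pixel by its least-squares fit residual; a constrained expansion would additionally restrict the combination coefficients (e.g., to be nonnegative):

import numpy as np

def subspace_residual(exemplars, pixels, k=3):
    # exemplars: bands x N target exemplar spectra; pixels: bands x M image spectra
    U, _, _ = np.linalg.svd(exemplars, full_matrices=False)
    B = U[:, :k]                        # top-k singular vectors span the target subspace
    fit = B @ (B.T @ pixels)            # unconstrained least-squares fit of each pixel
    return np.linalg.norm(pixels - fit, axis=0)   # small residual = close subspace fit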
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696603 (2008) https://doi.org/10.1117/12.778929
Anomaly detection for hyperspectral imaging is typically based on the Mahalanobis distance. The sample statistics for Mahalanobis distance are not resistant to the anomalies that are present in the sample pixels. Consequently, the sample statistics do not estimate the corresponding population parameters accurately. In this paper, we will present an algorithm for hyperspectral anomaly detection based on the Mahalanobis distance computed using robust statistics which are estimated based on the minimum generalized variance of the sample pixels. Numerical results based on actual hyperspectral images will be presented.
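As a minimal sketch of this idea, scikit-learn's minimum covariance determinant estimator (which minimizes the generalized variance over subsets of the sample) can stand in for the paper's robust estimator; this illustrates the approach rather than reproducing the paper's algorithm:

import numpy as np
from sklearn.covariance import MinCovDet

def robust_rx(pixels):
    # pixels: N x bands array of sample spectra
    mcd = MinCovDet().fit(pixels)       # robust mean/covariance, resistant to anomalies
    return mcd.mahalanobis(pixels)      # squared Mahalanobis distance per pixel

Pixels whose distance exceeds a threshold (e.g., a chi-square quantile) would then be declared anomalous.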
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696604 (2008) https://doi.org/10.1117/12.773444
This paper describes a new adaptive spectral matched filter and a modified RX-based anomaly detector that incorporates
the idea of regularization (shrinkage). The regularization has the effect of restricting the possible matched filters
(models) to a subset that is more stable and has better performance than the non-regularized adaptive spectral matched filters. The effect of regularization depends on the form of the regularization term, and the amount of regularization is controlled by a so-called regularization coefficient. In this paper the sum of squares of the filter coefficients is used as the regularization term, and several different values for the regularization coefficient are tested. A
Bayesian-based derivation of the regularized matched filter is also provided. Experimental results for detecting and
recognizing targets in hyperspectral imagery are presented for regularized and non-regularized spectral matched filters and the RX algorithm.
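With a sum-of-squares penalty, the filter takes a ridge-regression form, w ∝ (C + λI)⁻¹(s − μ). A minimal Python sketch (hypothetical inputs, not the paper's code):

import numpy as np

def regularized_matched_filter(pixels, target, lam):
    # pixels: N x bands background samples; target: bands; lam: regularization coefficient
    mu = pixels.mean(axis=0)
    C = np.cov(pixels, rowvar=False)
    w = np.linalg.solve(C + lam * np.eye(C.shape[0]), target - mu)
    w /= (target - mu) @ w              # normalize for unit response to the target
    return (pixels - mu) @ w            # filter output for each pixel

Setting lam = 0 recovers the ordinary (non-regularized) adaptive matched filter.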
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696605 (2008) https://doi.org/10.1117/12.782458
An adaptive algorithm is described for deriving constant false alarm rate (CFAR) detection thresholds based on
statistically motivated models of actual spectral detector output distributions. The algorithm dynamically tracks the
distribution of detector observables and fits the observed distribution to a suitable mixture density model function. The
fitted distribution model is used to compute numerical detection thresholds that achieve a constant probability of false
alarm (Pfa) per pixel. Typically, gamma mixture densities are used to model the outputs of anomaly detectors based on
quadratic decision statistics, while normal mixture densities are used for linear matched filter type detectors. In order to
achieve the computational efficiency required for real-time implementations of the algorithm on mainstream
microprocessors, a robust yet considerably less complex exponential mixture model was recently developed as a general
approximation to common long-tailed detector distributions. Within the region of operational interest, namely between
the primary mode and the far tail, this approximation serves as an accurate model while providing significant reduction
in computational cost. We compare the performance of the exponential approximation against the full-blown gamma
and normal models. We also demonstrate the false alarm regulation performance of the adaptive CFAR algorithm using
anomaly and matched detector outputs derived from actual VNIR-band hyperspectral imagery collected by the Civil Air Patrol (CAP) Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) system.
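As a minimal sketch of the tail-model idea, a single exponential fitted beyond the primary mode already yields a closed-form threshold (the paper's exponential mixture is more general):

import numpy as np

def cfar_threshold(scores, pfa, knee=None):
    # Model the detector-output tail beyond 'knee' as exponential and solve
    # P(score > t) = pfa for the threshold t.
    knee = np.median(scores) if knee is None else knee
    tail = scores[scores > knee] - knee
    rate = 1.0 / tail.mean()            # maximum-likelihood exponential rate
    p_tail = tail.size / scores.size    # probability mass in the modeled tail
    return knee + np.log(p_tail / pfa) / rate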
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696606 (2008) https://doi.org/10.1117/12.777758
The paper outlines a new method for band selection derived from a multivariate normal mixture anomaly detection
method. The method consists of evaluating detection performance in terms of false alarm rates for all band
configurations obtainable from an input image by selecting and combining bands according to selection criteria
reflecting sensor physics. We apply the method to a set of hyperspectral images in the visible and near-infrared spectral
domain spanning a range of targets, backgrounds and measurement conditions. We find optimum bands, and investigate
the feasibility of defining a common band set for a range of scenarios. The results suggest that near-optimal performance can be obtained using general configurations with fewer than 10 bands. This may have implications for the choice of
sensor technology in target detection applications. The study is based on images with high spectral and spatial resolution
from the HySpex hyperspectral sensor.
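In outline, the search scores every admissible band configuration by its estimated false alarm rate; a brute-force sketch, with a hypothetical far_estimate callable standing in for the paper's mixture-model detector:

from itertools import combinations

def best_band_subset(cube, far_estimate, k=4):
    # cube: rows x cols x bands image; far_estimate returns a false-alarm-rate estimate
    n_bands = cube.shape[-1]
    scored = ((far_estimate(cube[..., list(s)]), s)
              for s in combinations(range(n_bands), k))
    return min(scored, key=lambda t: t[0])   # (lowest FAR, chosen band indices)

Selection criteria reflecting sensor physics would prune the candidate combinations before scoring.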
Sensor Design, Performance, and Data Analysis Methodologies
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696607 (2008) https://doi.org/10.1117/12.780611
The design and performance characteristics of a novel acousto-optic tunable filter (AOTF) are presented. Particular attention has been paid to reducing optical side lobes, maximising light throughput, and achieving efficient wideband RF matching of a device for use in hyperspectral imaging systems.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660B (2008) https://doi.org/10.1117/12.773338
Tomographic spectral imagers owe much of their development to sophisticated numerical processing. In order to reduce
system size and complexity, mechanical detail has often been replaced with ever-increasing algorithm sophistication. In
developing the Field Multiplexed Dispersive Imaging Spectrometer (FMDIS), the processing has been broken down into two steps: one that deconvolves the solution spatially and a second that deconvolves the solution spectrally. The first step is characterized by large inversion matrices and few iterations (typically fewer than 10), while the second requires small matrices and a large number of iterations (hundreds to millions). Iterative processing has been employed due to the
physical nature of the data. Inversions must be robust to moderate amounts of noise and calibration uncertainty.
In this paper we present a deterministic pseudo-inversion technique to replace the second iterative processing step in
FMDIS datacube generation. It is shown to be within required limits of accuracy and can speed up processing by an
order of magnitude or more. While not intended to replace the iterative solution technique, it provides a fast means of
processing data when speed is more important than accuracy. Implementation of the solution algorithm is discussed relative to the overall solvability of the underdetermined system of equations. Several results are shown from a visible
instrument with 33 colors which contrast the two techniques.
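The contrast between the two solution styles can be illustrated on a generic underdetermined system y = A x (hypothetical sizes, not the FMDIS operator):

import numpy as np

A = np.random.randn(30, 33)             # hypothetical system matrix (33 colors)
y = A @ np.random.rand(33)              # simulated measurements

# Deterministic pseudo-inversion: one shot, minimum-norm solution.
x_pinv = np.linalg.pinv(A, rcond=1e-3) @ y

# Iterative reference (Landweber iteration): robust, but many iterations.
x_iter = np.zeros(33)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    x_iter += step * A.T @ (y - A @ x_iter)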
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660C (2008) https://doi.org/10.1117/12.777153
We have proposed a new method for illumination suppression in hyperspectral image data. This involves transforming
the data into a hyperspherical coordinate system, segmenting the data cloud into a large number of classes according to
the radius dimension, and then demeaning each class, thereby eliminating the distortion introduced by differential
absorption in shaded regions. This method was evaluated against two other illumination-suppression methods using two
metrics: visual assessment and spectral similarity of similar materials in shaded and fully illuminated regions. The
proposed method shows markedly superior performance by each of these metrics.
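A simplified sketch of the segment-and-demean step, binning pixels by spectral radius (the full method works in hyperspherical coordinates; inputs are hypothetical):

import numpy as np

def demean_by_radius(pixels, n_classes=50):
    # pixels: N x bands; the radius acts as the brightness/illumination coordinate
    radius = np.linalg.norm(pixels, axis=1)
    edges = np.quantile(radius, np.linspace(0, 1, n_classes + 1)[1:-1])
    labels = np.digitize(radius, edges)
    out = pixels.astype(float).copy()
    for c in np.unique(labels):
        out[labels == c] -= out[labels == c].mean(axis=0)   # demean each radius class
    return out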
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660E (2008) https://doi.org/10.1117/12.777737
A 3-D spectral/spatial DFT represents an image region using a dense sampling in the frequency domain. An alternative
approach is to represent a 3-D DFT by its projection onto a set of functions that capture specific orientation, scale, and
spectral attributes of the image data. For this purpose, we have developed a new model for spectral/spatial information
in images based on three-dimensional Gabor filters. This model achieves optimal joint localization in space and
frequency and provides an efficient means of sampling a three-dimensional frequency domain representation of HSI
data. Since 3-D Gabor filters allow for a large number of spectral/spatial quantities to be used to represent an image
region, the performance and efficiency of algorithms that use this representation can be improved if methods are
available to reduce the dimensionality of the model. Thus, we have derived methods for selecting filters that emphasize
the most significant spectral/spatial differences between the various classes in a scene. We demonstrate the utility of the
new model for region classification in AVIRIS data.
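A 3-D Gabor filter of the kind described can be written down directly; the sketch below uses an isotropic Gaussian envelope for simplicity, whereas the paper's filters are tuned per orientation, scale, and spectral attribute:

import numpy as np

def gabor_3d(size, freq, theta, phi):
    # Gaussian envelope times a complex sinusoid whose carrier (freq cycles/sample)
    # points along the direction given by angles (theta, phi).
    ax = np.arange(size) - size // 2
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2 * (size / 4) ** 2))
    u = (np.sin(phi) * np.cos(theta), np.sin(phi) * np.sin(theta), np.cos(phi))
    carrier = np.exp(2j * np.pi * freq * (u[0] * x + u[1] * y + u[2] * z))
    return envelope * carrier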
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660F (2008) https://doi.org/10.1117/12.779142
In this work, we study unsupervised classification algorithms for hyperspectral images based on band-by-band scalar
histograms and vector-valued generalized histograms, obtained by vector quantization. The corresponding histograms
are compared by dissimilarity metrics such as the chi-square, Kolmogorov-Smirnov, and earth mover's distances. The
histograms are constructed from homogeneous regions in the images identified by a pre-segmentation algorithm and
distance metrics between pixels. We compare the traditional spectral-only segmentation algorithms C-means and
ISODATA, versus spectral-spatial segmentation algorithms such as unsupervised ECHO and a novel segmentation
algorithm based on scale-space concepts. We also evaluate the use of complex features consisting of the real spectrum
and its derivative as the imaginary part. The comparison between the different segmentation algorithms and distance
metrics is based on their unsupervised classification accuracy using three real hyperspectral images with known
ground truth.
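For concreteness, two of the named dissimilarity measures on a pair of band histograms might look like this (a sketch using SciPy's 1-D earth mover's distance, not the authors' code):

import numpy as np
from scipy.stats import wasserstein_distance

def histogram_distances(values_a, values_b, bins=64):
    lo = min(values_a.min(), values_b.min())
    hi = max(values_a.max(), values_b.max())
    h1, edges = np.histogram(values_a, bins=bins, range=(lo, hi), density=True)
    h2, _ = np.histogram(values_b, bins=bins, range=(lo, hi), density=True)
    chi2 = 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))   # chi-square distance
    centers = 0.5 * (edges[:-1] + edges[1:])
    emd = wasserstein_distance(centers, centers, h1, h2)      # earth mover's distance
    return chi2, emd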
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660G (2008) https://doi.org/10.1117/12.778222
In this paper, an algorithm that extracts regional texture information by computing spectral difference histograms over
window extents in hyperspectral images is presented. The spectral angle distance is used as the spectral metric and
different window sizes are explored for computing the histogram. The histograms are used in a semi-supervised learning
framework that uses both labeled and unlabeled samples for training the support vector machine classifier, which is then
tested with unlabeled samples. Results are presented for real and synthetic hyperspectral images. The method performs well with high-spatial-resolution images and remains robust under different noise levels.
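The underlying spectral metric, and a windowed histogram built from it, can be sketched as follows (hypothetical window input, not the paper's implementation):

import numpy as np

def spectral_angle(a, b):
    # Angle between two spectra; invariant to illumination scaling.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def window_angle_histogram(window, bins=16):
    # window: h x w x bands patch; histogram of angles to the window's mean spectrum.
    spectra = window.reshape(-1, window.shape[-1])
    mean = spectra.mean(axis=0)
    angles = [spectral_angle(s, mean) for s in spectra]
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi / 2), density=True)
    return hist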
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660H (2008) https://doi.org/10.1117/12.770298
A processing algorithm to classify hyperspectral images from an imaging spectroscopic sensor is investigated in this
paper. In this research two approaches are followed. First, the feasibility of an analysis scheme consisting of spectral
feature extraction and classification is demonstrated. Principal component analysis (PCA) is used to perform data
dimensionality reduction while the spectral interpretation algorithm for classification is the K nearest neighbour (KNN).
The performance of the KNN method, in terms of accuracy and classification time, is determined as a function of the
compression rate achieved in the PCA pre-processing stage. Hyperspectral sensors have many potential applications for foreign object detection in industrial scenarios, for example in raw material quality control. The KNN classifier is particularly attractive in this case: since no training is required, new products can be added at any time. To reduce the high computational load of the KNN classifier, the second approach implements a kd-tree, a generalization of the binary tree employed in sorting and searching. Finally, the performance of both strategies, with and without the kd-tree, has been successfully tested and their properties compared in raw material quality control for the tobacco industry.
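The two-stage scheme maps naturally onto standard tooling; a sketch using scikit-learn, with hypothetical training spectra X_train and labels y_train (the paper's own implementation may differ):

from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def pca_knn_classify(X_train, y_train, X_test, n_components=10, k=5):
    pca = PCA(n_components=n_components).fit(X_train)   # spectral feature extraction
    knn = KNeighborsClassifier(n_neighbors=k, algorithm="kd_tree")  # kd-tree cuts query time
    knn.fit(pca.transform(X_train), y_train)
    return knn.predict(pca.transform(X_test))

Adding a new product class only requires appending its labeled spectra; no retraining beyond rebuilding the tree is needed.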
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660J (2008) https://doi.org/10.1117/12.771633
Thermal Infrared Multispectral Scanner (TIMS) data are processed to yield surface temperatures over the lava tube
system of Kilauea Volcano, Hawaii. TIMS is a 6-band airborne longwave infrared (8 μm to 12 μm) multispectral
imaging system built and operated by the National Aeronautics and Space Administration (NASA). The data analyzed
were collected in 1988 and are part of the Compiled Volcanology Data Set collection of Glaze et al. (1992). The
primary goal of the analyses is to utilize the TIMS-derived surface temperatures to estimate lava tube roof thickness
(LTRT). Few studies have utilized remotely sensed imaging spectrometry data to estimate LTRT, a component important to understanding (and modeling) the thermal field of lava tube systems. Lava tube systems, in turn, are important to the emplacement of areally extensive lava flows on Earth and on other planets. An in-scene
atmospheric compensation method was applied to the data followed by a normalized emissivity method
temperature/emissivity separation algorithm to obtain surface temperature. Surface temperature measurements are then
compared to modeled temperatures in order to estimate lava tube roof thickness. Modeled temperatures are calculated
via finite element analysis. Boundary conditions of the finite element models are derived from analyses of the TIMS
data, independent knowledge of lava liquidus and solidus temperatures, and crustal heat-flow geophysical data. A TIMS
plus modeling-derived LTRT agrees with estimates based on field observations. The TIMS data are described as are all
processing and analysis methods. The thermal modeling is also described as is an effort to build a lookup table for
LTRTs to be used in conjunction with surface temperature measurements. Archived data such as those exploited here
provide historical context, particularly for terranes that may undergo relatively rapid change, such as the lava flow fields of Kilauea Volcano.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660K (2008) https://doi.org/10.1117/12.782220
Since Magnetic Resonance (MR) images can be considered as multispectral images where each spectral band image is
acquired by a particular pulse sequence, this paper investigates an application of a technique that is widely used in
multispectral image processing, referred to as Linear Spectral Unmixing (LSU), in MR image analysis where two types
of LSU, unconstrained LSU and constrained LSU are considered. Due to a limited number of MR images acquired by
MR sequences, the ability of the LSU cannot be fully explored and utilized. In order to mitigate this dilemma, a band
expansion process is introduced to expand an original set of MR images to an augmented set of multispectral images by
including additional spectral band images that can be generated from the original MR images using a set of nonlinear
functions. In order to demonstrate the utility of the LSU in MR image analysis, two sets of MR images, synthetic MR
images available online and real MR images, are used for experiments. Experimental results show that the LSU can
be a very effective technique in quantifying MR substances to calculate their partial volumes for further MR image
analysis.
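A minimal sketch of the two LSU variants for a single spectrum y against an endmember matrix E (bands x m); the paper's constrained LSU may additionally enforce sum-to-one abundances:

import numpy as np
from scipy.optimize import nnls

def unmix(E, y):
    a_unconstrained, *_ = np.linalg.lstsq(E, y, rcond=None)   # unconstrained LSU
    a_nonnegative, _ = nnls(E, y)                             # abundances >= 0
    return a_unconstrained, a_nonnegative

Band expansion would append nonlinear combinations of the original images (e.g., products and squares) as extra rows of E and extra entries of y before unmixing.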
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660L (2008) https://doi.org/10.1117/12.777642
Electromagnetic signatures of terrain exhibit significant spatial heterogeneity on a range of scales as well as considerable
temporal variability. A statistical characterization of the spatial heterogeneity and spatial scaling algorithms of terrain
electromagnetic signatures are required to extrapolate measurements to larger scales. Basic terrain elements including
bare soil, grass, deciduous, and coniferous trees were studied in a quasi-laboratory setting using instrumented test sites in
Hanover, NH and Yuma, AZ. Observations were made using a visible and near infrared spectroradiometer (350 - 2500
nm) and a hyperspectral camera (400 - 1100 nm). Results are reported illustrating: i) several different scenes; ii) a terrain
scene time series sampled over an annual cycle; and iii) the detection of artifacts in scenes. A principal component
analysis indicated that the first three principal components typically explained between 90 and 99% of the variance of
the 30 to 40-channel hyperspectral images. Higher order principal components of hyperspectral images are useful for
detecting artifacts in scenes.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660O (2008) https://doi.org/10.1117/12.786424
Due to the complex nature of hyperspectral imaging, the noise characteristics differ across the bands of a hyperspectral image. Without proper pre-processing, this noise leads to false target detection results in applications. Furthermore, because of low signal-to-noise ratio, some bands, such as bands affected by water vapor in the infrared wavelengths, cannot be utilized in the target detection task. To improve the performance of hyperspectral applications, many noise removal technologies have been developed. Most traditional denoising approaches either take only a single band image into account at a time or consider only the spectral shape at one location at a time. These approaches cannot deal effectively with the noise common in hyperspectral images, which changes from band to band and from one spatial location to another. Moreover, most general smoothing filters without local adaptation lose spatial detail in the band images. We propose a denoising approach based on bilateral filtering, which takes both spectral and spatial information into account. By adapting locally to the distribution of adjacent spectra, this approach removes noise effectively while preserving spatial detail in the band images. We also propose a parameter estimation method for hyperspectral image bilateral filtering. Experimental results show that this approach delivers better performance under various noise conditions than other approaches; the low signal-to-noise ratio in some band images is significantly improved.
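A sketch of the core bilateral weighting for one pixel neighborhood (hypothetical window input; the paper's parameter estimation step is not shown):

import numpy as np

def bilateral_pixel(window, sigma_s, sigma_r):
    # window: h x w x bands neighborhood centered on the pixel being filtered
    h, w, _ = window.shape
    ci, cj = h // 2, w // 2
    ii, jj = np.mgrid[0:h, 0:w]
    w_spatial = np.exp(-((ii - ci) ** 2 + (jj - cj) ** 2) / (2 * sigma_s ** 2))
    spectral_dist = np.linalg.norm(window - window[ci, cj], axis=2)
    w_range = np.exp(-spectral_dist ** 2 / (2 * sigma_r ** 2))   # spectral similarity
    weights = (w_spatial * w_range)[..., None]
    return (weights * window).sum(axis=(0, 1)) / weights.sum()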
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660Q (2008) https://doi.org/10.1117/12.767554
An expert system, the "Spectral Expert" has been implemented for identification of materials based on extraction of key
spectral features from visible/near infrared (VNIR) and shortwave infrared (SWIR) reflectance spectra and hyperspectral
imagery (HSI). Spectral absorption features are automatically extracted from a spectral library and each is analyzed to
determine diagnostic features and characteristics - the "rules". An expert optionally analyzes spectral variability and
separability to create refined rules for identification of specific materials. The rules can be used by a non-expert to
identify materials by matching individual feature parameters or with a rule-controlled RMS approach. The result for a
single spectrum is a score between 0.0 (no-match) and 1.0 (perfect-match) for each specific material in the spectral
library, or for hyperspectral data, a classified image showing the predominant material on a per-pixel basis and a score
image for each material. A feature-based-mixture-index (FBMI) score or image is also created, which alerts the analyst
to possible problem spectra and mixing. This can be used to determine iterative expert system processing requirements
for determination of secondary materials and assemblages and to point the analyst towards supplementary analyses using
other non-feature-based methods. A geologic example demonstrates the simplest-case Spectral Expert analysis: application
to minerals with a laboratory spectral library and well-defined spectral features. An example for an urban site
demonstrates application and results where no previous spectral library exists. The approach, methods, and algorithms
have been implemented in a software plug-in to the popular "ENVI" image processing and analysis software.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660R (2008) https://doi.org/10.1117/12.778072
This paper extends the field of hyperspectral anomaly and target detection by introducing a new approach for
preprocessing hyperspectral image data. In this study, we investigate the Median-Spectral-Spatial Transformation as an
approach to draw out the sub-pixel difference characterizations of anomalous spectra. By implementing this
preprocessing step, we have realized a significant improvement in false alarm reduction with increased probability of
detection for sub-pixel targets. Sub-pixel anomalies contain target information consisting of only a small fraction of an
image pixel's surface reflected material content. To demonstrate the efficacy of our approach, we compare results from
RX anomaly detection across multiple HSI images.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660U (2008) https://doi.org/10.1117/12.777016
For multi-sensor registration, previous techniques typically use mutual information (MI) rather than the sum of squared differences (SSD) as the similarity measure. However, the optimization of MI is much less straightforward than is
the case for SSD-based algorithms. A new technique for image registration has recently been proposed that uses an
information theoretic measure called the Cross-Cumulative Residual Entropy (CCRE). In this paper we show that using
CCRE for multi-sensor registration of remote sensing imagery provides an optimization strategy that converges to a
global maximum in significantly fewer iterations than existing techniques and is much less sensitive to the initial
geometric disparity between the two images to be registered.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660W (2008) https://doi.org/10.1117/12.778431
The variability of panchromatic and multispectral images, vector data (maps) and DEM models is growing. Accordingly,
the requests and challenges are growing to correlate, match, co-register, and fuse them. Data to be integrated may have
inaccurate and contradictory geo-references or not have them at all. Alignment of vector (feature) and raster (image)
geospatial data is a difficult and time-consuming process when transformational relationships between the two are
nonlinear. Robust solutions and commercial software products that address these challenges do not yet exist. In the
proposed approach for Vector-to-Raster Registration (VRR) the candidate features are auto-extracted from imagery,
vectorized, and compared against existing vector layer(s) to be registered. Given that available automated feature
extraction (AFE) methods quite often produce false features and miss some features, we use additional information to
improve AFE. This information is the existing vector data, but the vector data are not perfect either. To deal with this problem, the VRR process uses an algebraic structural algorithm (ASA), a similarity transformation of local features algorithm (STLF), and a multi-loop process that repeats the AFE-VRR cycle several times. The experiments show that the approach was successful in registering road vectors to commercial panchromatic and multispectral imagery.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660X (2008) https://doi.org/10.1117/12.777215
Arguably the single greatest confound for change detection algorithms is the misregistration of the two images
in which changes are being sought. On the other hand, since the effects of misregistration are exhibited over
the entire image, there is reason to hope that algorithms which are designed to deal with pervasive effects
(such as illumination differences in the scene, or calibration drifts in the sensor) will be less sensitive to the
inevitable misregistration errors that occur when comparing two images. This work will describe some controlled
experiments in which change detection performance is evaluated as a function of how misregistered the images
are. The performance is observed to degrade quite rapidly with the amount of misregistration (so that any
practical system for automated change detection will require accurate image registration), but algorithms that
are more adaptive to pervasive differences are less sensitive to the effect of misregistration.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660Y (2008) https://doi.org/10.1117/12.775435
This paper covers the impact of registration errors between two images on chronochrome and covariance equalization
predictors used for hyperspectral change detection. Hyperspectral change detection involves the comparison of data
collected of the same spatial scene on two different occasions to try to identify anomalous man-made changes. Typical
change detection techniques employ a linear prediction method followed by a subtraction step to identify changes. These
linear predictors rely upon statistics from both scenes to determine a respective gain and offset. Chronochrome and
covariance equalization remain two common predictors used in the change detection process. Chronochrome relies upon
a cross-covariance matrix for prediction whereas covariance equalization relies solely upon the individual covariance
matrices. In theory, chronochrome seems more susceptible to image misregistration issues as joint statistic estimates may
suffer with registration error present. This paper examines the validity of this assumption. Using a push-broom style imaging spectrometer mounted on a pan-and-tilt unit, visible to near-infrared data of scenes suitable for change detection analysis are gathered. The pan-and-tilt system ensures that initial misregistration of the data is minimal. Using simple
translations of the scenes, misregistration impacts upon prediction error and change detection are examined for varying
degrees of shift.
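For reference, the two predictors' gain matrices can be sketched as follows for zero-meaned, co-registered pixel sets X and Y (each N x bands); note chronochrome's dependence on the joint (cross-covariance) statistics:

import numpy as np
from scipy.linalg import sqrtm

def change_predictors(X, Y):
    Cxx = np.cov(X, rowvar=False)
    Cyy = np.cov(Y, rowvar=False)
    Cyx = (Y.T @ X) / (len(X) - 1)      # cross-covariance: needs pixel-to-pixel registration
    G_cc = Cyx @ np.linalg.inv(Cxx)     # chronochrome gain
    G_ce = np.real(sqrtm(Cyy) @ np.linalg.inv(sqrtm(Cxx)))   # covariance equalization gain
    return G_cc, G_ce                   # residual Y - X @ G.T highlights changes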
Atmospheric Instrumentation, Measurements, and Forecasting
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69660Z (2008) https://doi.org/10.1117/12.777611
High spatial resolution sounding observations will improve initialization of and assimilation into next-generation forecast models, as well as validation of the next generation of climate models. One such advanced sounder concept for low earth orbit is the Advanced Remote-sensing Imaging Emission Spectrometer (ARIES), which proposes to provide high-spatial-resolution hyperspectral observations in the mid- to longwave infrared. This paper explores the effects of spatial
resolution on the errors expected from the combined use of models and observations for representing scene information.
We calculate the frequency response of the instrument and model and determine the error at any given spatial frequency.
The results show that it is vital to have observations match the spatial resolution of models to minimize the uncertainty
in the representation of the scene contents.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696610 (2008) https://doi.org/10.1117/12.774759
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007
generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many
significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Two very
significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer
Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on
shortwave sounding channels; and 2) the development of methodology to obtain very accurate case-by-case product error
estimates which are in turn used for quality control. These theoretical improvements taken together enabled a new
methodology to be developed which further improves soundings in partially cloudy conditions. In this methodology,
longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing
purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding
purposes. This allows for accurate temperature soundings under more difficult cloud conditions. This paper further
improves on the methodology used in Version 5 to derive surface skin temperature and surface spectral emissivity from
AIRS/AMSU observations. Now, following the approach used to improve tropospheric temperature profiles, surface
skin temperature is also derived using only shortwave window channels. This produces improved surface parameters,
both day and night, compared to what was obtained in Version 5. These in turn result in improved boundary layer
temperatures and retrieved total O3 burden.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696612 (2008) https://doi.org/10.1117/12.775446
The AIRS instrument is currently the best space-based tool to simultaneously monitor the vertical distribution of
key climatically important atmospheric parameters as well as surface properties, and has provided high
quality data for more than 5 years. AIRS analysis results produced at the GODDARD/DAAC, based on
Versions 4 & 5 of the AIRS retrieval algorithm, are currently available for public use. Here, first we present
an assessment of interrelationships of anomalies (proxies of climate variability based on 5 full years, since Sept.
2002) of various climate parameters at different spatial scales. We also present AIRS-retrievals-based global,
regional and 1x1 degree grid-scale "trend"-analyses of important atmospheric parameters for this 5-year period.
Note that here "trend" simply means the linear fit to the anomaly (relative the mean seasonal cycle) time series
of various parameters at the above-mentioned spatial scales, and we present these to illustrate the usefulness of
continuing AIRS-based climate observations. Preliminary validation efforts, in terms of intercomparisons of
interannual variabilities with other available satellite data analysis results, will also be addressed. For example,
we show that the outgoing longwave radiation (OLR) interannual spatial variabilities from the available state-of-the-art CERES measurements and from the AIRS computations are in remarkably good agreement. Version
6 of the AIRS retrieval scheme (currently under development) promises to further improve bias agreements for
the absolute values by implementing a more accurate radiative transfer model for the OLR computations and by
improving surface emissivity retrievals.
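The "trend" as defined here reduces to a short computation; a sketch for a monthly time series (hypothetical input, not the AIRS processing code):

import numpy as np

def anomaly_trend(monthly):
    # monthly: 1-D array of values at monthly cadence
    monthly = np.asarray(monthly, dtype=float)
    months = np.arange(len(monthly))
    climatology = np.array([monthly[m::12].mean() for m in range(12)])
    anomaly = monthly - climatology[months % 12]   # remove the mean seasonal cycle
    slope, _ = np.polyfit(months, anomaly, 1)      # linear fit = the "trend"
    return anomaly, slope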
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696613 (2008) https://doi.org/10.1117/12.777920
We apply the method of Vanishing Partial Derivatives (VPD) to AIRS spectra to retrieve daily the global distribution of
CO2 at a nadir geospatial resolution of 90 km x 90 km without requiring a first-guess input beyond the global average.
Our retrievals utilize the 15 μm band radiances, a complex spectral region. This method may be of value in other
applications, in which spectral signatures of multiple species are not well isolated spectrally from one another.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696614 (2008) https://doi.org/10.1117/12.778733
Recent work has demonstrated the feasibility of neural network estimation techniques for atmospheric profiling
in partially cloudy atmospheres using combined microwave (MW) and hyperspectral infrared (IR) sounding data.
In this paper, the retrieval performance in problem areas (over land, near the poles, elevated terrain, etc.) is
examined. Retrieval performance has been improved by stratifying the neural network training data into distinct
groups based on geographical (latitude, for example), geophysical (atmospheric pressure, for example), and sensor
geometrical (scan angle, for example) considerations. The spectral information content of cloud signatures in
Infrared Atmospheric Sounding Interferometer (IASI) data is also explored. A Principal Components Analysis
is presented that indicates that most variability due to clouds is contained in the first two eigenvectors.
A novel statistical method for the retrieval of atmospheric temperature and moisture (relative humidity)
profiles has been developed and evaluated with sounding data from the Atmospheric InfraRed Sounder (AIRS)
and the Advanced Microwave Sounding Unit (AMSU). The present work focuses on the cloud impact on the AIRS
radiances and explores the use of stochastic cloud clearing mechanisms together with neural network estimation.
A stand-alone statistical algorithm will be presented that operates directly on cloud-impacted AIRS/AMSU data,
with no need for a physical cloud clearing process. The algorithm is implemented in three stages. First, the
infrared radiance perturbations due to clouds are estimated and corrected by combined processing of the infrared
and microwave data using a Stochastic Cloud Clearing (SCC) approach. The cloud clearing of the infrared
radiances was performed using principal components analysis of infrared brightness temperature contrasts in
adjacent fields of view and microwave-derived estimates of the infrared clear-column radiances to estimate and
correct the radiance contamination introduced by clouds. Second, a Projected Principal Components (PPC)
transform is used to reduce the dimensionality of and optimally extract geophysical profile information from the
cloud-cleared infrared radiance data. Third, an artificial feedforward neural network (NN) is used to estimate
the desired geophysical parameters from the projected principal components.
The performance of this method was evaluated using global (ascending and descending) EOS-Aqua orbits co-located
with ECMWF fields for a variety of days throughout 2003, 2004, 2005, and 2006. Over 1,000,000 fields of
regard (3x3 arrays of footprints) over ocean and land were used in the study. The method requires significantly less
computation than traditional variational retrieval methods, while achieving comparable performance. Retrieval
accuracy will be evaluated using ECMWF atmospheric fields as ground truth. The accuracy of the neural network
retrieval method will be compared to the accuracy of the AIRS Level 2 (Version 5) retrieval method.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696615 (2008) https://doi.org/10.1117/12.776983
More than five years of CO retrievals from the Atmospheric InfraRed Sounder (AIRS) onboard NASA's
Aqua satellite reveal variations in tropospheric CO on timescales from twelve hours to five years and on
spatial scales from local to global. The shorter timescales are invaluable to monitor daily variations in
CO emissions, to enable three-dimensional tracking of atmospheric motions, and to enhance insights into
atmospheric mixing. Previous studies have utilized AIRS CO retrievals over the course of days to weeks
to track plumes from large forest fires. On the local scale, we will present AIRS observations of pollution
from several northern hemisphere megacities. On the regional scale, we will present AIRS observations
of the Mexico City pollution plume. We will illustrate global scale AIRS CO observations of interannual
variations linked to the influence of large-scale atmospheric perturbations from the El Niño-Southern Oscillation (ENSO). In particular, we observe a quasi-biennial variation in CO emissions from Indonesia
with varying magnitudes in peak emission occurring in 2002, 2004, and 2006. Examining satellite
rainfall measurements over Indonesia, we find the enhanced CO emission correlates with occasions of
less rainfall during the month of October. Continuing this satellite record of tropospheric CO with
measurements from the European IASI instrument will permit construction of a long time-series useful
for further investigations of climatological variations in CO emissions and their impact on the health of
the atmosphere.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696616 (2008) https://doi.org/10.1117/12.778050
The Spaceborne Infrared Sounder for Geosynchronous Earth Orbit (SIRAS-G) was developed by Ball Aerospace &
Technologies Corp (BATC) under NASA's 2002 Instrument Incubator Program. SIRAS-G was a technology
development program focused on next-generation IR imaging spectrometers for sounding of the atmosphere. SIRAS-G
demonstrated that the dispersive grating spectrometer is a suitable instrument architecture for this application. In
addition to providing atmospheric temperature and water vapor profiles, SIRAS-G can provide trace gas concentrations, land and ocean surface temperatures, and the IR mineral dust aerosol signature from satellite. The 3-year
SIRAS-G IIP development effort included the successful cryogenic testing of the SIRAS-G laboratory demonstration
spectrometer operating in the 2083 to 2994 cm-1 frequency range. The performance of the demonstration instrument has
been quantified including measurement of keystone distortion, spectral smile, MTF, and the spectral response function
(SRF). Development efforts associated with this advanced infrared spectrometer technology provide the basis for
instrumentation to support future Earth science missions.
G. P. Anderson, C. B. Schaaf, K. Loukachine, R. S. Stone, E. Andrews, E. P. Shettle, E. G. Dutton, M. O. Roman III, A. Stohl, et al.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696617 (2008) https://doi.org/10.1117/12.782364
Aerosols in the atmosphere affect the Earth's radiation budget in complicated ways, depending on their physical and
optical characteristics and how they interact with solar and terrestrial radiation or affect cloud nucleation. While the
Arctic atmosphere is generally very clean, spring incursions of haze and dust from Eurasia are known to perturb the
surface radiation balance. Recent analyses (based on "Radiative impact of boreal smoke in the Arctic: Observed and modeled", Stone et al., referred to throughout this manuscript as Stone2008) also reveal that smoke plumes from
boreal forest fires can have significant effects during summer. Once aloft, upper-level winds can transport this
smoke long distances. In late June and July 2004 fires raged across eastern Alaska and the Yukon and the resulting
smoke was advected across the Arctic, reaching as far as Europe. The long-range transport was tracked using a
dispersion model combined with various in situ measurements along its path, all showing enhancements in aerosol
opacity. The measurements made at Barrow, Alaska, documented just a portion of the transport and the radiative
impact of smoke. The comprehensive measuring systems in place near Barrow (NOAA/GMD and DoE/ARM)
presented a unique opportunity to characterize the smoke aerosol both physically and optically, and therefore permit
quantification of the upwelling radiance (outgoing shortwave radiance - OSR, 0.28 to 4.0 μm) as observed by
NASA satellites: Clouds and the Earth's Radiant Energy System (CERES), coupled with data from the Moderate Resolution Imaging Spectroradiometer (MODIS).
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696618 (2008) https://doi.org/10.1117/12.774694
The atmosphere is a critical factor in remote sensing. Radiance from a target must pass through the air column
to reach the sensor. The atmosphere alters the radiance reaching the sensor by attenuating the radiance from
the target via scattering and absorption and by introducing an upwelling radiance. In the thermal infrared,
these effects will introduce errors in the derived apparent temperature of the target if not properly accounted
for. The temperature error is defined as the difference between the target leaving apparent temperature and
observed apparent temperature. The effects of the atmosphere must be understood in order to develop methods
to compensate for this error. Different atmospheric components will affect the radiation passing through it in
different ways. Certain components may be more important than others depending on the remote sensing application.
The authors are interested in determining the actual temperature of the superstructure that composes
a mechanical draft cooling tower (MDCT), hence water vapor is the primary constituent of concern. The tower
generates a localized water vapor plume located between the target and sensor. The MODTRAN radiative
transfer code is used to model the effects of a localized exhaust plume from a MDCT in the longwave infrared.
The air temperature and dew point depression of the plume and the thickness of the plume are varied to observe
the effect on the apparent temperature error. In addition, the general atmospheric conditions are varied between
two standard MODTRAN atmospheres to study any effect that ambient conditions have on the apparent temperature
error. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) modeling tool is used to
simulate the radiance reaching a thermal sensor from a target after passing through the water vapor plume. The
DIRSIG results are validated against the MODTRAN results. This study shows that temperature errors of as
much as one Kelvin can be attributed to the presence of a localized water vapor plume.
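The apparent temperatures involved follow from inverting the Planck function at the sensor wavelength; a sketch using standard physical constants (not the MODTRAN/DIRSIG code):

import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def brightness_temperature(L, wavelength):
    # L: spectral radiance in W m^-2 sr^-1 m^-1; wavelength in m
    return (H * C / (wavelength * K)) / np.log(1.0 + 2.0 * H * C**2 / (wavelength**5 * L))

# Temperature error = target-leaving apparent temperature minus observed apparent temperature:
# dT = brightness_temperature(L_target, wl) - brightness_temperature(L_observed, wl)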
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 696619 (2008) https://doi.org/10.1117/12.777572
The Air Force Institute of Technology's Center for Directed Energy (AFIT/CDE) developed the High Energy Laser End-to-End Operational Simulation (HELEEOS) model in part to quantify the performance variance in laser propagation
created by the natural environment during dynamic engagements. As such, HELEEOS includes a fast-calculating, first
principles, worldwide surface-to-100 km, atmospheric propagation and characterization package. This package enables
the creation of profiles of temperature, pressure, water vapor content, optical turbulence, atmospheric particulates and
hydrometeors as they relate to line-by-line layer transmission, path and background radiance at wavelengths from the
ultraviolet to radio frequencies. Physics-based cloud and precipitation characterizations are coupled with a probability of
cloud free line-of-sight algorithm for all possible look angles. HELEEOS was developed under the sponsorship of the
High Energy Laser Joint Technology Office.
This paper presents an example of a unique, high-fidelity simulation of a bi-static, time-varying, five-band multispectral
remote observation of laser energy delivered on a test object. The multispectral example emphasizes
atmospheric effects using HELEEOS, the interaction of the laser on target and the observed reflectance and subsequent
hot spot generated. A model of a sensor suite located on the surface is included to collect the diffuse reflected in-band
laser radiation and the emitted radiance of the hot spot in four separate and spatially offset MWIR and LWIR bands.
Particular care is taken in modeling the bidirectional reflectivity distribution function (BRDF) of the laser/target
interaction to account for both the coupling of energy into the target body and the changes in reflectance as a function of
temperature. The architecture supports any platform-target-observer geometry, geographic location, season, and time of
day; and it provides for correct contributions of the sky-earth background. The simulation accurately models the thermal
response, kinetics, turbulence, base disturbance, diffraction, and signal-to-noise ratios.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661A (2008) https://doi.org/10.1117/12.777060
The degrading effect of the atmosphere on hyperspectral imagery has long been recognised as a major issue in applying
techniques such as spectrally-matched filters to hyperspectral data. There are a number of algorithms available in the
literature for the correction of hyperspectral data. However, most of these approaches rely either on identifying objects
within a scene (e.g. water, whose spectral characteristics are known) or on measuring the relative effects of certain
absorption features and using these to construct a model of the atmosphere which can then be used to correct the image. In
the work presented here, we propose an alternative approach which makes use of the fact that the effective number of
degrees of freedom in the atmosphere (transmission, path radiance and downwelling radiance with respect to
wavelength) is often substantially less than the number of degrees of freedom in the spectra of interest. This allows the
definition of a fixed set of invariant features (which may be linear or non-linear) from which reflectance spectra can be
approximately reconstructed irrespective of the particular atmosphere. The technique is demonstrated on a range of data
across the visible to near infra-red, mid-wave and long-wave infra-red regions, where its performance is quantified.
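The underlying argument can be made concrete with a toy experiment: if the atmosphere contributes only a few degrees of freedom (below, a per-band gain and offset drawn from a two-parameter family), then a single fixed linear map learned across many atmospheres approximately recovers reflectance from at-sensor radiance. This sketch is ours, purely to illustrate the degrees-of-freedom argument; it is not the authors' feature construction, and every model choice in it is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_spectra, n_atmos = 50, 200, 30

# Toy library of smooth reflectance spectra spanning a 5-dimensional space
basis = np.cumsum(rng.normal(size=(5, n_bands)), axis=1)
refl = rng.uniform(0, 1, (n_spectra, 5)) @ basis
refl = (refl - refl.min()) / np.ptp(refl)

def random_atmosphere():
    """Per-band gain (transmission) and offset (path radiance); 2 degrees of freedom."""
    tau = (0.5 + 0.4 * rng.uniform()) * np.linspace(1.0, 0.8, n_bands)
    path = 0.1 * rng.uniform() * np.linspace(0.5, 1.0, n_bands)
    return tau, path

# Stack radiances over many atmospheres and fit one atmosphere-independent
# linear map from radiance back to reflectance (the "invariant features")
X, Y = [], []
for _ in range(n_atmos):
    tau, path = random_atmosphere()
    X.append(refl * tau + path)
    Y.append(refl)
X, Y = np.vstack(X), np.vstack(Y)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(f"mean reconstruction error: {np.abs(X @ W - Y).mean():.3f}")
```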
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661B (2008) https://doi.org/10.1117/12.782113
We continue previous work that generalizes the traditional linear mixing model from a combination of endmember
vectors to a combination of multi-dimensional affine endmember subspaces. This generalization allows the model to
handle the natural variation that is present in real-world hyperspectral imagery. Once the endmember subspaces have
been defined, the scene may be demixed as usual, allowing for existing post-processing algorithms (classification, etc.)
to proceed as-is. In addition, the endmember subspace model naturally incorporates the use of physics-based modeling
approaches ('target spaces') in order to identify sub-pixel targets. In this paper, we present a modification to our
previous model that uses affine subspaces (as opposed to true linear subspaces) and a new demixing algorithm. We also
include experimental results on both synthetic and real-world data, and include a discussion on how well the model fits
the real-world data sets.
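As a minimal sketch of demixing against affine endmember subspaces, each class k can be modeled as mu_k + B_k c_k, and a pixel solved jointly for the abundances a_k and the products a_k c_k, which is linear in the stacked unknowns. This unconstrained least-squares construction is our illustration of the model, not necessarily the paper's new demixing algorithm.

```python
import numpy as np

def affine_subspace_demix(pixel, means, bases):
    """Unconstrained demix of one pixel against affine endmember subspaces.

    Model: pixel ~ sum_k a_k * (mu_k + B_k c_k). Stacking [mu_k, B_k] for
    all classes makes the problem linear in (a_k, a_k * c_k); the abundance
    of class k is the coefficient on its affine offset mu_k.
    """
    blocks, slices, i = [], [], 0
    for mu, B in zip(means, bases):
        block = np.column_stack([mu, B])
        blocks.append(block)
        slices.append(slice(i, i + block.shape[1]))
        i += block.shape[1]
    D = np.hstack(blocks)
    x, *_ = np.linalg.lstsq(D, pixel, rcond=None)
    return np.array([x[s][0] for s in slices])
```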
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661C (2008) https://doi.org/10.1117/12.779444
This paper presents a full algorithm to compute the solution for the unsupervised unmixing problem based on
the positive matrix factorization. The algorithm estimates the number of endmembers as the rank of the matrix.
The algorithm has an initialization stage using the SVD subset selection algorithm. Testing and validation with
real and simulated data show the effectiveness of the method. Application of the approach to environmental
remote sensing is shown.
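A minimal sketch of the overall recipe, assuming standard Lee-Seung multiplicative updates for the nonnegative factorization and a crude singular-value threshold standing in for the paper's rank estimate and SVD subset-selection initialization:

```python
import numpy as np

def unmix_nmf(Y, n_end=None, n_iter=500, eps=1e-9):
    """Unsupervised unmixing via nonnegative matrix factorization.

    Y: (bands, pixels), nonnegative. Returns endmembers E and abundances A
    with Y ~ E @ A. Random initialization here; the paper initializes with
    an SVD subset-selection algorithm instead.
    """
    rng = np.random.default_rng(0)
    if n_end is None:
        # crude stand-in for rank estimation: count significant singular values
        s = np.linalg.svd(Y, compute_uv=False)
        n_end = int(np.sum(s > 0.01 * s[0]))
    E = rng.uniform(size=(Y.shape[0], n_end))
    A = rng.uniform(size=(n_end, Y.shape[1]))
    for _ in range(n_iter):
        A *= (E.T @ Y) / (E.T @ E @ A + eps)   # Lee-Seung update for A
        E *= (Y @ A.T) / (E @ A @ A.T + eps)   # Lee-Seung update for E
    return E, A
```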
Carlos Rivera-Borrero, Samuel Rosario, Shawn Hunt, Carmen Zayas, Adrienne Mundorf, Suhaily Cardona
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661D (2008) https://doi.org/10.1117/12.779209
This paper presents a ground truth data collection effort along with its use in evaluating unmixing
algorithms. Unmixing algorithms are typically evaluated using synthetic data generated by selecting
endmember spectra and adding them in different amounts and with added noise. Going from synthetic to
real data poses many problems. One of the greatest is the amount of data to be collected. Also, there will be
many unmodeled variations in real data. These include greater variation of the endmembers, additional
endmembers that are a very small percentage of the image, and nonlinear effects in the data that are not
modeled. The data collection effort produced a high resolution class map along with spectral measurements
of 153 different sampling sites to validate the map. The methodology for using this high resolution class
map for generating the ground truth data for use in the unmixing algorithms is presented. Specifically, a 1m
class map is used to generate the endmember abundances for every pixel in a 30m Hyperion image of the
Enrique Reef in Southwest Puerto Rico. The results using two unmixing algorithms, one with a sum-to-one
constraint and the other with a non-negativity constraint, are presented. The unmixing results for each
endmember are presented along with a newly developed unmixing parameter called the Correct Unmixing
Index (CUI).
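The mapping from the fine class map to coarse-pixel ground truth is direct: each 30 m pixel's abundance vector is the fraction of its 1 m sub-pixels assigned to each class. A sketch under idealized alignment (the real Hyperion geometry requires registration between the two grids):

```python
import numpy as np

def abundances_from_classmap(classmap, n_classes, block=30):
    """Ground-truth abundances per coarse pixel from a fine class map.

    classmap: (H, W) integer labels at 1 m resolution. Each block x block
    window (30x30 for a 30 m Hyperion pixel) yields one abundance vector:
    the fraction of fine pixels of each class inside the window.
    """
    H, W = classmap.shape
    h, w = H // block, W // block
    abund = np.zeros((h, w, n_classes))
    for i in range(h):
        for j in range(w):
            win = classmap[i*block:(i+1)*block, j*block:(j+1)*block]
            counts = np.bincount(win.ravel(), minlength=n_classes)
            abund[i, j] = counts / counts.sum()
    return abund
```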
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661E (2008) https://doi.org/10.1117/12.777890
Spectral unmixing of hyperspectral images is a process by which the constituent materials of a pixel
are determined and their fractional abundances are estimated. Several algorithms have been
developed in the past to obtain abundance estimates from hyperspectral data; however, most of
them are computationally intensive and time consuming due to the magnitude of the data
involved. In this research we present the use of Graphic Processing Units (GPUs) as a computing platform in
order to reduce computation time related to abundance estimation for hyperspectral images. Our
implementation was developed in C using the NVIDIA® Compute Unified Device Architecture (CUDA™). The
recently introduced CUDA platform allows developers to directly use a GPU's processing power to perform
arbitrary mathematical computations. We describe our implementation of the Image Space Reconstruction
Algorithm (ISRA) and Expectation Maximization Maximum Likelihood (EMML) algorithm for abundance
estimation and present a performance comparison against implementations using C and Matlab. Results show
that the CUDA implementation runs around 10 times faster than the fastest implementation on the
previous platforms.
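For reference, ISRA itself is a short multiplicative update, which is also why it maps so naturally onto a GPU: each pixel's abundance vector is updated independently of all others. A minimal single-pixel NumPy sketch (not the paper's CUDA implementation):

```python
import numpy as np

def isra(M, y, n_iter=200, eps=1e-12):
    """Image Space Reconstruction Algorithm for one pixel.

    Solves y ~ M @ a with a >= 0 via the multiplicative update
    a <- a * (M^T y) / (M^T M a). M: (bands, endmembers), y: (bands,).
    """
    a = np.full(M.shape[1], 1.0 / M.shape[1])
    for _ in range(n_iter):
        a *= (M.T @ y) / (M.T @ (M @ a) + eps)
    return a
```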
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661F (2008) https://doi.org/10.1117/12.777725
Endmember extraction has received considerable interest in recent years. Many algorithms have been developed for this
purpose and most of them are designed based on convexity geometry such as vertex or endpoint projection and
maximization of simplex volume. This paper develops statistics-based approaches to endmember extraction in the sense
that different orders of statistics are used as criteria to extract endmembers. The idea behind the proposed statistics-based
endmember extraction algorithms (EEAs) is to assume that a set of endmembers constitutes the least correlated
sample pool among all sets of the same number of signatures, with correlation measured by statistics of increasing order:
variance and least squares error (LSE), both specified by 2nd-order statistics; skewness (3rd-order); kurtosis (4th-order);
the kth moment; and statistical independence, specified by statistics of infinite order and measured by mutual
information. In order to substantiate the proposed statistics-based EEAs, experiments using synthetic
and real images are conducted for demonstration.
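To make the selection idea concrete, here is a greedy sketch of the 2nd-order (correlation) variant only: in the proposed EEAs, the score below would be replaced by skewness, kurtosis, a kth moment, or mutual information. The greedy construction itself is ours, for illustration.

```python
import numpy as np

def least_correlated_endmembers(X, p):
    """Greedily pick p pixels forming a least-correlated sample pool.

    X: (pixels, bands). Start from the max-norm pixel; repeatedly add the
    pixel whose maximum absolute correlation with the chosen set is
    smallest (a 2nd-order criterion; higher-order statistics would
    replace this score in the other proposed EEAs).
    """
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    chosen = [int(np.argmax(np.linalg.norm(X, axis=1)))]
    for _ in range(p - 1):
        score = np.abs(Xn @ Xn[chosen].T).max(axis=1)
        score[chosen] = np.inf          # never re-pick a chosen pixel
        chosen.append(int(np.argmin(score)))
    return X[chosen]
```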
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661G (2008) https://doi.org/10.1117/12.776903
The inherent dimensionality of a spectral image can be estimated in a number of ways, primarily based on statistical
measures of the data cloud in the hyperspace. Methods using the eigenvalues from a Principal Components
Analysis, a Minimum Noise Fraction transformation, or the Virtual Dimensionality algorithm are widely
applied to entire images, typically with the goal of reducing the dimensionality of an image in its entirety.
However, it is desirable to understand the dimensionality of individual components within a hyperspectral scene,
as there is no a priori reason to expect all distinct material classes in the scene to have the same inherent dimensionality.
Additionally, in complex scenes containing non-natural materials, the lack of multivariate normality
of the data set implies that a statistically based estimation is less than optimal. Here, a geometric approach is
developed based on the local estimation of dimensionality in the native data hyperspace. It will be shown that
the dimensionality of a collection of data points (k) in the full n dimensions (where n is the number of spectral
channels measured) can be estimated by calculating the change in point density as a function of distance in the
full n dimensional hyperspace. Simple simulated examples to demonstrate the concept will be shown, as well as
applications to real hyperspectral imagery collected with the HyMAP sensor.
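The estimator has a compact form: if N(r), the average number of neighbors within radius r, grows as r^d, then d is the slope of log N(r) against log r. A global NumPy sketch of this density-versus-distance idea follows (the paper applies it locally, per material class; the radii must be chosen within the scale of the data):

```python
import numpy as np

def intrinsic_dimensionality(X, radii):
    """Estimate dimensionality from point density as a function of distance.

    X: (points, bands). Counts the average number of neighbors within each
    radius and fits the slope of log N(r) vs. log r, which estimates d
    under the scaling law N(r) ~ r**d.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    counts = np.array([(D < r).sum(axis=1).mean() - 1 for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope
```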
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661H (2008) https://doi.org/10.1117/12.778014
Dimensionality Reduction (DR) has found many applications in hyperspectral image processing, e.g., data compression,
endmember extraction. This paper investigates Projection Pursuit (PP)-based data dimensionality reduction where three
approaches are developed. One is to use a Projection Index (PI) to produce projection vectors that can be used to
generate Projection Index Components (PICs). PP commonly uses random initial conditions
to produce PICs; as a result, when the same PP is performed at different times, or by different users at the same time, the
resulting PICs are generally not the same. In order to resolve this issue, two approaches are proposed. One is referred to
as PI-based PRioritized PP (PI-PRPP) which uses a PI as a criterion to prioritize PICs that are produced by any
component analysis, for example, Principal Components Analysis (PCA) or Independent Component Analysis. The
other approach is called Initialization-Driven PP (ID-PP), which specifies an appropriate set of initial conditions that
allows PP to produce the same PICs, in the same order, regardless of how many times PP is run or
who runs it.
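A minimal sketch of the PI-PRPP idea, assuming PCA as the generating component analysis and sample excess kurtosis as the projection index (any other PI could be substituted). The point is that the prioritization step makes the component ordering reproducible across runs and users:

```python
import numpy as np

def pi_prioritized_components(X, n_comp):
    """Generate components by PCA, then prioritize them by a projection index.

    X: (pixels, bands). The PI here is the magnitude of excess kurtosis,
    so the most non-Gaussian projections come first; because PCA and the
    PI are deterministic, every run yields the same ordered PICs.
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pics = Xc @ Vt[:n_comp].T
    z = (pics - pics.mean(axis=0)) / pics.std(axis=0)
    pi = np.abs((z**4).mean(axis=0) - 3.0)    # |excess kurtosis| per PIC
    order = np.argsort(-pi)
    return pics[:, order], order
```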
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661I (2008) https://doi.org/10.1117/12.777317
In our previous paper, we demonstrated that principal component analysis (PCA) can outperform the discrete
wavelet transform (DWT) in spectral coding for hyperspectral image compression, providing a superior rate-distortion
performance in conjunction with 2-dimensional (2D) spatial coding using JPEG2000. The resulting
compression algorithm is denoted as PCA+JPEG2000. In this paper, we further investigate how the data size (i.e., spatial
and spectral size) influences the performance of PCA+JPEG2000 and provide a rule of thumb for PCA+JPEG2000 to
perform appropriately. We will also show that using a subset of principal components (PCs) (the resulting algorithm is
denoted as SubPCA+JPEG2000) can always yield a better rate-distortion performance than PCA+JPEG2000 with all the
PCs being preserved for compression.
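The spectral stage of SubPCA+JPEG2000 amounts to projecting each pixel's spectrum onto the leading PCs and compressing only those component images. A sketch of that stage (the 2D JPEG2000 coding of each retained PC image is outside this snippet):

```python
import numpy as np

def spectral_pca_truncate(cube, n_pcs):
    """Spectral decorrelation and truncation step of SubPCA+JPEG2000.

    cube: (rows, cols, bands). Returns the leading principal-component
    images (to be handed to a 2D JPEG2000 coder) plus the basis and mean
    needed for reconstruction on the decoder side.
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    pcs = (X - mu) @ Vt[:n_pcs].T
    return pcs.reshape(H, W, n_pcs), Vt[:n_pcs], mu

def reconstruct(pcs, basis, mu):
    """Invert the spectral stage from the retained PC images."""
    H, W, n = pcs.shape
    return (pcs.reshape(-1, n) @ basis + mu).reshape(H, W, -1)
```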
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661J (2008) https://doi.org/10.1117/12.776900
The Space Dynamics Laboratory (SDL) has developed an FPGA-based hyperspectral demonstration
compression system. The system consists of two boards: the first board performs a decorrelation process
using a 5/3 wavelet; the second board performs the JPEG 2000 image compression. The hardware and
firmware design of this system is described here, and results are presented for compressed
hyperspectral data cubes containing various types of image content. This paper discusses the importance of bit-rate
control among the individual spectral bands, covering some of the theory behind basing bit-rate control on the JPEG 2000
compression stage versus on the 5/3 wavelet, as well as the advantages and disadvantages of each
method. Concepts for developing hyperspectral image compression technology for systems that
can be used for remote sensing in real applications are also presented.
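For concreteness, the decorrelation stage can be built from the reversible LeGall 5/3 lifting steps that JPEG 2000's lossless path also uses. A one-level, 1D sketch with integer arithmetic and symmetric edge handling (even-length input assumed; the board-level implementation is of course different):

```python
import numpy as np

def legall53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform (1D).

    Returns (approximation s, detail d). Integer lifting as in JPEG 2000:
    predict d[n] = odd[n] - floor((even[n] + even[n+1]) / 2), then update
    s[n] = even[n] + floor((d[n-1] + d[n] + 2) / 4), with symmetric edges.
    """
    x = np.asarray(x, dtype=np.int64)
    assert x.size % 2 == 0, "sketch assumes an even-length signal"
    even, odd = x[0::2], x[1::2]
    right = np.append(even[1:], even[-1])     # symmetric extension
    d = odd - (even + right) // 2
    d_left = np.insert(d[:-1], 0, d[0])       # symmetric extension
    s = even + (d_left + d + 2) // 4
    return s, d
```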
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661K (2008) https://doi.org/10.1117/12.782219
Component Analysis (CA) has found many applications in remote sensing image processing. Two major
component analyses are of particular interest: Principal Components Analysis (PCA) and Independent Component
Analysis (ICA), both of which have been widely used in signal processing. While PCA de-correlates data samples via 2nd-order
statistics in a set of Principal Components (PCs), ICA represents data samples via statistical independency in a
set of statistically Independent Components (ICs). However, in order for component analyses to be effective, the
number of components to be generated, p, must be sufficient for data analysis. Unfortunately, in MultiSpectral Imagery
(MSI) p is often too small, while in HyperSpectral Imagery (HSI) it is often too large. Interestingly, very little has been
reported on how to deal with this issue when p is too small or too large. This paper investigates this issue. When p is too
small, two approaches are developed to mitigate the problem. One is the Band Expansion Process (BEP), which augments
original data band dimensionality by producing additional bands via a set of nonlinear functions. The other is a kernel-based
approach, referred to as Kernel-based PCA (K-PCA), which maps features in the original data space to a higher-dimensional
feature space via a set of nonlinear kernels. While both approaches attempt to resolve the issue of a
small p using a set of nonlinear functions, their design rationales are completely different; in particular, they are not
correlated. As for a large p, such as in HSI, the recently developed Virtual Dimensionality (VD) can be used for this purpose;
the VD was originally developed to estimate the number of spectrally distinct signatures. If we assume that one spectrally
distinct signature can be accommodated by one component, the value of p can actually be determined by the VD.
Finally, experiments are conducted to explore and evaluate the utility of component analyses, specifically, PCA and ICA
using BEP and K-PCA for MSI and VD for HSI.
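A sketch of the Band Expansion Process under one common choice of nonlinear functions (squares, square roots, and pairwise cross-products); the specific function set is an assumption of this sketch, not necessarily the authors' choice.

```python
import numpy as np

def band_expansion(X):
    """Band Expansion Process: augment bands with nonlinear functions.

    X: (pixels, bands). Appends squared bands, square-rooted bands, and
    all pairwise cross-products, raising the effective dimensionality
    when the native number of bands p is too small (e.g., MSI).
    """
    n = X.shape[1]
    cross = [X[:, i] * X[:, j] for i in range(n) for j in range(i + 1, n)]
    return np.column_stack([X, X**2, np.sqrt(np.abs(X))] + cross)
```

K-PCA, by contrast, reaches the higher-dimensional space implicitly through a kernel rather than by materializing new bands.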
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661L (2008) https://doi.org/10.1117/12.783745
Hyperspectral imagery (HSI) is a relatively new technology capable of relaying intensity information gathered from both
visible and non-visible ranges of the electromagnetic spectrum. HSI images can contain hundreds of bands, which
present a problem when an image analyst must select the most relevant bands from such an image for visualization,
particularly when the bands that are within the range of human vision are either not present or heavily distorted. It is
proposed here that two-dimensional principal component analysis (2DPCA) can aid in the automatic selection of the
bands from an HSI image that would best reflect visual information. The method requires neither prior knowledge of the
image contents nor the association between spectral bands and their center wavelengths.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661M (2008) https://doi.org/10.1117/12.778177
A traditional linear-mixing model with a structured background used in the hyperspectral imaging literature often
assumes Gaussianity of the error term. This assumption is often questioned, but to the best of our knowledge there is
no definite answer on how well such a model may reflect real hyperspectral images. One difficulty is in the
correct identification of the background signatures. The lack of Gaussianity in the error term might be due to missing
one of the significant background signatures. In this paper, we investigate this issue using an AVIRIS hyperspectral
image. We obtain the projections of the pixel spectra on the orthonormal basis system obtained through the singular
value decomposition, and then we measure their Gaussianity using three different methods. We identify the subspace for
the structured part of the model based on two criteria: the contribution to the image variability and the non-Gaussianity of
the marginal distribution. The subspace orthogonal to the structured part of the model forms the subspace of residuals,
which is then investigated for multivariate Gaussianity. The resulting model forms a reasonable approximation of the
hyperspectral image, and can be successfully used in a variety of applications such as unmixing and target detection. At
the same time, it is clear that further improvements are possible by better modeling of the error term distribution.
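The first step of such an analysis, projecting mean-removed pixels onto SVD basis vectors and scoring each marginal for normality, might look like the sketch below. The Anderson-Darling statistic stands in for the paper's three (here unspecified) Gaussianity measures:

```python
import numpy as np
from scipy import stats

def projection_gaussianity(X, n_test=10):
    """Score the Gaussianity of pixel projections on SVD basis vectors.

    X: (pixels, bands). Returns the Anderson-Darling normality statistic
    for each of the leading projections; large values indicate strongly
    non-Gaussian marginals, candidates for the structured subspace.
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[:n_test].T
    return [stats.anderson(proj[:, k]).statistic for k in range(n_test)]
```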
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661N (2008) https://doi.org/10.1117/12.777845
Hyperspectral imaging sensors operating in the visual, near IR, and thermal IR bands are sufficiently advanced to
become a standard component of surveillance sensor suites. The output of these sensors contains a wealth of spectral
and spatial information that can improve target detection and recognition performance. However, the large volume and
complex features of hyperspectral data are challenges to automatic target recognition (ATR) algorithm development, and
a simulation of hyperspectral sensing is therefore essential in evaluating algorithm performance. This paper describes
the Infrared Hyperspectral Scene Simulation (IRHSS), an accurate, non-real-time large-scene simulation tool for
hyperspectral imagers operating in the thermal IR bands. The simulation contains models for target and background
spectral radiance, atmospheric propagation, and sensor processing. It uses a new hyperspectral version of the Multi-service
Electro-optical Signature (MuSES) model to compute scene temperatures and hyperspectral radiances. IRHSS
is able to handle very large terrain and feature databases by selective use of radiation view factors. It provides end-to-end
simulation starting with scene models built from COTS simulation databases with faceted terrain and targets, and
optional overlays of visual high-resolution texture imagery. IRHSS can be run as a standalone application via its
Windows-based graphical user interface (GUI) or as a plug-in to existing software using the IRHSS application
programming interface (API). Some screen images of the IRHSS GUI and example hyperspectral image cubes
generated by IRHSS are included herein.
Perry Fuehrer, Glenn Healey, Brian Rauch, David Slater, Anthony Ratkowski
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661O (2008) https://doi.org/10.1117/12.778393
The calibration of data from hyperspectral sensors to spectral radiance enables the use of physical
models to predict measured spectra. Since environmental conditions are often unknown, material
detection algorithms have emerged that utilize predicted spectra over ranges of environmental
conditions. The predicted spectra are typically generated by a radiative transfer (RT) code such
as MODTRAN™. Such techniques require the specification of a set of environmental conditions.
This is particularly challenging in the LWIR for which temperature and atmospheric constituent
profiles are required as inputs for the RT codes. We have developed an automated method
for generating environmental conditions to obtain a desired sampling of spectra in the sensor
radiance domain. Because sensor radiance spectra depend nonlinearly on the environmental parameters, our
method provides a way of eliminating the problems usually encountered when model conditions are specified
by a uniform sampling of environmental parameters. It uses an
initial set of radiance vectors concatenated over a set of conditions to define the mapping from
environmental conditions to sensor spectral radiance. This approach enables a given number of
model conditions to span the space of desired radiance spectra and improves both the accuracy
and efficiency of detection algorithms that rely upon use of predicted spectra.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661P (2008) https://doi.org/10.1117/12.777717
Many hyperspectral imaging algorithms are available for applications such as spectral unmixing, subpixel detection,
quantification, endmember extraction, classification, and compression, and many more are yet to come. It is very difficult
to evaluate and validate the different algorithms developed and designed for the same application. This paper makes an
attempt to design a set of standardized synthetic images which simulate various scenarios so that different algorithms can
be validated and evaluated on the same ground with completely controllable environments. Two types of scenarios are
developed to simulate how a target can be inserted into the image background. One is called Target Implantation (TI)
which implants a target pixel by removing the background pixel it intends to replace. This type of scenario is of
particular interest in endmember extraction where pure signatures can be simulated and inserted into the background
with guaranteed 100% purity. The other is called Target Embeddedness (TE) which embeds a target pixel by adding this
target pixel to the background pixel it intends to insert. This type of scenario can be used to simulate signal detection
models where the noise is additive. For each of the two types, three scenarios are designed to simulate different levels of
target knowledge by adding Gaussian noise. In order to make these six scenarios a standardized data set for
experiments, the data used to generate synthetic images can be chosen from a data base or spectral library available in
the public domain or websites and no particular data are required to simulate these synthetic images. By virtue of the
designed six scenarios an algorithm can be assessed objectively and compared fairly to other algorithms on the same
setting. This paper demonstrates how these six scenarios can be used to evaluate various algorithms in applications of
subpixel detection, mixed pixel classification/quantification and endmember extraction.
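The two insertion mechanisms differ in a single line: TI overwrites the background pixel, while TE adds the target to it. A sketch under the additive-Gaussian-noise model (the SNR parameterization is our assumption for controlling the noise level):

```python
import numpy as np

def implant_target(image, loc, target):
    """Target Implantation (TI): replace the background pixel entirely,
    guaranteeing a 100% pure target pixel."""
    out = image.copy()
    out[loc] = target
    return out

def embed_target(image, loc, target, snr_db=30.0, seed=0):
    """Target Embeddedness (TE): add the target to the background pixel,
    plus Gaussian noise scaled to a chosen SNR (additive-noise model)."""
    out = image.copy()
    sigma = np.linalg.norm(target) / (10.0 ** (snr_db / 20.0))
    noise = np.random.default_rng(seed).normal(0.0, sigma, target.shape)
    out[loc] = image[loc] + target + noise
    return out
```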
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661Q (2008) https://doi.org/10.1117/12.775159
The hyperspectral ground-to-ground viewing perspective presents major challenges for autonomous window-based detection.
One of these challenges has to do with the object-scale uncertainty that occurs when using a window-based detection
approach. In a previous paper, we introduced a fully autonomous parallel approach to address the scale uncertainty
problem. The proposed approach featured a compact test statistic for anomaly detection, which is based on a principle of
indirect comparison; a random sampling stage, which does not require secondary information (range or size) about the
targets; a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during
random sampling; and a fusion of results at the end. In this paper, we demonstrate the effectiveness and robustness of
this approach on different scenarios using hyperspectral imagery, where for most of these scenarios, the parameter
settings were fixed. We also investigated the performance of this suite at different times of the day, where the spectral
signatures of materials varied with diurnal changes. Both visible to near infrared
and longwave imagery are used in this study.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661R (2008) https://doi.org/10.1117/12.776982
Most hyperspectral (HS) anomaly detectors in the literature have been evaluated using a few HS imagery sets to
estimate the well-known ROC curve. Although this evaluation approach can be helpful in assessing detectors' rates of
correct detection and false alarm on a limited dataset, it does not shed light on the reasons for these detectors' strengths
and weaknesses over a significantly larger sample size. This paper discusses a more rigorous approach to testing and
comparing HS anomaly detectors, and it is intended to serve as a guide for such a task. Using randomly generated
samples, the approach introduces hypothesis tests for two idealized homogeneous sample experiments, where model
parameters can vary the difficulty level of these tests. These simulation experiments are devised to address a more
generalized concern, i.e., the expected degradation of correct detection as a function of increasing noise in the
alternative hypothesis.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661S (2008) https://doi.org/10.1117/12.782221
Kernel-based approaches have recently drawn considerable interest in hyperspectral image analysis due to their ability to
expand features to a higher-dimensional space via a nonlinear mapping function. Many well-known detection and
classification techniques such as Orthogonal Subspace Projection (OSP), RX algorithm, linear discriminant analysis,
Principal Components Analysis (PCA), Independent Component Analysis (ICA), have been extended to the
corresponding kernel versions. Interestingly, a target detection method called Constrained Energy Minimization (CEM),
which has also been widely used in hyperspectral target detection, has not been extended to its kernel version. This paper
investigates a kernel-based CEM, called Kernel CEM (K-CEM), which employs various kernels to expand the original
data space to a higher-dimensional feature space in which CEM can operate. Experiments are conducted to perform a
comparative analysis of CEM and K-CEM. The results do not show that K-CEM provides significant
improvement over CEM in detecting hyperspectral targets, but they do show significant improvement in detecting targets in
multispectral imagery, which provides too little spectral information for CEM to work well.
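For reference, classical CEM has a closed form: the filter w = R^{-1} d / (d^T R^{-1} d) minimizes the output energy subject to the constraint w^T d = 1, where R is the sample correlation matrix. A sketch follows; K-CEM replaces these inner products with kernel evaluations:

```python
import numpy as np

def cem_detector(X, d, eps=1e-6):
    """Constrained Energy Minimization target detector.

    X: (pixels, bands); d: (bands,) target signature. Returns the filter
    output per pixel; the filter minimizes average output energy while
    passing the target signature with unit gain.
    """
    R = X.T @ X / X.shape[0]                             # sample correlation matrix
    R_inv = np.linalg.inv(R + eps * np.eye(R.shape[0]))  # regularized inverse
    w = R_inv @ d / (d @ R_inv @ d)
    return X @ w
```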
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661T (2008) https://doi.org/10.1117/12.777586
A method for trace gas detection in hyperspectral data is demonstrated using the wavelet packet transform. This new
method, the Wavelet Packet Subspace (WPS), applies the wavelet packet transform and selects a best basis for pattern
matching. The wavelet packet transform is an extension of the wavelet transform, which fully decomposes a signal into a
library of wavelet packet bases. Application of the wavelet packet transform to hyperspectral data for the detection of
trace gases takes advantage of the ability of the wavelet transform to locate spectral features in both scale and location.
By analyzing the wavelet packet tree of a specific gas, nodes of the tree are selected which represent an orthogonal best
basis. The best basis represents the significant spectral features of that gas. This is then used to identify pixels in the
scene using existing matching algorithms such as spectral angle or matched filter. Using data from the Airborne
Hyperspectral Imager (AHI), this method is compared to traditional matched filter detection methods. Initial results
demonstrate a promising wavelet packet subspace technique for hyperspectral trace gas detection applications.
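A sketch of the best-basis step using the additive entropy cost of Coifman and Wickerhauser over a wavelet packet tree built with PyWavelets; the cost function and wavelet choice are assumptions of this sketch, not necessarily those of the WPS method:

```python
import numpy as np
import pywt

def best_basis(signal, wavelet="db4", max_level=4):
    """Best-basis search over a wavelet packet tree.

    Recursively keeps a node if its additive entropy cost is lower than
    the summed cost of its children; the kept nodes form an orthogonal
    best basis concentrating the signature's spectral features.
    Returns a list of (path, coefficients) pairs.
    """
    def cost(c):
        e = np.asarray(c, dtype=float) ** 2
        return float(-np.sum(e * np.log(e + 1e-12)))

    def search(c, path, level):
        if level == max_level:
            return [(path, c)], cost(c)
        approx, detail = pywt.dwt(c, wavelet)
        kept_a, cost_a = search(approx, path + "a", level + 1)
        kept_d, cost_d = search(detail, path + "d", level + 1)
        if cost_a + cost_d < cost(c):
            return kept_a + kept_d, cost_a + cost_d
        return [(path, c)], cost(c)

    nodes, _ = search(np.asarray(signal, dtype=float), "", 0)
    return nodes
```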
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661U (2008) https://doi.org/10.1117/12.775826
The long-wave infrared (LWIR) hyperspectral sensing modality is one that is often used for the
detection and identification of chemical warfare agents (CWAs), a problem that applies to both military
and civilian situations. The inherent nature and complexity of background clutter dictates a need
for sophisticated and robust statistical models, which in turn drive the design of optimum signal
processing algorithms that provide the best exploitation of hyperspectral data and ultimately support
decisions on the absence or presence of potentially harmful CWAs. This paper describes the basic
elements of an automated signal processing pipeline developed at MIT Lincoln Laboratory. In addition
to describing this signal processing architecture in detail, we briefly describe the key signal models
that form the foundation of these algorithms as well as some spatial processing techniques used for
false alarm mitigation. Finally, we apply this processing pipeline to real data measured by the Telops
FIRST hyperspectral sensor to demonstrate its practical utility for the user community.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661V (2008) https://doi.org/10.1117/12.770306
A processing methodology based on Support Vector Machines is presented in this paper for the classification of
hyperspectral spectroscopic images. The accurate classification of the images is used to perform on-line material
identification in industrial environments. Each hyperspectral image consists of the diffuse reflectance of the material
under study along all the points of a line of vision. These images are measured with two imaging
spectrographs operating in the Vis-NIR (400 to 1000 nm) and NIR (1000 to 2400 nm) ranges of the spectrum,
respectively. The aim of this work is to demonstrate the robustness of Support Vector Machines in recognising certain
spectral features of the target. Furthermore, research has been carried out to find an adequate SVM configuration for this
hyperspectral application. In this way, anomaly detection and material identification can be efficiently performed. A
classifier combining a Gaussian kernel with a nonlinear Principal Component Analysis, namely kernel PCA (k-PCA), is
concluded to be the best option in this particular case. Finally, experimental tests have been carried out with materials
typical of the tobacco industry (tobacco leaves mixed with unwanted spurious materials, such as leathers, plastics, etc.)
to demonstrate the suitability of the proposed technique.
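A sketch of the concluded configuration using scikit-learn, with kernel PCA features feeding a Gaussian-kernel SVM; the component count and kernel parameters below are illustrative placeholders, not the authors' tuned values:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

# Nonlinear k-PCA feature extraction followed by an RBF (Gaussian) SVM.
clf = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=20, kernel="rbf", gamma=1e-3),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
# X_train: (n_pixels, n_bands) diffuse reflectance spectra; y_train: labels.
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```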
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, 69661Y (2008) https://doi.org/10.1117/12.780080
We describe a novel approach to produce color composite images from hyperspectral data using weighted spectral averages. The weighted average is based on a sequence of numbers (weights) selected using pixel value information and interband distance. Separate sequences of weights are generated for each of the three color bands forming the color composite image. Tuning the weighting parameters to emphasize different spectral areas allows one feature or another in the image to be highlighted. The produced image is distinct from a regular color composite, since all the bands contribute information to the final result.
The algorithm was implemented in a high-level programming language and provided with a user-friendly graphical interface. The current design allows for stand-alone usage or for further modification into a real-time visualization module. Experimental results show that the weighted color composition is an extremely fast visualization tool.
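A sketch of the band-weighting idea: each output color channel is a weighted average over all bands, here with a Gaussian weight profile over normalized band position. The paper's weights also use pixel-value information, which this simplified sketch omits:

```python
import numpy as np

def weighted_color_composite(cube, centers=(0.8, 0.5, 0.2), width=0.15):
    """Render an RGB composite as weighted averages over all bands.

    cube: (H, W, B). Each channel averages every band under its own
    weight sequence, centered at a normalized spectral position (R, G, B
    order in `centers`); shifting the centers or width emphasizes
    different spectral features.
    """
    pos = np.linspace(0.0, 1.0, cube.shape[-1])
    channels = []
    for c in centers:
        w = np.exp(-0.5 * ((pos - c) / width) ** 2)
        channels.append(cube @ (w / w.sum()))
    img = np.stack(channels, axis=-1)
    return (img - img.min()) / (np.ptp(img) + 1e-12)
```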