Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839001 (2012) https://doi.org/10.1117/12.979120
This PDF file contains the front matter associated with SPIE Proceedings Volume 8390, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839002 (2012) https://doi.org/10.1117/12.917737
The continuum fusion (CF) methodology for producing detection algorithms is generalized to include a new class of
theoretical problems not considered previously by any published methods. The current CF formalism distinguishes two
types of epistemic unknowns for binary composite hypothesis (CH) testing problems. One type is associated with a target
(hypothesis H1), the other with the clutter (hypothesis H0). Older methods, including the GLR and invariant methods,
treat the two types of parameters as independent of each other. Here a new type of parameter is introduced, one that is shared by both hypotheses. In many common applications, this is a distinction imposed by the physical models that generate the hypothesis test. Furthermore, it is a distinction not recognized by traditional methods but treated naturally in the CF formalism. Examples are described in which models with such shared parameters produce optimal detectors,
while the older methods perform at extremely degraded levels.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839003 (2012) https://doi.org/10.1117/12.918888
This paper discusses issues related to the inherent ambiguity of the composite hypothesis testing problem, a
problem that is central to the detection of target signals in cluttered backgrounds. In particular, the paper
examines the recently proposed method of continuum fusion (which, because it combines an ensemble of clairvoyant
detectors, might also be called clairvoyant fusion), and its relationship to other strategies for composite
hypothesis testing. A specific example involving the affine subspace model adds to the confusion by illustrating
irreconcilable differences between Bayesian and non-Bayesian approaches to target detection.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839004 (2012) https://doi.org/10.1117/12.920835
Anomaly detectors based on subspace models have the dimension of the clutter subspace as the parameter with
a large range of values. An anomaly detector with a different parameter that takes fewer values is proposed.
The known pixel from a hyperspectral image is predicted by a linear transformation of the unknown variables
from the clutter subspace, where the coefficients of the linear transformation are unknown. The dimension of the
clutter subspace can vary from one spectral component of the pixel to another. The anomaly detector is the
Mahalanobis distance of the prediction error. The experimental results show that the parameter in the anomaly detector
has significantly fewer possible values than in conventional anomaly detectors.
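The final scoring step, the Mahalanobis distance of the prediction error, can be sketched as follows (a minimal illustration with synthetic residuals; the prediction stage and all data here are hypothetical, not the authors' implementation):

```python
import numpy as np

def mahalanobis_anomaly_score(errors, cov):
    """Score each residual vector by its squared Mahalanobis distance.

    errors: (N, B) array of prediction residuals, one row per pixel.
    cov:    (B, B) residual covariance estimated from background pixels.
    """
    inv_cov = np.linalg.inv(cov)
    # d^2 = e^T C^{-1} e for each residual e
    return np.einsum("ij,jk,ik->i", errors, inv_cov, errors)

# Hypothetical data: background residuals are zero-mean Gaussian;
# an anomalous pixel leaves a much larger residual.
rng = np.random.default_rng(0)
background = rng.normal(scale=1.0, size=(500, 4))
cov = np.cov(background, rowvar=False)
scores = mahalanobis_anomaly_score(
    np.vstack([background[:3], [[8.0, 8.0, 8.0, 8.0]]]), cov)
```

Because the covariance is calibrated on background residuals, a residual far outside the background distribution receives a much larger score.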
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839005 (2012) https://doi.org/10.1117/12.918722
A nonlinear kernel-based classifier for hyperspectral target detection is proposed in this paper. The proposed approach
relies on sparsely representing a test sample in terms of all the training samples in a high-dimensional feature space induced
by a kernel function. Specifically, the feature representation of a test pixel is assumed to be compactly expressed as a sparse
linear combination of a few atoms from a training dictionary consisting of both background and target training samples in the
same feature space. The sparse representation vector is obtained by decomposing the test pixel over the training dictionary
via a kernelized greedy algorithm, which uses the kernel trick to avoid explicit evaluations of the data in the feature space.
The class label is then determined by comparing the reconstruction accuracy with respect to the background and target
sub-dictionaries using the recovered sparse vector. Designing the classifier in a high-dimensional feature subspace will
implicitly exploit the higher-order structure (correlations) within the data which cannot be captured by a linear model.
Therefore, by projecting the pixels into a kernel feature space and kernelizing the linear sparse representation model, the
data separability between the background and target classes will be shown to be improved, leading to a more accurate
detection performance. The effectiveness of the proposed kernel sparsity model for target detection is demonstrated by
experimental results on real hyperspectral images.
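A greedy kernelized decomposition of this kind can be sketched as follows. This is a simplified OMP-style illustration under an RBF kernel with toy data; the atom-selection rule, kernel choice, and all names here are assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_omp_residual(x, atoms, n_nonzero, kernel):
    """Greedily sparse-code phi(x) over {phi(a_j)} using only kernel
    evaluations; return the squared feature-space residual
    ||phi(x) - Phi_S c||^2 for the selected support S."""
    K = kernel(atoms, atoms)              # dictionary Gram matrix
    kx = kernel(atoms, x[None, :])[:, 0]  # correlations with the test pixel
    support, c = [], np.zeros(0)
    for _ in range(n_nonzero):
        corr = kx - K[:, support] @ c     # correlation with current residual
        corr[support] = 0.0               # never reselect an atom
        support.append(int(np.argmax(np.abs(corr))))
        Kss = K[np.ix_(support, support)]
        c = np.linalg.solve(Kss, kx[support])  # least squares on the support
    kxx = kernel(x[None, :], x[None, :])[0, 0]
    return kxx - 2 * c @ kx[support] + c @ Kss @ c

# Toy data: the sub-dictionary giving the smaller residual wins
rng = np.random.default_rng(1)
background = rng.normal(0.0, 0.3, size=(20, 3))
target = rng.normal(2.0, 0.3, size=(20, 3))
test_pixel = np.array([2.1, 1.9, 2.0])
r_bg = kernel_omp_residual(test_pixel, background, 3, rbf_kernel)
r_tg = kernel_omp_residual(test_pixel, target, 3, rbf_kernel)
label = "target" if r_tg < r_bg else "background"
```

Note that the feature space is never materialized: the residual is expanded entirely in terms of kernel evaluations, which is the kernel trick the abstract refers to.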
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839006 (2012) https://doi.org/10.1117/12.916867
Hyperspectral Image (HSI) anomaly detectors typically employ local background modeling techniques to facilitate target
detection from surrounding clutter. Global background modeling has been challenging due to the multi-modal content
that must be automatically modeled to enable target/background separation. We have previously developed a support
vector based anomaly detector that does not impose an a priori parametric model on the data and enables multi-modal
modeling of large background regions with inhomogeneous content. Effective application of this support vector
approach requires the setting of a kernel parameter that controls the tightness of the model fit to the background data.
Estimation of the kernel parameter has typically considered Type I / false-positive error optimization due to the
availability of background samples, but this approach has not proven effective for general application since these
methods only control the false alarm level, without any optimization for maximizing detection. Parameter optimization
with respect to Type II / false-negative error has remained elusive due to the lack of sufficient target training exemplars.
We present an approach that optimizes parameter selection based on both Type I and Type II error criteria by
introducing outliers based on existing hypercube content to guide parameter estimation. The approach has been applied
to hyperspectral imagery and has demonstrated automatic estimation of parameters consistent with those that were found
to be optimal, thereby providing an automated method for general anomaly detection applications.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839007 (2012) https://doi.org/10.1117/12.919808
This paper addresses the problem of finding targets in shadows. It discusses, through example and empirical
analysis, why shadowed targets look different to a sensor. A forward modeling approach is used to describe
how ground materials (i.e., targets) manifest themselves through the atmosphere and appear to the sensor in
the radiance domain. Changes in illumination can be obtained by processing co-registered LiDAR point cloud
data to obtain solar and sky-loading scaling factors. These scaling factors are then used in the forward model to
better estimate variably illuminated targets in the scene. A target detection application was applied and showed
that the modified, or dynamic, forward model was able to detect targets in both the open and hard shadow.
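The role of the solar and sky-loading scaling factors can be illustrated with a simplified Lambertian forward model (a sketch only; the paper's actual radiative transfer terms are not reproduced here, and all parameter values are hypothetical):

```python
import math

def sensor_radiance(rho, e_sun, e_sky, tau, l_path, f_sun, f_sky):
    """Sensor-reaching radiance for a Lambertian target whose direct-solar
    and sky-dome illumination are modulated by LiDAR-derived scaling
    factors f_sun and f_sky (simplified model; the terms are assumptions)."""
    return tau * (rho / math.pi) * (f_sun * e_sun + f_sky * e_sky) + l_path

# Hypothetical values: full sun versus hard shadow (direct sun blocked)
open_l = sensor_radiance(0.3, 1000.0, 100.0, 0.8, 5.0, f_sun=1.0, f_sky=1.0)
shadow_l = sensor_radiance(0.3, 1000.0, 100.0, 0.8, 5.0, f_sun=0.0, f_sky=1.0)
```

Re-predicting the target signature with the local f_sun and f_sky lets a detector match targets in hard shadow rather than only in the open.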
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839009 (2012) https://doi.org/10.1117/12.918646
We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to
form labeled blobs having high similarity in the feature domain, and connectivity in the spatial domain. We have also
developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion
algorithms into the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data
at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with
either single-sensor or pixel-level fusion strategies. These results appear to validate the hypothesis that classification
accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.
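For Gaussian class models, the Bhattacharyya distance used by such a classifier has a closed form; a minimal sketch with hypothetical class statistics:

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian class models."""
    cov = 0.5 * (cov1 + cov2)       # average covariance
    dmu = mu1 - mu2
    mean_term = 0.125 * dmu @ np.linalg.inv(cov) @ dmu
    cov_term = 0.5 * np.log(np.linalg.det(cov)
                            / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return mean_term + cov_term

# Hypothetical two-band feature statistics for two benthic classes
mu_a, cov_a = np.array([0.2, 0.4]), 0.01 * np.eye(2)
mu_b, cov_b = np.array([0.5, 0.1]), 0.02 * np.eye(2)
bd = bhattacharyya_distance(mu_a, cov_a, mu_b, cov_b)
```

The distance is zero for identical class models and grows with class separability, which makes it a convenient score for deciding which class a blob's feature distribution best matches.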
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900B (2012) https://doi.org/10.1117/12.919431
Recent terrorist attacks have spurred the need for a large-scale explosive detector. Our group has developed
differential reflection spectroscopy which can detect explosive residue on surfaces such as parcel, cargo and
luggage. In short, broad band ultra-violet and visible light is shone onto a material (such as a parcel) moving on
a conveyor belt. Upon reflection off the surface, the light intensity is recorded with a spectrograph (spectrometer
in combination with a CCD camera). This reflected light intensity is then subtracted and normalized with
the next data point collected, resulting in differential reflection spectra in the 200-500 nm range. Explosives
show spectral fingerprints at specific wavelengths; for example, the spectrum of 2,4,6-trinitrotoluene (TNT)
shows an absorption edge at 420 nm. Additionally, we have developed automated software that detects the
characteristic features of explosives. One of the biggest challenges for the algorithm is to reach a practical limit
of detection. In this study, we introduce our automatic detection software which is a combination of principal
component analysis and support vector machines. Finally we present the sensitivity and selectivity response of
our algorithm as a function of the amount of explosive detected on a given surface.
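The differential spectrum formation described above can be sketched as follows, using the common difference-over-mean normalization (the exact normalization, the wavelength grid, and the synthetic spectra here are assumptions):

```python
import numpy as np

def differential_reflection(i_prev, i_next):
    """Difference of two successive reflected-intensity spectra,
    normalized by their mean (one common normalization; the exact
    form used by the instrument is an assumption here)."""
    return 2.0 * (i_next - i_prev) / (i_next + i_prev)

# Hypothetical spectra on a 200-500 nm grid: the second measurement
# shows an absorption edge below 420 nm, as TNT residue would.
wavelengths = np.linspace(200.0, 500.0, 301)
clean = np.full_like(wavelengths, 0.8)
contaminated = np.where(wavelengths < 420.0, 0.6, 0.8)
spectrum = differential_reflection(clean, contaminated)
```

The normalization cancels the common surface and illumination background, so only the spectral change between successive data points, the explosive's fingerprint, survives in the differential spectrum.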
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900C (2012) https://doi.org/10.1117/12.919276
Remote sensing can be used to rapidly generate land use maps for assisting emergency response personnel with resource
deployment decisions and impact assessments. In this study we focus on constructing accurate land cover maps to map
the impacted area in the case of a nuclear material release. The proposed methodology involves integration of results
from two different approaches to increase classification accuracy. The data used included RapidEye scenes over Nine
Mile Point Nuclear Power Station (Oswego, NY). The first step was building a coarse-scale land cover map from freely
available, high temporal resolution, MODIS data using a time-series approach. In the case of a nuclear accident, high
spatial resolution commercial satellites such as RapidEye or IKONOS can acquire images of the affected area. Land use
maps from the two image sources were integrated using a probability-based approach. Classification results were
obtained for four land classes - forest, urban, water and vegetation - using Euclidean and Mahalanobis distances as
metrics. Despite the coarse resolution of MODIS pixels, acceptable accuracies were obtained using time series features.
The overall accuracies using the fusion based approach were in the neighborhood of 80%, when compared with GIS data
sets from New York State. The classifications were augmented using this fused approach, with a few supplementary
advantages such as correction for cloud cover and independence from time of year. We concluded that this method
would generate highly accurate land maps, using coarse spatial resolution time series satellite imagery and a single date,
high spatial resolution, multi-spectral image.
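The distance-based class assignment can be sketched as a minimum-distance-to-mean classifier supporting both metrics (a toy illustration; the class statistics and thresholds are hypothetical):

```python
import numpy as np

def min_distance_classify(pixels, class_means, class_covs=None):
    """Assign each pixel to the nearest class mean: Euclidean distance
    by default, Mahalanobis distance when class covariances are given."""
    dists = []
    for k, mu in enumerate(class_means):
        d = pixels - mu
        if class_covs is None:
            dists.append((d ** 2).sum(axis=1))                 # Euclidean^2
        else:
            inv = np.linalg.inv(class_covs[k])
            dists.append(np.einsum("ij,jk,ik->i", d, inv, d))  # Mahalanobis^2
    return np.argmin(np.stack(dists), axis=0)

# Hypothetical two-band means for two of the four classes
means = [np.array([0.1, 0.1]), np.array([0.6, 0.8])]   # e.g. water, forest
pixels = np.array([[0.12, 0.08], [0.55, 0.85]])
labels = min_distance_classify(pixels, means)
```

The Mahalanobis variant weights each band by the class's own spread, which matters when class clusters are elongated in feature space.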
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900D (2012) https://doi.org/10.1117/12.915346
In this paper, we consider detecting man-made objects in natural images. We segment the image into tiles; we
consider a variety of statistical metrics and correlate them to the presence of man-made targets. To quantify
the metric, we apply a method of implanting targets and evaluating the resulting ROC (Receiver Operating
Characteristic) curves. We rank previously reported algorithms and develop new ones in this paper.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900E (2012) https://doi.org/10.1117/12.920651
A passive remote sensing technique for accurately monitoring the combustion efficiency (CE) of petrochemical flares
is greatly desired. A Phase II DOE-funded SBIR led by Spectral Sciences, Inc. is underway to develop such a method.
This paper presents an overview of the current progress of the Air Force Institute of Technology's (AFIT) contribution to
this effort. A Telops Hyper-Cam midwave infrared (MWIR, 1800-6667 cm-1 or 1.5-5.5 μm) imaging Fourier-transform
spectrometer (IFTS) is used to examine a laminar calibration flame produced by a Hencken burner. Ethylene fuel
(C2H4) was burned at four different equivalency ratios Φ=0.80, 0.91, 1.0 and 1.25. This work focuses on the qualitative
spectrally-resolved visualization of a Hencken burner flame and the spatial distribution of combustion by-products.
A simple radiative transfer model is then developed and fit to a single-pixel spectrum. The flame spectra were characterized
by structured emissions from CO2, H2O and CO. For the Φ = 0.91 flame, the spectrally-estimated temperature
was T = 2172±28 K at a height of 10 mm above the burner, a favorable result compared to OH-PLIF measurements
(T = 2226±112 K) made on an identical flame. H2O and CO2 mole fractions across the flame at the same height of
10 mm were measured to be 13.7±0.6% and 15.5±0.8%, respectively.
Heikki Salo, Ville Tirronen, Ilkka Pölönen, Sakari Tuominen, Andras Balazs, Jan Heikkilä, Heikki Saari
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900G (2012) https://doi.org/10.1117/12.919085
In this paper we consider methods for estimating forest tree stem volumes by species using images taken from
light unmanned aircraft systems (UAS). Instead of using LiDAR and additional multiband imagery, a color
infrared camera mounted on a light UAS is used to acquire both imagery and the DSM of the target area. The goal
of this study is to accurately estimate tree stem volumes in three classes. The status of the ongoing work is
described and an initial method for delineating and classifying treetops is presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900H (2012) https://doi.org/10.1117/12.919321
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model has been developed at the Rochester
Institute of Technology (RIT) for over two decades. The last major update of the model, DIRSIG 4, built on an
established, first-principles, multi- and hyper-spectral scene simulation tool. It introduced a modern and flexible
software architecture to support new sensor modalities and more complex and dynamic scenes. Since that time,
the needs of the user community have grown and diversified in tandem with the computational capabilities
of modern hardware. Faced with a desire to model more complex, multi-component systems that are beyond
the original intent and capabilities of an aging software design, a new version of DIRSIG, version 5, is being
introduced to the community.
This paper describes the core of DIRSIG 5 that is responsible for linking the disparate sensor, scene, and
environmental models together, spatially, temporally, and parametrically. The spatial relationships are governed
by a planet-centric universe model encompassing a whole globe digital elevation and optical property model, the
scene model(s), globally varying atmospheric models, and a space model. Temporal relationships are driven by
a formal modeling and simulation architecture based on approaches used in engineering and biological sciences
to model highly dynamic and interactive systems. Finally, the parametric interfaces are described by a universal
data model that facilitates scripting, inter-dependent properties and user interface construction. The design of
these components will be presented along with specific module implementation details. These simulation tools
will be used to demonstrate some of the new capabilities and applications of DIRSIG 5.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900I (2012) https://doi.org/10.1117/12.918821
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic
image generation model, developed at the Rochester Institute of Technology (RIT) over the past 20+ years. By
calculating the sensor-reaching radiance within the 0.2 to 20 μm bandpass, it produces multi- or hyperspectral
remote sensing images. By integrating independent first-principles-based sub-models, such as MODTRAN,
DIRSIG generates a representation of what a sensor would see with high radiometric fidelity. In order to detect
temporal changes in a process within the scene, effort is currently devoted to enhancing the capacity of
DIRSIG by incorporating process models. A parking lot process model is of interest to many applications.
Therefore, this paper builds a parking lot process model, PARKVIEW, based on a statistical description of the
parking lot that includes parking lot occupancy, parking duration, and parking spot preference. The output of
PARKVIEW can then be fed into DIRSIG to enhance the scene simulation capacity of DIRSIG by
including temporal information about the parking lot. In order to show an accurate and efficient way of extracting
the statistical description of the parking lot, an experiment was set up to record the distribution of cars in several
parking lots on the RIT campus during one weekday by taking photos every five minutes. The image data are
processed to extract the parking spot status of the parking lot for each frame taken from the experiment. The
parking spot status information is then described in a statistical way.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900J (2012) https://doi.org/10.1117/12.919148
A methodology is proposed for automatically extracting primitive models of buildings in a scene from a three-dimensional
point cloud derived from multi-view depth extraction techniques. By exploring the information
provided by the two-dimensional images and the three-dimensional point cloud and the relationship between the
two, automated methods for extraction are presented. Using the inertial measurement unit (IMU) and global
positioning system (GPS) data that accompanies the aerial imagery, the geometry is derived in a world-coordinate
system so the model can be used with GIS software. This work uses imagery collected by the Rochester Institute
of Technology's Digital Imaging and Remote Sensing Laboratory's WASP sensor platform. The data used was
collected over downtown Rochester, New York. Multiple target buildings have their primitive three-dimensional
model geometry extracted using modern point-cloud processing techniques.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900K (2012) https://doi.org/10.1117/12.919319
Phototourism is a burgeoning field that uses collections of ground-based photographs to construct a three-dimensional
model of a tourist site, using computer vision techniques. These techniques capitalize on the extensive overlap generated
by the various visitor-acquired images from which a three-dimensional point cloud can be generated. From there, a
facetized version of the structure can be created. Remotely sensed data tends to focus on nadir or near nadir imagery
while trying to minimize overlap in order to achieve the greatest ground coverage possible during a data collection. A
workflow is being developed at Digital Imaging and Remote Sensing (DIRS) Group at the Rochester Institute of
Technology (RIT) that utilizes these phototourism techniques, which typically use dense coverage of a small object or
region, and applies them to remotely sensed imagery, which involves sparse data coverage of a large area. In addition
to this, RIT has planned and executed a high-overlap image collection, using the RIT WASP system, to study the
requirements needed for such three-dimensional reconstruction efforts. While the collection was extensive, the intention
was to find the minimum number of images and frame overlap needed to generate quality point clouds. This paper will
discuss the image data collection effort and what it means to generate and evaluate a quality point cloud for
reconstruction purposes.
Commercial Spectral Remote Sensing: WorldView-2 and Its Applications I
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900L (2012) https://doi.org/10.1117/12.919756
Over the last decade DigitalGlobe (DG) has built and launched a series of remote sensing satellites with steadily
increasing capabilities: QuickBird, WorldView-1 (WV-1), and WorldView-2 (WV-2). Today, this constellation acquires
over 2.5 million km2 of imagery on a daily basis. This paper presents the configuration and performance capabilities of
each of these satellites, with emphasis on the unique spatial and spectral capabilities of WV-2. WV-2 employs high-precision
star tracker and inertial measurement units to achieve a geolocation accuracy of 5 m Circular Error, 90%
confidence (CE90). The native resolution of WV-2 is 0.5 m GSD in the panchromatic band and 2 m GSD in 8
multispectral bands. Four of the multispectral bands match those of the Landsat series of satellites; four new bands
enable novel and expanded applications. We are rapidly establishing and refreshing a global database of very high
resolution (VHR) 8-band multispectral imagery. Control moment gyroscopes (CMGs) on both WV-1 and WV-2 improve
collection capacity and provide the agility to capture multi-angle sequences in rapid succession. These capabilities result
in a rich combination of image features that can be exploited to develop enhanced monitoring solutions. Algorithms for
interpretation and analysis can leverage: 1) broader and more continuous spectral coverage at 2 m resolution; 2) textural
and morphological information from the 0.5 m panchromatic band; 3) ancillary information from stereo and multi-angle
collects, including high precision digital elevation models; 4) frequent revisits and time-series collects; and 5) the global
reference image archives. We introduce the topic of creative fusion of image attributes, as this provides a unifying theme
for many of the papers in this WV-2 Special Session.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900M (2012) https://doi.org/10.1117/12.919841
The enumeration of the population remains a critical task in the management of refugee/IDP camps. Analysis of very
high spatial resolution satellite data has proved to be an efficient and secure approach for the estimation of dwellings and
the monitoring of the camp over time. In this paper we propose a new methodology for the automated extraction of
features based on differential morphological decomposition segmentation for feature extraction and interactive training
sample selection from the max-tree and min-tree structures. This feature extraction methodology is tested on a
WorldView-2 scene of an IDP camp in Darfur Sudan. Special emphasis is given to the additional available bands of the
WorldView-2 sensor. The results obtained show that the interactive image information tool performs very well by
tuning the feature extraction to the local conditions. The analysis of different spectral subsets shows that it is possible to
obtain good results already with an RGB combination, but by increasing the number of spectral bands the detection of
dwellings becomes more accurate. Best results were obtained using all eight bands of WorldView-2 satellite.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900N (2012) https://doi.org/10.1117/12.917717
Multispectral imagery (MSI) provides information to support decision making across a growing number of private and
industrial applications. Among them, land mapping, terrain classification and feature extraction rank highly in the
interest of those who analyze the data to produce information, reports, and intelligence products. The eight nominal
band centers of WorldView-2 allow non-traditional means of measuring the differences that exist among the features,
artifacts, and surface materials in the data, and the most effective method for processing this
information can be determined by exploiting the unique response values within those wavelength channels. The difference in responses
across select bands can be sought using normalized difference index ratios to measure moisture content, indicate
vegetation health, and distinguish natural features from man-made objects. The focus of this effort is to develop an
approach to measure, identify and threshold these differences in order to establish an effective land mapping and feature
extraction process germane to WorldView-2 imagery.
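The normalized difference index ratios mentioned above all follow one pattern; a minimal sketch (the band choices and the 0.3 threshold are illustrative, not taken from the paper):

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index, e.g. NDVI with NIR and red.

    Values near +1 indicate a strong band_a response (healthy vegetation
    when band_a is NIR and band_b is red); thresholding the index helps
    separate vegetation, water, and man-made surfaces.
    """
    a = band_a.astype(float)
    b = band_b.astype(float)
    return (a - b) / np.maximum(a + b, 1e-12)  # guard against divide-by-zero

nir = np.array([[0.50, 0.05]])   # one vegetated, one non-vegetated pixel
red = np.array([[0.10, 0.04]])
ndvi = normalized_difference(nir, red)
veg_mask = ndvi > 0.3            # illustrative threshold
```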
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900O (2012) https://doi.org/10.1117/12.918655
There are clear indications that the densification of built-up areas within cities and new developments on their outskirts,
in conjunction with urban population activities, drive climate change at the local level and have a direct
impact on air and water quality. Densification of the vegetation cover is often cited as one of the most important
means of mitigating the impacts of climate change and improving the quality of the urban environment. Decision making
on vegetation cover densification presupposes that urban planners and managers know the actual situation in
terms of vegetation location, types and biomass. However, in many cities, inventories of vegetation cover are usually
absent. This study examines the feasibility of an automatic system for vegetation cover inventory and mapping in urban
areas based on WorldView-2 imagery. The city of Laval, Canada, was chosen as the experimental site. The principal
conclusions are as follows: a) conversion of digital counts to ground reflectances is a crucial step in order to fully exploit
the potential of WV-2 multispectral images for mapping vegetation cover and recognizing vegetation classes; b) the
combined use of NDVIs computed using the three infrared available bands and the red band provides an accurate means
of differentiating vegetation cover from other land covers; and c) it is possible to separate trees from other vegetation
types and to identify tree species even in dense urban areas using spectral signature characteristics and segmentation
algorithms.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900P (2012) https://doi.org/10.1117/12.945940
Nearshore depths for Waimanalo Beach, HI, are extracted from optical imagery, taken by the WorldView-2 (WV-2)
satellite on 31 March 2011, by means of automated Wave Kinematics Bathymetry (WKB). Two sets of three sequential
images taken at intervals of about 10 seconds are used for the analyses herein. Water depths are calculated using a
computer program that registers the images, estimates the currents, and then uses the linear dispersion relationship for
surface gravity waves to estimate depth. Depths are generated from close to shore out to about 20 meters depth.
Comparisons with SHOALS Light Detection and Ranging (LiDAR) bathymetry values show WKB depths are accurate
to about half a meter, with R² values of 90%, and are frequently in the range of 10 to 20 percent relative error for depths
ranging from 2 to 16 meters.
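The depth inversion at the core of the WKB approach can be illustrated by solving the linear dispersion relation for surface gravity waves, ω² = gk·tanh(kh), for the depth h; this round-trip sketch is illustrative only (the actual program also registers the images and estimates currents):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert w^2 = g*k*tanh(k*h) for water depth h.

    omega: angular wave frequency (rad/s), observable from the ~10 s
    image intervals; k: wavenumber (rad/m) from the imaged wave field.
    Only valid when w^2 < g*k; otherwise the wave is effectively in
    deep water and carries no depth information.
    """
    ratio = omega**2 / (G * k)
    if ratio >= 1.0:
        return np.inf  # deep-water limit
    return np.arctanh(ratio) / k

# round trip: a wave with k = 0.05 rad/m over 10 m of water
h_true, k = 10.0, 0.05
omega = np.sqrt(G * k * np.tanh(k * h_true))
h_est = depth_from_dispersion(omega, k)
```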
Paul G. Lucey, Mark Wood, Sarah T. Crites, Jason Akagi
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900Q (2012) https://doi.org/10.1117/12.918970
LWIR hyperspectral imaging has a wide range of civil and military applications with its ability to sense chemical
compositions at standoff ranges. Most recent implementations of this technology use spectrographs employing varying
degrees of cryogenic cooling to reduce sensor self-emission that can severely limit sensitivity. We have taken an
interferometric approach that promises to reduce the need for cooling while preserving high resolution. Reduced cooling
has multiple benefits including faster system readiness from a power off state, lower mass, and potentially lower cost
owing to lower system complexity. We coupled an uncooled Sagnac interferometer with a 256x320 mercury cadmium
telluride array with an 11 micron cutoff to produce a spatial interferometric LWIR hyperspectral imaging system
operating from 7.5 to 11 microns. The sensor was tested in ground-ground applications, and from a small aircraft
producing spectral imagery including detection of gas emission from high vapor pressure liquids.
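The spectral recovery underlying such a spatial interferometric sensor amounts to Fourier-transforming each pixel's interferogram; a one-pixel sketch, with sampling parameters chosen purely for illustration:

```python
import numpy as np

def spectrum_from_interferogram(igram, dx):
    """Recover a spectrum from a symmetric interferogram by FFT.

    igram: intensity vs optical path difference, sampled every dx (cm).
    Returns (wavenumbers in cm^-1, spectral magnitude). This is the
    basic principle of Fourier-transform imaging spectrometry: the
    interferogram is the cosine transform of the source spectrum.
    """
    spec = np.abs(np.fft.rfft(igram - igram.mean()))  # remove DC bias
    wn = np.fft.rfftfreq(igram.size, d=dx)            # cm^-1 axis
    return wn, spec

# monochromatic 10-micron source -> a line at 1000 cm^-1
dx = 1e-4                          # 1-micron OPD steps, in cm
x = np.arange(4096) * dx
igram = 1.0 + np.cos(2 * np.pi * 1000.0 * x)
wn, spec = spectrum_from_interferogram(igram, dx)
peak_wn = wn[np.argmax(spec)]
```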
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900R (2012) https://doi.org/10.1117/12.918974
A prototype long wave infrared Fourier transform spectral imaging system using a wedged Fabry-Perot interferometer
and a microbolometer array was designed and built. The instrument can be used at both short (cm) and long standoff
ranges (infinity focus). Signal-to-noise ratios are in the several-hundred range for 30 °C targets. The sensor is compact,
fitting in a volume of about 12 × 12 × 4 inches.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900T (2012) https://doi.org/10.1117/12.918643
The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant
extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of
hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS" uses a
single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging
spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in
each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has
300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional
1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral
data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements
every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a
wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea"
mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context
camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data
(e.g., Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time
display and analysis. The system concept has a range of applications including biomedical imaging, missile defense,
infrared counter measure (IRCM) threat characterization, and ground based remote sensing.
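The "linear remapping" that distinguishes the 4×4DIS from computed-tomography schemes can be sketched as pure index lookup: once the fiber reformatter's pixel-to-cube mapping is calibrated, cube assembly is a single fancy-indexing operation. The packing layout below is invented for illustration:

```python
import numpy as np

def remap_to_cube(frame, row_idx, col_idx):
    """Snapshot cube assembly as a pure index remapping.

    row_idx/col_idx have shape (ny, nx, nbands) and give, for each cube
    element, the detector pixel that measured it; the mapping would be
    calibrated once for the fiber reformatter. No deconvolution is
    needed, so the remap can run at frame rate.
    """
    return frame[row_idx, col_idx]

# toy calibration: a 4x4x3 cube packed band-by-band into a 4x12 frame
ny, nx, nb = 4, 4, 3
rows = np.arange(ny)[:, None, None] * np.ones((1, nx, nb), int)
cols = np.arange(nx)[None, :, None] + nx * np.arange(nb)[None, None, :]
frame = np.arange(ny * nx * nb).reshape(ny, nx * nb)
cube = remap_to_cube(frame, rows, cols)
```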
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900U (2012) https://doi.org/10.1117/12.918449
One of the drawbacks of slit-based dispersive imaging spectrometers is the need for an inertial measurement unit and
photogrammetric software to register the line images. In contrast, since they perform instantaneous 2D imaging,
static Fourier transform imaging spectrometers allow image registration from the images themselves. We propose to
merge these two spectral imagers, by cutting the slit in a mirror that reflects the light toward the static Fourier transform
spectral imager. Thus, the two spectrometers can be exactly co-registered. We present the preliminary design of such a
dual-band hyperspectral imager (visible and near infrared), and an evaluation of the expected geometric and radiometric
performances.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900V (2012) https://doi.org/10.1117/12.918908
Land and ocean data product generation from visible-through-shortwave-infrared multispectral and hyperspectral
imagery requires atmospheric correction or compensation, that is, the removal of atmospheric absorption and scattering
effects that contaminate the measured spectra. We have recently developed a prototype software system for automated,
low-latency, high-accuracy atmospheric correction based on a C++-language version of the Spectral Sciences, Inc.
FLAASH™ code. In this system, pre-calculated look-up tables replace on-the-fly MODTRAN® radiative transfer
calculations, while the portable C++ code enables parallel processing on multicore/multiprocessor computer systems.
The initial software has been installed on the Sensor Web at NASA Goddard Space Flight Center, where it is currently
atmospherically correcting new data from the EO-1 Hyperion and ALI sensors. Computation time is around 10 s per
data cube per processor. Further development will be conducted to implement the new atmospheric correction software
on board the upcoming HyspIRI mission's Intelligent Payload Module, where it would generate data products in near-real
time for Direct Broadcast to the ground. The rapid turn-around of data products made possible by this software
would benefit a broad range of applications in areas of emergency response, environmental monitoring and national
defense.
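A minimal sketch of look-up-table-based correction in the spirit described above, assuming (hypothetically) a per-band linear radiance model L = gain·ρ + offset whose coefficients are tabulated against column water vapor; the numbers are invented for illustration and are not FLAASH or MODTRAN coefficients:

```python
import numpy as np

# Hypothetical pre-computed table: one band, three water-vapor nodes.
# A real system would fill these from offline radiative-transfer runs.
wv_grid = np.array([1.0, 2.0, 3.0])          # column water vapor, cm
gain_tab = np.array([[80.0, 70.0, 62.0]])    # (bands, wv nodes)
off_tab = np.array([[12.0, 14.0, 15.5]])     # path radiance offset

def correct(radiance, wv):
    """Interpolate the LUT at the retrieved water vapor, then invert the
    linear model to surface reflectance (adjacency terms omitted)."""
    gain = np.array([np.interp(wv, wv_grid, g) for g in gain_tab])
    off = np.array([np.interp(wv, wv_grid, o) for o in off_tab])
    return (radiance - off) / gain

rho = correct(np.array([47.0]), wv=1.5)      # gain 75, offset 13
```

Because the table lookup replaces an on-the-fly radiative transfer run, each pixel costs only a few interpolations, which is what makes the low-latency, per-cube timing quoted above plausible.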
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900W (2012) https://doi.org/10.1117/12.918962
In the present work, the WorldView-2 (WV2) capability for retrieving Case 2 water components is analyzed.
The WV2 sensor characteristics, such as 11-bit quantization, 8 bands in the VNIR (visible and near infrared)
region and high signal-to-noise ratios (SNR), make WV2 potentially suitable for a retrieval process. In the
Case 2 water problem, the sensor-reaching signal due to water is very small when compared to the signal due
to the atmospheric effects. Therefore, adequate atmospheric compensation becomes an important first step to
accurately retrieve water parameters. The problem becomes more difficult when using multispectral imagery as
there are typically only a handful of bands suitable for performing atmospheric compensation. In this work, we
test atmospheric compensation techniques for the WV2 satellite, enabling it to be used for water constituent
retrieval in both deep and shallow water. A look-up-table (LUT) methodology is implemented to retrieve the
water parameters chlorophyll, suspended materials, colored dissolved organic matter, bathymetry, bottom type
and water clarity for a simulated case study. The in-water radiative transfer code HydroLight is used to simulate
reflectance data in this study, while the MODTRAN code is used to simulate atmospheric effects. The resulting
modeled sensor-reaching radiance data can be sampled to a WV2 sensor model to simulate WV2 image data.
These data are used to test the proposed methodology. Finally, a sensitivity analysis is performed to evaluate how
sensitive the constituent retrieval process is to adequate atmospheric compensation.
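The LUT retrieval step can be sketched as a nearest-neighbor search over simulated spectra; the two-entry table and the parameter names below are invented for illustration:

```python
import numpy as np

def lut_retrieve(observed, lut_spectra, lut_params):
    """Nearest-neighbor look-up-table retrieval.

    Each LUT entry pairs a simulated water-leaving spectrum (as a
    HydroLight-style forward model would produce) with the parameters
    that generated it; the retrieval returns the parameters of the
    entry closest, in a least-squares sense, to the observed,
    atmospherically compensated spectrum.
    """
    err = np.sum((lut_spectra - observed) ** 2, axis=1)
    return lut_params[np.argmin(err)]

lut_spectra = np.array([[0.02, 0.05, 0.01],
                        [0.03, 0.08, 0.02]])
lut_params = np.array([{"chl": 1.0, "depth": 5.0},
                       {"chl": 4.0, "depth": 2.0}], dtype=object)
best = lut_retrieve(np.array([0.031, 0.079, 0.021]), lut_spectra, lut_params)
```

This structure also makes the sensitivity analysis natural: perturbing the atmospheric compensation perturbs `observed`, and one can watch which LUT entry wins.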
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83900Y (2012) https://doi.org/10.1117/12.918350
In the atmospheric correction of CASI hyperspectral imagery, we found that the dominant factor is the downward
scattering of the direct solar beam into precisely the direction that is specularly reflected at the water surface toward the
sensor. The downward scattering angle was calculated using navigation data, viewing geometry, and the solar ephemeris.
One benefit of this approach is that it avoids the limitations posed by the dark-pixel method: since the
scattering angle is computed from geometry alone, it is entirely free of the difficulties encountered by the dark-pixel
approach. In this paper, we illustrate the computational procedure and show examples from marine remote sensing data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839010 (2012) https://doi.org/10.1117/12.918012
Here, we present a new prototype algorithm for the simultaneous retrieval of the atmospheric profiles (temperature,
humidity, ozone and aerosol) and the surface reflectance from hyperspectral radiance measurements
obtained from airborne or spaceborne hyperspectral imagers such as the Airborne Visible/Infrared Imaging Spectrometer
(AVIRIS) or Hyperion on board Earth Observing-1 (EO-1). The new scheme proposed here consists of a fast radiative
transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme.
The inclusion of an 'exact' scattering code based on spherical harmonics, allows for an accurate treatment of
Rayleigh scattering and scattering by aerosols, water droplets and ice-crystals, thus making it possible to also
retrieve cloud and aerosol optical properties, although here we will concentrate on non-cloudy scenes. We successfully
tested this new approach using two hyperspectral images taken by AVIRIS, a whiskbroom imaging
spectrometer operated by the NASA Jet Propulsion Laboratory.
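The EOF idea behind such a fast radiative-transfer code can be sketched with an SVD: the forward model then only needs to predict a few expansion coefficients instead of every channel. This is a generic illustration, not the authors' scheme:

```python
import numpy as np

def eof_basis(training_spectra, n_modes):
    """Leading empirical orthogonal functions (EOFs) of a training set
    of radiance spectra, computed via SVD of the mean-removed matrix."""
    mean = training_spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(training_spectra - mean, full_matrices=False)
    return mean, vt[:n_modes]

def project(spectrum, mean, eofs):
    return (spectrum - mean) @ eofs.T        # compressed coefficients

def reconstruct(coeffs, mean, eofs):
    return mean + coeffs @ eofs              # back to channel space

rng = np.random.default_rng(2)
basis2 = rng.normal(size=(2, 200))           # spectra truly live in 2 modes
train = rng.normal(size=(50, 2)) @ basis2
mean, eofs = eof_basis(train, n_modes=2)

test_spec = np.array([0.3, -1.1]) @ basis2   # new spectrum in the same span
coeffs = project(test_spec, mean, eofs)
err = np.abs(reconstruct(coeffs, mean, eofs) - test_spec).max()
```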
Spectral Signatures, Spectral Libraries, and Applications I
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839011 (2012) https://doi.org/10.1117/12.919484
Field spectral signatures are commonly collected in conjunction with remote sensing campaigns. Unfortunately, the
lack of sufficient metadata associated with campaign-specific spectral signatures often makes it difficult for others to
utilize them for their own applications. The first step in improving the utility of field collected spectral signatures is
achieved by establishing fully documented procedures that minimize controllable error sources. A major source of error
when collecting field spectral signatures is the variability of solar illumination. By periodically monitoring a static
reference panel it is possible to both characterize the variance in solar illumination during collection as well as to correct
collected spectra. In addition, recent advances in instrument sensitivity greatly reduce the time required to collect high
quality spectra that in turn reduces the magnitude of potential errors associated with changes in solar illumination. Since
libraries of field spectral signatures are commonly used to analyze remotely sensed imagery, it is important that field
collection is performed at a relevant spatial scale and with illumination and view geometry that is similar to that for the
image collection. This is particularly true of vegetation since the observed spectral signature is the result of the complex
interaction of multiple illumination sources (i.e. direct sunlight, sky illumination and light scattered off other elements in
the scene), canopy architecture and the reflectance properties of the individual elements within the canopy. Suggested
field collection approaches that minimize these sources of error are presented.
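The reference-panel correction for drifting solar illumination can be sketched as a time interpolation of the panel readings; the timing values and panel reflectance below are invented for illustration:

```python
import numpy as np

def panel_corrected_reflectance(t_target, dn_target, t_panel, dn_panel,
                                panel_rho=0.99):
    """Field reflectance of a target using a periodically measured
    reference panel.

    Panel readings (dn_panel at times t_panel) are linearly interpolated
    to the target acquisition time t_target, so slow illumination drifts
    divide out; panel_rho is the panel's known reflectance.
    """
    dn_panel_at_t = np.interp(t_target, t_panel, dn_panel)
    return panel_rho * dn_target / dn_panel_at_t

# illumination dropped 10% between panel readings; target measured midway
rho = panel_corrected_reflectance(
    t_target=30.0, dn_target=475.0,
    t_panel=np.array([0.0, 60.0]), dn_panel=np.array([1000.0, 900.0]))
```

Monitoring the panel also yields the illumination variance itself (the spread of `dn_panel`), which is exactly the metadata the abstract argues should travel with each field spectrum.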
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839012 (2012) https://doi.org/10.1117/12.918676
Fractional abundance maps have been produced from Hyperion hyperspectral data over Oaxaca, Mexico, by applying a
new spatially adaptive spectral unmixing algorithm. The goal of this research is to produce land-use maps for aiding
archaeologists studying the Zapotec civilization. However, to correlate the fractional abundance maps generated from the
HSI image processing, a relationship between the known materials located in Oaxaca, Mexico, and the spectral profiles of
these materials must be established. A field campaign during December 2011, (the dry season in Oaxaca) took place for
the explicit task of obtaining spectral profiles of the most common materials found in the region. Ground-truth information
was collected for three Oaxaca valleys (Tlacolula, Yanhuitlan, and Ycuitla). Common materials and associated regions
were recorded and material samples were collected at many of these locations. Laboratory reflectance spectral profiles
of the collected material samples were measured after the field campaign using a FieldSpec Pro. The wavelength range of the
FieldSpec Pro spans 350-2500 nm, matching that of the hyperspectral imagery collected from the Hyperion sensor on
board the EO-1 satellite. GIS maps of the three valleys in Oaxaca, Mexico, are used to identify where these samples were
collected and correspond to the laboratory measured material samples. The spectral library entries obtained correspond to
bare soils, senescent agricultural vegetation, senescent natural vegetation, and terra cotta tile.
Andrew K. Thorpe, Dar A. Roberts, Philip E. Dennison, Eliza S. Bradley, Christopher C. Funk
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839013 (2012) https://doi.org/10.1117/12.918958
The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected solar radiation in the shortwave
infrared and has been used to map methane (CH4) using both a radiative transfer technique [1] and a band ratio method
[2]. However, these methods are best suited to water bodies with high sunglint and are not well suited for terrestrial
scenes. In this study, a cluster-tuned matched filter algorithm originally developed by Funk et al. [3] for synthetic
thermal infrared data was used for gas plume detection over more heterogeneous backgrounds.
This approach permits mapping of CH4, CO2 (carbon dioxide), and N2O (nitrous oxide) trace gas emissions in multiple
AVIRIS scenes for terrestrial and marine targets. At the Coal Oil Point marine seeps offshore of Santa Barbara, CA,
strong CH4 anomalies were detected that closely resemble results obtained using the band ratio index. CO2 anomalies
were mapped for a fossil-fuel power plant, while multiple N2O and CH4 anomalies were present at the Hyperion
wastewater treatment facility in Los Angeles, CA. Nearby, smaller CH4 anomalies were also detected immediately
downwind of hydrocarbon storage tanks and centered on a flaring stack at the Inglewood Gas Plant.
Improving these detection methods might permit gas detection over large search areas, e.g. identifying fugitive CH4
emissions from damaged natural gas pipelines or hydraulic fracturing. Further, this technique could be applied to other
trace gases with distinct absorption features and to data from planned instruments such as AVIRIS-NG, the NEON
Airborne Observation Platform (AOP), and the visible-shortwave infrared (VSWIR) sensor on the proposed HyspIRI
satellite.
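The matched-filter core of this detection approach can be sketched as follows; the cluster tuning of Funk et al. (one filter per background cluster) is omitted for brevity, and the implanted "plume" pixel is synthetic:

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Classic spectral matched filter: whiten by the background
    covariance, then project onto the target signature.

    Returns one score per pixel, normalized so background scores have
    unit variance; gas plumes with the target's spectral shape stand
    out as large positive scores.
    """
    x = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])  # regularized
    w = np.linalg.solve(cov, target - mu)
    w /= np.sqrt((target - mu) @ w)
    return (x - mu) @ w

rng = np.random.default_rng(0)
cube = rng.normal(1.0, 0.1, size=(20, 20, 5))       # synthetic background
target = np.full(5, 2.0)                            # assumed gas signature
cube[3, 3] = 0.7 * cube[3, 3] + 0.3 * target        # implant a weak plume
scores = matched_filter_scores(cube, target).reshape(20, 20)
```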
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839014 (2012) https://doi.org/10.1117/12.919121
Identifying materials by measuring and analyzing their reflectance spectra has been an important procedure in analytical
chemistry for decades. Airborne and space-based imaging spectrometers allow materials to be mapped across the
landscape. With many existing airborne sensors and new satellite-borne sensors planned for the future, robust methods
are needed to fully exploit the information content of hyperspectral remote sensing data. A method of identifying and
mapping materials using spectral feature analyses of reflectance data in an expert-system framework called MICA
(Material Identification and Characterization Algorithm) is described. MICA is a module of the PRISM (Processing
Routines in IDL for Spectroscopic Measurements) software, available to the public from the U.S. Geological Survey
(USGS) at http://pubs.usgs.gov/of/2011/1155/. The core concepts of MICA include continuum removal and linear
regression to compare key diagnostic absorption features in reference laboratory/field spectra and the spectra being
analyzed. The reference spectra, diagnostic features, and threshold constraints are defined within a user-developed
MICA command file (MCF). Building on several decades of experience in mineral mapping, a broadly-applicable MCF
was developed to detect a set of minerals frequently occurring on the Earth's surface and applied to map minerals in the
country-wide coverage of the 2007 Afghanistan HyMap data set. MICA has also been applied to detect sub-pixel oil
contamination in marshes impacted by the Deepwater Horizon incident by discriminating the C-H absorption features in
oil residues from background vegetation. These two recent examples demonstrate the utility of a spectroscopic approach
to remote sensing for identifying and mapping the distributions of materials in imaging spectrometer data.
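The continuum-removal concept at the heart of this kind of feature analysis can be sketched in a few lines; this is the generic technique, not the MICA/PRISM code, and the clay-like feature near 2200 nm is a made-up example:

```python
import numpy as np

def continuum_removed(wavelengths, reflectance, left, right):
    """Remove a linear continuum across one absorption feature.

    A straight line is fit between the feature shoulders at `left` and
    `right` (nm) and the spectrum is divided by it. Band depth is then
    1 minus the continuum-removed minimum, the kind of diagnostic
    quantity compared against reference spectra.
    """
    rl = np.interp(left, wavelengths, reflectance)
    rr = np.interp(right, wavelengths, reflectance)
    sel = (wavelengths >= left) & (wavelengths <= right)
    cont = rl + (rr - rl) * (wavelengths[sel] - left) / (right - left)
    return wavelengths[sel], reflectance[sel] / cont

wl = np.array([2150.0, 2200.0, 2250.0, 2300.0])
refl = np.array([0.50, 0.35, 0.40, 0.52])   # absorption centered near 2200 nm
w, cr = continuum_removed(wl, refl, 2150.0, 2300.0)
band_depth = 1.0 - cr.min()
```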
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839015 (2012) https://doi.org/10.1117/12.915176
We present a method for partitioning a multispectral image into fixed size look-up tables (LUTs) that are dynamically
updated for presence or absence of simple distribution characterizing features of the sub-frames they represent. If the
features have been previously observed, the sub-frame is recognized and no update occurs; if not, the table is updated and
a suitable anomaly is reported. Our method enables dynamic change detection to occur at multiple wavelengths
independently by creating suitable LUTs for each wavelength band. Details of our approach are presented.
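The recognize-or-update logic can be sketched per band with a dictionary acting as the LUT; the histogram signature used here is an invented stand-in for the paper's distribution-characterizing features:

```python
import numpy as np

def frame_features(tile, bins=8, lo=0.0, hi=1.0):
    """Coarse distribution signature of a sub-frame: a quantized
    histogram turned into a hashable key."""
    h, _ = np.histogram(tile, bins=bins, range=(lo, hi))
    return tuple((h * bins // max(tile.size, 1)).tolist())

def update_lut(lut, tile, tile_id):
    """Fixed-size per-band LUT update: report an anomaly and store the
    signature only when the sub-frame's features are unseen."""
    key = frame_features(tile)
    if key in lut:
        return False              # recognized: no update, no report
    lut[key] = tile_id
    return True                   # anomaly: table updated

lut = {}
flat = np.full((16, 16), 0.2)
first = update_lut(lut, flat, 0)      # new signature -> anomaly reported
second = update_lut(lut, flat, 1)     # same distribution -> recognized
```

Running one such `lut` per wavelength band gives the independent multi-band change detection the abstract describes.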
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839016 (2012) https://doi.org/10.1117/12.917053
Various hyperspectral change detection methods exist in the literature. Here prediction-based methods, such as
chronochrome and covariance equalization, are reviewed and compared with a more recently developed model-based
approach. These methods are typically applied for anomalous change detection. Several methods for
extending these algorithms to achieve matched change detection are discussed. The algorithms are then applied
to airborne visible to near infrared hyperspectral data collected recently over Rochester, New York.
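The chronochrome predictor reviewed above can be sketched in a few lines; the synthetic gain/offset "change" and the single implanted anomaly are illustrative:

```python
import numpy as np

def chronochrome_residual(x, y):
    """Chronochrome anomalous change detection.

    x, y: (pixels, bands) co-registered spectra from two times. A linear
    predictor L = C_yx C_xx^{-1} maps time-1 spectra to time-2; large
    prediction residuals flag pixels whose change is not explained by
    the global (illumination/atmosphere) transformation.
    """
    xm, ym = x - x.mean(0), y - y.mean(0)
    cxx = xm.T @ xm / len(x)
    cyx = ym.T @ xm / len(x)
    L = cyx @ np.linalg.inv(cxx + 1e-9 * np.eye(x.shape[1]))
    resid = ym - xm @ L.T
    return np.sum(resid**2, axis=1)          # per-pixel anomaly score

rng = np.random.default_rng(1)
x = rng.normal(size=(500, 4))
y = x @ np.diag([1.2, 0.9, 1.1, 1.0]) + 0.05   # global gain/offset change
y[7] += 3.0                                    # one genuinely changed pixel
r = chronochrome_residual(x, y)
```

Covariance equalization differs only in how the transform is built (matching the two covariances symmetrically rather than regressing one epoch on the other).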
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839017 (2012) https://doi.org/10.1117/12.920499
The increasing volume of data produced by hyperspectral image sensors has forced researchers and developers
to seek new and more efficient ways of analyzing the data as quickly as possible. Medical, scientific, and
military applications impose performance requirements on tools that operate on hyperspectral sensor
data. By providing a hyperspectral image analysis library, we aim to accelerate hyperspectral image application
development. The development of Libdect, a cross-platform library with GPU support for hyperspectral image
analysis, is presented.
Coupling library development with efficient hyperspectral algorithms escalates into a significant time investment
in many projects or prototypes. Given a solution to these issues, developers can implement hyperspectral
image analysis applications in less time, without being focused on implementing target detection code
and potential issues related to platform or GPU architecture differences.
Libdect's development team drew on previously implemented detection algorithms. By utilizing proven
tools, such as CMake and CTest, to develop Libdect's infrastructure, we were able to develop and test a prototype
library that provides target detection code with GPU support on Linux platforms. As a whole, Libdect is an
early prototype of an open and documented example of software engineering practices and tools,
put together in an effort to increase developer productivity and to encourage new developers into the field of
hyperspectral image application development.
Matthew S. Brown, Eli Glaser, Scott Grassinger, Ambrose Slone, Mark Salvador
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839018 (2012) https://doi.org/10.1117/12.918667
Automated hyperspectral image processing enables rapid detection and identification of important military targets from
hyperspectral surveillance and reconnaissance images. The majority of this processing is done using ground-based
CPUs on hyperspectral data after it has been manually exfiltrated from the mobile sensor platform. However, by
utilizing high-performance, on-board processing hardware, the data can be immediately processed, and the exploitation
results can be distributed over a low-bandwidth downlink, allowing rapid responses to situations as they unfold.
Additionally, transitioning to higher-performance and more-compact processing architectures such as GPUs, DSPs, and
FPGAs will allow the size, weight, and power (SWaP) demands of the system to be reduced. This will allow the next
generation of hyperspectral imaging and processing systems to be deployed on a much wider range of smaller manned
and unmanned vehicles.
In this paper, we present results on the development of an automated, near-real-time hyperspectral processing system
using a commercially available NVIDIA® Tesla™ GPU. The processing chain utilizes GPU-optimized implementations
of well-known atmospheric-correction, anomaly-detection, and target-detection algorithms in order to identify
target-material spectra from a hyperspectral image. We demonstrate that the system can return target-detection results for
HYDICE data with 308×1280 pixels and 145 bands against 30 target spectra in less than four seconds.
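The paper does not specify its algorithms beyond "well-known"; one canonical anomaly detector used in such chains is the global RX detector, which scores each pixel by its Mahalanobis distance from the scene background. A minimal NumPy sketch, with an entirely synthetic cube and an implanted anomaly (all sizes and noise levels are illustrative, not the HYDICE configuration):

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean, under the background covariance."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(bands))  # small regularizer
    D = X - mu
    scores = np.einsum("ij,jk,ik->i", D, cov_inv, D)     # d^T C^-1 d per pixel
    return scores.reshape(rows, cols)

# Synthetic scene: flat background noise plus one bright anomalous pixel.
rng = np.random.default_rng(0)
cube = rng.normal(0.2, 0.01, size=(16, 16, 8))
cube[5, 7] += 0.3  # implanted anomaly
scores = rx_anomaly_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))
```

On this toy cube the highest RX score lands on the implanted pixel, which is the behavior a detection chain thresholds on.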
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839019 (2012) https://doi.org/10.1117/12.917648
We propose a new architecture to accomplish lossy ultraspectral data compression where we particularly focus on AIRS
(Atmospheric Infrared Sounder) images. In general, AIRS images are good candidates for compression as they include more
than two thousand spectral bands that account for over 40MB of data per single data cube. In our proposed compression
technique the input image is first preprocessed by means of spatial subband decomposition followed by a spectral band
ordering stage which is applied in order to increase the correlation between contiguous spectral bands. The resulting image
is segmented on a spectral band basis in such a way that spectral bands are scanned to generate a speech-like signal that
exhibits a higher spectral interband than intraband correlation and therefore can be modeled as an AR (autoregressive)
process. As a final step, the data are processed through a compression stage involving short and long term forward linear
prediction that produces an error signal that is encoded using a CELP (Code Excited Linear Prediction) scheme. The
forward linear prediction filter order and the resolution of the CELP codebooks are adjusted depending on the spatial
subband that originates the signal being predicted. By manipulating several parameters of both the preprocessing and
compression stages different rate-distortion curves are obtained and highly efficient compression is achieved.
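The short-term forward linear prediction at the heart of this stage can be sketched as follows; the AR(2) process, its coefficients, and the noise level are illustrative stand-ins for the band-scan signal, not the paper's actual filter orders or data:

```python
import numpy as np

def fit_ar(signal, order):
    """Least-squares fit of forward linear prediction coefficients:
    s[n] is predicted from the previous `order` samples."""
    rows = np.array([signal[n - order:n][::-1] for n in range(order, len(signal))])
    coeffs, *_ = np.linalg.lstsq(rows, signal[order:], rcond=None)
    return coeffs

rng = np.random.default_rng(5)
n, a_true = 2000, np.array([1.5, -0.7])   # a stable AR(2) process
s = np.zeros(n)
for i in range(2, n):
    s[i] = a_true @ s[i - 2:i][::-1] + rng.normal(scale=0.1)

a_hat = fit_ar(s, order=2)
pred = np.array([a_hat @ s[i - 2:i][::-1] for i in range(2, n)])
resid = s[2:] - pred   # the low-energy error signal a CELP stage would encode
print(resid.var() < 0.2 * s.var())
```

The point of the prediction filter is exactly this energy reduction: the residual carries far less variance than the original scan, so it can be coded at a lower rate.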
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901A (2012) https://doi.org/10.1117/12.921481
Compressed sensing (CS) has attracted a lot of attention in recent years as a promising signal processing technique
that exploits a signal's sparsity to reduce its size. It allows for simple compression that does not require a lot of
additional computational power, and would allow physical implementation at the sensor using spatial light
multiplexers such as the Texas Instruments (TI) digital micro-mirror device (DMD). The DMD can be used as a random
measurement matrix: reflecting the image off the DMD is equivalent to an inner product between the image's
individual pixels and the measurement matrix. CS, however, is asymmetrical, meaning that the signal's recovery or
reconstruction from the measurements does require a higher level of computation. This makes the prospect of
working with the compressed version of the signal in implementations such as detection or classification much more
efficient. If an initial analysis shows nothing of interest, the signal need not be reconstructed. Many hyper-spectral
image applications are precisely focused on these areas, and would greatly benefit from a compression technique
like CS that could help minimize the light sensor down to a single pixel, lowering costs associated with the cameras
while reducing the large amounts of data generated by all the bands. The present paper will show an implementation
of CS using a single pixel hyper-spectral sensor, and compare the reconstructed images to those obtained through
the use of a regular sensor.
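The measurement/recovery asymmetry described above can be sketched as follows. Each ±1 row of the measurement matrix plays the role of one DMD mirror pattern, and each measurement is the single inner product the one-pixel detector records for that pattern. Orthogonal Matching Pursuit stands in for whichever reconstruction solver the authors used, and all sizes are illustrative:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the residual, then re-solve on the support.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 64, 40, 3               # scene size, number of DMD patterns, sparsity
x = np.zeros(n)                   # sparse scene (e.g., a few bright pixels)
x[rng.choice(n, size=k, replace=False)] = [1.5, -2.0, 1.0]
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # random mirror patterns
y = Phi @ x                       # m single-pixel measurements, m < n
x_hat = omp(Phi, y, k)
print(np.allclose(x, x_hat, atol=1e-6))
```

Note the asymmetry: acquiring `y` is `m` inner products, while recovery runs an iterative solver; if a quick analysis of `y` shows nothing of interest, the expensive reconstruction step can be skipped entirely.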
Saurabh Vyas, Hien Van Nguyen, Philippe Burlina, Amit Banerjee, Luis Garza, Rama Chellappa
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901B (2012) https://doi.org/10.1117/12.919800
A computational skin reflectance model is used here to provide the reflectance, absorption, scattering, and
transmittance based on the constitutive biological components that make up the layers of the skin. The changes
in reflectance are mapped back to deviations in model parameters, which include melanosome level, collagen
level, and blood oxygenation. The computational model implemented in this work is based on the Kubelka-Munk
multi-layer reflectance model and the Fresnel equations that describe a generic N-layer model structure.
This treats the skin as a multi-layered material, with each layer having specific absorption and scattering
coefficients, reflectance spectra, and transmittance based on the model parameters. These model parameters
include melanosome level, collagen level, blood oxygenation, blood level, dermal depth, and subcutaneous tissue
reflectance. We use this model, coupled with support vector machine based regression (SVR), to predict the
biological parameters that make up the layers of the skin. In the proposed approach, the physics-based forward
mapping is used to generate a large set of training exemplars. The samples in this dataset are then used as
training inputs for the SVR algorithm to learn the inverse mapping. This approach was tested on VIS-range
hyperspectral data. Performance validation of the proposed approach was performed by measuring the prediction
error on the skin constitutive parameters and exhibited very promising results.
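The simulate-then-invert idea can be sketched as below. The one-parameter "forward model" is a toy stand-in for the Kubelka-Munk model, and kernel ridge regression stands in for SVR; everything here is illustrative rather than the authors' implementation:

```python
import numpy as np

def forward_model(p, wavelengths):
    """Hypothetical stand-in for the physics-based forward mapping: one skin
    parameter (say, melanosome level) -> a 20-band VIS reflectance spectrum."""
    return np.exp(-np.atleast_1d(p)[:, None] * wavelengths[None, :] / 700.0)

def rbf_kernel(A, B, gamma=50.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

wavelengths = np.linspace(450.0, 700.0, 20)

# 1. Generate training exemplars by running the forward model over a parameter grid.
p_train = np.linspace(0.1, 1.0, 60)
X_train = forward_model(p_train, wavelengths)

# 2. Learn the inverse mapping (spectrum -> parameter). Kernel ridge regression
#    is used here as a simple stand-in for SVR; the train-on-simulations idea is the same.
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 1e-4 * np.eye(len(K)), p_train)

# 3. Predict the biological parameter from a new measured spectrum.
x_new = forward_model(0.42, wavelengths)
p_hat = (rbf_kernel(x_new, X_train) @ alpha)[0]
print(abs(p_hat - 0.42) < 0.05)
```

The design choice mirrors the abstract: the physics model is cheap to run forward, so it supplies unlimited labeled exemplars, and the regressor only ever has to learn the inverse map.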
Spectral Signatures, Spectral Libraries, and Applications II
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901C (2012) https://doi.org/10.1117/12.918297
In the hyperspectral imaging community, there is a need for relevant and valid data for testing scientific principles.
Several sets of data exist which support satellite and airborne HSI research applications. However, there is a
need for data sets to support research looking at detecting and classifying smaller targets such as pedestrians.
While many studies capture data on pedestrians, the data sets are often only related to the specific study being
conducted. As a result, these types of data sets do not contain the necessary documentation or ground truth
needed to apply the data in other contexts. This paper reports on a fully ground truthed HSI data set which
was captured in June 2011 over an urban scene with pedestrians present. The imagery was collected using a
modified airborne hyperspectral imager suited for ground level imaging in the 450 - 2500 nm wavelength region.
The data captured are described along with the ground truth information and documentation which are part of
the data package. Preliminary results from an initial study on material spectral separability using the data are
included to demonstrate the utility of this particular data set.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901D (2012) https://doi.org/10.1117/12.918464
In June 2011, a multi-sensor airborne remote sensing campaign was flown at the Virginia Coast Reserve Long Term
Ecological Research site with coordinated ground and water calibration and validation (cal/val) measurements.
Remote sensing imagery acquired during the ten day exercise included hyperspectral imagery (CASI-1500),
topographic LiDAR, and thermal infra-red imagery, all simultaneously from the same aircraft. Airborne synthetic
aperture radar (SAR) data acquisition for a smaller subset of sites occurred in September 2011 (VCR'11). Focus
areas for VCR'11 were properties of beaches and tidal flats and barrier island vegetation and, in the water column,
shallow water bathymetry. On land, cal/val emphasized tidal flat and beach grain size distributions, density,
moisture content, and other geotechnical properties such as shear and bearing strength (dynamic deflection
modulus), which were related to hyperspectral BRDF measurements taken with the new NRL Goniometer for
Outdoor Portable Hyperspectral Earth Reflectance (GOPHER). This builds on our earlier work at this site in 2007
related to beach properties and shallow water bathymetry. A priority for VCR'11 was to collect and model
relationships between hyperspectral imagery, acquired from the aircraft at a variety of different phase angles, and
geotechnical properties of beaches and tidal flats. One aspect of this effort was a demonstration that sand density
differences are observable and consistent in reflectance spectra from GOPHER data, in CASI hyperspectral imagery,
as well as in hyperspectral goniometer measurements conducted in our laboratory after VCR'11.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901F (2012) https://doi.org/10.1117/12.918233
This research demonstrates the application of spectral-feature-based analysis to identifying and mapping Earth-surface
materials using spectral libraries and imaging spectrometer data. Feature extraction utilizing a continuum-removal and
local minimum detection approach was tested for analysis of both reflectance and emissivity spectral libraries by
extracting and characterizing spectral features of rocks, soils, minerals, and man-made materials. Library-derived
information was then used to illustrate both reflectance- and emissivity-feature-based spectral mapping using imaging
spectrometer data (AVIRIS and SEBASS). An additional spectral library of emission spectra from selected nocturnal
lighting types was used to develop a database of key spectral features that allowed mapping and characterization of night
lights from ProSpecTIR-VS imaging spectrometer data. Results from these case histories demonstrate that the
spectral-feature-based approach can be used with either reflectance or emission spectra and applied to a wide variety of imaging
spectrometer data types for extraction of key surface composition information.
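The continuum-removal step named above can be sketched as follows: divide the spectrum by its upper convex hull (the continuum), then read off local minima as absorption features. The synthetic spectrum is illustrative:

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull, so that
    absorption features appear as dips below 1."""
    hull = [0]
    for i in range(1, len(wl)):
        # Pop hull points that fall below the chord from hull[-2] to point i.
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            cross = (wl[k] - wl[j]) * (refl[i] - refl[j]) \
                  - (refl[k] - refl[j]) * (wl[i] - wl[j])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(wl, wl[hull], refl[hull])
    return refl / continuum

# Synthetic spectrum: a sloped continuum with one Gaussian absorption feature.
wl = np.linspace(1000.0, 2500.0, 151)
refl = 0.6 + 1e-4 * (wl - 1000.0) - 0.2 * np.exp(-(((wl - 2200.0) / 50.0) ** 2))
cr = continuum_removed(wl, refl)
print(wl[np.argmin(cr)])  # local minimum marks the feature center
```

The local-minimum position and depth in `cr` are exactly the kind of per-feature parameters a spectral library entry would record for later matching.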
Commercial Spectral Remote Sensing: WorldView-2 and Its Applications II
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901G (2012) https://doi.org/10.1117/12.919493
For the flooding seasons of 2011-2012 multiple space assets were used in a "sensorweb" to track major flooding in
Thailand. WorldView-2 multispectral data were used in this effort, providing extremely high spatial resolution (2 m/pixel)
multispectral (8 bands spanning 0.45-1.05 μm) data from which mostly automated workflows derived surface
water extent and volumetric water information for use by a range of NGOs and national authorities. We first describe
how WorldView-2 and its data were integrated into the overall flood-tracking sensorweb. We next describe the use of
Support Vector Machine learning techniques that were used to derive surface water extent classifiers. Then we describe
the fusion of surface water extent and digital elevation map (DEM) data to derive volumetric water calculations. Finally
we discuss key future work such as speeding up the workflows and automating the data registration process (the only
portion of the workflow requiring human input).
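The fusion of surface-water extent with a DEM reduces to summing per-pixel depths; a toy sketch in which the scene, water level, and pixel size are all invented numbers, not the Thailand data:

```python
import numpy as np

def flood_volume(water_mask, dem, water_level, pixel_area_m2):
    """Volumetric water estimate: for each flooded pixel, depth is the
    water-surface elevation minus the ground elevation from the DEM."""
    depths = np.clip(water_level - dem, 0.0, None) * water_mask
    return depths.sum() * pixel_area_m2

# Toy 3x3 scene: a basin at 1 m elevation inside terrain at 3 m,
# flooded to a 2 m water surface; 2 m WorldView-2 pixels (4 m^2 each).
dem = np.array([[3.0, 3.0, 3.0],
                [3.0, 1.0, 1.0],
                [3.0, 3.0, 3.0]])
mask = (dem < 2.0)   # surface-water extent, as the SVM classifier would provide
vol = flood_volume(mask, dem, water_level=2.0, pixel_area_m2=4.0)
print(vol)  # → 8.0 m^3: two flooded pixels, 1 m deep, 4 m^2 each
```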
David McLaren, David R. Thompson, Ashley G. Davies, Magnus T. Gudmundsson, Steve Chien
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901H (2012) https://doi.org/10.1117/12.919499
We explore the use of machine learning, computer vision, and pattern recognition techniques to automatically
identify volcanic ash plumes and plume shadows in WorldView-2 imagery. Using information on the
relative positions of the sun and spacecraft, together with terrain information in the form of a digital elevation
map, the height of the ash plume can also be inferred. We present the results from applying this
approach to six scenes acquired on two separate days in April and May of 2010 of the Eyjafjallajökull
eruption in Iceland. These results show rough agreement with ash plume height estimates from visual and
radar based measurements.
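The paper's height inference uses the full sun/spacecraft geometry and a DEM; the core shadow-length relation alone can be sketched as follows (numbers purely illustrative, not the Eyjafjallajökull measurements):

```python
import math

def plume_height(shadow_length_m, sun_elevation_deg):
    """Shadow geometry: an object of height h casts a shadow of length
    h / tan(elevation), so h = shadow_length * tan(elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# Illustrative numbers only: a 10 km shadow with the sun 30 degrees above
# the horizon implies a plume top near 5.8 km.
h = plume_height(10_000.0, 30.0)
print(round(h))
```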
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901I (2012) https://doi.org/10.1117/12.918716
Multispectral imaging (MSI) data acquired at different view angles provide an analyst with a unique view into shallow
water. Observations from DigitalGlobe's WorldView-2 (WV-2) sensor, acquired in 39 images in one orbital pass on 30
July 2011, are being analyzed to determine bathymetry along the windward side of the Oahu coastline. Satellite azimuth
and elevation range from 18.8 to 185.8 degrees and from 24.9 degrees (forward-looking) to 24.9 degrees
(backward-looking), respectively, with 90 degrees representing a nadir view. WV-2's eight multispectral bands provide depth information
(especially using the Blue, Green, and Yellow bands), as well as information about bottom type and surface glint (using
the Red and NIR bands). Bathymetric analyses of the optical data will be compared to LiDAR-derived bathymetry in
future work. This research shows the impact of varying view angle on inferred bathymetry and discusses the differences
between angle acquisitions.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901J (2012) https://doi.org/10.1117/12.919342
The capability of the WorldView-2 (WV02) satellite is analyzed for bathymetric applications in shallow coastal
waters. We use an Optimal Estimation method, which provides lower bounds on retrieval errors for an idealized
sensor and idealized model of the environment. Retrieval performance is studied over different substrates and
column water properties. We also study effects of increased signal to noise ratio. This analysis is supported by
numerical inversion of imagery, using a variant of least-squares optimization. Results from four different study areas
collected across a few sites in clear Case 1 and Case 2 waters show that the water depth can be realistically
measured on a pixel-by-pixel basis with 10% standard deviation of the error, down to nearly 20 meters depth in Case
1 waters over bright sandy bottom. The same accuracy over dark sea grass or coral is valid down to 10 meters,
assuming that reliable a priori substrate albedo is available. Water turbidity has an important effect on retrievals -
Case 2 water with small concentrations of suspended solids allows for 10% accuracy down to 10 meters over bright
targets. The retrieval accuracy is likely to improve with tighter a priori constraints, constraints obtained from the
context of the entire image, or independent information from multiple stereo pairs collected in a single WV02 pass.
James C. Tilton, Douglas C. Comer, Carey E. Priebe, Daniel Sussman, Li Chen
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901K (2012) https://doi.org/10.1117/12.918366
To facilitate locating archaeological sites before they are compromised or destroyed, we are developing approaches
for generating maps of probable archaeological sites, through detecting subtle anomalies in vegetative cover, soil
chemistry, and soil moisture by analyzing remotely sensed data from multiple sources. We previously reported
some success in this effort with a statistical analysis of slope, radar, and Ikonos data (including tasseled cap
and NDVI transforms) with Student's t-test. We report here on new developments in our work, performing an
analysis of 8-band multispectral WorldView-2 data. The WorldView-2 analysis begins by computing medians and
median absolute deviations for the pixels in various annuli around each site of interest on the 28 band-difference
ratios. We then use principal component analysis followed by linear discriminant analysis to train a classifier
which assigns a posterior probability that a location is an archaeological site. We tested the procedure using
leave-one-out cross validation, with a second leave-one-out step to choose parameters, on a 9,859x23,000 subset
of the WorldView-2 data over the western portion of Ft. Irwin, CA, USA. We used 100 known non-sites and
trained one classifier for lithic sites (n=33) and one classifier for habitation sites (n=16). We then analyzed
convex combinations of scores from the Archaeological Predictive Model (APM) and our scores. We found that
the combined scores had a higher area under the ROC curve than either individual method, indicating that
including WorldView-2 data in the analysis improved the predictive power of the provided APM.
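The PCA-then-LDA-with-posterior pipeline can be sketched as below on synthetic features; the feature dimensions, class sizes, and separation are invented, not the Ft. Irwin statistics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy feature vectors standing in for the per-annulus median / MAD
# band-difference-ratio statistics (dimensions and values are illustrative).
sites     = rng.normal(+1.0, 1.0, size=(40, 10))    # known sites
non_sites = rng.normal(-1.0, 1.0, size=(100, 10))   # known non-sites
X = np.vstack([sites, non_sites])
y = np.array([1] * 40 + [0] * 100)

# 1. PCA: project onto the top principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# 2. Fisher LDA on the reduced features.
m1, m0 = Z[y == 1].mean(axis=0), Z[y == 0].mean(axis=0)
Sw = np.cov(Z[y == 1], rowvar=False) + np.cov(Z[y == 0], rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)

# 3. Posterior site probability via a sigmoid of the discriminant score.
scores = Z @ w - 0.5 * (m1 + m0) @ w
posterior = 1.0 / (1.0 + np.exp(-scores))
print(posterior[y == 1].mean() > posterior[y == 0].mean())
```

Thresholding `posterior` at different levels is what sweeps out the ROC curve the abstract evaluates, and convex combination with an APM score happens on exactly these per-location posteriors.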
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901L (2012) https://doi.org/10.1117/12.919583
A method of incorporating macroscopic and microscopic reflectance models into hyperspectral pixel unmixing is
presented and discussed. The vast majority of hyperspectral unmixing methods rely on the linear mixture model to
describe pixel spectra resulting from mixtures of endmembers. Methods exist to unmix hyperspectral pixels using
nonlinear models, but they rely on severely limiting assumptions or estimations of the nonlinearity. This paper presents a
hyperspectral pixel unmixing method that utilizes the bidirectional reflectance distribution function to model
microscopic mixtures. Using this model, along with the linear mixture model to incorporate macroscopic mixtures, this
method is able to accurately unmix hyperspectral images composed of both macroscopic and microscopic mixtures. The
mixtures are estimated directly from the hyperspectral data without the need for a priori knowledge of the mixture types.
Results are presented using synthetic datasets, of macroscopic and microscopic mixtures, to demonstrate the increased
accuracy in unmixing using this new physics-based method over linear methods. In addition, results are presented using
a well-known laboratory dataset. Using these results, and other published results from this dataset, increased accuracy in
unmixing over other nonlinear methods is shown.
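For contrast with the paper's physics-based method, the baseline linear (macroscopic) mixture model it extends can be sketched in a few lines; the endmember spectra and abundances are invented:

```python
import numpy as np

# Two illustrative endmember spectra over six bands (columns of E).
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.7, 0.3],
              [0.3, 0.7],
              [0.2, 0.8],
              [0.1, 0.9]])

# A macroscopic (areal) mixture: 30% of material 1, 70% of material 2.
a_true = np.array([0.3, 0.7])
pixel = E @ a_true

# Linear-mixture-model unmixing: least-squares abundance estimate per pixel.
# (A constrained solver would enforce nonnegativity; here the true mixture
# already satisfies it, so plain least squares recovers it.)
a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
a_hat /= a_hat.sum()          # sum-to-one abundance constraint
print(np.round(a_hat, 3))
```

Microscopic (intimate) mixing breaks this linearity, which is exactly why the paper brings in a BRDF-based model for that case.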
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901M (2012) https://doi.org/10.1117/12.918333
Nonnegative matrix factorization and its variants are powerful techniques for the analysis of hyperspectral images
(HSI). Nonnegative matrix underapproximation (NMU) is a recent closely related model that uses additional
underapproximation constraints enabling the extraction of features (e.g., abundance maps in HSI) in a recursive
way while preserving nonnegativity. We propose to further improve NMU by using the spatial information:
we incorporate into the model the fact that neighboring pixels are likely to contain the same materials. This
approach thus incorporates structural and textural information from neighboring pixels. We use an ℓ1-norm
penalty term more suitable to preserving sharp changes, and solve the corresponding optimization problem using
iteratively reweighted least squares. The effectiveness of the approach is illustrated with an analysis of the real-world
Cuprite dataset.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901N (2012) https://doi.org/10.1117/12.920686
In hyperspectral imaging, the radiation represented by a single pixel rarely comes from the interaction with a single
homogeneous material. However, the high spectral resolution of imaging spectrometers enables the detection,
identification, and classification of subpixel objects from their contribution to the measured spectral signal. Unmixing is
a hyperspectral image processing approach where the measured spectral signature is decomposed into a collection of
constituent spectra, or endmembers, and a set of fractions, or abundances, corresponding to the fractional area
occupied by the particular endmember in that pixel. The use of a single spectrum to represent an
endmember class does not take into account the variability of spectral signatures caused by natural factors. Simple
spectral mixture analysis can, by itself, provide suitable accuracies in some relatively homogeneous environments, but
because of the spectral complexity of many landscapes, the use of fixed endmember spectra may result in inaccurate
unmixing analysis for complex regions over large landscapes. This paper addresses the question of how to perform
unsupervised unmixing where local information is used to extract local endmember information, which is then merged at a
global level to extract endmember classes, developing an accurate description of the scene under study using nonnegative
matrix factorization. Preliminary results using AVIRIS data are presented. Results show that this approach
better captures local structures that are not possible with a global unmixing approach. Furthermore, they show that spatial
information allows the identification of more spectral endmembers than is possible with spectral-only methods.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901O (2012) https://doi.org/10.1117/12.920698
Automated unmixing consists of finding the number of endmembers, their spectral signatures and their abundances from
a hyperspectral image. Most unmixing techniques are pixel-by-pixel procedures that do not take advantage of the spatial
information provided by the hyperspectral sensor. This paper explores a new approach for unmixing analysis of
hyperspectral imagery based on a multiscale representation for the joint estimation of the number of endmembers and
their spectral signatures. Experimental results using an AVIRIS image are presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901P (2012) https://doi.org/10.1117/12.919082
Ant Colony Optimization (ACO) is a computational method used for optimization problems. The ACO algorithm
uses virtual ants to create candidate solutions that are represented by paths on a mathematical graph. We develop
an algorithm using ACO that takes a multispectral image as input and outputs a cluster map denoting a cluster
label for each pixel. The algorithm does this through identification of a series of one-dimensional manifolds on
the spectral data cloud via the ACO approach, and then associates pixels to these paths based on their spectral
similarity to the paths. We apply the algorithm to multispectral imagery to divide the pixels into clusters based
on their representation by a low-dimensional manifold estimated by the "best fit ant path" through the data
cloud. We present results from application of the algorithm to a multispectral WorldView-2 image and show that
it produces useful cluster maps.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901Q (2012) https://doi.org/10.1117/12.919743
Spectral graph theory has proven to be a useful tool in the analysis of high-dimensional data sets. Recall that,
mathematically, a graph is a collection of objects (nodes) and connections between them (edges); a weighted graph
additionally assigns numerical values (weights) to the edges. Graphs are represented by their adjacency matrix, whose elements
are the weights between the nodes. Spectral graph theory uses the eigendecomposition of the adjacency matrix (or, more
generally, the Laplacian of the graph) to derive information about the underlying graph. In this paper, we develop a
spectral method based on the 'normalized cuts' algorithm to segment hyperspectral image data (HSI). In particular, we
model an image as a weighted graph whose nodes are the image pixels and whose edges connect spatial
neighbors; the edge weights are given by a weighted combination of the spatial and spectral distances between nodes.
We then use the Laplacian of the graph to recursively segment the image. The advantages of our approach are that, first,
the graph structure naturally incorporates both the spatial and spectral information present in HSI; also, by using only
spatial neighbors, the adjacency matrix is highly sparse; as a result, it is possible to apply our technique to much larger
images than previous techniques. In the paper, we present the details of our algorithm, and include experimental results
from a variety of hyperspectral images.
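A minimal version of the construction described above (spatial-neighbor adjacency, spectral edge weights, normalized Laplacian, sign-based bipartition) on a toy one-band image; all sizes and the weight scale are illustrative, and a dense eigensolver stands in for the sparse machinery a full-size image would need:

```python
import numpy as np

# Tiny one-band "image": left half dark, right half bright.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
rows, cols = img.shape
n = rows * cols
idx = lambda r, c: r * cols + c

# Adjacency over 4-connected spatial neighbors; weights decay with the
# spectral (here scalar intensity) distance between the two pixels.
W = np.zeros((n, n))
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((0, 1), (1, 0)):
            r2, c2 = r + dr, c + dc
            if r2 < rows and c2 < cols:
                w = np.exp(-((img[r, c] - img[r2, c2]) ** 2) / 0.1)
                W[idx(r, c), idx(r2, c2)] = W[idx(r2, c2), idx(r, c)] = w

# Normalized cut via the second-smallest eigenvector of the normalized
# Laplacian L = I - D^{-1/2} W D^{-1/2}; its sign pattern gives the bipartition.
deg = W.sum(axis=1)
Dm = np.diag(deg ** -0.5)
L = np.eye(n) - Dm @ W @ Dm
vals, vecs = np.linalg.eigh(L)
fiedler = Dm @ vecs[:, 1]                      # generalized eigenvector
labels = (fiedler > np.median(fiedler)).reshape(rows, cols)
print(labels)
```

Because only spatial neighbors get edges, `W` has O(n) nonzeros, which is the sparsity argument the abstract makes for scaling to large images.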
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901R (2012) https://doi.org/10.1117/12.918338
In this paper, a Support Vector Machine (SVM) based method to jointly exploit spectral and spatial information
from hyperspectral images to improve classification performance is presented. In order to optimally exploit this
joint information, we propose to use a novel idea of embedding a local distribution of input hyperspectral data
into the Reproducing Kernel Hilbert Spaces (RKHS). A Hilbert Space Embedding called mean map is utilized
to map a group of neighboring pixels of a hyperspectral image into the RKHS and then, calculate the empirical
mean of the mapped points in the RKHS. SVM based classification performed on the mean mapped points can
fully exploit the spectral information as well as ensure spatial continuity among neighboring pixels. The proposed
technique showed significant improvement over the existing composite kernels on two hyperspectral image data
sets.
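The mean map construction can be made concrete: for a positive-definite kernel k, the inner product of the empirical mean embeddings of two pixel neighborhoods reduces to the average of pairwise kernel values, which can be supplied to an SVM as a precomputed kernel. A minimal sketch under an assumed RBF kernel (the paper's kernel choice and parameters are not specified here):

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian RBF kernel between two pixel spectra."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def mean_map_kernel(X, Y, gamma=0.5):
    """Inner product of the empirical kernel mean embeddings of two pixel
    neighborhoods: <mu_X, mu_Y> = (1/|X||Y|) sum_{x in X} sum_{y in Y} k(x, y)."""
    return float(np.mean([[rbf(x, y, gamma) for y in Y] for x in X]))
```

An SVM using the matrix K[i, j] = mean_map_kernel(N_i, N_j) over neighborhoods N_i then classifies mean-mapped points, which is what enforces spatial continuity among neighboring pixels.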
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901S (2012) https://doi.org/10.1117/12.919039
Typical automatic clustering methods struggle to determine the correct number of clusters to properly characterize the
data. To estimate the number of clusters in a spectral image data cloud explicitly from the data structure, the pairwise
relationships between pixels in the n-dimensional spectral space are exploited. By plotting the average ith co-density
between pixels and neighbors, a monotonically increasing function will emerge that characterizes the clusters in the data.
Large upward steps in the average neighbor distance function represent the well-grouped clusters in the data. This
process can accurately identify the number of clusters in a wide variety of image data automatically.
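The step structure of the neighbor-distance function is easy to reproduce. The sketch below uses the average i-th nearest-neighbor Euclidean distance as a stand-in for the co-density (an assumption; the abstract does not spell out the exact estimator): the location of the largest upward step reveals the size of well-separated clusters.

```python
import numpy as np

def average_neighbor_distance(data):
    """f[i] = average over all points of the distance to the (i+1)-th nearest
    neighbor; monotonically nondecreasing in i."""
    D = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    D.sort(axis=1)                  # each row: sorted distances from one point
    return D[:, 1:].mean(axis=0)    # drop the zero self-distance

def largest_step(data):
    """Neighbor index at which the biggest upward step occurs; for
    well-grouped, equal-size clusters this matches the within-cluster
    neighbor count."""
    f = average_neighbor_distance(data)
    return int(np.argmax(np.diff(f))) + 1

# three tight clusters of 5 points each, roughly equidistant centers
offs = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1], [0.2, 0.1]], float)
centers = np.array([[0, 0], [50, 0], [25, 43]], float)
data = np.concatenate([c + offs for c in centers])
```

Here the jump occurs after the 4th neighbor, i.e., once each point's own 5-point cluster is exhausted.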
Commercial Spectral Remote Sensing: WorldView-2 and Its Applications III
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901U (2012) https://doi.org/10.1117/12.919295
We study spatio-spectral feature extraction and image-adaptive anomaly and change detection on 8-band WorldView 2
imagery using a hierarchical polygonal image segmentation scheme. Features are represented as polygons with spectral
and structural attributes, along with neighborhood structure and containment hierarchy for contextual feature
identification. Further, the hierarchical segmentation provides multiple, coarse-scale, sub-backgrounds representing
relatively uniform regions, which localize and simplify the spectral distribution of an image. This paves the way for
facilitating anomaly and change detection when restricted to the contexts of these backgrounds. For example, forestry,
urban areas, and agricultural land have very different spatio-spectral characteristics and their joint contribution to the
image statistics can result in a complex distribution against which detecting anomalies could in general be a challenging
problem. Our segmentation scheme provides sub-regions in the later stages of the hierarchy that correspond to
homogeneous areas of an image while at the same time allowing inclusion of distinctive small features embedded in
these regions. The exclusion of other image areas by focusing on these sub-backgrounds helps discover these outliers
more easily with simpler methods of discrimination.
By selecting appropriate bands in WorldView-2 imagery, the above approach can be used to achieve fine spatio-spectral
control in searching and characterizing features, anomalies, and changes of interest. The anomalies and changes are also
polygons, which have spectral and structural attributes associated with them, allowing further characterization in the
larger context of the image. The segmentation and feature detections can be used as multiple layers in a Geospatial
Information System (GIS) for annotating imagery.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901V (2012) https://doi.org/10.1117/12.919581
In this paper, we present a method to analyze the impact of data space normalization on spectral classification model
portability using multi-angle very-high spatial resolution imagery. In-track multi-angle data provide images of a single
scene, from different observation angles, during a very short period of time. This creates a sequence of images with
relatively static atmospheric and illumination conditions. With this data, the only changes in the scene are due to
observation angle and surface reflectance properties. Using this information, we present an analysis of both the impact
of surface anisotropy and data space normalization on spectral classification accuracy and model portability.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901W (2012) https://doi.org/10.1117/12.918692
We present an approach for automatically building a road network graph from multispectral WorldView-2 images
in suburban and urban areas. In this graph, the road parts are represented by edges and their connectivity by
vertices. This approach consists of an image processing chain utilizing both high-resolution spatial features as
well as multiple band spectral signatures from satellite images. Based on an edge-preserving filtered image, a
two-pass spatial-spectral flood fill technique is adopted to extract a road class map. This technique requires
only one pixel as the initial training set and collects spatially adjacent and spectrally similar pixels to the initial
points as a second level training set for a higher accuracy asphalt classification. Based on the road class map, a
road network graph is built after going through a curvilinear detector and a knowledge based system. The graph
projects a logical representation of the road network in an urban image. Rules can be defined to filter salient road
parts of different widths as well as to rule out parking lots from the asphalt class map. The joint spatial-spectral
approach proposed here builds a road network connectivity graph, which lays a foundation for further road-related
tasks.
Thomas Krauss, Beril Sirmacek, Hossein Arefi, Peter Reinartz
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901X (2012) https://doi.org/10.1117/12.917801
Using the capability of WorldView-2 to acquire very high resolution (VHR) stereo imagery together with as many
as eight spectral channels allows the worldwide monitoring of built-up areas, such as cities in evolving states.
In this paper we show the benefit of generating a high resolution digital surface model (DSM) from multi-view
stereo data (PAN) and fusing it with pan sharpened multi-spectral data to arrive at very detailed information
in city areas. The fused data allow accurate object detection and extraction and by this also automated object
oriented classification and future change detection applications. The methods proposed in this paper exploit the
full range of capabilities provided by WorldView-2: the high agility to acquire two or more in-orbit images
with small stereo angles, the very high ground sampling distance (GSD) of about 0.5 m
and also the full usage of the standard four multispectral channels blue, green, red and near infrared together
with the additional provided channels special to WorldView-2: coastal blue, yellow, red-edge and a second near
infrared channel. From the very high resolution stereo panchromatic imagery a so called height map is derived
using the semi global matching (SGM) method developed at DLR. This height map fits exactly on one of the
original pan sharpened images. This in turn is used for an advanced rule based fuzzy spectral classification. Using
these classification results the height map is corrected and finally a terrain model and an improved normalized
digital elevation model (nDEM) are generated. Fusing the nDEM with the classified multispectral imagery allows
the extraction of urban objects like buildings or trees. If such datasets are generated at different times,
expert object-based change detection (in quasi 3D space) and automatic surveillance become
possible.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901Y (2012) https://doi.org/10.1117/12.919827
High spectral and high spatial resolution images acquired from new generation satellites have enabled new
applications. However, the increasing amount of detail in these images also necessitates new algorithms for
automatic analysis. This paper describes a new approach to discover compound structures such as different
types of residential, commercial, and industrial areas that are comprised of spatial arrangements of primitive
objects such as buildings, roads, and trees. The proposed approach uses a robust Gaussian mixture model (GMM)
where each Gaussian component models the spectral and shape content of a group of pixels corresponding to a
primitive object. The algorithm can also incorporate spatial constraints on the layout of the primitive objects in
terms of their relative positions. Given example structures of interest, a new learning algorithm fits a GMM to
the image data, and this model can be used to detect other similar structures by grouping pixels that have high
likelihoods of belonging to the Gaussian object models while satisfying the spatial layout constraints without
any requirement for region segmentation. Experiments using WorldView-2 data show that the proposed method
can detect high-level structures that cannot be modeled using traditional techniques.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901Z (2012) https://doi.org/10.1117/12.918889
Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels
whose material content is incongruous with the background material in the scene. Typically, the application
involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing
these algorithms is determining which pixels initially constitute the background material within an image. The
topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological
model of the background in the image scene, and uses codensity to measure deviation from this background. In
TAD, the initial graph theory structure of the image data is created by connecting an edge between any two
pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of
proximity graph is among the most well-known approaches to building a geometric graph based on a given set of
data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative
test of the performance of TAD across four different constructs of the initial graph: mutual k-nearest neighbor
graph, sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in
TAD.
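Two of the graph constructs compared here can be sketched from a pairwise distance matrix D; the sigma-local graph is omitted, and these dense constructions are illustrative rather than the TAD implementation:

```python
import numpy as np

def proximity_graph(D, r):
    """Adjacency: edge between x and y iff their distance is below r
    (the construct originally implemented in TAD)."""
    return (D < r) & ~np.eye(len(D), dtype=bool)

def mutual_knn_graph(D, k):
    """Adjacency: edge iff each point is among the other's k nearest
    neighbors, making the relation symmetric by construction."""
    n = len(D)
    nearest = np.argsort(D, axis=1)[:, 1:k + 1]   # skip self at column 0
    knn = np.zeros((n, n), dtype=bool)
    for i in range(n):
        knn[i, nearest[i]] = True
    return knn & knn.T                            # mutuality requirement

# four points on a line; the far point at 10 is isolated in both graphs
pts = np.array([0.0, 1.0, 2.0, 10.0])
D = np.abs(pts[:, None] - pts[None, :])
P = proximity_graph(D, 1.5)
M = mutual_knn_graph(D, 1)
```

The mutual k-NN graph is generally sparser than the plain k-NN graph, since one-sided neighbor relations are dropped.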
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839020 (2012) https://doi.org/10.1117/12.919638
We describe a novel, completely nonparametric, high-dimensional joint density estimation algorithm suited
for anomaly and target detection using hyperspectral imaging.
The new algorithm is compared against linear matched filter detection schemes across different available
sample sizes and background statistics (MVN, GMM, and non-Gaussian). The new algorithm is shown to be
superior in important cases.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839021 (2012) https://doi.org/10.1117/12.915310
Accurate covariance matrix estimation for high dimensional data can be a difficult problem. A good approximation of
the covariance matrix needs in most cases a prohibitively large number of pixels, i.e. pixels from a stationary section of
the image whose number is greater than several times the number of bands. Estimating the covariance matrix with a
number of pixels on the order of the number of bands or fewer yields not only a poor estimate of the
covariance matrix but also a singular covariance matrix, which cannot be inverted. In this article we investigate
two methods to give a sufficient approximation for the covariance matrix while only using a small number of
neighboring pixels. The first is the Quasilocal Covariance Matrix (QLRX) that uses the variance of the global
covariance instead of the variances that are too small and cause a singular covariance. The second method is Sparse
Matrix Transform (SMT) that performs a set of K Givens rotations to estimate the covariance matrix. We will compare
results from target acquisition that are based on both of these methods.
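The rank argument is easy to demonstrate: with fewer pixels than bands the sample covariance is singular, and some regularization is required before inversion. The snippet shows the failure mode and a plain diagonal-shrinkage baseline (not QLRX or SMT themselves, just the simple alternative such methods are designed to beat):

```python
import numpy as np

rng = np.random.default_rng(0)
bands, pixels = 50, 20                  # fewer pixels than bands
X = rng.normal(size=(pixels, bands))
S = np.cov(X, rowvar=False)             # rank <= pixels - 1, hence singular

# a singular estimate cannot be inverted for a matched filter, so shrink
# the estimate toward a scaled identity to restore full rank
alpha = 0.1
S_shrunk = (1 - alpha) * S + alpha * (np.trace(S) / bands) * np.eye(bands)
w = np.linalg.solve(S_shrunk, np.ones(bands))   # now solvable
```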
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839022 (2012) https://doi.org/10.1117/12.918751
The hyperspectral/spatial detection of edges (HySPADE) algorithm, originally published in 2004 [1], has
been modified and applied to a wider diversity of hyperspectral imagery (HSI) data. As originally described in [1],
HySPADE operates by converting the naturally two-dimensional edge detection process based on traditional image
analysis methods into a series of one-dimensional edge detections based on spectral angle. The HySPADE algorithm: i)
utilizes spectral signature information to identify edges; ii) requires only the spectral information of the HSI scene data
and does not require a spectral library or spectral matching against a library; iii) facilitates simultaneous use of all
spectral information; iv) does not require endmember or training data selection; v) generates multiple, independent data
points for statistical analysis of detected edges; vi) is robust in the presence of noise; and vii) may be applied to
radiance, reflectance, and emissivity data--though it is applied to radiance and reflectance spectra (and their principal
components transformation) in this report. HySPADE has recently been modified to use Euclidean distance values as an
alternative to spectral angle. It has also been modified to use an N x N-pixel sliding window in contrast to the 2004
version which operated only on spatial subset image chips. HySPADE results are compared to those obtained using
traditional (Roberts and Sobel) edge-detection methods. Spectral angle and Euclidean distance HySPADE results are
superior to those obtained using the traditional edge detection methods; the best results are obtained by applying
HySPADE to the first few, information-containing bands of principal components transformed data (both radiance and
reflectance). However, in practice, both the Euclidean distance and spectral angle versions of HySPADE should be
applied and their results compared. HySPADE results are shown; extensions of the HySPADE concept are discussed as
are applications for HySPADE in HSI analysis and exploitation.
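The core HySPADE primitive, replacing a gray-level gradient with a spectral-angle difference, can be sketched in one dimension. This is an illustrative reduction; the published algorithm builds a full spectral angle data cube (and, in the modified version, an N x N sliding window):

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; insensitive to overall
    illumination scale, unlike a Euclidean gradient."""
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def spectral_angle_edges(cube, thresh=0.1):
    """Mark an edge wherever the spectral angle between horizontally
    adjacent pixels of a (rows, cols, bands) cube exceeds thresh."""
    rows, cols, bands = cube.shape
    edges = np.zeros((rows, cols - 1), dtype=bool)
    for r in range(rows):
        for c in range(cols - 1):
            edges[r, c] = spectral_angle(cube[r, c], cube[r, c + 1]) > thresh
    return edges

# two materials; column 1 is material A under brighter illumination,
# so only the true material boundary (columns 1 -> 2) is flagged
A, B = np.array([1.0, 0.5, 0.2]), np.array([0.2, 0.5, 1.0])
cube = np.empty((2, 4, 3))
cube[:, 0], cube[:, 1] = A, 2 * A
cube[:, 2], cube[:, 3] = B, B
edges = spectral_angle_edges(cube)
```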
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839023 (2012) https://doi.org/10.1117/12.916978
Performance-driven sensing is a promising new concept that relies on sensing, processing, and exploiting only
the most "decision-relevant" sets of target data for the purpose of reducing requirements on data collection,
processing, and communications. An example of a device supporting such a concept is a MEMS-based single pixel
Fabry-Perot spectrometer being developed at the Rochester Institute of Technology, which can record selected
wavelengths on a per-pixel basis throughout an image. This paper presents an autonomous target-dependent
waveband selection approach for performance-driven sensing with an adaptive hyperspectral imaging sensor.
Given a target that is to be tracked, a subset of wavebands is estimated from locally recorded hyperspectral data
that provides optimal target detectability against local background. The waveband selection algorithm relies on
finding a subset of bands that provides maximum separation between a target histogram and local background
histogram constructed from the respective bands. To illustrate the concept, we perform a simulation study for
vehicle tracking in a set of synthetic DIRSIG rendered HSI images. The simulations demonstrate improved
vehicle tracking accuracy when using the adaptively-selected subset of wavebands for tracking by histogram
matching compared to performing tracking by histogram matching with regular (fixed) color bands. We extend
the framework to a dynamic concept where the waveband subset is updated over time as a function of position
estimation accuracy and discuss the full integration of the Feature-Aided Tracking (FAT) component derived
from the selected wavebands within a Multiple Hypothesis Tracking (MHT) framework.
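Because separability is measured between histograms, a simple stand-in criterion suffices to illustrate the selection step. The sketch below scores each band with the Bhattacharyya overlap between target and background histograms; this is an assumed measure, as the abstract does not name the paper's exact separation statistic:

```python
import numpy as np

def band_separability(target_vals, background_vals, bins=16):
    """Histogram separation in one band via the Bhattacharyya coefficient:
    0 for identical histograms, 1 for disjoint ones."""
    lo = min(target_vals.min(), background_vals.min())
    hi = max(target_vals.max(), background_vals.max())
    ht, _ = np.histogram(target_vals, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(background_vals, bins=bins, range=(lo, hi))
    ht, hb = ht / ht.sum(), hb / hb.sum()
    return 1.0 - float(np.sum(np.sqrt(ht * hb)))

def select_wavebands(target, background, k=2):
    """Pick the k bands whose target/background histograms separate best;
    target and background are (pixels, bands) arrays."""
    scores = [band_separability(target[:, b], background[:, b])
              for b in range(target.shape[1])]
    return sorted(np.argsort(scores)[-k:])
```

In the dynamic concept described above, this selection would simply be re-run as the local background statistics change along the track.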
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839024 (2012) https://doi.org/10.1117/12.918456
In the recent past the generation and processing of multispectral data have had an immense impact on optical
characterization systems. A virtual test environment is used to examine which bands provide a high information density.
The photocurrent j = ∫ E(λ) S_abs(λ) r(λ) dλ was calculated for different light sources E, spectral response curves S_abs
(bands), and the reflectance r of whitish powder samples suspected to be dangerous or illegal. The multivariate
dataset is then analyzed to determine whether knowledge can be gained from it. The employed factor analysis is a
common structure-discovering method and provides good results in discovering connections
between parameters. It is particularly useful when a large set of parameters must be reduced. For
verification, a measure of the external separation is defined: an n-dimensional vector P is assigned to each
measurement registered in the matrix M in order to determine the volume V of this point cloud. The
dimension-normalized volume is defined as ΔCL, where n is the number of employed bands. The reliability of the
complete measurement system is assessed by a membership function μ(P), comparable to definitions from
fuzzy set theory. The parameter μ indicates the reliability with which a measured pattern P can be assigned to a sample S from a
dataset. The use of such optimized multispectral photodiodes would simplify and accelerate the identification of
potentially dangerous substances.
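The photocurrent integral above can be evaluated numerically once the three curves are specified; the curves below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

# synthetic stand-ins for the source spectrum E, band response S_abs,
# and sample reflectance r
lam = np.linspace(400e-9, 700e-9, 301)             # wavelength grid [m]
E = np.ones_like(lam)                              # flat illuminant
S_abs = np.exp(-((lam - 550e-9) / 30e-9) ** 2)     # Gaussian band at 550 nm
r = np.full_like(lam, 0.8)                         # whitish, flat reflectance

f = E * S_abs * r
j = np.sum((f[1:] + f[:-1]) / 2 * np.diff(lam))    # trapezoidal quadrature
# analytically, j = 0.8 * sqrt(pi) * 30e-9 ≈ 4.25e-8 for these curves
```

Repeating this for each candidate band response S_abs yields one column of the multivariate dataset that the factor analysis then reduces.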
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839025 (2012) https://doi.org/10.1117/12.920291
A new compact representation of differential morphological profile (DMP) vector fields is presented. It is referred
to as the CSL model and is conceived to radically reduce the dimensionality of the DMP descriptors. The model
maps three characteristic parameters, namely scale, saliency and level, into the RGB space through an HSV
transform. The result is a medium-abstraction semantic layer used for visual exploration, image information
mining and pattern classification. Fused with the PANTEX built-up presence index, the CSL model converges
to an approximate building footprint representation layer in which color represents building class labels. This
process is demonstrated on the first high resolution (HR) global human settlement layer (GHSL) computed from
multi-modal HR and VHR satellite images. Results of the first massive processing exercise involving several
thousand scenes around the globe are reported along with validation figures.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839026 (2012) https://doi.org/10.1117/12.919310
An optimized data fusion methodology is presented and makes use of airborne and vessel mounted hyperspectral
and multispectral imagery acquired at littoral zones in Florida and the northern Gulf of Mexico. The results
demonstrate the use of hyperspectral-multispectral data fusion anomaly detection along shorelines and in surface
and subsurface waters. Hyperspectral imagery utilized in the data fusion analysis was collected using a 64-1024
channel, 1376 pixel swath width; temperature stabilized sensing system; an integrated inertial motion unit; and
differential GPS. The imaging system is calibrated using dual 18 inch calibration spheres, spectral line sources, and
custom line targets. Simultaneously collected multispectral three band imagery used in the data fusion analysis was
derived from either a 12 inch focal length large format camera using 9 inch high speed AGFA color negative film, a 12.3
megapixel digital camera, or dual high speed full definition video cameras. Pushbroom sensor imagery is corrected
using Kalman filtering and smoothing in order to correct images for airborne platform motions or motions of a small
vessel. Custom software developed for the hyperspectral system and the optimized data fusion process allows for
post processing using atmospherically corrected and georeferenced reflectance imagery. The optimized data fusion
approach allows for detecting spectral anomalies in the resolution enhanced data cubes. Spectral-spatial anomaly
detection is demonstrated using simulated embedded targets in actual imagery. The approach allows one to utilize
spectral signature anomalies to identify features and targets that would otherwise not be possible. The optimized
data fusion techniques and software have been developed in order to perform sensitivity analysis of the synthetic
images in order to optimize the singular value decomposition model building process and the 2-D Butterworth cutoff
frequency selection process, using the concept of user defined "feature areas". The data fusion "synthetic imagery"
forms a basis for spectral-spatial resolution enhancement for optimal band selection and remote sensing algorithm
development within "spectral anomaly areas". The methods are applied to imagery intended to support Deepwater
Horizon oil spill remediation and recovery efforts. Sensitivity analysis demonstrates the data fusion methodology is
most sensitive to (a) the pixels and features used in the SVD model building process and (b) the 2-D Butterworth
cutoff frequency optimized by application of K-S nonparametric test. The optimized image fusion approach is
transferable to sensor data acquired from other platforms, including autonomous underwater vehicles using near real
time processing.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839027 (2012) https://doi.org/10.1117/12.919236
As new remote sensing modalities emerge, it becomes increasingly important to find more suitable algorithms
for fusion and integration of different data types for the purposes of target/anomaly detection and classification.
Typical techniques that deal with this problem are based on performing detection/classification/segmentation
separately in chosen modalities, and then integrating the resulting outcomes into a more complete picture. In
this paper we provide a broad analysis of a new approach, based on creating fused representations of the
multi-modal data, which then can be subjected to analysis by means of state-of-the-art classifiers or detectors.
In this scenario we shall consider the hyperspectral imagery combined with spatial information. Our approach
involves machine learning techniques based on analysis of joint data-dependent graphs and their associated
diffusion kernels. Then, the significant eigenvectors of the derived fused graph Laplace operator form the new
representation, which provides integrated features from the heterogeneous input data. We compare these fused
approaches with analysis of integrated outputs of spatial and spectral graph methods.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839028 (2012) https://doi.org/10.1117/12.919268
A multi-modal (hyperspectral, LiDAR, and multi-spectral) imaging data collection campaign was conducted at
the Rochester Institute of Technology (RIT) in conjunction with SpecTIR, LLC, in the Rochester, New York, area
July 26-29, 2010. The campaign was titled the SpecTIR Hyperspectral Airborne Rochester Experiment (SHARE)
and collected data in support of nine simultaneous unique experiments, several of which leveraged data from
multiple modalities. Airborne imagery was collected over the city of Rochester with hyperspectral, multispectral,
and Light Detection and Ranging (LiDAR) sensors. Sites for data collection included the Genesee River, sections
of downtown Rochester, and the RIT campus. Experiments included sub-pixel target detection, water quality
monitoring, thermal vehicle tracking and wetlands health assessment. An extensive ground truthing effort was
accomplished in addition to the airborne imagery collected. The ultimate goal of this comprehensive data
collection campaign was to provide a community sharable resource that would support additional experiments.
This paper details the experiments conducted and the corresponding data that were collected in conjunction
with this campaign.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 839029 (2012) https://doi.org/10.1117/12.919908
A number of web-accessible databases, including medical, military, and other image data, offer universities and
other users the ability to teach or research new image processing techniques on relevant and well-documented
data. However, NASA images have traditionally been difficult for researchers to find, are often available only in
hard-to-use formats, and do not always provide sufficient context and background for a non-NASA scientist
to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and
Research) database seeks to address these issues. Through a graphically rich web site for browsing and
downloading the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible
database of NASA-centric, easy-to-read image data for teaching or validating new image processing algorithms.
As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously
encouraging development of new and enhanced Image Processing algorithms. The first prototype includes a
representative sampling of NASA multispectral and hyperspectral images from several Earth Science
instruments, along with a few small tutorials. Image processing techniques are currently represented by cloud
detection, image registration, and map cover/classification. For each technique, corresponding data are selected
from four types of geographic region: mountain, urban, coastal water, and agricultural areas. Satellite
images have been collected from several instruments: the Landsat-5 and -7 Thematic Mappers, the Earth Observing-1
(EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer
(MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and
raw formats, along with associated benchmark data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902A (2012) https://doi.org/10.1117/12.919327
The Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) are two new sensors being developed by the
joint USGS-NASA Landsat Data Continuity Mission (LDCM) that will extend nearly 40 years of archived Landsat data
once it achieves orbit in January of 2013. Previous efforts focused on using the DIRSIG (Digital Imaging and Remote
Sensing Image Generation) tool to simulate all the phenomenology that can lead to non-uniformity variations in an
LDCM image. This includes detector-to-detector and array-to-array non-uniformities due to variations in relative
spectral response (RSR), gain, bias, and non-linearities. Synthetic images were generated to predict the LDCM
performance pre-launch and to identify calibration concerns. In support of the calibration effort for LDCM, this work
expands on an on-orbit calibration technique called side-slither. In this technique, a 90 degree yaw maneuver is
performed over a uniform region in an effort to determine a flat-field correction. The first component of this research
uses Landsat 5 radiance images as input to DIRSIG to evaluate potential sites for LDCM to perform side-slither once it
achieves orbit. Relative gains are calculated and compared over desert regions, the Amazon, water bodies, and
Antarctica in an effort to identify suitable sites for the maneuver. The second component of this work uses the DIRSIG
tool to model all the non-uniformity variations from previous efforts and to perform the side-slither technique in an effort
to calibrate the raw data. Synthetic image data are used and presented to assess the potential value of this calibration
technique.
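The flat-field idea behind the side-slither maneuver can be sketched in a few lines: after the 90-degree yaw, every detector in the cross-track array sweeps the same ground track, so per-detector relative gains can be estimated from each detector's temporal mean. The numbers below are synthetic placeholders, not the authors' processing chain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical side-slither data: each column of the raw frame is one detector,
# and every detector should ideally record the same radiance history over a
# uniform site.
n_detectors, n_lines = 64, 500
true_gains = 1.0 + 0.05 * rng.standard_normal(n_detectors)  # unknown non-uniformity
scene = 100.0 + 5.0 * rng.standard_normal(n_lines)          # uniform-site radiance track

raw = scene[:, None] * true_gains[None, :] + rng.standard_normal((n_lines, n_detectors))

# Flat-field estimate: each detector's temporal mean relative to the array mean.
col_means = raw.mean(axis=0)
rel_gains = col_means / col_means.mean()

# Applying the relative-gain correction removes the striping between detectors.
corrected = raw / rel_gains[None, :]
```

By construction the corrected column means are identical, which is exactly the flat-field property the maneuver is meant to deliver.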
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902B (2012) https://doi.org/10.1117/12.918614
Vicarious techniques are used to provide supplemental radiometric calibration data for sensors with onboard calibration
systems, and are increasingly important for sensors without onboard calibration systems. The Radiometric Calibration
Test Site (RadCaTS) is located at Railroad Valley, Nevada. It is a facility that was developed with the goal of increasing
the amount of ground-based radiometric calibration data that are collected annually while maintaining the current level
of radiometric accuracy produced by traditional manned field campaigns. RadCaTS is based on the reflectance-based
approach, and currently consists of a Cimel sun photometer to measure the atmosphere, a weather station to monitor
meteorological conditions, and ground-viewing radiometers (GVRs) that are used to determine the surface reflectance
throughout the 1 × 1-km area. The data from these instruments are used in MODTRAN5 to determine the at-sensor
spectral radiance at the time of overpass.
This work describes the RadCaTS concept, the instruments used to obtain the data, and the processing method used to
determine the surface reflectance and top-of-atmosphere spectral radiance. A discussion on the design and calibration of
three new eight-channel GVRs is introduced, and the surface reflectance retrievals are compared to in situ
measurements. Radiometric calibration results determined using RadCaTS are compared to Landsat 7 ETM+, MODIS,
and MISR.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902C (2012) https://doi.org/10.1117/12.919420
The Thermal Infrared Sensor (TIRS) will continue thermal band measurements of the Earth for the Landsat
Data Continuity Mission (LDCM). The instrument is a dual-channel, push-broom imager that consists of 1850
detector elements per band spanning the 15-degree cross track field of view. The push-broom configuration of
the instrument presents several challenges to ensure that the instrument meets uniformity and linearity requirements
across the field of view. Each detector element may have a slightly different spectral and radiometric
response resulting from variations in pixel-to-pixel gain, bias, and spectral band shape. These differences must
be measured and corrected for in order to provide a radiometrically accurate data product necessary for the
Landsat science mission.
During pre-launch testing, calibration ground support equipment (CGSE) is used to uniformly illuminate the
TIRS field of view with various source radiances. Calibration routines are created to convert the raw detector
signal from these uniform sources into accurate at-sensor radiances. During the on-orbit life of the instrument, vicarious
calibration techniques such as the side-slither method may be used to check the pixel-to-pixel uniformity.
To demonstrate the value of this technique for TIRS, the Digital Imaging and Remote Sensing Image Generation
(DIRSIG) tool is utilized to simulate on-orbit TIRS data. Appropriate sites on the Earth are identified and
side-slither data is generated. The simulated on-orbit data is then compared to pre-launch calibration data to
determine whether this calibration approach is viable to track the calibration of TIRS over its orbital lifetime.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902D (2012) https://doi.org/10.1117/12.919359
A subset of the existing NASA and NOAA families of Earth observation imaging platforms currently on orbit
(Landsat 7, Advanced Land Imager (ALI), and the Defense Meteorological Satellite Program Operational Line Scanner)
have a primary mission of imaging the Earth's landforms during daylight hours. All three systems are capable of
nighttime imaging operations; however, this capability of Landsat and ALI is infrequently utilized because of the limited
utility of the resulting data products. Many researchers have published science results on focused problems such as volcanic
eruptions, wildfires, and urban settlement mapping. In this work we present a first-principles-based radiometric
framework for quantifying the capability of such imaging platforms for detecting the presence of boats in open waters,
taking into consideration the interaction between the boat and water surfaces. The low-level radiometric modeling is
performed using both the DIRSIG software tool and MODTRAN, in conjunction with freely available boat geometric
models, incandescent lamp spectra, and a randomly rough sea surface geometry. The resulting performance metric
represents the minimum wattage of one or more incandescent illuminants that might be detected above the system noise
floor for a variety of imaging geometries.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902G (2012) https://doi.org/10.1117/12.918411
Spectral image analysis problems often begin by performing a preprocessing step composed of applying a
transformation that generates an alternative representation of the spectral data. In this paper, a transformation
based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the
random walk using a quantity known as the average commute time distance and find a nonlinear transformation
that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square
root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation and it has
the important characteristic of increasing when the number of paths between two nodes decreases and/or the
lengths of those paths increase. Remarkably, a closed form solution exists for computing the average commute
time distance that avoids running an iterative process and is found by simply performing an eigendecomposition
on the graph Laplacian matrix. This paper discusses the particular graph constructed on the spectral data from
which the commute time distance is calculated, introduces some important properties of the graph Laplacian
matrix, and presents a subspace projection that approximately preserves the maximal variance of the square-root
commute time distance. Finally, the RX anomaly detection and Topological Anomaly Detection (TAD) algorithms
are applied to the CTD subspace and their results are discussed.
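The closed-form computation described above can be sketched directly: the average commute time distance follows from the pseudoinverse of the graph Laplacian, and an explicit embedding reproduces its square root as a Euclidean distance. The small example graph is illustrative, not from the paper:

```python
import numpy as np

# Illustrative 4-node graph (symmetric adjacency), standing in for the graph
# built on the spectral data.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

d = W.sum(axis=1)
L = np.diag(d) - W          # graph Laplacian
vol = d.sum()               # graph volume (sum of degrees)

# Pseudoinverse of L via eigendecomposition; the zero eigenvalue
# (constant eigenvector) is skipped.
lam, U = np.linalg.eigh(L)
inv = np.zeros_like(lam)
nz = lam > 1e-10
inv[nz] = 1.0 / lam[nz]
Lp = (U * inv) @ U.T        # L^+

# Average commute time:  CTD(i, j) = vol * (L+_ii + L+_jj - 2 L+_ij)
ctd = vol * (np.diag(Lp)[:, None] + np.diag(Lp)[None, :] - 2 * Lp)

# CTD embedding: pairwise Euclidean distances equal sqrt(CTD)
X = np.sqrt(vol) * U * np.sqrt(inv)
```

No iterative random-walk simulation is needed; a single symmetric eigendecomposition yields both the distance matrix and the embedding coordinates.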
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902I (2012) https://doi.org/10.1117/12.919557
The passive remote chemical plume quantification problem may be approached from multiple aspects, corresponding
to a variety of physical effects that may be exploited. Accordingly, a diversity of statistical quantification
algorithms has been proposed in the literature. The ultimate performance and algorithmic complexity of each is
influenced by the assumptions made about the scene, which may include the presence of ancillary measurements
or particular background/plume features that may or may not be present. In this paper, we evaluate and
compare a number of quantification algorithms that span a variety of such assumptions.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902J (2012) https://doi.org/10.1117/12.920387
Anomaly detection (AD) in hyperspectral data has received considerable attention for various applications. The aim
of anomaly detection is to detect pixels in the hyperspectral data cube whose spectra differ significantly from
the background spectra. Many anomaly detectors have been proposed in the literature. They differ in the way
the background is characterized and in the method used for determining the difference between the current
pixel and the background. The best-known anomaly detector is the RX detector, which calculates the
Mahalanobis distance between the pixel under test (PUT) and the background. Global RX characterizes the
background of the complete scene by a single multivariate normal distribution. In many cases this model is not
appropriate for describing the background. For that reason a variety of other anomaly detection methods have
been developed. This paper examines three classes of anomaly detectors: sub-space methods, local methods
and segmentation-based methods. Representative examples of each class are chosen and applied on a set of
hyperspectral data with different backgrounds. The results are evaluated and compared.
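As a concrete reference point for the comparison, the global RX score described above is a Mahalanobis distance to scene-wide statistics. A minimal sketch follows; the synthetic cube and implanted anomaly are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hyperspectral cube with one implanted anomalous pixel.
rows, cols, bands = 40, 40, 10
cube = rng.standard_normal((rows, cols, bands))
cube[5, 7] += 6.0                       # anomaly: strong offset in every band

# Global RX: Mahalanobis distance of each spectrum to the scene mean/covariance.
X = cube.reshape(-1, bands)
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

diff = X - mu
rx = np.einsum('ij,jk,ik->i', diff, cov_inv, diff).reshape(rows, cols)
```

The implanted pixel receives by far the largest score, which is the behavior the local, sub-space, and segmentation-based variants then refine for structured backgrounds.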
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902M (2012) https://doi.org/10.1117/12.918386
Military target detection is an important application of hyperspectral remote sensing, and it demands real-time or
near-real-time processing. However, the massive volume of hyperspectral image data severely limits processing speed.
Real-time image processing on hardware platforms such as digital signal processors (DSPs) is one recent
development in hyperspectral target detection. Hyperspectral target detection algorithms typically whiten the data
using a correlation or covariance matrix, which is a very time-consuming computation. In this paper, a
strategy named spatial-spectral information extraction (SSIE) is presented to accelerate hyperspectral
image processing. The strategy is composed of band selection and sample covariance matrix estimation. Band selection
exploits the high spectral correlation in the spectral image, while sample covariance matrix estimation exploits the
high spatial correlation in the remote sensing image. The strategy is implemented on a DSP hardware platform.
The implementation of the constrained energy minimization (CEM) algorithm comprises a hardware
architecture and a software architecture: the hardware architecture contains the chips and peripheral interfaces, and the
software architecture establishes a data transfer model for communication between the DSP and a PC. In experiments,
the performance of the software implementation in ENVI is compared with that of the hardware implementation on the
DSP. Results show that the processing speed and recognition results on the DSP are better than those in ENVI, and the
detection results demonstrate that the strategy implemented on the DSP is sufficient to enable near-real-time
supervised target detection.
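For reference, the CEM detector named above has the closed form w = R⁻¹d / (dᵀR⁻¹d), where R is the sample correlation matrix and d the target signature; the output is constrained to unit gain on d while the average output energy is minimized. The sketch below uses synthetic placeholder data, not the paper's imagery or DSP code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spectra: background pixels plus one planted target with signature d.
n_pixels, bands = 2000, 20
X = rng.standard_normal((n_pixels, bands)) + 5.0   # background spectra
d = rng.uniform(1.0, 2.0, bands)                   # target signature (placeholder)
X[0] = d                                           # plant one pure target pixel

# CEM filter:  w = R^{-1} d / (d^T R^{-1} d),  R = (1/N) X^T X
R = X.T @ X / n_pixels
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d @ Rinv_d)

y = X @ w   # detector output; y[0] = d @ w = 1 exactly, by the unit-gain constraint
```

Among all filters satisfying dᵀw = 1, this w provably minimizes the mean-square output, which is why the correlation-matrix computation dominates the runtime the SSIE strategy targets.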
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902N (2012) https://doi.org/10.1117/12.918399
Anomaly detection in hyperspectral imagery is an active topic in remote sensing image processing. The Reed-Xiaoli
detector (RXD) developed by Reed and Yu is a constant false alarm rate (CFAR) anomaly detection
method founded on multivariate statistical analysis and has the same form as the Mahalanobis distance. The RX detector
enables researchers to separate targets of interest from their surroundings on the basis of spectral
distinctness, which makes RXD practicable in real scenes and a focus of target detection research.
The RX detector has two common forms, global RX and local RX, which use different samples to estimate the mean vector
and covariance matrix. PCA is a common preprocessing step for dimension reduction; interestingly, because it also
removes noise, performance can be improved by using principal components instead of all the data. In addition, RX
output values are often assumed to follow a chi-square distribution, and setting the quantile χ2α,p as the threshold then
frequently leads to an unacceptably high false alarm rate, so choosing the threshold value has been a difficult problem.
This paper proposes a method based on multivariate statistical probability theory that segments targets from the image
automatically. Instead of a constant threshold value, this target segmentation approach uses an initial threshold
calculated from the histogram of RX output values to separate background and target samples, then computes each
pixel's posterior probability of belonging to the background or target class by assuming that both classes follow
multivariate normal distributions; each pixel is assigned to the class with the higher posterior probability. The
proposed method has been tested on AVIRIS data, and the experimental results reveal that the target segmentation
approach achieves a higher detection probability and a lower false alarm rate than the traditional manual
thresholding approach.
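A one-dimensional sketch of the described segmentation idea: an initial threshold taken from the RX-score histogram splits the samples into background and target groups, and each score is then reassigned by comparing class posteriors. The percentile-based initial threshold and the univariate Gaussian class models below are simplifying assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic RX scores: chi-square-like background plus a few strong anomalies.
scores = np.concatenate([rng.chisquare(10, 5000),   # background RX values
                         rng.normal(60, 5, 50)])    # anomalous targets

# Initial split from the score histogram (here simply a high percentile).
t0 = np.percentile(scores, 99)
bg, tg = scores[scores <= t0], scores[scores > t0]
p_bg, p_tg = len(bg) / len(scores), len(tg) / len(scores)

def gauss(x, m, s):
    """Univariate normal density (class-conditional model)."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Posterior comparison replaces the fixed chi-square threshold: label a pixel
# as target when  P(target) p(x|target) > P(background) p(x|background).
is_target = (p_tg * gauss(scores, tg.mean(), tg.std()) >
             p_bg * gauss(scores, bg.mean(), bg.std()))
```

The decision boundary adapts to the actual score distribution instead of being fixed by a nominal chi-square quantile, which is the mechanism the paper credits for the lower false alarm rate.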
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902O (2012) https://doi.org/10.1117/12.918529
In this paper, we study the detection of vehicles in WorldView-2 satellite imagery. For this purpose, accurate
modeling of vehicle features and signatures and efficient learning of vehicle hypotheses are critical. We present a joint
Gaussian and maximum-likelihood modeling and machine learning approach, using SVM and neural network
algorithms, to describe the local appearance densities and to distinguish vehicles from non-vehicle buildings, objects,
and backgrounds. Vehicle hypotheses are fitted by elliptical Gaussians, and the bottom-up features are grouped by
Gabor orientation filtering based on multi-scale analysis and the distance transform. Global contextual information
such as road networks and vehicle distributions can be used to enhance the recognition. Because the practical vehicle
detection task faces dense and overlapping vehicle distributions, partial occlusion, and clutter from buildings, shadows,
and trees, we employ a spectral clustering strategy combined with bootstrapped learning to estimate the centroid,
orientation, and extent parameters of the local densities. We demonstrate a detection rate of 94.8%, with a miss rate
of 5.2% and a false alarm rate of 5.3%, on WorldView-2 satellite imagery. Experimental results show that our method
is effective for modeling and detecting vehicles.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902P (2012) https://doi.org/10.1117/12.918530
Phased-array ultrasonic sensing is a well-known non-destructive evaluation approach, and many research efforts have
been reported. In this paper, we study flaw identification and localization in coarse-grained steel components.
Advanced ultrasonic signal processing plays a key role in improving detection effectiveness and performance. We
propose a non-parametric, data-driven approach based on ensemble empirical mode decomposition (EEMD), an
effective and powerful method for analyzing the nonlinear and non-stationary characteristics of ultrasonic signals. In
the EEMD approach, added white noise assists the sifting iterations in converging to the truly intrinsic mode functions
(IMFs), and the added noise cancels out as long as the ensemble is sufficiently large. It is shown that the ultrasonic
wavefront harmonics can be effectively represented by multi-mode IMFs, which have well-defined local time scales
and instantaneous frequencies, and that the sifting iterations adapt meaningfully to the varying physical process.
Numerical experiments are conducted, and the presented results validate the effectiveness and advantages of the
proposed approach over conventional methods.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902Q (2012) https://doi.org/10.1117/12.919091
Technological advancement has led to an increased demand for up-to-date, high-resolution solutions
for wireless network propagation modeling. This paper highlights how 50 cm resolution clutter
classes can be accurately extracted from WorldView-2 imagery using semi-automated processes. Once
the clutter classes have been extracted, a 3D model of the area is created for visualization purposes. The
study area for this research was Rustenburg, South Africa.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83902R (2012) https://doi.org/10.1117/12.919163
As acquisition technology progresses, remote sensing data contain an ever-increasing amount of information: optical
and radar images at low, high, and very high resolution, multitemporal hyperspectral images, derived images, and
physical or ancillary data (databases, Digital Elevation Models (DEMs), Geographical Information Systems (GIS)).
Future remote sensing projects will offer high revisit rates, such as Venμs (CNES), which may provide data every 2
days at 5.3 m resolution in 12 bands (420-900 nm), and Sentinel-2 (ESA), with 13 bands at 10-60 m resolution every
5 days. With such data, supervised classification gives excellent results in terms of accuracy indices (such as Overall
Accuracy and the Kappa coefficient). In this paper, we present the advantages and disadvantages of existing indices
and propose a new index for evaluating supervised classification that uses all the information available in the
confusion matrix. In addition to accuracy, the new index introduces a new feature: fidelity. For example, a class could
have a high accuracy (low omission error) but be over-represented relative to other classes (high commission error).
The new index reflects both accuracy and the correct representation of classes (fidelity) using commission and
omission errors. The environmental applications are in land cover and land use, where the goal is to obtain the best
classification for all classes, whether the largest (corn, trees) or the smallest (rivers, hedges). Tests are performed on
Formosat-2 images (every 2 days, 8 m resolution, 4 bands) over the area of Toulouse (France) and are used to
validate the new index by demonstrating the benefits of its use through various thematic studies.
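The confusion-matrix indices the proposal builds on can be computed directly. The sketch below shows Overall Accuracy, the Kappa coefficient, and the per-class omission and commission errors that motivate the fidelity notion; the 3-class matrix is an illustrative placeholder, not data from the paper:

```python
import numpy as np

# Illustrative 3-class confusion matrix: rows = reference classes,
# columns = predicted classes.
cm = np.array([[50,  2,  3],
               [ 4, 40,  6],
               [ 1,  5, 39]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n

# Chance agreement from the row/column marginals, then Cohen's kappa.
pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2
kappa = (overall_accuracy - pe) / (1 - pe)

# Per-class errors: omission (producer's side) and commission (user's side).
# A class with low omission but high commission is exactly the
# "accurate but over-represented" case the fidelity feature targets.
omission = 1 - np.diag(cm) / cm.sum(axis=1)
commission = 1 - np.diag(cm) / cm.sum(axis=0)
```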