Over the past 20 years, hyperspectral microscopy has grown into a robust field of analysis for a number of applications. The visible to near-infrared (VNIR; 400 to 1000 nm) region of the spectrum has demonstrated utility for the characterization of healthy and diseased tissue and of biomolecular indicators at the cellular level. Here, we describe the development of a hyperspectral imaging (HSI) microscope that is aimed at material characterization to complement traditional stand-off, earth remote sensing with hyperspectral sensors. We combine commercial off-the-shelf technology to build an HSI microscope that collects spectral data under illumination provided by a tunable laser. Hyperspectral imaging microscopy (HIM) facilitates detailed examination of target materials at the subcentimeter spatial scale. The custom-built, laser-illuminated HSI microscope covers the NIR to shortwave infrared (NIR/SWIR; 900 to 2500 nm) solar-reflected spectral range. It is combined with a separate VNIR sensor (400 to 900 nm) that utilizes quartz–tungsten–halogen lamps for illumination. The combined sensors provide a means to collect tens of thousands of spectra in the full VNIR/SWIR spectral range from both pure substances and precisely engineered linear and nonlinear mixtures. The large abundance of spectra allows for a more detailed understanding of the variability and multivariate probability distributions of spectral signatures. This additional information aids in understanding the variability observed in ground-truth spectra collected from portable spectrometers, and it greatly enhances sample description and metadata content. In addition, HIM data cubes can serve as proxies, as “microscenes,” for systems engineering applications such as trade studies for HSI acquired by air- and space-borne sensors.
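As a minimal illustration of how such a microscene supports signature-variability studies, the sketch below gathers all spectra for one labeled material from a hypercube and summarizes their multivariate distribution with a mean and covariance. The array shapes, function name, and label convention are assumptions for illustration, not the instrument's actual processing chain.

```python
import numpy as np

def class_spectral_stats(cube, mask, label):
    """Collect all spectra for one labeled material and summarize variability.

    cube : (rows, cols, bands) reflectance array (an assumed layout)
    mask : (rows, cols) integer label map from the microscene ground truth
    label: material id whose statistics are requested
    """
    spectra = cube[mask == label]          # (n_pixels, bands)
    mean = spectra.mean(axis=0)            # mean signature
    cov = np.cov(spectra, rowvar=False)    # band-to-band covariance
    return mean, cov

# Example: per-band standard deviation quantifies signature variability
# mean, cov = class_spectral_stats(cube, mask, label=3)
# sigma = np.sqrt(np.diag(cov))
```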
A deep learning convolutional network of fiber filters is investigated for the spectral analysis of hyperspectral imagery for purposes of material classification and identification. Analogous to convolutional neural networks that apply spatial filters to color imagery for purposes of spatial object classification, a network of convolutional filters is applied spectrally to the large number of bands in hyperspectral imagery for purposes of material classification and identification. A convolutional filter has a volume N × N × M, where N is the spatial pixel size (length and width) of the filter and M is the filter depth. For spatial convolutional networks, the filter applied to the first (bottom/input) layer often has N = 5 and M = 3 (e.g., corresponding to the layers of RGB color imagery). For the proposed network, the convolution filters in each layer have a volume with the dimensions N = 1 and M > 5, which we refer to as fiber filters. We investigate the ability of this kind of architecture to learn discriminating features for purposes of material classification and identification as compared to a fully connected neural network. The choice of appropriate architecture depth is investigated, that is, the number of layers in the network, not to be confused with the filter depth. Various values of the filter depth, M, are also investigated. Aerial collections of hyperspectral imagery are used for the training and validation experiments.
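A minimal sketch of such a fiber-filter network, assuming PyTorch: a 1 × 1 × M filter is realized as a Conv1d of kernel size M applied along the spectral axis of each pixel. The layer width, M = 9, and the classifier head are illustrative choices, not the paper's reported architecture.

```python
import torch
import torch.nn as nn

class FiberFilterNet(nn.Module):
    """1 x 1 x M spectral convolutions: each pixel's spectrum is a 1-D signal."""
    def __init__(self, n_bands, n_classes, m=9, width=32, depth=3):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(depth):                      # architecture depth (layers)
            layers += [nn.Conv1d(in_ch, width, kernel_size=m, padding=m // 2),
                       nn.ReLU()]
            in_ch = width
        self.features = nn.Sequential(*layers)
        self.classify = nn.Linear(width * n_bands, n_classes)

    def forward(self, x):                           # x: (batch, n_bands)
        z = self.features(x.unsqueeze(1))           # add the single "fiber" channel
        return self.classify(z.flatten(1))

# net = FiberFilterNet(n_bands=224, n_classes=10, m=9)
```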
Linear mixtures of materials in a scene often occur because the resolution of a sensor is relatively coarse, resulting in pixels containing patches of different materials within them. This phenomenon causes nonoverlapping areal mixing and can be modeled by a linear mixture model. More complex phenomena, such as multiple scattering within mixtures of vegetation and soils or within granular and microscopic materials in a pixel, can result in intimate mixing with varying degrees of nonlinear behavior. In such cases, a linear model is not sufficient. This study considers two approaches for unmixing pixels in a scene that may contain linear or intimate (nonlinear) mixtures. The first method is based on earlier studies that indicate nonlinear mixtures in reflectance space are approximately linear in albedo space. The method converts reflectance to single-scattering albedo according to Hapke theory and uses a constrained linear model on the computed albedo values. The second method is motivated by the same idea, but uses a kernel that seeks to capture the linear behavior of albedo in nonlinear mixtures of materials. This study compares the two approaches, paying particular attention to the dependencies of each. Both laboratory and airborne collections of hyperspectral imagery are used to validate the methods.
This study investigates methods for characterizing materials that are mixtures of granular solids, or mixtures of liquids, which may be linear or non-linear. Linear mixtures of materials in a scene are often the result of areal mixing, where the pixel size of a sensor is relatively large, so that pixels contain patches of different materials within them. Non-linear mixtures are likely to occur with microscopic mixtures of solids, such as mixtures of powders, with mixtures of liquids, or wherever complex scattering of light occurs. This study considers two approaches for use as generalized methods for unmixing pixels in a scene that may be linear or non-linear. One method is based on earlier studies that indicate non-linear mixtures in reflectance space are approximately linear in albedo space. This method converts reflectance to single-scattering albedo (SSA) according to Hapke theory, assuming bidirectional scattering at nadir look angles, and uses a constrained linear model on the computed albedo values. The other method is motivated by the same idea, but uses a kernel that seeks to capture the linear behavior of albedo in non-linear mixtures of materials. The behavior of the kernel method can be highly dependent on the value of a parameter, gamma, which provides flexibility for the kernel method to respond to both linear and non-linear phenomena. Our study pays particular attention to this parameter for responding to linear and non-linear mixtures. Laboratory experiments on both granular solids and liquid solutions are performed with scenes of hyperspectral data.
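A minimal sketch of the reflectance-to-SSA step under the simplified isotropic Hapke model (opposition effect omitted), assuming nadir geometry (mu0 = mu = 1). The per-band numerical inversion via scipy's brentq, and all function names, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import brentq

def _H(x, w):
    """Hapke's approximate isotropic multiple-scattering function."""
    return (1.0 + 2.0 * x) / (1.0 + 2.0 * x * np.sqrt(1.0 - w))

def hapke_reflectance(w, mu0=1.0, mu=1.0):
    """Simplified bidirectional reflectance for single-scattering albedo w
    (isotropic scatterers, opposition effect omitted); mu0 = mu = 1 at nadir."""
    return (w / (4.0 * (mu0 + mu))) * _H(mu0, w) * _H(mu, w)

def reflectance_to_ssa(r, mu0=1.0, mu=1.0):
    """Recover SSA per band by numerically inverting the forward model."""
    return np.array([brentq(lambda w: hapke_reflectance(w, mu0, mu) - ri,
                            1e-9, 1.0 - 1e-9)
                     for ri in np.atleast_1d(r)])

# Intimate (non-linear) mixtures in reflectance are then unmixed with a
# constrained *linear* model applied to reflectance_to_ssa(spectrum).
```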
A framework using spectral and spatial information is proposed for neural net multisensor data fusion. It consists of a set of independent-sensor neural nets, one for each sensor (type of data), coupled to a fusion net. The neural net of each sensor is trained from a representative data set of the particular sensor to map to a hypothesis space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. During the initial processing, three-dimensional (3-D) point cloud data (PCD) are segmented using a multidimensional mean-shift algorithm into clustered objects. Concurrently, multiband spectral imagery data (multispectral or hyperspectral) are spectrally segmented by the stochastic expectation–maximization (SEM) algorithm into a cluster map containing (spectral-based) pixel classes. For the proposed sensor fusion, spatial detections and spectral detections complement each other. They are fused into final detections by a cascaded neural network, which consists of two levels of neural nets. The success of the approach in utilizing sensor synergism for an enhanced classification is demonstrated for the specific case of classifying hyperspectral imagery and PCD extracted from LIDAR, obtained from an airborne data collection over the campus of the University of Southern Mississippi, Gulfport, Mississippi.
Linear mixtures of materials in a scene often occur because the pixel size of a sensor is relatively large and consequently pixels contain patches of different materials within them. This type of mixing can be thought of as areal mixing and modeled by a linear mixture model with certain constraints on the abundances. The solution of these models has received a lot of attention. However, there are more complex situations, such as the scattering that occurs in mixtures of vegetation and soil, or the intimate mixing of granular materials like soils. Such multiple scattering and microscopic mixtures within pixels have varying degrees of non-linearity. In such cases, a linear model is not sufficient. Furthermore, often enough, scenes may contain cases of both linear and non-linear mixing on a pixel-by-pixel basis. This study considers two approaches for use as generalized methods for unmixing pixels in a scene that may be linear (areal mixed) or non-linear (intimately mixed). The first method is based on earlier studies that indicate non-linear mixtures in reflectance space are approximately linear in albedo space. The method converts reflectance to single-scattering albedo (SSA) according to Hapke theory, assuming bidirectional scattering at nadir look angles, and uses a constrained linear model on the computed albedo values. The second method is motivated by the same idea, but uses a kernel that seeks to capture the linear behavior of albedo in non-linear mixtures of materials. The behavior of the kernel method is dependent on the value of a parameter, gamma. Furthermore, both methods depend on the choice of endmembers, and also on the use of root-mean-square error (RMSE) as a performance metric. This study compares the two approaches and pays particular attention to these dependencies. Both laboratory and aerial collections of hyperspectral imagery are used to validate the methods.
Spectral mixing can occur in a number of different ways, which may be linear or non-linear. The pixel size of a sensor may simply be too large, so that many pixels contain patches of different materials within them, resulting in linear mixing of the materials. However, there are more complex situations, such as the scattering that occurs in mixtures of vegetation and soil, or the intimate mixing of granular materials like soils. Such multiple scattering and microscopic mixtures within pixels have varying degrees of non-linearity. Often enough, scenes may contain cases of both linear and non-linear mixing on a pixel-by-pixel basis. This study compares two approaches for use as generalized methods for unmixing pixels in a scene that may be linear or non-linear. The first is a kernel-based fully-constrained method for spectral unmixing, which uses a kernel that seeks to capture the linear behavior of albedo in non-linear mixtures of materials. The second method directly converts reflectance to single-scattering albedo (SSA) according to Hapke theory, assuming bidirectional scattering at nadir look angles, and uses a constrained linear model on the computed albedo values. Multiple scenes of hyperspectral imagery calibrated to reflectance are used to validate the methods. We test the approaches using a HyMAP scene collected over the Waimanalo Bay region in Oahu, Hawaii, as well as an AVIRIS scene collected over the oil spill region in the Gulf of Mexico during the Deepwater Horizon oil incident.
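Both routes end in a constrained linear solve. A minimal sketch of that fully constrained step, using the common device of appending a heavily weighted sum-to-one row to a nonnegative least-squares problem; scipy.optimize.nnls and the weight delta are illustrative choices.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, x, delta=1e3):
    """Fully constrained least squares: x ~ E @ a, with a >= 0 and sum(a) = 1.

    E : (bands, n_endmembers) endmember matrix
    x : (bands,) pixel spectrum (reflectance, or SSA for the Hapke route)
    The sum-to-one constraint is enforced softly by augmenting the system
    with a heavily weighted row of ones (a standard FCLS device).
    """
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)
    return a

# rmse = np.linalg.norm(x - E @ fcls(E, x)) / np.sqrt(len(x))
```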
Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition architecture described below is devised specifically for solving single-sensor classification non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require a different feature set from one in a dense foliage area.
The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction of 3D point clouds, including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network, as shown above, to produce the decision class output. The network consists of three sequential functional modules. The first module performs feature extraction, reducing the input cluster to a set of singular-value features, or feature vector. The feature vector is then input into the feature normalization module to be normalized and balanced before being fed to the neural net classifier for classification. The neural net can be trained with actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the neural net has been trained, training resumes until the neural net has incrementally learned the new novel data. The associative memory capability of the neural net enables this incremental learning. The back-propagation algorithm or a support vector machine can be utilized for the classification and recognition.
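A minimal sketch of the first two modules and the classifier, assuming the singular values of a cluster's centered coordinates serve as the singular-value feature vector and sum-normalization stands in for the balancing step; sklearn's MLPClassifier is an illustrative stand-in for the back-propagation neural net.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def cluster_features(points):
    """Singular-value shape features for one 3D cluster (n_points, 3)."""
    centered = points - points.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)   # 3 singular values
    return s / s.sum()                              # normalized feature vector

# Training mode: fit the neural net classifier on labeled clusters.
# X = np.array([cluster_features(c) for c in training_clusters])
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
# Performing mode: clf.predict([cluster_features(new_cluster)])
```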
Nonlinear spectral mixing occurs when materials are intimately mixed. Intimate mixing is a common characteristic of granular materials such as soils. A linear spectral unmixing inversion applied to a nonlinear mixture will yield subpixel abundance estimates that do not equal the true values of the mixture's components. These aspects of spectral mixture analysis theory are well documented. Several methods to invert (and model) nonlinear spectral mixtures have been proposed. Examples include Hapke theory, the extended endmember matrix method, and kernel-based methods. There is, however, a relative paucity of real spectral image data sets that contain well-characterized intimate mixtures. To address this, special materials were custom fabricated, mechanically mixed to form intimate mixtures, and measured with a hyperspectral imaging (HSI) microscope. The results of analyses of visible/near-infrared (VNIR; 400 nm to 900 nm) HSI microscopy image cubes (in reflectance) of intimate mixtures of the two materials are presented. The materials are spherical beads of didymium glass and soda-lime glass, both ranging in particle size from 63 μm to 125 μm. Mixtures are generated by volume and thoroughly mixed mechanically. Three binary mixtures (and the two endmembers) are constructed and emplaced in the wells of a 96-well sample plate: 0%/100%, 25%/75%, 50%/50%, 80%/20%, and 100%/0% didymium/soda-lime. The analysis methods are linear spectral unmixing (LSU), LSU applied to reflectance converted to single-scattering albedo (SSA) using Hapke theory, and two kernel-based methods. The first kernel method uses a generalized kernel with a gamma parameter that gauges non-linearity, applying the well-known kernel trick to the least squares formulation of the constrained linear model. This method attempts to determine if each pixel in a scene is linear or non-linear, and adapts to compute a mixture model at each pixel accordingly. The second method uses 'K-hype' with a polynomial (quadratic) kernel. LSU applied to the reflectance spectra of the mixtures produced poor abundance estimates regardless of the constraints applied in the inversion. The 'K-hype' kernel-based method also produced poor fraction estimates. The best performers are LSU applied to the reflectance spectra converted to SSA using Hapke theory and the gamma-parameter kernel-based method.
Various phenomena occur in geographic regions that cause a scene to contain spectrally mixed pixels. The mixtures may be linear or nonlinear. It could simply be that the pixel size of a sensor is too large, so that many pixels contain patches of different materials within them (linear), or there could be microscopic mixtures and multiple scattering occurring within pixels (non-linear). Often enough, scenes may contain cases of both linear and non-linear mixing on a pixel-by-pixel basis. Furthermore, appropriate endmembers in a scene are not always easy to determine. A reference spectral library of materials may or may not be available; yet, even if a library is available, using it directly for spectral unmixing may not always be fruitful. This study investigates a generalized kernel-based method for spectral unmixing that attempts to determine if each pixel in a scene is linear or non-linear, and adapts to compute a mixture model at each pixel accordingly. The effort also investigates a kernel-based support vector method for determining spectral endmembers in a scene. Two scenes of hyperspectral imagery calibrated to reflectance are used to validate the methods. We test the approaches using a HyMAP scene collected over the Waimanalo Bay region in Oahu, Hawaii, as well as an AVIRIS scene collected over the oil spill region in the Gulf of Mexico during the Deepwater Horizon oil incident.
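A minimal sketch of a kernel-based constrained unmixing step of the kind described, assuming an RBF kernel whose gamma parameter gauges non-linearity (small gamma approaches the linear regime): abundances minimize the feature-space residual subject to the usual non-negativity and sum-to-one constraints. The SLSQP solver and the objective's exact form are illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(A, B, gamma):
    """RBF kernel matrix between the columns of A (bands, n) and B (bands, m)."""
    d2 = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-gamma * d2)

def kernel_unmix(E, x, gamma):
    """Constrained unmixing in RBF feature space; small gamma ~ linear regime."""
    K_EE = rbf(E, E, gamma)                         # endmember Gram matrix
    k_xE = rbf(x[:, None], E, gamma).ravel()        # pixel-to-endmember kernel
    m = E.shape[1]
    obj = lambda a: a @ K_EE @ a - 2.0 * k_xE @ a   # ||phi(x) - Phi(E)a||^2 + const
    cons = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=cons, method='SLSQP')
    return res.x
```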
Feature-based imaging spectroscopy methods are effective for identifying materials that exhibit specific well-defined
spectral absorption features. As long as a pixel contains a sufficient amount of material so that the absorption retains its
predominant shape, a feature-based method can work well. However, there are occasions when a background material
can mix with a material of interest, and significantly distort and maybe even remove the absorption. In such cases, the
material identification capabilities of these methods are likely to be degraded. This effort proposes an approach to
accommodate these conditions. The parameter values that determine the fit of an absorption feature are selected to be more
tolerant of distortions, and the signal contributions of any detected sub-pixel backgrounds are removed by making use of
a physically-constrained linear mixing model. This mixing model is used to remove any detected background spectra
from the image spectra within the bounding locations of the spectral features. However, an expected consequence of
loosening the parameter values and performing sub-pixel subtraction is an increase in false alarms. A statistically-based
spectral matched filter is proposed to reduce these false alarms. We test the individual and combined approaches for
identifying full-pixel and sub-pixel Tyvek panels in an experiment using a HyMAP hyperspectral scene with ground
truth collected over Waimanalo Bay, Oahu, Hawaii.
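A minimal sketch of a statistically based spectral matched filter of the kind proposed, assuming global background statistics estimated from the scene itself; variable names and normalization are illustrative.

```python
import numpy as np

def matched_filter_scores(cube2d, target):
    """Statistically based spectral matched filter with global statistics.

    cube2d : (n_pixels, bands) image spectra
    target : (bands,) reference spectrum (e.g., a Tyvek panel signature)
    Scores near 1 suggest full-pixel targets; sub-pixel targets fall
    between 0 and 1, and low scores help screen out false alarms.
    """
    mu = cube2d.mean(axis=0)
    cov = np.cov(cube2d, rowvar=False)
    w = np.linalg.solve(cov, target - mu)      # whitened target direction
    return (cube2d - mu) @ w / ((target - mu) @ w)
```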
An architecture for neural net multi-sensor data fusion is introduced and analyzed. This architecture consists of a set of
independent sensor neural nets, one for each sensor, coupled to a fusion net. The neural net of each sensor is trained
(from a representative data set of the particular sensor) to map to a hypothesis space output. The decision outputs from
the sensor nets are used to train the fusion net to an overall decision. To begin the processing, the 3D point cloud LIDAR
data is segmented into clustered objects using multi-dimensional mean-shift segmentation and classification.
Similarly, the multi-band HSI data is spectrally classified by the Stochastic Expectation-Maximization (SEM) algorithm into a
classification map containing pixel classes. For sensor fusion, spatial detections and spectral detections complement each
other. They are fused into final detections by a cascaded neural network, which consists of two levels of neural nets. The
first level is the sensor level, consisting of two neural nets: a spatial neural net and a spectral neural net. The second level
consists of a single neural net, the fusion neural net. The success of the system in utilizing sensor synergism for an
enhanced classification is clearly demonstrated by applying this architecture to classify a November 2010 airborne
data collection of LIDAR and HSI over the Gulfport, MS, area.
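A minimal sketch of the two-level cascade, assuming sklearn MLPs stand in for the sensor-level and fusion-level nets and that per-object feature matrices X_lidar and X_hsi with labels y are supplied by the segmentation stages; these names are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_cascade(X_lidar, X_hsi, y):
    """Two-level cascade: each sensor net maps its features to a hypothesis-space
    (class-probability) output; the fusion net is trained on those decisions."""
    spatial_net = MLPClassifier(hidden_layer_sizes=(32,),
                                max_iter=2000).fit(X_lidar, y)
    spectral_net = MLPClassifier(hidden_layer_sizes=(32,),
                                 max_iter=2000).fit(X_hsi, y)
    decisions = np.hstack([spatial_net.predict_proba(X_lidar),
                           spectral_net.predict_proba(X_hsi)])
    fusion_net = MLPClassifier(hidden_layer_sizes=(16,),
                               max_iter=2000).fit(decisions, y)
    return spatial_net, spectral_net, fusion_net
```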
The Deepwater Horizon oil spill covered a very large geographical area in the Gulf of Mexico creating potentially
serious environmental impacts on both marine life and the coastal shorelines. Knowing the oil's areal extent and
thickness as well as denoting different categories of the oil's physical state is important for assessing these impacts.
High-spectral-resolution data from hyperspectral imaging (HSI) sensors such as the Airborne Visible/Infrared Imaging
Spectrometer (AVIRIS) provide a valuable source of information that can be used for analysis by semi-automatic
methods for tracking an oil spill's areal extent, oil thickness, and oil categories. However, the spectral behavior of oil in
water is inherently a highly non-linear and variable phenomenon that changes depending on oil thickness and oil/water
ratios. For certain oil thicknesses there are well-defined absorption features, whereas for very thin films sometimes there
are almost no observable features. Feature-based imaging spectroscopy methods are particularly effective at classifying
materials that exhibit specific well-defined spectral absorption features. Statistical methods are effective at classifying
materials with spectra that exhibit a considerable amount of variability and that do not necessarily exhibit well-defined
spectral absorption features. This study investigates feature-based and statistical methods for analyzing oil spills using
hyperspectral imagery. The appropriate use of each approach is investigated and a combined feature-based and
statistical method is proposed.
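A minimal sketch of the feature-based ingredient, continuum removal and band depth across an absorption feature; the shoulder-band indices are assumed inputs. Thick oil yields a measurable depth, while the near-featureless thin films are what motivate the statistical complement.

```python
import numpy as np

def band_depth(wl, refl, i_left, i_right):
    """Continuum-removed depth of an absorption feature.

    wl, refl        : wavelength and reflectance arrays for one pixel
    i_left, i_right : indices of the feature's shoulder bands
    A straight-line continuum is drawn between the shoulders;
    depth = 1 - refl/continuum, maximized inside the feature.
    """
    seg = slice(i_left, i_right + 1)
    continuum = np.interp(wl[seg], [wl[i_left], wl[i_right]],
                          [refl[i_left], refl[i_right]])
    return (1.0 - refl[seg] / continuum).max()

# Thick oil: clear positive depth at its absorption features.
# Very thin films: depth ~ 0, so a statistical classifier must take over.
```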
A neurodynamical approach to scene segmentation of hyperspectral imagery is investigated based on oscillatory
correlation theory. A network of relaxation oscillators, which is based on the Locally Excitatory Globally Inhibitory
Oscillator Network (LEGION), is extended to process multiband data and is implemented to perform unsupervised
scene segmentation using both spatial and spectral information. The nonlinear dynamical network is capable of
achieving segmentation of objects in a scene by the synchronization of oscillators that receive local excitatory inputs
from a collection of local neighbors and desynchronization between oscillators corresponding to different objects. The
original LEGION model was designed for single-band imagery. The proposed multiband version of LEGION is
implemented such that the connections in the oscillator network receive the spectral pixel vectors in the hyperspectral
data as excitatory inputs. Euclidean distances between spectra in local neighborhoods are used as the measure of
closeness in the network. The ability of the proposed approach to perform natural and urban scene segmentation for
geospatial analysis is assessed. Our approach is tested on two hyperspectral datasets with notably different sensor
properties and scene content.
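A minimal sketch of the Terman–Wang relaxation oscillator update that LEGION builds on, integrated with forward Euler; the parameter values are illustrative and the global inhibitor is omitted. In the multiband extension, the coupling weights W would be derived from Euclidean distances between neighboring pixel spectra.

```python
import numpy as np

def legion_step(x, y, I, W, eps=0.02, gamma=6.0, beta=0.1, dt=0.01):
    """One Euler step of Terman-Wang relaxation oscillators.

    x, y : fast/slow oscillator states; I : external (stimulus) input
    W    : symmetric coupling weights, e.g. decreasing with the Euclidean
           distance between neighboring pixel spectra.
    """
    S = W @ (x > 0)                                   # excitation from active neighbors
    dx = 3 * x - x**3 + 2 - y + I + S                 # fast variable
    dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)  # slow variable
    return x + dt * dx, y + dt * dy

# Synchronized oscillators (same object) jump up together; a global
# inhibitor (omitted here) desynchronizes oscillators of different objects.
```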
The detection of sub-pixel materials in a hyperspectral scene is often accomplished using spectral matched filters or
subspace projection. These methods rely on estimates of background second-order statistics or subspaces in a scene that
are usually based either on global statistics of the entire scene or on adaptive local statistics. Global statistics have
the disadvantage of including materials of interest in the background estimate, which implies the method assumes these
materials occupy an insignificant portion of the scene. Adaptive methods that use a small number of samples
surrounding the pixel of interest to estimate a background covariance eliminate much of this disadvantage, but this
comes at the cost of significantly increasing computation time and potentially unstable estimates for some backgrounds.
A number of spectral matched filter methods have been developed with increasing sophistication, but experience
indicates that the method used to compute the background statistics may have a greater impact on overall detector
performance. This research investigates the use of a neural network approach to estimate the background statistics
needed for certain spectral matched filters requiring global statistics. The context of the effort is terrain, urban, and
shallow-water mapping using hyperspectral imagery, where the materials of interest inherently occupy a significant
portion of a scene or where certain background classes have problematic second-order statistics. Results of experiments
within this context are shown.
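A minimal sketch contrasting the two background-estimation regimes discussed, with illustrative window geometry; the per-pixel covariance of the local route is what drives its computational cost and potential instability.

```python
import numpy as np

def global_background(cube2d):
    """One mean/covariance for the whole scene; cheap, but materials of
    interest contaminate the estimate when they cover many pixels."""
    return cube2d.mean(axis=0), np.cov(cube2d, rowvar=False)

def local_background(cube, r, c, win=15):
    """Mean/covariance from a window around pixel (r, c); guard band omitted
    for brevity. One estimate (and inversion) per pixel makes this far more
    expensive, and small samples can leave the covariance ill-conditioned."""
    bands = cube.shape[2]
    block = cube[max(0, r - win):r + win + 1,
                 max(0, c - win):c + win + 1].reshape(-1, bands)
    return block.mean(axis=0), np.cov(block, rowvar=False)
```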
Many hyperspectral imaging algorithms are available for applications such as spectral unmixing, subpixel detection,
quantification, endmember extraction, classification, compression, etc., and many more are yet to come. It is very difficult
to evaluate and validate different algorithms developed and designed for the same application. This paper makes an
attempt to design a set of standardized synthetic images which simulate various scenarios so that different algorithms can
be validated and evaluated on the same ground with completely controllable environments. Two types of scenarios are
developed to simulate how a target can be inserted into the image background. One is called Target Implantation (TI)
which implants a target pixel by removing the background pixel it intends to replace. This type of scenario is of
particular interest in endmember extraction where pure signatures can be simulated and inserted into the background
with guaranteed 100% purity. The other is called Target Embeddedness (TE) which embeds a target pixel by adding this
target pixel to the background pixel it intends to insert. This type of scenario can be used to simulate signal detection
models where the noise is additive. For each of the two types, three scenarios are designed to simulate different levels of
target knowledge by adding Gaussian noise. In order to make these six scenarios a standardized data set for
experiments, the data used to generate the synthetic images can be chosen from a database or spectral library available in
the public domain, and no particular data are required to simulate these synthetic images. By virtue of the
designed six scenarios an algorithm can be assessed objectively and compared fairly to other algorithms on the same
setting. This paper demonstrates how these six scenarios can be used to evaluate various algorithms in applications of
subpixel detection, mixed pixel classification/quantification and endmember extraction.
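A minimal sketch of the two insertion schemes, assuming background and target spectra drawn from any public library; the abundance fraction f and the SNR-based noise level are illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_implantation(background, target, f):
    """TI: the target replaces a fraction f of the background pixel
    (f = 1 yields a guaranteed 100%-pure endmember pixel)."""
    return f * target + (1.0 - f) * background

def target_embeddedness(background, target, f):
    """TE: the target is added on top of the full background pixel,
    matching signal detection models with additive noise."""
    return background + f * target

def add_noise(pixel, snr_db):
    """Gaussian noise at a chosen SNR simulates degraded target knowledge."""
    sigma = np.linalg.norm(pixel) / np.sqrt(len(pixel)) / 10 ** (snr_db / 20)
    return pixel + rng.normal(0.0, sigma, size=pixel.shape)
```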
Hyperspectral imagery consists of a large number of spectral bands and is typically modeled in a high-dimensional
spectral space by exploitation algorithms. This high-dimensional space usually causes no inherent problems with simple
classification methods that use Euclidean distance or spectral angle for a metric of class separability. However,
classification methods that use quadratic metrics of separability, such as Mahalanobis distance, in high dimensional
space are often unstable, and often require dimension reduction methods to be effective. Methods that use supervised
neural networks or manifold learning methods are often very slow to train. Implementations of Adaptive Resonance
Theory, such as fuzzy ARTMAP and distributed ARTMAP, have been successfully applied to single-band imagery,
multispectral imagery, and other various low dimensional data sets. They also appear to converge quickly during
training. This effort investigates the behavior of ARTMAP methods on high dimensional hyperspectral imagery without
resorting to dimension reduction. Realistic-sized scenes are used and the analysis is supported by ground truth
knowledge of the scenes. ARTMAP methods are compared to a back-propagation neural network, as well as simpler
Euclidean distance and spectral angle methods.
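For reference, a minimal sketch of the simpler baselines compared against, the spectral angle classifier (with the Euclidean variant noted in a comment), which remain stable in the full-dimensional band space.

```python
import numpy as np

def spectral_angle(x, refs):
    """Angle (radians) between pixel x (bands,) and each row of refs
    (n_classes, bands), e.g. class mean spectra."""
    cos = refs @ x / (np.linalg.norm(refs, axis=1) * np.linalg.norm(x))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(x, refs):
    """Assign the class whose mean spectrum subtends the smallest angle;
    a Euclidean variant would use np.linalg.norm(refs - x, axis=1)."""
    return np.argmin(spectral_angle(x, refs))
```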
A biologically plausible neurodynamical approach to scene segmentation based on oscillatory correlation theory is investigated. A network of relaxation oscillators, which is based on the Locally Excitatory Globally Inhibitory Oscillator Network (LEGION), is constructed and adapted to geospatial data with varying ranges and precision. This nonlinear dynamical network is capable of achieving segmentation of objects in a scene through the synchronization of oscillators that receive local excitatory inputs from a collection of local neighbors and desynchronization between oscillators corresponding to different objects. The original LEGION model is sensitive to several aspects of the data that are encountered in real imagery, and achieving good performance across these different data types requires constant adjustment of the parameters that control excitatory and inhibitory connections. In this effort, the connections in the oscillator network are modified to reduce this sensitivity, with the goal of eliminating the need for parameter adjustment. We assess the ability of the proposed approach to perform natural and urban scene segmentation for geospatial analysis. Our approach is tested on simulated scene data as well as real imagery with varying gray-shade ranges and scene complexity.
The effect of assuming and using non-Gaussian attributes of underlying source signals for separating/encoding patterns is investigated, for application to terrain categorization (TERCAT) problems. Our analysis provides transformed data, denoted "Independent Components," which can be used and interpreted in different ways. The basis vectors of the resulting transformed data are statistically independent and tend to align themselves with source signals. In this effort, we investigate the basic formulation designed to transform signals for subsequent processing or analysis, as well as a more sophisticated model designed specifically for unsupervised classification. Mixes of single-band images are used, as well as simulated color-infrared and Landsat imagery. A number of experiments are performed. We first validate the basic formulation using a straightforward application of the method to unmix signal data in image space. We next show the advantage of using the transformed data, compared to the original data, for visually detecting TERCAT targets of interest. Subsequently, we test two methods of performing unsupervised classification on a scene that contains a diverse range of terrain features, showing the benefit of these methods against a control method for TERCAT applications.
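A minimal sketch of the basic formulation, assuming sklearn's FastICA serves as the independent-component transform applied to stacked band images; the component count and array shapes are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def independent_components(cube, n_components=6):
    """Unmix stacked band images into statistically independent components.

    cube : (rows, cols, bands) image stack (e.g., simulated CIR or Landsat)
    Returns component images whose basis vectors tend to align with the
    underlying source signals, unlike variance-ranked PCA axes.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)                     # pixels x bands
    ica = FastICA(n_components=n_components, whiten='unit-variance')
    S = ica.fit_transform(X)                        # pixels x components
    return S.reshape(rows, cols, n_components)
```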
There is a branch of radiative transport theory that is customarily expressed with an integrodifferential equation or an integral equation. The new formulation in this article is, without approximation, expressed through partial differential equations in both the frequency and time domains. Its accuracy is demonstrated in the frequency domain by applying it to a problem solved long ago, one expressed with the conventional integrodifferential equation. Confidence in the new method is bolstered by showing that it produces the identical analytical answer. This article also analyzes a time-domain situation with both the appropriate differential and integrodifferential equations, and the identical results are again obtained.
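For context, a commonly cited form of the conventional integrodifferential description for axially symmetric, isotropic scattering in a plane-parallel nonconservative medium is shown below in standard notation; the notation is an assumption, since the article's own symbols are not reproduced here.

```latex
% Conventional integrodifferential (Chandrasekhar) form for axially
% symmetric, isotropic scattering in a plane-parallel medium:
%   I        = specific intensity (radiance)
%   \tau     = optical depth,  \mu = direction cosine
%   \omega_0 = single-scattering albedo (< 1: nonconservative)
\mu \,\frac{\partial I(\tau,\mu)}{\partial \tau}
   \;=\; I(\tau,\mu) \;-\; \frac{\omega_0}{2}\int_{-1}^{1} I(\tau,\mu')\,\mathrm{d}\mu'
```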
Advances in hyperspectral sensor technology increasingly provide higher resolution and higher quality data for the accurate generation of terrain categorization/classification (TERCAT) maps. The generation of TERCAT maps from hyperspectral imagery can be accomplished using a variety of spectral pattern analysis algorithms; however, the algorithms are sometimes complex, and the training of such algorithms can be tedious. Further, hyperspectral imagery contains a voluminous amount of data, with contiguous spectral bands being highly correlated. These highly correlated bands tend to provide redundant information for classification/feature extraction computations. In this paper, we introduce the use of wavelets to generate a set of Generalized Difference Feature Index (GDFI) measures, which transforms a hyperspectral image cube into a derived set of GDFI bands. A commonly known special case of the proposed GDFI approach is the Normalized Difference Vegetation Index (NDVI) measure, which seeks to emphasize vegetation in a scene. Numerous other band-ratio measures that emphasize other specific ground features can be shown to be special cases of the proposed GDFI approach. Generating a set of GDFI bands is fast and simple. However, the number of possible bands is vast, and only a few of these “generalized ratios” will be useful. Judicious data mining of the large set of GDFI bands produces a small subset of GDFI bands designed to extract specific TERCAT features. We extract/classify several terrain features and we compare our results with the results of a more sophisticated neural network feature extraction routine.
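A minimal sketch of the generalized-ratio idea, using the simple normalized pairwise band difference as a plausible special case (the paper's full construction is wavelet-based); NDVI falls out when the two bands are NIR and red.

```python
import numpy as np

def gdfi(cube, i, j, eps=1e-12):
    """Normalized difference of bands i and j: (b_i - b_j) / (b_i + b_j)."""
    bi, bj = cube[..., i].astype(float), cube[..., j].astype(float)
    return (bi - bj) / (bi + bj + eps)

# NDVI is the special case with i = NIR band and j = red band:
# ndvi = gdfi(cube, i=nir_idx, j=red_idx)
# Data mining then searches the (i, j) pairs for the few indices that
# isolate a specific TERCAT feature.
```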
The effect of using Adaptive Wavelets is investigated for dimension reduction and noise filtering of hyperspectral imagery that is to be subsequently exploited for classification or subpixel analysis. The method is investigated as a possible alternative to the Minimum Noise Fraction (MNF) transform as a preprocessing tool. Unlike the MNF method, the wavelet-transformed method does not require an estimate of the noise covariance matrix, which can often be difficult to obtain for complex scenes (such as urban scenes). Another desirable characteristic of the proposed wavelet-transformed data is that, unlike Principal Component Analysis (PCA) transformed data, it maintains the same spectral shapes as the original data (the spectra are simply smoothed). In the experiment, an adaptive wavelet image cube is generated using four orthogonal conditions and three vanishing-moment conditions. The classification performance of a Derivative Distance Squared (DDS) classifier and a Multilayer Feedforward Network (MLFN) neural network classifier applied to the wavelet cubes is then observed. The performance of the Constrained Energy Minimization (CEM) matched-filter algorithm applied to these data is also observed. HYDICE 210-band imagery containing a moderate amount of noise is used for the analysis so that the noise-filtering properties of the transform can be emphasized. Trials are conducted on a challenging scene with significant locally varying statistics that contains a diverse range of terrain features. The proposed wavelet approach can be automated to require no input from the user.
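A minimal sketch of wavelet-based spectral smoothing in the spirit described, assuming PyWavelets with a db4 basis and universal soft thresholding; the adaptive wavelet design itself (the orthogonality and vanishing-moment conditions) is not reproduced. Note the output remains in the original band space, so spectral shapes are preserved.

```python
import numpy as np
import pywt

def smooth_spectrum(spectrum, wavelet='db4', level=3):
    """Denoise one pixel spectrum by soft-thresholding wavelet details.

    The result stays in the original band space, so spectral shapes are
    preserved (merely smoothed), unlike PCA- or MNF-rotated data.
    """
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(spectrum))) # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(spectrum)]
```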
A physically-constrained localized linear mixing model suitable
to process multi/hyperspectral imagery for Terrain Categorization
(TERCAT) applications is investigated. Unlike the basic spectral
linear mixing model, which typically includes all potential endmembers in the model simultaneously at every site in an image, the proposed approach restricts the local model at each site to a subset of endmembers, using localized spectral/spatial constraints to narrow the selection process. This approach is used to reduce the observed instability of conventional linear mixture analysis in addressing TERCAT problems for scenes with a large number of endmembers. Experiments are conducted on an 18-channel GERIS scene, airborne-collected over Northern Virginia, that contains a diverse range of terrain features, showing the benefit of this method as compared to the basic linear mixture analysis approach for TERCAT applications.
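A minimal sketch of one way to realize per-site endmember subsetting: exhaustively score small subsets with a sum-to-one-augmented nonnegative least-squares fit and keep the lowest-RMSE model. The subset-size cap and the scoring rule are illustrative assumptions, not the paper's selection mechanism (which uses localized spectral/spatial constraints).

```python
import numpy as np
from itertools import combinations
from scipy.optimize import nnls

def local_unmix(E, x, max_k=3, delta=1e3):
    """Fit each small endmember subset and keep the lowest-RMSE model.

    Restricting each site to at most max_k of the m candidate endmembers
    avoids the instability of fitting all endmembers simultaneously.
    """
    m, best = E.shape[1], (np.inf, None, None)
    for k in range(1, max_k + 1):
        for idx in combinations(range(m), k):
            Es = E[:, idx]
            E_aug = np.vstack([Es, delta * np.ones(k)])   # sum-to-one row
            a, _ = nnls(E_aug, np.append(x, delta))
            rmse = np.linalg.norm(x - Es @ a) / np.sqrt(len(x))
            if rmse < best[0]:
                best = (rmse, idx, a)
    return best   # (rmse, endmember indices, abundances)
```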
I present what appears to be a new description of radiative transfer problems in the time domain. Seemingly for the first time, a simple physical picture emerges of the underlying essence of scattered radiance, which for isotropic, axially-symmetric scattering in nonconservative linear media behaves as attenuated travelling waves, previously suggested only by analogy. The method uses a new differential equation approach. Its accuracy in the frequency domain was initially demonstrated by applying it to a solved problem that the literature treats with the conventional 95-year-old integro-differential equation description. Confidence in the differential equation method was bolstered by showing that this new method produces the same analytical answer. The new technique converts the integro-differential equation formulation of radiative transfer into a "pure" differential equation formulation, consisting here of a mixture of ordinary and partial derivatives, and solves that. This paper analyzes the situation in the time domain using the differential equation description and again yields a travelling wave description. However, this time the description is not obtained merely by analogy. It is exact. This result of attenuated travelling waves was demonstrated in a prior paper by solving the integro-differential equation for the classic problem of axially-symmetric scalar isotropic scattering in a nonconservative linear medium. In this paper, we revisit the problem, this time solving it by the differential equation method, and obtain the identical result, once again confirming the method.
A simulated annealing method of partitioning hyperspectral imagery, initialized by a supervised classification method, is investigated to provide spatially smooth class labeling for terrain mapping applications. The method is used to obtain an estimate of the mode of a Gibbs distribution defined over a symmetric spatial neighborhood system that is based on an energy function characterizing spectral disparities in Euclidean distance and spectral angle. Experiments are conducted on a 210-band HYDICE scene that contains a diverse range of terrain features and that is supported with ground truth. Both visual and quantitative results demonstrate a clear benefit of this method as compared to spectral-only supervised classification or unsupervised annealing that has been initialized randomly.
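A minimal sketch of the annealing step, assuming a Potts-style smoothness term added to a Euclidean spectral misfit and a geometric cooling schedule; the weights, schedule, and restriction to Euclidean distance (the paper also uses spectral angle) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def anneal_labels(cube, labels, means, beta=1.0, T0=1.0, n_sweeps=20):
    """Metropolis sampling of a Gibbs labeling with geometric cooling.

    Pixel energy = spectral misfit to its class mean + beta * number of
    4-neighbors carrying a different label (a Potts smoothness term).
    labels is initialized by a supervised classification.
    """
    rows, cols, _ = cube.shape

    def energy(r, c, lab):
        e = np.linalg.norm(cube[r, c] - means[lab])
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and labels[rr, cc] != lab:
                e += beta
        return e

    T = T0
    for _ in range(n_sweeps):
        for r in range(rows):
            for c in range(cols):
                new = rng.integers(len(means))
                dE = energy(r, c, new) - energy(r, c, labels[r, c])
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    labels[r, c] = new
        T *= 0.9          # cool toward the mode of the Gibbs distribution
    return labels
```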
We present a new point of view for investigating radiative transfer problems by showing that they involve the scattering of traveling evanescent waves. Its accuracy is demonstrated by applying it to a solved problem whose solution was published by Chandrasekhar. He determined it with a conventional method, and we bolster confidence in our method by showing that the new method produces the same analytical answer. The new technique converts the 95-year-old, usually difficult to solve, integro-differential equation formulation of radiative transfer into a less formidable 'pure' differential equation formulation, consisting here of a mixture of ordinary and partial derivatives, and solves that. This paper focuses on a single class of cases. It also demonstrates surprising success at solving a narrowly defined class of nonlinear radiative transfer problems initially expressed as a nonlinear integro-differential formulation of the radiative transport problem.
We investigate a hyperspectral data reduction technique based on a matrix factorization method using the notion of linear independence instead of an information measure, as an alternative to Principal Component Analysis (PCA) or the Karhunen-Loeve Transform. The technique is applied to a hyperspectral database whose spectral samples are known. We proceed to cluster such dimension-reduced databases with an unsupervised second-order statistics clustering method, and we compare those results to those produced by first-order statistics. We illustrate the above methodology by applying it to several spectral databases. Since we know the class to which each sample in the database belongs, we can effectively assess the algorithms' clustering/classification accuracy. In addition to using unsupervised clustering of data for purposes of image segmentation, we investigate this algorithm as a means of improving the integrity of spectral databases by removing spurious samples.
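The abstract does not name the factorization; as one plausible realization, a minimal sketch using pivoted QR to rank database spectra by linear independence rather than by explained variance as PCA would.

```python
import numpy as np
from scipy.linalg import qr

def select_independent_spectra(D, k):
    """Pick the k most linearly independent columns of spectral database D.

    Pivoted QR orders columns by how much new (linearly independent)
    direction each contributes, unlike PCA's variance ranking. Columns
    with tiny |diag(R)| are nearly dependent on earlier ones and are
    candidates for removal as spurious/redundant samples.
    """
    _, R, piv = qr(D, pivoting=True, mode='economic')
    return piv[:k], np.abs(np.diag(R))
```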
A Gibbs-based approach to partitioning hyperspectral imagery into homogeneous regions is investigated for terrain mapping applications. A form of Bayesian estimation, maximum a posteriori (MAP) estimation, is applied through the use of a Gibbs distribution defined over a neighborhood system and is implemented as a multi-grid process. Appropriate energy functions and neighborhood graph structures are investigated, which model spectral disparities in an image using spectral angle and/or Euclidean distance. Experiments are conducted on a HYDICE scene collected over an area adjacent to Fort Hood, Texas, that contains a diverse range of terrain features and that is supported with ground truth. Suitable parameter ranges are investigated, and the behavior of the algorithm is characterized using individual and combined measures of disparity within the context of a more general framework, one that supports mixed-pixel processing.