We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward-maps the skin parameters to corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through short-wave infrared domains from 24 patients of both genders and of Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and with well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally invasive prescreening of malignant skin cancers.
The temporal analysis of changes in biological skin parameters, including melanosome concentration, collagen concentration, and blood oxygenation, may serve as a valuable tool in diagnosing the progression of malignant skin cancers and in understanding the pathophysiology of cancerous tumors. Quantitative knowledge of these parameters can also be useful in applications such as wound assessment and point-of-care diagnostics, among others. We propose an approach to estimate in vivo skin parameters using a forward computational model based on Kubelka-Munk theory and the Fresnel equations. We use this model to map the skin parameters to their corresponding hyperspectral signatures. We then use machine-learning-based regression to develop an inverse map from hyperspectral signatures to skin parameters. In particular, we employ support vector machine regression to estimate the in vivo skin parameters given their corresponding hyperspectral signatures. We build on our work from SPIE 2012 and validate our methodology on an in vivo dataset consisting of 241 signatures collected from hyperspectral imaging of patients of both genders and of Caucasian, Asian, and African American ethnicities. In addition, we extend our methodology beyond the visible region and through the short-wave infrared region of the electromagnetic spectrum. We find promising results when comparing the estimated skin parameters to the ground truth, demonstrating good agreement with well-established physiological precepts. This methodology has potential use in non-invasive skin anomaly detection and in developing minimally invasive pre-screening tools.
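As an illustration of the Kubelka-Munk theory underlying the forward model, the sketch below evaluates the classical Kubelka-Munk diffuse reflectance of an optically thick layer from its absorption and scattering coefficients. The coefficient values are hypothetical, chosen only to show the qualitative trend; this is not the paper's full N-layer model.

```python
import math

def km_reflectance_inf(K, S):
    """Kubelka-Munk diffuse reflectance of an optically thick layer,
    R_inf = 1 + K/S - sqrt((K/S)^2 + 2*K/S), for absorption coefficient K
    and scattering coefficient S (in the same units)."""
    r = K / S
    return 1.0 + r - math.sqrt(r * r + 2.0 * r)

# Stronger absorption relative to scattering lowers the reflectance.
r_weak = km_reflectance_inf(0.1, 1.0)    # K/S = 0.1
r_strong = km_reflectance_inf(1.0, 1.0)  # K/S = 1.0
```

As K/S approaches zero the reflectance approaches 1 (pure scattering), and it decreases monotonically as absorption grows, which is the behavior that lets chromophores such as melanin leave a spectral fingerprint in the measured reflectance.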
A computational skin reflectance model is used here to provide the reflectance, absorption, scattering, and transmittance based on the constitutive biological components that make up the layers of the skin. The changes in reflectance are mapped back to deviations in model parameters, which include melanosome level, collagen level, and blood oxygenation. The computational model implemented in this work is based on the Kubelka-Munk multi-layer reflectance model and the Fresnel equations that describe a generic N-layer model structure. This treats the skin as a multi-layered material, with each layer characterized by specific absorption and scattering coefficients, reflectance spectra, and transmittance based on the model parameters. These model parameters include melanosome level, collagen level, blood oxygenation, blood level, dermal depth, and subcutaneous tissue reflectance. We use this model, coupled with support vector machine based regression (SVR), to predict the biological parameters that make up the layers of the skin. In the proposed approach, the physics-based forward mapping is used to generate a large set of training exemplars. The samples in this dataset are then used as training inputs for the SVR algorithm to learn the inverse mapping. This approach was tested on VIS-range hyperspectral data. Performance validation of the proposed approach was performed by measuring the prediction error on the skin constitutive parameters and exhibited very promising results.
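A minimal sketch of the forward/inverse pipeline just described, with two loudly labeled simplifications: a one-parameter exponential attenuation stands in for the Kubelka-Munk N-layer forward model, and a nearest-neighbor lookup stands in for the SVR regressor. Only the pattern reflects the text: sample the physics-based forward map to build training exemplars, then query the learned inverse.

```python
import math
import random

# Hypothetical band centers (nm) and a toy one-parameter forward model:
# reflectance falls off exponentially with a "melanosome level" m in [0, 1].
# This is a stand-in for the Kubelka-Munk N-layer forward map.
BANDS = [450, 550, 650, 750, 850]

def forward(m):
    return [math.exp(-m * (1000.0 / b)) for b in BANDS]

# Step 1: use the forward map to generate a large set of training exemplars.
random.seed(0)
train = [(m, forward(m)) for m in (random.random() for _ in range(2000))]

# Step 2: invert a measured spectrum. Nearest-neighbor lookup in spectrum
# space is a simple stand-in for the SVR inverse map described above.
def invert(spectrum):
    sq_dist = lambda s: sum((a - b) ** 2 for a, b in zip(s, spectrum))
    return min(train, key=lambda pair: sq_dist(pair[1]))[0]

m_true = 0.42
m_est = invert(forward(m_true))
```

Because the toy forward map is monotone in the parameter, the inverse is well posed here; in the real multi-parameter model the regressor must additionally resolve ambiguities between chromophores with overlapping spectral effects.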
Hyperspectral Image (HSI) anomaly detectors typically employ local background modeling techniques to facilitate target
detection from surrounding clutter. Global background modeling has been challenging due to the multi-modal content
that must be automatically modeled to enable target/background separation. We have previously developed a support
vector based anomaly detector that does not impose an a priori parametric model on the data and enables multi-modal
modeling of large background regions with inhomogeneous content. Effective application of this support vector
approach requires the setting of a kernel parameter that controls the tightness of the model fit to the background data.
Estimation of the kernel parameter has typically considered Type I / false-positive error optimization due to the
availability of background samples, but this approach has not proven effective for general application since these
methods only control the false alarm level, without any optimization for maximizing detection. Parameter optimization
with respect to Type II / false-negative error has remained elusive due to the lack of sufficient target training exemplars.
We present an approach that optimizes parameter selection with respect to both Type I and Type II error criteria by introducing outliers derived from existing hypercube content to guide parameter estimation. The approach has been applied
to hyperspectral imagery and has demonstrated automatic estimation of parameters consistent with those that were found
to be optimal, thereby providing an automated method for general anomaly detection applications.
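The selection strategy above can be sketched in one dimension. Everything here is a toy stand-in: a Parzen density score replaces the support vector background model, the injected outliers and the fixed decision threshold are hypothetical, and only the idea from the text survives, namely scoring candidate kernel widths on both false-positive (background) and false-negative (injected-outlier) rates.

```python
import math
import random

random.seed(1)
# Toy 1-D data: Gaussian background clutter plus injected outliers placed
# away from the background (all values are hypothetical).
background = [random.gauss(0.0, 1.0) for _ in range(300)]
outliers = [random.uniform(5.0, 8.0) for _ in range(30)]

def kde_score(x, data, sigma):
    """Parzen-style background score with a Gaussian kernel of width sigma."""
    return sum(math.exp(-0.5 * ((x - d) / sigma) ** 2) for d in data) / len(data)

def type_errors(sigma, thresh=0.01):
    """Type I rate (background flagged) and Type II rate (outlier missed)."""
    fp = sum(kde_score(x, background, sigma) < thresh for x in background)
    fn = sum(kde_score(x, background, sigma) >= thresh for x in outliers)
    return fp / len(background), fn / len(outliers)

# Choose the kernel width minimizing the combined error on both criteria.
best_sigma = min((0.2, 0.5, 1.0, 2.0), key=lambda s: sum(type_errors(s)))
```

A very wide kernel smears background density over the injected outliers and misses them (Type II error), while a very narrow kernel under-covers the background tails (Type I error); the combined criterion pushes the selection toward an intermediate width.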
There are a number of challenging estimation, tracking, and decision theoretic problems that require the estimation of
Probability Density Functions (PDFs). When using a traditional parametric approach, the functional model of the PDF is
assumed to be known. However, these models often do not capture the complexity of the underlying distribution.
Furthermore, the problems of validating the model and estimating its parameters are often complicated by the sparsity of
prior examples. The need for exemplars grows exponentially with the dimension of the feature space. These methods
may yield PDFs that do not generalize well to unseen data because they tend to overfit or underfit the training exemplars. We investigate and compare alternative approaches for estimating a PDF, considering instead kernel-based estimation methods that generalize the Parzen estimator and use a Linear Mixture of Kernels (LMK) model. The
methods reported here are derived from machine learning methods such as the Support Vector Machines and the
Relevance Vector Machines. These PDF estimators provide the following benefits: (a) they are data driven; (b) they do
not overfit the data and consequently have good generalization properties; (c) they can accommodate highly irregular
and multi-modal data distributions; (d) they provide a sparse and succinct description of the underlying data which leads
to efficient computation and communication. Comparative experimental results are provided illustrating these properties
using simulated Mixture of Gaussian-distributed data.
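A sketch of the baseline Parzen estimator that the kernel methods above generalize, evaluated on simulated Mixture-of-Gaussian data as in the experiments; the component parameters and bandwidth here are hypothetical.

```python
import math
import random

def parzen_pdf(x, samples, h):
    """Parzen-window density estimate with a Gaussian kernel of bandwidth h."""
    c = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

random.seed(2)
# Simulated bimodal Mixture of Gaussians (hypothetical parameters).
data = [random.gauss(-2.0, 0.5) for _ in range(500)] + \
       [random.gauss(+2.0, 0.5) for _ in range(500)]

p_mode = parzen_pdf(-2.0, data, h=0.3)     # near a mode: high density
p_valley = parzen_pdf(0.0, data, h=0.3)    # between modes: low density
```

Note the contrast with the sparse estimators described in the text: the Parzen estimate keeps every training sample as a kernel center, whereas the SVM/RVM-derived LMK estimators retain only a small subset, yielding the succinct description and cheaper evaluation claimed above.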
Existing techniques for hyperspectral image (HSI) anomaly detection are computationally intensive, precluding real-time
implementation. The high dimensionality of the spatial/spectral hypercube with associated correlations between spectral
bands present significant impediments to real time full hypercube processing that accurately encapsulates the underlying
modeling. Traditional techniques have imposed Gaussian models, but these have suffered from significant
computational requirements to compute large inverse covariance matrices as well as modeling inaccuracies. We have
developed a novel data-driven, non-parametric HSI anomaly detector that has significantly reduced computational
complexity with enhanced HSI modeling, providing the capability for real time performance with detection rates that
match or surpass existing approaches. This detector, based on the Support Vector Data Description (SVDD), provides
accurate, automated modeling of multi-modal data, facilitating effective application of a global background estimation
technique which provides the capability for real time operation on a standard PC platform. We have demonstrated one
second processing time on hypercubes of dimension 256×256×145, along with superior detection performance relative to
alternate detectors. Computation performance analysis has been quantified via processing runtimes on a PC platform,
and detection/false-alarm performance is described via Receiver Operating Characteristic (ROC) curve analysis for the
SVDD anomaly detector vs. alternate anomaly detectors.
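The SVDD decision statistic can be illustrated with a deliberately simplified sketch: assigning every background sample equal weight reduces the statistic to the kernel distance from a test pixel to the background centroid in feature space. This equal-weight form is a stand-in for the full SVDD (which solves a QP to find sparse support-vector weights); the 3-band "spectra" and the gamma value are hypothetical.

```python
import math
import random

def rbf(x, y, gamma):
    """Gaussian RBF kernel between two spectra."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def anomaly_score(x, background, gamma):
    """Kernel distance from x to the background centroid in feature space:
    k(x, x) - (2/N) * sum_i k(x, x_i) + const. With equal weights on all
    background points, this approximates the SVDD decision statistic."""
    n = len(background)
    return 1.0 - (2.0 / n) * sum(rbf(x, b, gamma) for b in background)

random.seed(3)
# Toy background "pixels": 3-band spectra clustered near (0.5, 0.5, 0.5);
# the anomaly is a hypothetical out-of-distribution spectrum.
bg = [[random.gauss(0.5, 0.05) for _ in range(3)] for _ in range(200)]
anomaly = [0.9, 0.1, 0.9]

s_bg = anomaly_score(bg[0], bg, gamma=10.0)
s_anom = anomaly_score(anomaly, bg, gamma=10.0)
```

Note that no inverse covariance matrix is formed and no distributional model is imposed on the background, which is the source of the computational and modeling advantages claimed over Gaussian-based detectors.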
Segmentation and labeling algorithms for foliage penetrating (FOPEN) ultra-wideband Synthetic Aperture Radar (UWB SAR) images are critical components in providing local context in automatic target recognition algorithms. We develop a statistical estimation-theoretic approach to segmenting and labeling the FOPEN images into foliage and non-foliage regions. The labeled maps enable the use of region-adaptive detectors, such as a constant false-alarm rate detector with region-dependent parameters. Segmentation of the images is achieved by performing a maximum a posteriori (MAP) estimate of the pixel labels. By modeling the conditional distribution with a Symmetric Alpha-Stable density and assuming a Markov random field model for the pixel labels, the resulting posterior probability density function is maximized by using simulated annealing to yield the MAP estimate.
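The MAP estimation loop above can be sketched on a toy problem. Two loud simplifications: the "image" is a 1-D signal rather than a SAR scene, and a Gaussian data term stands in for the Symmetric Alpha-Stable conditional density used in the paper; the class means, noise level, and Potts weight are all hypothetical.

```python
import math
import random

random.seed(4)
# Toy 1-D "image": a foliage region (mean 2.0) followed by a clearing
# (mean 0.0), with additive noise.
signal = [random.gauss(2.0, 0.6) for _ in range(30)] + \
         [random.gauss(0.0, 0.6) for _ in range(30)]
MEANS = {0: 0.0, 1: 2.0}   # class-conditional means (0 = clearing, 1 = foliage)
BETA = 1.5                 # Potts/MRF smoothness weight

def local_dE(labels, i):
    """Energy change (negative log-posterior) from flipping label i."""
    old, new = labels[i], 1 - labels[i]
    dE = (signal[i] - MEANS[new]) ** 2 - (signal[i] - MEANS[old]) ** 2
    for j in (i - 1, i + 1):                      # Potts prior over neighbors
        if 0 <= j < len(labels):
            dE += BETA * ((new != labels[j]) - (old != labels[j]))
    return dE

# Simulated annealing toward the MAP labeling.
labels = [random.randint(0, 1) for _ in signal]
T = 2.0
for sweep in range(200):
    for i in range(len(labels)):
        dE = local_dE(labels, i)
        if dE < 0 or random.random() < math.exp(-dE / T):
            labels[i] = 1 - labels[i]
    T *= 0.97                                     # geometric cooling schedule
```

The Metropolis acceptance rule occasionally takes uphill moves at high temperature, allowing the label field to escape poor local minima before the cooling schedule freezes it near the MAP configuration.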
We address the application of model-supported exploitation techniques to synthetic aperture radar (SAR) imagery. The emphasis is on monitoring SAR imagery using wide area 2D and/or 3D site models along with contextual information. We consider here the following tasks useful in monitoring: (a) site model construction using segmentation and labeling techniques, (b) target detection, (c) target classification and indexing, and (d) SAR image-site model registration. The 2D wide area site models used here for SAR image exploitation differ from typical site models developed for RADIUS applications, in that they do not model specific facilities, but constitute wide area site models of cultural features such as urban clutter areas, roads, clearings, and fields. These models may be derived directly from existing site models, possibly constructed from electro-optical (EO) observations. When such models are not available, a set of segmentation and labeling techniques described here can be used for the construction of 2D site models. The use of models can potentially yield critical information which can disambiguate target signatures in SAR images. We address registration of SAR and EO images to a common site model. Specific derivations are given for the case of registration within the RCDE platform. We suggest a constant false alarm rate (CFAR) detection scheme and a topographic primal sketch (TPS) based classification scheme for monitoring target occurrences in SAR images. The TPS of an observed target is matched against candidate target TPSs synthesized for the preferred target orientation, inferred from context (e.g., road or parking lot targets). Experimental results on real and synthetic SAR images are provided.
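The CFAR detection scheme mentioned above can be sketched in its simplest cell-averaging form on 1-D magnitude data. The window sizes, scale factor, and injected target amplitude/index are all hypothetical; the paper's region-adaptive variant would further swap in region-dependent parameters from the labeled site model.

```python
import random

def ca_cfar(x, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: flag cell i when it exceeds `scale` times the
    mean of `train` cells on each side, skipping `guard` cells around i."""
    hits = []
    for i in range(train + guard, len(x) - train - guard):
        left = x[i - guard - train:i - guard]
        right = x[i + guard + 1:i + guard + 1 + train]
        noise = (sum(left) + sum(right)) / (2 * train)
        if x[i] > scale * noise:
            hits.append(i)
    return hits

random.seed(5)
# Toy magnitude data: clutter plus one strong return at a known index
# (amplitude and position are hypothetical).
data = [abs(random.gauss(0.0, 1.0)) for _ in range(120)]
data[60] = 12.0

detections = ca_cfar(data)
```

Because the threshold tracks the local clutter mean, the false alarm rate stays roughly constant as clutter power varies, which is what makes the region-dependent parameterization described in the text meaningful.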
We examine the potential role of probabilistic analysis in the integration of sensor-based trajectory planning and motion/structure estimation. In this article we report three formalisms illustrating this approach.