Data from Optech Titan are analyzed here for purposes of terrain classification, adding the spectral data component to the lidar point cloud analysis. Nearest-neighbor sorting techniques are used to create the merged point cloud from the three channels. The merged point cloud is analyzed using spectral analysis techniques that allow for the exploitation of color and derived spectral products (pseudo-NDVI), as well as lidar features such as height values and return number. Standard spectral image classification techniques are used to train a classifier, and analysis was done with a Maximum Likelihood supervised classification. Terrain classification results show an overall accuracy improvement of 10% and a kappa coefficient increase of 0.07 over a raster-based approach.
A 3-D Monte Carlo ray-tracing simulation of LiDAR propagation models the reflection, transmission and absorption interactions of laser energy with materials in a simulated scene. In this presentation, a model scene consisting of a single Victorian Boxwood (Pittosporum undulatum) tree is generated by the high-fidelity tree voxel model VoxLAD using high-spatial resolution point cloud data from a Riegl VZ-400 terrestrial laser scanner. The VoxLAD model uses terrestrial LiDAR scanner data to determine Leaf Area Density (LAD) measurements for small volume voxels (20 cm sides) of a single tree canopy. VoxLAD is also used in a non-traditional fashion in this case to generate a voxel model of wood density. Information from the VoxLAD model is used within the LiDAR simulation to determine the probability of LiDAR energy interacting with materials at a given voxel location. The LiDAR simulation is defined to replicate the scanning arrangement of the Riegl VZ-400; the resulting simulated full-waveform LiDAR signals compare favorably to those obtained with the Riegl VZ-400 terrestrial laser scanner.
LiDAR and hyperspectral data provide rich and complementary information about the content of a scene. In this work, we examine methods of data fusion, with the goal of minimizing information loss due to point-cloud rasterization and spatial-spectral resampling. Two approaches are investigated and compared: 1) a point-cloud approach in which spectral indices such as Normalized Difference Vegetation Index (NDVI) and principal components of the hyperspectral image are calculated and appended as attributes to each LiDAR point falling within the spatial extent of the pixel, and a supervised machine learning approach is used to classify the resulting fused point cloud; and 2) a raster-based approach in which LiDAR raster products (DEMs, DSMs, slope, height, aspect, etc.) are created and appended to the hyperspectral image cube, and traditional spectral classification techniques are then used to classify the fused image cube. The methods are compared in terms of classification accuracy. LiDAR data and associated orthophotos of the NPS campus collected during 2012 - 2014 and hyperspectral Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data collected during 2011 are used for this work.
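The per-point attribute append in approach 1 amounts to a nearest-pixel lookup from the raster into the point cloud. A minimal sketch follows; the function name, the regular north-up grid convention, and the parameter layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_ndvi_to_points(points_xy, red, nir, origin, pixel_size):
    """Append a pseudo-NDVI attribute to each LiDAR point by sampling
    the raster pixel whose footprint contains the point.

    points_xy : (N, 2) array of easting/northing coordinates
    red, nir  : 2-D reflectance arrays on the same north-up grid
    origin    : (x0, y0) map coordinates of the raster's upper-left corner
    pixel_size: ground sample distance, same units as points_xy
    """
    cols = ((points_xy[:, 0] - origin[0]) / pixel_size).astype(int)
    rows = ((origin[1] - points_xy[:, 1]) / pixel_size).astype(int)
    # clamp points that fall just outside the raster to the edge pixels
    cols = np.clip(cols, 0, red.shape[1] - 1)
    rows = np.clip(rows, 0, red.shape[0] - 1)
    r = red[rows, cols]
    n = nir[rows, cols]
    ndvi = (n - r) / np.maximum(n + r, 1e-9)  # guard against division by zero
    return ndvi
```

The returned vector would be stacked with the LiDAR height and intensity attributes to form the fused point-cloud feature set.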
A Monte Carlo ray tracing simulation of LiDAR propagation has been expanded to 3 dimensions, and makes use of the high-fidelity tree voxel model VoxLAD for realistic simulation of a single tree canopy. The VoxLAD model uses terrestrial LiDAR scanner data to determine Leaf Area Density (LAD) measurements for small volume voxels (~5 – 20 cm side length). The LAD measurement, along with material surface normal orientation information, is used within the Monte Carlo LiDAR propagation model to determine the probability of LiDAR energy being absorbed, transmitted or reflected at each voxel location, and the direction of scattering should an interaction occur. The high spatial fidelity of the VoxLAD models enables simulation of small-footprint LiDAR systems. Results are presented demonstrating incorporation of the VoxLAD model for realistic tree canopy simulation, and the full-waveform simulation capability of the Monte Carlo LiDAR code.
Full-waveform LiDAR data from an AHAB Chiroptera I system with 515 nm and 1032 nm lasers (~10 pts/m2), single-photon sensitive data from the Sigma Space HRQLS system with a 532 nm laser (~19 pts/m2), and discrete analog data from an Optech Orion C200 system (~88 pts/m2) were collected from aerial platforms over Monterey, CA, USA in fall 2012 and fall 2013. The study area contains residential neighborhoods, forested regions, inland lakes, and the Pacific Ocean near-shore environment. Significant ground truth in the form of GPS measurements and terrestrial LiDAR scans enables the LiDAR data to be compared in terms of measurement precision and degree of tree canopy penetration, as well as comparisons of derived raster products.
The Naval Postgraduate School (NPS) Remote Sensing Center (RSC) and research partners have completed a remote sensing pilot project in support of California post-earthquake-event emergency response. The project goals were to dovetail emergency management requirements with remote sensing capabilities to develop prototype map products for improved earthquake response. NPS coordinated with emergency management services and first responders to compile information about essential elements of information (EEI) requirements. A wide variety of remote sensing datasets including multispectral imagery (MSI), hyperspectral imagery (HSI), and LiDAR were assembled by NPS for the purpose of building imagery baseline data and to demonstrate the use of remote sensing to derive ground surface information for use in planning, conducting, and monitoring post-earthquake emergency response. Worldview-2 data were converted to reflectance, orthorectified, and mosaicked for most of Monterey County, CA. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data acquired at two spatial resolutions were atmospherically corrected and analyzed in conjunction with the MSI data. LiDAR data at point densities from 1.4 pts/m2 to over 40 pts/m2 were analyzed to determine digital surface models. The multimodal data were then used to develop change detection approaches and products and other supporting information. Analysis results from these data along with other geographic information were used to identify and generate multi-tiered products tied to the level of post-event communications infrastructure (internet access + cell, cell only, no internet/cell). Technology transfer of these capabilities to local and state emergency response organizations gives emergency responders new tools in support of post-disaster operational scenarios.
Terrestrial LiDAR scans of building models collected with a FARO Focus3D and a RIEGL VZ-400 were used to investigate point-to-point and model-to-model LiDAR change detection. LiDAR data were scaled, decimated, and georegistered to mimic real world airborne collects. Two physical building models were used to explore various aspects of the change detection process. The first model was a 1:250-scale representation of the Naval Postgraduate School campus in Monterey, CA, constructed from Lego blocks and scanned in a laboratory setting using both the FARO and RIEGL. The second model at 1:8-scale consisted of large cardboard boxes placed outdoors and scanned from rooftops of adjacent buildings using the RIEGL. A point-to-point change detection scheme was applied directly to the point-cloud datasets. In the model-to-model change detection scheme, changes were detected by comparing Digital Surface Models (DSMs). The use of physical models allowed analysis of effects of changes in scanner and scanning geometry, and performance of the change detection methods on different types of changes, including building collapse or subsidence, construction, and shifts in location. Results indicate that at low false-alarm rates, the point-to-point method slightly outperforms the model-to-model method. The point-to-point method is less sensitive to misregistration errors in the data. Best results are obtained when the baseline and change datasets are collected using the same LiDAR system and collection geometry.
Change detection using remote sensing has become increasingly important for characterization of natural disasters. Pre- and post-event LiDAR data can be used to identify and quantify changes. The main challenge consists of producing reliable change maps that are robust to differences in collection conditions, free of processing artifacts, and that take into account various sources of uncertainty such as different point densities, different acquisition geometries, georeferencing errors and geometric discrepancies. We present a simple and fast technique that accounts for these sources of uncertainty, and enables the creation of statistically significant change detection maps. The technique makes use of Bayesian inference to estimate uncertainty maps from LiDAR point clouds. Incorporation of uncertainties enables a change detection that is robust to noise due to ranging, position and attitude errors, as well as "roughness" in vegetation scans. Validation of the method was done by use of small-scale models scanned with a terrestrial LiDAR in a laboratory setting. The method was then applied to two airborne collects of the Monterey Peninsula, California acquired in 2011 and 2012. These data have significantly different point densities (8 vs. 40 pts/m2) and some misregistration errors. An original point cloud registration technique was developed, first to correct systematic shifts due to GPS and INS errors, and second to help measure large-scale changes in a consistent manner. Sparse changes were detected and interpreted mostly as construction and natural landscape evolution.
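The core idea of uncertainty-aware change detection — flag only elevation differences that exceed the combined per-cell uncertainty — can be sketched as below. This shows only the significance test, not the Bayesian uncertainty estimation or registration steps the abstract describes; names and the z-score threshold are illustrative assumptions:

```python
import numpy as np

def significant_change(dsm_pre, dsm_post, sigma_pre, sigma_post, z=1.96):
    """Return a boolean change map and the raw difference.

    A cell is flagged as changed only when its elevation difference
    exceeds z times the combined (root-sum-square) per-cell standard
    deviation, so noisy cells (e.g. vegetation "roughness") are suppressed.
    """
    dz = dsm_post - dsm_pre
    sigma = np.sqrt(sigma_pre**2 + sigma_post**2)
    return np.abs(dz) > z * sigma, dz
```

Cells with high uncertainty in either epoch thus require a proportionally larger elevation difference before being reported as change.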
When LiDAR data are collected, the intensity information is recorded for each return, and can be used to produce an image resembling those acquired by passive imaging sensors. This research evaluated LiDAR intensity data to determine their potential for use as baseline imagery where optical imagery is unavailable. Two airborne LiDAR datasets collected at different point densities and laser wavelengths were gridded and compared with optical imagery. Optech Orion C200 laser data were compared with a corresponding 1541 nm spectral band from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Optech ALTM Gemini LiDAR data collected at 1064 nm were compared to the WorldView-2 (WV-2) 949 – 1043 nm NIR2 band. Intensity images were georegistered and spatially resampled to match the optical data. The Pearson product-moment correlation coefficient was calculated between datasets to determine similarity. Comparison for the full LiDAR datasets yielded correlation coefficients of approximately 0.5. Because LiDAR returns from vegetation are known to be highly variable, a Normalized Difference Vegetation Index (NDVI) was calculated utilizing the optical imagery, and the intensity and optical imagery were separated into vegetation and non-vegetation categories. Comparison of the LiDAR intensity for non-vegetated areas to the optical imagery yielded coefficients greater than 0.9. These results demonstrate that LiDAR intensity data may be a useful substitute for optical imagery where only LiDAR is available.
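The per-class similarity computation — Pearson correlation evaluated separately over NDVI-defined vegetation and non-vegetation pixels — can be sketched as follows. The function name and the NDVI cut of 0.3 are assumptions for illustration:

```python
import numpy as np

def masked_correlation(intensity, optical, ndvi, veg_threshold=0.3):
    """Pearson correlation between gridded LiDAR intensity and an
    optical band, computed separately for vegetated and non-vegetated
    pixels as delineated by an NDVI threshold.

    All inputs are flattened, co-registered arrays of equal length.
    """
    veg = ndvi >= veg_threshold
    r_veg = np.corrcoef(intensity[veg], optical[veg])[0, 1]
    r_non = np.corrcoef(intensity[~veg], optical[~veg])[0, 1]
    return r_veg, r_non
```

Splitting by class before correlating isolates the highly variable vegetation returns that drag down the full-scene coefficient.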
LiDAR data are available in a variety of publicly-accessible forums, providing high-resolution, accurate 3- dimensional information about objects at the Earth's surface. Automatic extraction of information from LiDAR point clouds, however, remains a challenging problem. The focus of this research is to develop methods for point cloud classification and object detection which can be customized for specific applications. The methods presented rely on analysis of statistics of local neighborhoods of LiDAR points. A multi-dimensional vector composed of these statistics can be classified using traditional data classification routines. Local neighborhood statistics are defined, and examples are given of the methods for specific applications such as building extraction and vegetation classification. Results indicate the feasibility of the local neighborhood statistics approach and provide a framework for the design of customized classification or object detection routines for LiDAR point clouds.
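The local-neighborhood-statistics idea can be sketched as below: for each point, gather nearby points and summarize their height distribution into a feature vector suitable for a standard classifier. The specific statistics, radius, and brute-force search are illustrative assumptions (a KD-tree would replace the search at scale):

```python
import numpy as np

def neighborhood_features(points, radius=2.0):
    """Per-point feature vector from the local horizontal neighborhood:
    height range, height standard deviation, and point count.

    points : (N, 3) array of x, y, z coordinates
    """
    xy, z = points[:, :2], points[:, 2]
    feats = np.empty((len(points), 3))
    for i, p in enumerate(xy):
        # all points within `radius` horizontally, including the point itself
        mask = np.linalg.norm(xy - p, axis=1) <= radius
        zn = z[mask]
        feats[i] = (zn.max() - zn.min(), zn.std(), mask.sum())
    return feats
```

Flat ground yields near-zero height range and spread, while vegetation canopies produce large ranges and high variance — exactly the separability the classifier exploits.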
Multispectral imaging (MSI) data collected at multiple angles over shallow water provide analysts with a unique
perspective of bathymetry in coastal areas. Observations taken by DigitalGlobe’s WorldView-2 (WV-2) sensor acquired
at 39 different view angles on 30 July 2011 were used to determine the effect of acquisition angle on bathymetry
derivation. The site used for this study was Kailua Bay (on the windward side of the island of Oahu). Satellite azimuth
and elevation for these data ranged from 18.8 to 185.8 degrees and from 24.9 (forward-looking) to 24.5 (backward-looking)
degrees, respectively, with 90 degrees representing a nadir view. Bathymetry was derived directly from the WV-2
radiance data using a band-ratio approach. Comparison of results to LiDAR-derived bathymetry showed that varying the
view angle impacts the quality of the inferred bathymetry. Derived and reference bathymetry have a higher correlation as
images are acquired closer to nadir. The band combination utilized for depth derivation also has an effect on derived
bathymetry. Four band combinations were compared, and the Blue and Green combination provided the best results.
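The band-ratio depth derivation can be sketched with a common log-ratio formulation. The abstract does not specify the exact functional form, so the Stumpf-style ratio transform below, and the gain/offset values, are assumptions; in practice the coefficients are fit to reference depths per site:

```python
import numpy as np

def ratio_depth(blue, green, m1=20.0, m0=10.0, n=1000.0):
    """Band-ratio bathymetry sketch: z = m1 * ln(n*Rb) / ln(n*Rg) - m0.

    blue, green : water-leaving radiance or reflectance in the two bands
    m1, m0      : site-specific gain and offset (placeholder values here)
    n           : scaling constant keeping both logarithms positive
    """
    return m1 * np.log(n * blue) / np.log(n * green) - m0
```

Because blue light attenuates more slowly with depth than green, the ratio grows with depth, and the linear calibration maps it to meters.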
The goal of this work is to determine methods for detecting trails using statistics of LiDAR point cloud data,
while avoiding reliance on a Digital Elevation Model (DEM). Creation of a DEM is a subjective process that
requires assumptions be made about the density of the data points, the curvature of the ground, and other factors
which can lead to very different results in the final DEM product, with no single "correct" result. Exploitation of
point cloud data also lends itself well to automation. A LiDAR point cloud based trail detection scheme has been
designed in which statistical measures of local neighborhoods of LiDAR points are calculated, image processing
techniques employed to mask non-trail areas, and a constrained region growing scheme used to determine a final
trails map. Results of the LiDAR point cloud based trail detection scheme are presented and compared to a
DEM-based trail detection scheme. Large trails are detected fairly reliably with some missing gaps, while smaller
trails are detected less reliably. Overall results of the LiDAR point cloud based methods are comparable to the
DEM-based results, with fewer false alarms.
Full-waveform LIDAR is a technology which enables the analysis of the 3-D structure and arrangement of
objects. An in-depth understanding of the factors that affect the shape of the full-waveform signal is required
in order to extract as much information as possible from the signal. A simple model of LIDAR propagation has
been created which simulates the interaction of LIDAR energy with objects in a scene. A 2-dimensional model
tree allows controlled manipulation of the geometric arrangement of branches and leaves with varying spectral
properties. Results suggest complex interactions of the LIDAR energy with the tree canopy, including the
occurrence of multiple bounces for energy reaching the ground under the canopy. Idealized sensor instrument
response functions incorporated in the simulation illustrate a large impact on waveform shape. A waveform
recording laser rangefinder has been built which will allow validation of model results; preliminary collection
results are presented here.
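The effect of an instrument response on waveform shape can be sketched as a chain of convolutions: the scene's range response is convolved with the transmitted pulse and then with an idealized sensor response. This is a simplified forward model, not the simulation code itself; the Gaussian pulse and exponential response are illustrative assumptions:

```python
import numpy as np

def simulate_waveform(target_response, pulse_sigma=1.0, dt=0.5, sensor_tau=2.0):
    """Idealized full-waveform formation.

    target_response : 1-D array of returned energy per range bin
    pulse_sigma     : std. dev. of the Gaussian transmit pulse (bins)
    sensor_tau      : decay constant of the exponential instrument response
    Returns the received waveform, peak-normalized.
    """
    # Gaussian transmit pulse sampled over +/- 4 sigma
    t = np.arange(-4 * pulse_sigma, 4 * pulse_sigma + dt, dt)
    pulse = np.exp(-0.5 * (t / pulse_sigma) ** 2)
    # idealized single-pole (exponential) instrument response
    ts = np.arange(0.0, 8 * sensor_tau, dt)
    instr = np.exp(-ts / sensor_tau)
    w = np.convolve(target_response, pulse, mode="full")
    w = np.convolve(w, instr, mode="full")
    return w / w.max()
```

Lengthening either the pulse or the instrument decay smears closely spaced returns together, which is the waveform-shape impact the abstract reports.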
Worldview-2 imagery acquired over Duck, NC and Camp Pendleton, CA were analyzed to extract Bidirectional
Reflectance Distribution Functions (BRDF) for 8 visible/near-infrared spectral bands. Images were acquired
at 15 azimuth/elevation positions at ten-second intervals during the Duck, NC orbit pass. Ten images were
acquired over Camp Pendleton, CA. Orthoready images were coregistered using first-order polynomials for the
two image sequences. BRDF profiles have been created for various scene elements. MODTRAN simulations
are presented to illustrate atmospheric effects under varying collection geometries. Results from analysis of the
Camp Pendleton, CA data are presented here.
Observations taken from DigitalGlobe's WorldView-2 (WV-2) sensor were analyzed for bottom-type and bathymetry for
data taken at Guam and Tinian in late February and early March of 2010. Classification of bottom type was done using
supervised and unsupervised classification techniques. All eight of the multispectral bands were used. The supervised
classification worked well based on ground truth collected on site. Bathymetric analysis was done using LiDAR-derived
bathymetry in comparison with the multispectral imagery (MSI) data. The Red Edge (705-745 nm) band was used to
correct for glint and general surface reflectance in the Blue (450-510 nm), Green (510-580 nm), and Yellow (585-625
nm) bands. For the Guam beach analyzed here, it was determined that the Green and Yellow bands were most effective
for determining depth between 2.5 and 20 m. The Blue band was less effective. Shallow water with coral could also be
Coastal bathymetry near Camp Pendleton, California, was measured using wave motion as observed by the
WorldView-2 commercial satellite imaging system. The linear finite depth dispersion relation for surface gravity
waves was used to determine nearshore ocean depth from successive images acquired of the coastal area. Principal
component transformations of co-registered 8-color multispectral images were found to very effectively highlight
wave crests in the surf zone. Time sequential principal component images then contain both spatial and temporal
information. From these change detection images, wave celerity could be determined and depth inversion could
be performed. For waves farther from shore, the principal component transformation no longer highlighted
wave crests, but crests could be resolved within a single RGB composite image with equalization enhancement.
The wavelength of a wave above a point of known depth was measured. The wave period method was used
to determine depth for other waves in the propagation direction of this wave. Depth calculations using these
methods compared favorably to reference bathymetry. The spatial resolution for this method of determining
depth is higher and perhaps more accurate than the reference bathymetry used in this study, particularly in the
A simple Monte Carlo model of laser propagation through a tree is presented which allows the simulation of full-waveform
LIDAR signatures. The model incorporates a LIDAR system and a 'natural' scene, including an atmosphere,
tree and ground surface. The PROSPECT leaf reflectance model is incorporated to determine leaf radiometric
properties. Changes in the scene such as varying material reflectance properties, sloped vs. flat ground, and comparisons
of tree 'leaf-on' vs. 'leaf-off' conditions have been analyzed. Changes in the LIDAR system have also been studied,
including the effects of changing laser wavelength, shape and length of transmitted pulses, and angle of transmission.
Results of some of these simulations are presented.
Multispectral imagery (MSI) taken with high-spatial resolution systems provides a powerful tool for mapping kelp
in water. MSI are not always available, however, and there are systems which provide only panchromatic imagery
which would be useful to exploit for the purpose of mapping kelp. Kelp mapping with MSI is generally done by use
of the standard Normalized Difference Vegetation Index (NDVI). In broadband panchromatic imagery, the kelp
appears brighter than the water because of the strong response of vegetation in the NIR, and can be reliably detected
by means of a simple threshold; overall brightness is generally proportional to the NDVI. Confusion is caused by
other bright pixels in the image, including sun glint. This research seeks to find ways of mitigating the number of
false alarms using spatial image processing techniques. Methods developed in this research can be applied to other
water target detection tasks.
The SALSA linear Stokes polarization camera from Bossa Nova Technologies (520-550 nm) uses an electronically
rotated polarization filter to measure four states of polarization nearly simultaneously. Some initial imagery results are
presented. Preliminary analysis results indicate that the intensity and degree of linear polarization (DOLP) information
can be used for image classification purposes. The DOLP images also show that the camera has a good ability to
distinguish asphalt patches of different ages. These positive results and the relative simplicity of the camera system
show the camera's potential for field applications.
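The intensity and DOLP quantities used above follow directly from four polarizer-angle measurements via the linear Stokes parameters. A minimal sketch (the four-angle scheme matches a rotating-filter camera in general; the function name and epsilon guard are assumptions):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization (DOLP)
    from intensities measured through a polarizer at 0, 45, 90, 135 deg.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                         # horizontal vs. vertical
    s2 = i45 - i135                       # +45 vs. -45
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp
```

Surfaces such as fresh versus aged asphalt differ in their specular component, which shows up as a DOLP contrast even when the total intensity s0 is similar.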
The SALSA camera from Bossa Nova Technologies uses an electronically rotated polarization filter to measure four
states of polarization nearly simultaneously. Initial imagery results are presented, with an investigation of polarization
invariants, as affected by illumination, sensing geometry, and atmospheric effects. Applications for the system as it is
being developed are in change detection and surface characterization. Preliminary results indicate an ability to
distinguish new from old asphalt and disturbed earth from undisturbed earth with some image processing.
Data density has a crucial impact on the accuracy of Digital Elevation Models (DEMs). In this study, DEMs were
created from a high point-density LIDAR dataset using the bare earth extraction module in Quick Terrain Modeler.
Lower point-density LIDAR collects were simulated by randomly selecting points from the original dataset at a series
of decreasing percentages. The DEMs created from the lower resolution datasets are compared to the original DEM.
Results show a decrease in DEM accuracy as the resolution of the LIDAR dataset is reduced. Some analysis is made
of the types of errors encountered in the lower resolution DEMs. It is also noted that the percentage of points
classified as bare earth decreases as the resolution of the LIDAR dataset is reduced.
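The simulation of lower point-density collects amounts to random subsampling at decreasing retention fractions. A minimal sketch (function name and seeding are assumptions):

```python
import numpy as np

def decimate(points, fraction, rng=None):
    """Simulate a lower-density LiDAR collect by randomly retaining
    the given fraction of points, without replacement.

    points   : (N, k) array of LiDAR records
    fraction : retention fraction in (0, 1]
    """
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed for repeatability
    n_keep = int(len(points) * fraction)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]
```

Running the bare-earth extraction and DEM generation on each decimated set then isolates the effect of point density from all other collection variables.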
The WorldView-2 sensor, to be launched mid-2009, will have 8 MSI bands - 4 standard MSI spectral channels and an
additional 4 non-traditional bands. Hyperspectral data from the AURORA sensor (from the former Advanced Power
Technologies, Inc. (APTI)) were used to simulate the spectral response of the WorldView-2 sensor and DigitalGlobe's 4-
band QuickBird system. A bandpass filter method was used to simulate the spectral response of the sensors. The
resulting simulated images were analyzed to determine possible uses of the additional bands available with the
WorldView-2 sensor. Particular attention is given to littoral (shallow water) applications. The overall classification
accuracy for the simulated QuickBird scene was 89%, and 94% for the simulated WorldView-2 scene.
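The bandpass-filter simulation of a broadband channel from hyperspectral data can be sketched with a boxcar response: average all narrow bands whose centers fall inside the channel. The actual filter shapes used were likely not boxcars, so this is a simplifying assumption:

```python
import numpy as np

def simulate_band(hsi_cube, wavelengths, band_lo, band_hi):
    """Simulate one broadband sensor channel from a hyperspectral cube.

    hsi_cube    : (..., B) array, last axis is the spectral dimension
    wavelengths : (B,) band-center wavelengths in nm
    band_lo/hi  : channel edges in nm (boxcar spectral response assumed)
    """
    sel = (wavelengths >= band_lo) & (wavelengths <= band_hi)
    return hsi_cube[..., sel].mean(axis=-1)
```

Repeating this for each channel definition produces the simulated WorldView-2 and QuickBird images that are then classified and compared.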
LIDAR data taken over the Elkhorn Slough region in central California were analyzed for terrain classification. Data were collected on April 12th, 2005 over a 10 km × 20 km region that is mixed use agriculture and wetlands. LIDAR temporal information (elevation values), intensity of returned light and distribution of point returns (in both vertical and spatial dimensions) were used to distinguish land-cover types. Terrain classification was accomplished using LIDAR data alone, multi-spectral QuickBird data alone and a combination of the two data-types. Results are compared to significant ground truth information.
Robert M. Haralick et al. described a technique for computing texture features based on gray-level spatial dependencies using a Gray Level Co-occurrence Matrix (GLCM). The traditional GLCM process quantizes a gray-scale image into a small number of discrete gray-level bins. The number and arrangement of spatially co-occurring gray levels in an image are then statistically analyzed. The output of the traditional GLCM process is a gray-scale image with values corresponding to the intensity of the statistical measure. A method to calculate Spectral Texture is modeled on Haralick's texture features. This Spectral Texture Method uses spectral-similarity spatial dependencies (rather than gray-level spatial dependencies). In the Spectral Texture Method, a spectral image is quantized based on discrete spectral angle ranges. Each pixel in the image is compared to an exemplar spectrum, and a quantized image is created in which pixel values correspond to a spectral similarity value. Statistics are calculated on spatially co-occurring spectral-similarity values. Comparisons between Haralick Texture Features and the Spectral Texture Method results are made, and possible uses of Spectral Texture features are discussed.
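The two core steps of the Spectral Texture Method — per-pixel spectral angle to an exemplar, then co-occurrence counting of the quantized angle image — can be sketched as below. Function names and the single-offset co-occurrence are illustrative assumptions:

```python
import numpy as np

def spectral_angle(cube, exemplar):
    """Per-pixel spectral angle (radians) between a spectral image
    cube (..., B) and an exemplar spectrum (B,)."""
    num = (cube * exemplar).sum(axis=-1)
    den = np.linalg.norm(cube, axis=-1) * np.linalg.norm(exemplar)
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def cooccurrence(q, levels, offset=(0, 1)):
    """Co-occurrence matrix of a quantized image `q` (integer values in
    [0, levels)) for a single pixel offset (row, col)."""
    dr, dc = offset
    rows, cols = q.shape
    glcm = np.zeros((levels, levels), dtype=int)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[q[r, c], q[r2, c2]] += 1
    return glcm
```

Quantizing the angle image into discrete ranges (e.g. with np.digitize) before calling `cooccurrence` yields the spectral-similarity co-occurrence counts on which the Haralick-style statistics are computed.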
LIDAR data taken over the Elkhorn Slough in Central California were analyzed for terrain classification. The specific
terrain element of interest is vegetation, and in particular, tree type. Data acquired on April 12th, 2005, cover a
10 km × 20 km region which is mixed use agriculture and wetlands. Time return and intensity were obtained at ~2.5 m
postings. Multi-spectral imagery from QuickBird was used from a 2002 imaging pass to guide analysis. Ground truth
was combined with the orthorectified satellite imagery to determine regions of interest for areas with Eucalyptus, Scrub
Oak, Live Oak, and Monterey Cypress trees. LIDAR temporal returns could be used to distinguish regions with trees
from cultivated and bare soil areas. Some tree types could be distinguished on the basis of the relationship between
first/last extracted feature returns. The otherwise similar Eucalyptus and Monterey Cypress could be distinguished by
means of the intensity information from the imaging LIDAR. The combined intensity and temporal data allowed
accurate distinction between the tree types, a task not otherwise practical with the satellite spectral imagery.
This paper addresses the use of multiplicative iterative algorithms to compute the abundances in unmixing of hyperspectral pixels. The advantage of iterative over direct methods is that they allow easy incorporation of positivity and sum-to-one constraints on the abundances, while also allowing better regularization of the solution in the ill-conditioned case. Derivations of two iterative algorithms, based on minimization of least-squares and Kullback-Leibler distances, are presented. The resulting algorithms are the same as the ISRA and EMML algorithms presented in the emission tomography literature, respectively. We show that the ISRA algorithm, and not the EMML algorithm, computes the maximum likelihood estimate of the abundances under Gaussian assumptions, while the EMML algorithm computes a minimum distance solution based on the Kullback-Leibler generalized distance. In emission tomography, the EMML algorithm computes the maximum likelihood estimate of the reconstructed image. We also show that, since the unmixing problem is in general overconstrained and has no exact solution, acceleration techniques for the EMML algorithm such as RBI-EMML will not converge.
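The ISRA multiplicative update has the simple form a <- a * (M^T y) / (M^T M a), which preserves positivity automatically. A minimal sketch follows (initialization, iteration count, and the epsilon guard are assumptions; a sum-to-one constraint would be imposed by renormalizing the result):

```python
import numpy as np

def isra_unmix(M, y, n_iter=500):
    """ISRA multiplicative update for nonnegative abundance estimation.

    M : (bands, endmembers) endmember signature matrix
    y : (bands,) observed mixed-pixel spectrum
    Returns nonnegative abundances minimizing ||y - M a||^2.
    """
    a = np.full(M.shape[1], 1.0 / M.shape[1])  # positive initial guess
    Mty = M.T @ y                              # fixed numerator term
    for _ in range(n_iter):
        # elementwise multiplicative update; positivity is preserved
        a *= Mty / (M.T @ (M @ a) + 1e-12)
    return a
```

Because every factor in the update is nonnegative, no explicit projection onto the positivity constraint is needed — the key convenience of the multiplicative schemes discussed above.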
Proc. SPIE 4725, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII
Many imaging techniques commonly involve the extraction of mixed signal information from a pixel. In most mixed-pixel cases, the mixture is assumed to be linear, and signal separation routines have been developed with this mixing scheme in mind. One such routine incorporates the Expectation Maximization Maximum Likelihood (EMML) algorithm for the determination of signal mixtures in a pixel. This routine, however, is inefficient in that it requires many iterations to converge to a solution. This report describes the implementation of a Rescaled Block-Iterative EMML (RBI-EMML) approach, commonly used in medical emission tomography image processing, to perform signal separation while greatly increasing computational efficiency and the rate of convergence.