Multi-modal data fusion for situational awareness is of interest because fused data can provide more information than the individual modalities alone. However, many questions remain, including which data are beneficial, which algorithms perform best or fastest, and where in the processing pipeline data should be fused. In this paper, we explore some of these questions through a processing pipeline designed for multi-modal data fusion in an autonomous UAV landing scenario, assessing landing zone identification methods using two data modalities: hyperspectral imagery and LIDAR point clouds. Using hyperspectral image and LIDAR data from two datasets, one of Maui and one of a university campus, we assess the accuracies of different landing zone identification methods, compare rule-based and machine-learning-based classifications, and show that, depending on the dataset, fusion does not always increase performance. However, we show that machine learning methods can be used to ascertain the usefulness of individual modalities and their resulting attributes when used to perform classification.
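The closing claim, that classification results can reveal which modalities are useful, can be sketched with a simple per-modality accuracy comparison. The synthetic features, class separations, and nearest-centroid classifier below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (illustrative assumptions, not the paper's data):
# 200 candidate landing-zone samples, labeled "safe" (1) vs "unsafe" (0).
n = 200
labels = rng.integers(0, 2, n)

# Hyperspectral features: 5 bands, strongly class-dependent here.
hsi = rng.normal(labels[:, None] * 1.5, 1.0, size=(n, 5))
# LIDAR features: 3 attributes (e.g. slope, roughness), weakly informative here.
lidar = rng.normal(labels[:, None] * 0.3, 1.0, size=(n, 3))

def nearest_centroid_accuracy(features, labels, train=150):
    """Train a nearest-centroid classifier, return held-out accuracy."""
    Xtr, ytr = features[:train], labels[:train]
    Xte, yte = features[train:], labels[train:]
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == yte).mean())

# Accuracy per modality, and with the two modalities fused (concatenated):
acc_hsi = nearest_centroid_accuracy(hsi, labels)
acc_lidar = nearest_centroid_accuracy(lidar, labels)
acc_fused = nearest_centroid_accuracy(np.hstack([hsi, lidar]), labels)
print(acc_hsi, acc_lidar, acc_fused)
```

Comparing the single-modality accuracies against the fused accuracy indicates which modality carries the discriminative information, and whether fusion actually helps on a given dataset.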
Though many materials behave approximately as greybodies across the long-wave infrared (LWIR) waveband, certain important infrared (IR) scene modeling materials, such as brick and galvanized steel, exhibit more complex optical properties. Accurately modeling how non-greybody materials interact with their environment relies critically on accurate incorporation of the emissive and reflective properties of the in-scene materials. Typically, measured values are obtained and used. When measured using a non-imaging spectrometer, a given material's spectral emissivity requires more than one collection episode, as the sample under test and a standard must be measured separately. In the interval between episodes, changes in the environment degrade emissivity measurement accuracy. While repeating and averaging measurements of the standard and sample helps mitigate such effects, a simultaneous measurement of both can ensure identical environmental conditions during the measurement process, thus reducing inaccuracies and delivering a temporally accurate determination of background or 'down-welling' radiation. We report on a method for minimizing temporal inaccuracies in sample emissivity measurements. Using a LWIR hyperspectral imager, a Telops Hyper-Cam, we describe an approach permitting hundreds of simultaneous, calibrated spectral radiance measurements of the sample under test as well as a diffuse gold standard. In addition, we describe the data reduction technique used to exploit these measurements. Following development of the reported method, spectral reflectance data from 10 samples of various materials of interest were collected. These data are presented along with comments on how such data will enhance the fidelity of computer models of IR scenes.
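The data-reduction idea can be sketched as follows: the diffuse gold standard (low emissivity, hence high reflectance) provides an estimate of the down-welling radiance at the instant of measurement, which is then removed from the sample radiance to retrieve spectral emissivity. The temperatures, gold emissivity value, and opaque-diffuse-surface assumption below are illustrative, not the paper's calibration values:

```python
import math

# Physical constants (SI)
H, C, K = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_um, temp_k):
    """Blackbody spectral radiance, W / (m^2 sr m), wavelength in microns."""
    wl = wl_um * 1e-6
    return (2 * H * C**2 / wl**5) / math.expm1(H * C / (wl * K * temp_k))

def retrieve_emissivity(l_sample, l_gold, t_sample, t_gold, wl_um, eps_gold=0.04):
    """Retrieve sample emissivity from simultaneous sample / gold-standard radiances.

    The gold standard's measured radiance is mostly reflected down-welling
    radiation, so the down-welling term can be solved for and then removed
    from the sample measurement (assumes opaque, diffuse surfaces, so
    reflectance = 1 - emissivity).
    """
    # Down-welling radiance inferred from the gold standard:
    l_down = (l_gold - eps_gold * planck(wl_um, t_gold)) / (1 - eps_gold)
    # Sample emissivity from L_sample = eps*B(T) + (1 - eps)*L_down:
    return (l_sample - l_down) / (planck(wl_um, t_sample) - l_down)

# Round-trip check with synthetic radiances at 10 um:
wl, t_s, t_g, eps_true = 10.0, 300.0, 295.0, 0.85
l_down_true = 0.6 * planck(wl, 280.0)          # arbitrary sky-like down-welling
l_samp = eps_true * planck(wl, t_s) + (1 - eps_true) * l_down_true
l_gold = 0.04 * planck(wl, t_g) + 0.96 * l_down_true
eps_hat = retrieve_emissivity(l_samp, l_gold, t_s, t_g, wl)
print(eps_hat)  # recovers eps_true = 0.85
```

Because both radiances are measured in the same frame of the hyperspectral imager, the down-welling term in the two equations is the same quantity at the same instant, which is the point of the simultaneous-measurement approach.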
Georgia Tech has developed a new modeling and simulation tool that predicts both radar and electro-optical infrared (EO-IR) lateral range curves (LRCs) and sweep widths (SWs) under the Optimization of Radar and Electro-Optical Sensors (OREOS) program for US Coast Guard Search and Rescue (SAR) applications. In a search scenario where the location of the lost or overdue craft is unknown, the Coast Guard will conduct searches based upon standard procedure, personnel expertise, operational experience, and models. One metric for search planning is the sweep width, or integrated area under an LRC. Because a searching craft is equipped with radar and EO-IR sensor suites, the Coast Guard is interested in accurate predictions of sweep width for the particular search scenario. Here, we will discuss the physical models that make up the EO-IR portion of the OREOS code. First, Georgia Tech SIGnature (GTSIG) generates thermal signatures of search targets based upon the thermal and optical properties of the target and the environment; a renderer then calculates target contrast. Sensor information, atmospheric transmission, and the calculated target contrasts are input into NVESD models to generate probability of detection (PD) vs. slant range data. These PD vs. range values are then converted into LRCs by taking into account a continuous look search from a moving platform; sweep widths are then calculated. The OREOS tool differs from previous methods in that physical models are used to predict the LRCs and sweep widths at every step in the process, whereas heuristic methods were previously employed to generate final predictions.
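Since the sweep width is defined as the integrated area under the lateral range curve, the final computation step can be sketched numerically. The cookie-cutter LRC below is an illustrative assumption, not an OREOS output:

```python
def sweep_width(lateral_ranges, detection_probs):
    """Sweep width W = integral of the lateral range curve P(x) dx,
    computed with the trapezoidal rule over sampled lateral ranges
    (e.g. in nautical miles)."""
    w = 0.0
    for i in range(1, len(lateral_ranges)):
        dx = lateral_ranges[i] - lateral_ranges[i - 1]
        w += 0.5 * (detection_probs[i] + detection_probs[i - 1]) * dx
    return w

# Illustrative "cookie-cutter" LRC: P = 1 inside +/- 2 nmi, 0 outside,
# for which the sweep width is close to 4 nmi (edge trapezoids add a
# small discretization error).
xs = [x * 0.01 for x in range(-500, 501)]
ps = [1.0 if abs(x) <= 2.0 else 0.0 for x in xs]
w = sweep_width(xs, ps)
print(w)
```

In the OREOS pipeline, the `detection_probs` values would come from the NVESD-model PD-vs.-range output after conversion to a lateral range curve, rather than from the synthetic curve used here.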
A Forward Looking Interferometer (FLI) sensor has the potential to be used as a means of detecting aviation hazards in
flight. One of these hazards is mountain wave turbulence. The results from a data acquisition activity at the University
of Colorado's Mountain Research Station will be presented here. Hyperspectral datacubes from a Telops Hyper-Cam
are being studied to determine whether evidence of a turbulent event can be identified in the data. These data are then
compared with D&P TurboFT data, which are collected at a much higher temporal resolution and over a broader spectral range.
The use of a hyperspectral imaging system for the detection of gases has been investigated, and algorithms have been
developed for various applications. Of particular interest here is the ability to use these algorithms in the detection of
the wake disturbances trailing an aircraft. A dataset of long wave infrared (LWIR) hyperspectral datacubes taken with a
Telops Hyper-Cam at Hartsfield-Jackson International Airport in Atlanta, Georgia is investigated. The methodology
presented here assumes that the aircraft engine exhaust gases will become entrained in wake vortices that develop;
therefore, if the exhaust can be detected upon exiting the engines, it can be followed through subsequent datacubes until
the vortex disturbance is detected. Gases known to exist in aircraft exhaust are modeled, and the Adaptive
Coherence/Cosine Estimator (ACE) is used to search for these gases. Although wake vortices have not been found in
the data, an unknown disturbance following the passage of the aircraft has been discovered.
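The gas search described above uses the Adaptive Coherence/Cosine Estimator. A minimal sketch of the ACE statistic on synthetic spectra follows; the background statistics and stand-in target signature are assumptions, not the airport dataset or the modeled exhaust-gas signatures:

```python
import numpy as np

def ace_detector(pixels, target, bg_mean, bg_cov_inv):
    """Adaptive Coherence/Cosine Estimator.

    ACE(x) = (s^T C^-1 x)^2 / ((s^T C^-1 s) (x^T C^-1 x)),
    with pixel x and signature s mean-subtracted; scores near 1 indicate a
    close whitened-angle match to the target signature.
    """
    s = target - bg_mean
    x = pixels - bg_mean                      # shape (n_pixels, n_bands)
    sCs = s @ bg_cov_inv @ s
    sCx = x @ bg_cov_inv @ s                  # shape (n_pixels,)
    xCx = np.einsum('ij,jk,ik->i', x, bg_cov_inv, x)
    return sCx**2 / (sCs * xCx)

rng = np.random.default_rng(1)
n_bands = 20
bg = rng.normal(size=(500, n_bands))          # synthetic background spectra
mean, cov = bg.mean(axis=0), np.cov(bg.T)
cov_inv = np.linalg.inv(cov)
target = rng.normal(size=n_bands)             # stand-in gas signature

# Score a few background pixels plus one pixel that is a scaled copy of the
# target signature; the scaled-target pixel scores exactly 1 because ACE is
# invariant to the magnitude of x.
scores = ace_detector(np.vstack([bg[:5], mean + 3 * (target - mean)]),
                      target, mean, cov_inv)
print(scores)
```

In practice the background mean and covariance would be estimated from in-scene datacube pixels, and a detection would be declared where the ACE score exceeds a chosen threshold.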
Georgia Tech has been investigating methods for the detection of covert personnel in traditionally difficult environments
(e.g., urban, caves). This program focuses on a detailed phenomenological analysis of human physiology and signatures
with the subsequent identification and characterization of potential observables. Both aspects are needed to support the
development of personnel detection and tracking algorithms. The difficult nature of these personnel-related problems
dictates a multimodal sensing approach. Human signature data of sufficient quality and quantity do not
exist; thus, the development of an accurate human signature model is needed. This model should also simulate
various human activities to allow motion-based observables to be exploited. This paper will describe a multimodal
signature modeling approach that incorporates human physiological aspects, thermoregulation, and dynamics into the
signature calculation. This approach permits both passive and active signatures to be modeled. The current effort
focuses on the computation of signatures in urban environments. This paper will discuss the development of a
human motion model for use in simulating both electro-optical signatures and radar-based signatures. Video sequences
of humans in a simulated urban environment will also be presented, along with results of using these sequences for
personnel tracking.
Georgia Tech recently initiated a weathering effects measurement program to monitor the optical properties of several
common building materials. A set of common building materials was placed outdoors, and optical property
measurements were made over a series of weeks to assess the impact of exposure on these properties. Both reflectivity and
emissivity measurements were made. Materials in this program included aluminum flashing, plastic sheets, bricks, roof
shingles, and tarps. This paper will discuss the measurement approach, experimental setup, and present preliminary
results from the optical property measurements.