Atmospheric optical turbulence can be characterized as refractive index variations along a beam's propagation path due to local fluctuations in temperature and humidity. Turbulence randomly perturbs the wavefront of a beam traveling through the medium, leading to effects such as scintillation and beam wander. Wave optics simulations use phase screens and Fourier techniques to accurately model the phase changes of light sources as they travel through turbulence. The Georgia Tech Research Institute has enhanced the open-source wave optics toolbox known as WavePy to accurately simulate the propagation of a laser beam over a path of time-evolving horizontal turbulence. The simulation tool incorporates an optimization routine designed to accept scenario parameters and return receiver- and source-plane sampling parameters that ensure the accuracy and fidelity of the simulation output. The tool is designed to minimize the potential for common faults of wave optics simulations, including phase wrapping of the atmospheric phase screens over time, energy loss of the beam over the propagation path, and aliasing of scintillation effects at the receiver plane. The simulator has applications toward informing the design of detectors that can accommodate the changing divergence angle of the beam as it approaches the detector, an important consideration for systems such as laser-beam-rider missile guidance systems. Initial results toward modeling the effects of varying beam parameters and simulation conditions are presented and analyzed.
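The split-step phase-screen technique described above can be illustrated with a minimal sketch. This is not WavePy's actual implementation; the grid size, wavelength, and screen values are hypothetical, and the free-space steps use the standard Fresnel angular-spectrum transfer function:

```python
import numpy as np

def angular_spectrum_step(u, wavelength, dx, dz):
    """Propagate a complex field u a distance dz via the angular spectrum method."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function (constant on-axis phase omitted)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def propagate_through_screens(u, screens, wavelength, dx, dz):
    """Split-step propagation: apply each thin phase screen, then a vacuum step."""
    for phase in screens:
        u = u * np.exp(1j * phase)   # thin-screen phase perturbation
        u = angular_spectrum_step(u, wavelength, dx, dz)
    return u
```

Because the transfer function has unit modulus, each step conserves beam power on the grid; energy loss in a real simulation signals that the beam has spread beyond the sampled window, one of the faults the abstract mentions.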
A core component of modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal-to-noise or clutter-to-noise ratios to determine if a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework, GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have included the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. And finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS to better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds, terrain, and improved non-uniform sky rendering improves our ability to represent clutter during scene generation.
The well-known Langley extrapolation technique produces measurements of atmospheric optical depth (AOD) by collecting direct-sun irradiance at multiple zenith angles. One common application of this technique is in sun photometers such as those in NASA's AErosol Robotic Network (AERONET). This large, spatially distributed network collects time-averaged data from across the globe and, applying Beer's Law, produces hourly estimates of AOD. While this technique has produced excellent data, the dependence on direct-sun irradiance requires cloudless skies and line-of-sight to the sun. Atmospheric LIDARs, on the other hand, can operate in the presence of clouds and can also produce range-resolved measurements of AOD by applying the same Langley technique. For aerosol LIDARs, this technique requires that the LIDAR be capable of producing high-quality waveforms within the atmospheric coherence time and also be capable of taking measurements off zenith. At least two unique angles are required to produce data, although three or more are recommended. This paper will present an overview of the Langley technique applied with a 1064 nm atmospheric aerosol LIDAR, an overview of the LIDAR hardware and capabilities, sample data collected by the LIDAR, and challenges associated with this technique. It will be shown that while this technique is useful, it requires measurements at all three angles to be made when the atmosphere is reasonably horizontally homogeneous. Furthermore, the system optics, alignment, and laser power must be kept constant (i.e., the LIDAR's system constant must be the same for all measurements) for the data to be useful in a Langley analysis.
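Under Beer's Law the Langley technique reduces to a linear fit of log-signal against airmass. A minimal sketch follows; the plane-parallel sec(zenith) airmass model and the function name are illustrative assumptions, not AERONET's actual processing code:

```python
import numpy as np

def langley_aod(zenith_deg, signal):
    """Estimate optical depth tau by Langley extrapolation.

    Beer's Law: V = V0 * exp(-m * tau), so ln(V) is linear in airmass m
    with slope -tau and intercept ln(V0). Assumes the simple
    plane-parallel airmass m = sec(zenith)."""
    m = 1.0 / np.cos(np.radians(np.asarray(zenith_deg, dtype=float)))
    slope, intercept = np.polyfit(m, np.log(np.asarray(signal, dtype=float)), 1)
    tau = -slope                # optical depth along a vertical path
    v0 = np.exp(intercept)      # extrapolated zero-airmass calibration constant
    return tau, v0
```

The zero-airmass intercept V0 is why the method also serves as a self-calibration: it is the signal the instrument would record with no atmosphere at all.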
The Georgia Tech Research Institute (GTRI) is developing a transportable multi-lidar instrument known as the Integrated Atmospheric Characterization System (IACS). The system will be housed in two shipping containers that will be transported to remote sites on a low-boy trailer. IACS will comprise three lidars: a 355 nm imaging lidar for profiling refractive turbulence, a 355 nm Raman lidar for profiling water vapor, and an aerosol lidar operating at 355 nm as well as 1.064 and 1.627 µm. All of the lidar transmit/receive optics will be on a common mount, pointable at any elevation angle from 10 degrees below horizontal to vertical. The entire system will be computer controlled to facilitate pointing and automatic data acquisition. The purpose of IACS is to characterize optical propagation paths during outdoor tests of electro-optical systems. The tests are anticipated to include ground-to-ground, air-to-ground, and ground-to-air scenarios, so the system must accommodate arbitrary slant paths through the atmosphere, with maximum measurement ranges of 5-10 km. Elevation angle scans will be used to determine atmospheric extinction profiles at the infrared wavelengths, and data from the three wavelengths will be used to determine the aerosol Angstrom coefficient, enabling interpolation of results to other wavelengths in the 355 nm to 1.627 µm region.
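The Angstrom-coefficient interpolation mentioned above follows a simple power law in wavelength. A hedged sketch, with hypothetical function names and sample values (not the IACS processing chain):

```python
import numpy as np

def angstrom_exponent(tau1, lam1, tau2, lam2):
    """Angstrom exponent alpha from aerosol extinction at two wavelengths,
    assuming the power law tau(lam) proportional to lam**(-alpha)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

def interpolate_extinction(tau_ref, lam_ref, lam, alpha):
    """Interpolate extinction to another wavelength via the same power law."""
    return tau_ref * (lam / lam_ref) ** (-alpha)
```

With measurements at 0.355, 1.064, and 1.627 µm, the fitted exponent lets the system report extinction at any wavelength between those endpoints.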
One technique to better utilize existing roadway infrastructure is the use of HOV and HOT lanes. Technology to monitor the use of these lanes would assist managers and planners in efficient roadway operation. There are no available occupancy detection systems that perform at acceptable levels of accuracy in permanent field installations. The main goal of this research effort is to assess the possibility of determining passenger occupancy with imaging technology. This is especially challenging because of recent changes in the glass types used by car manufacturers to reduce the solar heat load on vehicles. In this research, we describe a system that uses multi-plane imaging with appropriate wavelength selection for sensing passengers in the front and rear seats of vehicles travelling in HOV/HOT lanes. The process of determining the needed geometric relationships, the choice of illumination wavelengths, and the appropriate sensors are described, taking into account driver safety considerations. The paper will also cover the design and implementation of the software for performing window detection and people counting, utilizing both image processing and machine learning techniques. The integration of the final system prototype will be described along with the performance of the system operating at a representative location.
Many techniques have been proposed for active optical remote sensing of the strength of atmospheric refractive turbulence. The early techniques, based on degradation of laser beams by turbulence, were susceptible to artifacts. In 1999, we began investigating a new idea, based on differential image motion (DIM), which is inherently immune to artifacts. The new lidar technique can be seen as a combination of two astronomical instruments: a laser guide star transmitter/receiver and a DIM monitor. The technique was successfully demonstrated on a horizontal path, with a hard-target analog of a lidar, and then a true lidar was developed. Several investigations were carried out first, including an analysis to predict the system's performance; new hard-target field measurements in the vertical direction; development of a robust inversion technique; and wave optics simulations. A brassboard lidar was then constructed and operated in the field, along with instruments to acquire truth data. The tests revealed many problems and pitfalls that were all solvable with engineering changes, and the results served to verify the new lidar technique for profiling turbulence. The results also enabled accurate performance predictions for future versions of the lidar. A transportable turbulence lidar system is currently being developed to support field tests of high-energy lasers.
The Georgia Tech Research Institute (GTRI) is developing a transportable multi-lidar instrument known as the
Integrated Atmospheric Characterization System (IACS). The system will be housed in standard shipping containers that
will be transported to remote sites by tractor-trailer. IACS will comprise three lidars: a 355 nm imaging lidar for
profiling refractive turbulence, a 355 nm Raman lidar for profiling water vapor, and an aerosol lidar operating at both
1.06 and 1.625 microns. All of the lidar transmit/receive optics will be co-aligned on a common mount, pointable at any
elevation angle from horizontal to vertical. The entire system will be computer controlled to facilitate pointing and
automatic data acquisition. The purpose of IACS is to characterize optical propagation paths during outdoor tests of
electro-optical systems. The tests are anticipated to include ground-to-ground, air-to-ground, and ground-to-air scenarios,
so the system must accommodate arbitrary slant paths through the atmosphere with maximum measurement ranges of
5-10 km. Elevation angle scans will be used to determine atmospheric extinction profiles at the infrared wavelengths, and
data from the three wavelengths will be used to determine the aerosol Angstrom coefficient, enabling interpolation of
results to other wavelengths in the 355 nm to 1.6 micron region. The imaging lidar for profiling refractive turbulence is
based on a previously-reported project known as Range Profiles of Turbulence.
The Georgia Tech Research Institute (GTRI) has developed a turbulence profiling lidar system based on the
differential image motion concept. The lidar measures a profile of mean-square wavefront tilt differences by
focusing a laser guide star at multiple ranges and then computing the differential image motion variance of guide
star images collected through multiple sub-apertures on the receiver. Direct inversion of the resulting integrals
suffers from high noise gain, so several different techniques were investigated to determine the refractive turbulence
profile. The best inversion method uses a non-linear fitting algorithm to fit a collection of functions to the
differential image motion profile. Each of the fitted functions then maps to a profile of refractive turbulence.
Bones continue to be a problem of concern for the poultry industry. Most further-processed products begin with the
requirement for raw material with minimal bones. The current process for generating deboned product requires systems
for monitoring and inspecting the output product. The current detection systems are either people palpating the product
or X-ray systems. The performance of these inspection techniques is below the desired level of accuracy, and the
techniques are costly. We propose a technique for monitoring bones that conducts the inspection during the deboning
process, so as to have enough time to take action to reduce the probability that bones will end up in the final product.
This is accomplished by developing active cones with built-in illumination to backlight the cage (skeleton) on the
deboning line. If the bones of interest are still on the cage, then the bones are not in the associated meat. This approach
also allows process control to be practiced on the deboning operation to keep the process under control, as
opposed to the current system, where detection is done post-production and does not easily present the opportunity to
adjust the process. The proposed approach shows overall accuracies of about 94% for the detection of the clavicle.
Ensuring meat is fully cooked is an important food safety issue for operations that produce "ready to eat" products. In
order to kill harmful pathogens like Salmonella, all of the product must reach a minimum threshold temperature.
Producers typically overcook the majority of the product to ensure meat in the most difficult scenario reaches the desired
temperature. A difficult scenario can be caused by an especially thick piece of meat or by a surge of product into the
process. Overcooking wastes energy, degrades product quality, lowers the maximum throughput rate of the production
line and decreases product yield. At typical production rates of 6,000 lbs/hour, these losses from overcooking can have a
significant cost impact on producers.
A wide area 3D camera coupled with a thermal camera was used to measure the thermal mass variability of chicken
breasts in a cooking process. Several types of variability are considered including time varying thermal mass (mass x
temperature / time), variation in individual product geometry and variation in product temperature. The automatic
identification of product arrangement issues that affect cooking such as overlapping product and folded products is also
addressed. A thermal model is used along with individual product geometry and oven cook profiles to predict the
percentage of product that will be overcooked and to identify products that may not fully cook in a given process.
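The thermal mass quantity described above (mass × temperature / time) can be illustrated with a simple calculation combining a 3D height map and a co-registered thermal image. This is a minimal sketch; the units, uniform-density assumption, and function name are illustrative, not the paper's thermal model:

```python
import numpy as np

def thermal_mass_flow(height_map_mm, temp_map_c, pixel_area_mm2,
                      density_g_per_mm3, window_s):
    """Thermal mass flow (g*degC / s) over one measurement window.

    Each pixel of the 3D camera gives a product height; multiplying by the
    pixel footprint and an assumed uniform density converts height to mass.
    Weighting by the thermal camera's temperature and dividing by the
    window duration yields the time-varying thermal mass."""
    volume_mm3 = height_map_mm * pixel_area_mm2          # per-pixel volume
    mass_g = volume_mm3 * density_g_per_mm3              # per-pixel mass
    return float(np.sum(mass_g * temp_map_c) / window_s)
```

A surge of product into the process shows up directly as a spike in this quantity, which is what makes it useful for predicting undercooked pieces.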
The Georgia Tech Research Institute (GTRI) has developed a new type of LIDAR system for monitoring profiles of
atmospheric refractive turbulence. The system makes real-time measurements by projecting a laser beam to form a laser
beacon at several successive altitudes. The beacon is observed with a multiple-aperture telescope and the motion of the
beacon images from each altitude is characterized as the differential image motion variance. An inversion algorithm has
been developed to retrieve the turbulence profile. GTRI built a brassboard version of the LIDAR instrument and tested
it in October and December 2007, with truth data from scintillometers and from balloon-borne microthermal probes. The
tests resulted in the first time-height diagram of the strength of turbulence ever recorded by a LIDAR.
We are developing a new type of lidar for measuring range profiles of atmospheric optical turbulence. The lidar is based on a measurement concept that is immune to artifacts caused by effects such as vibration or defocus. Four different types of analysis and experiment have all shown that a turbulence lidar that can be built from commercially available components will attain a demanding set of performance goals. The lidar is currently being built, with testing scheduled for summer 2007.
We are developing a new type of lidar for measuring range profiles of atmospheric optical turbulence. The lidar is based on a measurement concept that is immune to artifacts caused by effects such as vibration or defocus. Four different types of analysis and experiment have all shown that a turbulence lidar that can be built from commercially available components will attain a demanding set of performance goals. The lidar is currently being built, with testing scheduled for August 2006.
Fully cooked, ready-to-eat products represent one of the fastest growing markets in the meat and poultry industries.
Modern meat cooking facilities typically cook chicken strips and nuggets at rates of 6000 lbs per hour, and it is a critical
food safety issue to ensure the products on these lines are indeed fully cooked. Common practice now employs oven
technicians to constantly measure final cook temperature with insertion-type thermocouple probes.
Prior research has demonstrated that thermal imagery of chicken breasts and other products can be used to predict core
temperature of products leaving an oven. In practice, implementation of a system to monitor core temperature can be
difficult for several reasons. First, a wide variety of products are typically produced on the same production line and the
system must adapt to all products. Second, the products can often be hard to find because they often leave the process
in random order and may be touching or even overlapping. Another issue is the finite measurement time, which is typically
only a few seconds. Finally, the system is subjected to a rigorous sanitation cycle and must hold up under wash-down.
To address these problems, a calibrated 320x240 micro-bolometer camera was used to monitor the temperature of
formed, breaded poultry products on a fully cooked production line for a period of one year. The study addressed the
installation and operation of the system as well as the development of algorithms used to identify the product on a
cluttered conveyor belt. It also compared the oven tech insertion probe measurements to the non-contact monitoring measurements.
Researchers at the Georgia Tech Research Institute designed a vision inspection system for poultry kill line sorting with the potential for process control at various points throughout a processing facility. This system has been successfully operating in a plant for over two and a half years and has been shown to provide multiple benefits. With the introduction of HACCP-Based Inspection Models (HIMP), the opportunity for automated inspection systems to emerge as viable alternatives to human screening is promising. As more plants move to HIMP, these systems have great potential for augmenting a processing facility's visual inspection process. This will help to maintain a more consistent and potentially higher throughput while helping the plant remain within the HIMP performance standards.
In recent years, several vision systems have been designed to analyze the exterior of a chicken and are capable of identifying Food Safety 1 (FS1) type defects under HIMP regulatory specifications. This means that a reliable vision system can be used in a processing facility as a carcass sorter to automatically detect and divert product that is not suitable for further processing. This improves the evisceration line efficiency by creating a smaller set of features that human screeners are required to identify. This can reduce the required number of screeners or allow for faster processing line speeds.
In addition to identifying FS1 category defects, the Georgia Tech vision system can also identify multiple "Other Consumer Protection" (OCP) category defects such as skin tears, bruises, broken wings, and cadavers. Monitoring this data in a near-real-time system allows the processing facility to address anomalies as soon as they occur. The Georgia Tech vision system can record minute-by-minute averages of the following defects: Septicemia Toxemia, cadaver, over-scald, bruises, skin tears, and broken wings. In addition to these defects, the system also records the length and width information of the entire chicken and of different parts such as the breast, the legs, the wings, and the neck. The system also records average color and mis-hung birds, which can cause problems in further processing. Other relevant production information is also recorded, including truck arrival and offloading times, catching crew and flock serviceman data, the grower, the breed of chicken, and the number of dead-on-arrival (DOA) birds per truck.
Several interesting observations from the Georgia Tech vision system, which has been installed in a poultry processing plant for several years, are presented. Trend analysis has been performed on the performance of the catching crews and flock serviceman, and the results of the processed chicken as they relate to the bird dimensions and equipment settings in the plant. The results have allowed researchers and plant personnel to identify potential areas for improvement in the processing operation, which should result in improved efficiency and yield.
A new type of lidar is under development for measuring profiles of atmospheric optical turbulence. The principle of operation of the lidar is similar to the astronomical seeing instrument known as the Differential Image Motion Monitor, which views natural stars through two or more spatially separated apertures. A series of images is acquired, and the differential motion of the images (which is a measure of the difference in wavefront tilt between the two apertures) is analyzed statistically. The differential image motion variance is then used to find Fried's parameter r0. The lidar operates in a similar manner except that an artificial star is placed at a set of ranges, by focusing the laser beam and range-gating the imager. Sets of images are acquired at each range, and an inversion algorithm is then used to obtain the strength of optical turbulence as a function of range. In order to evaluate the technique in the field and to provide data for inversion algorithm development, a simplified version of the instrument was developed using a CW laser and a hard target carried to various altitudes by a tethered blimp. Truth data were simultaneously acquired with instruments suspended below the blimp. The tests were carried out on a test range at Eglin AFB in November 2004. Some of the resulting data have been analyzed to find the optimum frame rate for ground-based versions of the lidar instrument. Results are consistent with a theory that predicts a maximum rate for statistically independent samples of about 50 per second, for the instrument dimensions and wind speeds of the Eglin tests.
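The step from differential image motion variance to Fried's parameter r0 can be sketched with the standard Sarazin-Roddier DIMM approximation, assumed here for illustration (the lidar's own inversion over range-resolved data is more involved):

```python
import numpy as np

def r0_from_dimm(var_long_rad2, wavelength, d_sub, d_sep):
    """Fried parameter r0 (m) from the longitudinal differential image
    motion variance (rad^2) of two subapertures.

    Uses the Sarazin & Roddier approximation for the longitudinal
    response, valid when the separation d_sep is at least about twice
    the subaperture diameter d_sub:
        var = 2 * lambda^2 * r0**(-5/3)
                * (0.179 * d_sub**(-1/3) - 0.0968 * d_sep**(-1/3))
    and solves it for r0."""
    k = 2.0 * (0.179 * d_sub ** (-1 / 3) - 0.0968 * d_sep ** (-1 / 3))
    return (var_long_rad2 / (k * wavelength ** 2)) ** (-3 / 5)
```

Because the relation involves only tilt differences between apertures, common-mode motion from platform vibration cancels, which is the artifact immunity the abstract refers to.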
We investigated the edge response of an extended object in a turbulent atmosphere using imagery data acquired with a dual-waveband passive imaging system operating in the visible and IR wavebands and an actively illuminated optical sensor. We made two findings. First, the edge response of an extended object is independent of exposure time, and atmospheric tilt does not contribute to the image blur of an extended object. Second, turbulence-induced image blur for an extended object decreases, rather than increases, with the imager diameter; therefore, one can reduce the turbulence-induced blur by increasing the aperture diameter of the imaging lens. Both findings contradict the predictions of conventional imaging theory, suggesting that the conventional theory is not applicable to extended anisoplanatic objects. We provided a physical interpretation for the results obtained. In addition, we discussed mitigation techniques that allow us to reduce both turbulence-induced image blur and edge waviness in optical images.
A dual-band imaging system with variable aperture diameter was constructed, and horizontal and vertical atmospheric tilt components were measured on a 1-km near-the-ground horizontal path using discrete and extended visible and IR sources. The spatial and temporal tilt statistics were estimated from the recorded data. The tilt structure function, which also characterizes the variance of the pointing error caused by anisoplanatism of the track point to the aim point in a laser projection system, decreases for small angular separations in inverse proportion to the aperture diameter D. The tilt structure function is insensitive to sensor vibration. For a point-ahead angle of 0.45 mrad, the daytime rms pointing error caused by tilt anisoplanatism is 12 µrad for D = 6 cm, and it is 5 µrad for D = 40 cm. The tilt power spectral density agrees well with theory: it has the "-2/3" power slope, and the ratio of the two knee frequencies is equal to the inverse ratio of the aperture diameters. The tilt temporal correlation increases with the aperture diameter. The temporal correlation scale is 0.25 sec for D = 6 cm and 1 sec for D = 40 cm. The Cn² measurements made with discrete IR sources and an optical imager agree well with the measurements made with a scintillometer. The structure function for the lateral (Y) tilt exceeds that for the longitudinal (X) tilt, which is inconsistent with the theoretical prediction. We believe that heat-induced turbulence from the IR sources and a wind component parallel to the optical path degraded the measurements of the vertical tilt. Three mitigation techniques were considered, including an increase of the aperture diameter, integration of the image edge over the edge angular extent, and averaging of multiple frames. A multi-frame averaging technique is known to be efficient for mitigation of the effects of turbulence-induced scintillation and laser speckle.
We found that by averaging multiple image frames one can mitigate the effects of tilt anisoplanatism as well. We also found that the edge response for a multi-frame-averaged image and a single-frame image is the same. This allows us to conclude that a multi-frame averaging technique for an extended object does not affect the system's angular resolution.
A laboratory prototype of the NEXLASER unattended aerosol and ozone LIDAR was operated in the Atlanta metropolitan area during the ozone season of 2002. An important aspect of an unattended LIDAR system is the ability to automatically assess system problems and correct for them. This paper details the set of tests that have been conducted to verify system performance, discusses how the tests have been incorporated into NEXLASER's operational software, and shows how aerosol and ozone data collected by the system compares to other measurements.
Agnes Scott College and the Georgia Institute of Technology are jointly developing an eye-safe atmospheric lidar as a unique hands-on research experience for undergraduates, primarily undergraduate women. Students from both institutions will construct the lidar under the supervision of Agnes Scott and Georgia Tech faculty members. The engineering challenges of making lidar accessible and appropriate for undergraduates are described. The project is intended to serve as a model for other schools.
This paper describes the development of a laboratory prototype unattended LIDAR system to measure aerosol profiles to 10km and ozone profiles to 3km. One consideration in an unattended system is a robust, eye-safe optical design that can provide the necessary signal levels and dynamic range to produce profiles at required height, resolution, and accuracy. An equally important consideration is a set of algorithms to compute aerosol and ozone profiles under a range of atmospheric conditions. NEXLASER employs an atmospheric state model to help identify and adapt to the varied conditions it must encounter. The signal-to-noise requirements of the algorithms are demonstrated and related back to hardware design. Performance of the system is demonstrated with simulated atmospheric conditions.
We have experimentally validated the concept of a differential image motion (DIM) lidar for measuring vertical profiles of the refractive index structure characteristic Cn² by building a hard-target analog of the DIM lidar and testing it against a conventional scintillometer on a 300 m horizontal path throughout a range of turbulent conditions. The test results supported the concept and confirmed that the structure characteristic Cn² can be accurately measured with this method. An analysis of the effect of scintillation on the DIM lidar has been performed, showing that the lidar is resistant to scintillation. Turbulence and lidar calculations were also performed, and these calculations confirmed that the DIM lidar is practical.
A theoretical model for the edge image waviness effect is developed for the ground-to-ground scenario and validated by use of IR imagery data collected at the White Sands Missile Range. It is shown that angle-of-arrival (AA) angular anisoplanatism causes the phenomenon of edge image waviness and that the AA correlation scale, not the isoplanatic angle, characterizes the edge image waviness scale. The latter scale is determined by the angular size of the imager and a normalized turbulence outer scale, and it does not depend on the strength of turbulence along the path. Spherical divergence of the light waves increases AA correlation. A procedure for estimating the atmospheric and camera-noise components of the edge image motion is developed and implemented. A technique for mitigation of the edge image waviness that relies on averaging the effects of AA anisoplanatism on the image is experimentally validated. The edge waviness is reduced by a factor of 2-3. The time history and temporal power spectrum of the edge motion are obtained. These data confirm that the observed edge motion is caused by turbulence.
The Georgia Tech Research Institute has developed an integrated suite of software for Visual and Electro-Optical (VISEO) detection analysis, under the sponsorship of the Army Aviation and Troop Command, Aviation Applied Technology Directorate. The VISEO system is a comprehensive workstation-based tool for multi-spectral signature analysis, LO design, and visualization of targets moving through real measured backgrounds. A key component of the VISEO system is a simulation of human vision, called the Georgia Tech Vision (GTV) simulation. The algorithms used in the simulation are consistent with neurophysiological evidence concerning the functions of the human visual system, from dynamic light adaptation processes in the retinal receptors and ganglia to the processing of motion, color, and edge information in the striate cortex. The simulation accepts images seen by the naked eye or through direct-view optical systems, as well as images viewed on the displays of IR sensors, image intensifiers, and night-vision devices. GTV outputs predicted probabilities that the target is fixated (Pfix) during visual search and detected (Pd), and also identifies specific features of the target that contribute most to successful search and detection performance. This paper outlines the capabilities and structure of the VISEO system, emphasizing GTV. Example results of visible and IR signature reduction on the basis of VISEO will be shown and described.
An ocean surface model for synthetic IR/visible imaging applications has been developed at GTRI based upon the Pierson-Moskowitz wave spectrum. The model calculates a 2D grid of height values describing a given snapshot of the sea surface. This surface is a function of wind speed and direction as well as elapsed time into the simulation. The time parameter permits the animation of the sea surface during a simulated movie sequence. Sea signatures are calculated using a combination of models and tools: the GTRI IR signature code GTSIG is used to predict sea temperatures; LOWTRAN7 is used to construct tables of sky radiances; the Fresnel equations are used to construct tables of sea reflectance. These signature components are combined during image rendering with a ray-tracing approach that provides the total radiance (emitted and reflected) from the ocean surface arriving at each sensor image pixel. This ocean model has been integrated into a complete image rendering system called GTRENDER, and is available for use in GTRI applications such as the GTSIMS family of missile simulations.
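An animated height grid of the kind described above can be sketched with an FFT synthesis: random phases weighted by the Pierson-Moskowitz spectrum, advanced in time with the deep-water dispersion relation. This simplified, non-directional version is illustrative only (the GTRI model also includes wind direction, and a production synthesis would enforce Hermitian symmetry rather than take the real part):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def pierson_moskowitz(omega, wind_speed):
    """Pierson-Moskowitz frequency spectrum for a fully developed sea."""
    alpha, beta = 8.1e-3, 0.74
    return alpha * G**2 / omega**5 * np.exp(-beta * (G / (wind_speed * omega))**4)

def sea_surface(n, dx, wind_speed, t, rng=None):
    """Return an n x n height grid at elapsed time t (simplified sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    k = np.hypot(KX, KY)
    omega = np.sqrt(G * np.maximum(k, 1e-12))   # deep-water dispersion w^2 = g*k
    # Spectral amplitudes from the PM spectrum; zero out the DC component
    amp = np.where(k > 0, np.sqrt(pierson_moskowitz(omega, wind_speed)), 0.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))  # fixed random phases
    # Advancing the phases by -omega*t animates the surface over time
    spec = amp * np.exp(1j * (phase - omega * t))
    return np.real(np.fft.ifft2(spec))
```

Calling `sea_surface` with increasing `t` and the same seed yields the frame-to-frame coherent animation the abstract describes.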
Development of a computer-based system for generating simulated infrared imagery is described. The system provides realistic representations of infrared targets and backgrounds for training soldiers in combat vehicle identification (CVI). Using Army-supplied lists of desired items, Georgia Tech Research Institute (GTRI) constructed geometric and thermal models of NATO and Warsaw Pact combat vehicles, terrain backgrounds, countermeasures, distractors, and obscurants. Simulations of several fielded infrared sensors enable system users to generate training imagery sets, both snapshots and animated sequences, showing realistic sensor effects. The system is workstation-based and has a user interface that permits a non-expert to generate desired imagery sets from menus of available models and scenarios.