This PDF file contains the front matter associated with SPIE Proceedings Volume 8737, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
To assist pilots in safely operating helicopters in a degraded visual environment (DVE), an integrated system of various technologies that provides improved situation awareness with minimal interpretation effort while controlling the aircraft appears to be essential. To determine the most effective and affordable solution set for enhancing helicopter operations in DVE at low altitude, and to help scope the potential range of solutions, the critical helicopter tasks and their inherent requirements need to be defined. This paper provides an overview of the operational environment, today's procedures, and the resulting general requirements for operating helicopters at low level in a degraded visual environment.
Helicopter operations at low altitude are to this day performed only under VFR conditions, in which safe piloting of the aircraft relies on the pilot's visual perception of the outside environment. However, there are situations in which a deterioration of visibility may cause the pilot to lose important visual cues, thereby increasing workload and compromising flight safety and mission effectiveness. This paper reports on a pilot assistance system for all phases of flight that is intended to:
• Provide navigational support and mission management
• Support landings/take-offs in unknown environments and in DVE
• Enhance situational awareness in DVE
• Provide obstacle and terrain surface detection and warning
• Provide upload, sensor-based update, and download of database information for debriefing and later missions.
The system comprises a digital terrain and obstacle database, tactical information, and flight plan management, combined with an active 3D sensor that enables the above-mentioned functionalities. To support pilots during operations in DVE, intuitive 3D/2D cueing through both head-up and head-down means is proposed to retain situational awareness. This paper further describes the system concept and elaborates on the results of simulator trials in which the functionality was evaluated by operational pilots in realistic and demanding scenarios, such as a SAR mission performed in a mountainous area under different visual conditions. The objective of the simulator trials was to evaluate the functional integration and HMI definition for the NH90 Tactical Transport Helicopter.
Surface operation is currently one of the least technologically equipped phases of aircraft operation. Increased air traffic congestion necessitates more aircraft operations in degraded weather and at night. Traditional surface procedures have worked well in most cases because airport surfaces were not congested and airport layouts were less complex. Despite the best efforts of the FAA and other safety agencies, runway incursions continue to occur frequently due to incorrect surface operation. Several studies conducted by the FAA suggest that pilot-induced error contributes significantly to runway incursions. These studies further attribute the problem to the pilot's lack of situational awareness: local (e.g., minimizing lateral deviation), global (e.g., traffic in the vicinity), and route (e.g., distance to the next turn). An Enhanced Vision System (EVS) is one concept being considered to resolve these issues. Such systems use on-board sensors to provide situational awareness under poor visibility conditions. In this paper, we propose an image-processing-based system that estimates the aircraft position and orientation relative to taxiway markings for use as a lateral guidance aid. We estimate the aircraft yaw angle and lateral offset from the slope of the taxiway centerline and the horizontal position of the vanishing line. Unlike in automotive applications, several cues, such as aircraft maneuvers along an assigned route with minimal deviations, clear ground markings, an even taxiway surface, and limited aircraft speed, are available and enable significant algorithm optimizations. We present experimental results showing high-precision navigation accuracy, together with a sensitivity analysis with respect to camera mounting, optics, and image-processing error.
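To make the guidance geometry concrete, the following Python sketch shows one way the yaw angle and lateral offset could be derived from a detected centerline under simple assumptions (pinhole camera, level optical axis, flat taxiway, centerline vanishing point taken where the line meets the horizon row). The function and parameter names are hypothetical and do not reproduce the authors' actual algorithm.

```python
import numpy as np

def centerline_guidance(p_near, p_far, fx, fy, cx, cy, cam_height):
    """Estimate heading error and lateral offset relative to a taxiway
    centerline detected in a forward-looking camera image.

    Illustrative sketch only: assumes a pinhole camera, a level camera
    axis, and a flat taxiway surface; all names are assumptions.
    p_near, p_far : (u, v) pixel coordinates of two centerline points,
                    p_far being the one further ahead of the aircraft.
    fx, fy        : focal lengths in pixels; cx, cy: principal point.
    cam_height    : camera height above the taxiway surface in metres.
    """
    u1, v1 = p_near
    u2, v2 = p_far

    # 1. Vanishing point: extend the image line up to the horizon row
    #    (v = cy for a level camera). Its horizontal offset from the
    #    principal point gives the centerline direction in camera axes.
    u_vp = u1 + (u2 - u1) * (cy - v1) / (v2 - v1)
    theta = np.arctan2(u_vp - cx, fx)     # line heading, + = line bears right
    heading_error = -theta                # + = aircraft nose right of the line

    # 2. Back-project the near point onto the ground plane.
    z = cam_height * fy / (v1 - cy)       # forward distance (m)
    x = (u1 - cx) * z / fx                # lateral distance (m), + = right

    # 3. Signed perpendicular distance from the camera to the centerline
    #    (+ = aircraft displaced right of the centerline).
    lateral_offset = -(x * np.cos(theta) - z * np.sin(theta))
    return np.degrees(heading_error), lateral_offset
```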
Helicopters experience nearly 10 times the accident rate of fixed-wing platforms, due largely to the nature of their missions, which frequently require operations in close proximity to terrain and obstacles. Degraded visual environments (DVE), including brownout or whiteout conditions generated by rotor downwash, result in loss of situational awareness during the most critical phases of flight and contribute significantly to this accident rate. Considerable research into sensor and system solutions to address DVE has been conducted in recent years; however, the promise of a Synthetic Vision Avionics Backbone (SVAB) extends far beyond DVE, enabling improved situational awareness and mission effectiveness during all phases of flight and in all visibility conditions. The SVAB fuses sensor information with high-resolution terrain databases and renders it in a synthetic vision format for display to the crew. Honeywell was awarded the DARPA MFRF Technical Area 2 contract in 2011 to develop an SVAB [1]. This work includes creation of a common sensor interface, development of SVAB hardware and software, and flight demonstration on a Black Hawk helicopter. A "sensor agnostic" SVAB allows platform and mission diversity with an efficient upgrade path, even while research continues into new and improved sensors for use in DVE conditions. Through careful integration of multiple sources of information, such as sensors, terrain and obstacle databases, mission planning information, and aircraft state information, operations in all conditions and phases of flight can be enhanced. This paper describes the SVAB and its functionality resulting from the DARPA contract as well as Honeywell R&D investment.
NASA Langley Research Center and the FAA collaborated in an effort to evaluate the effect of Enhanced Vision (EV) technology displays in a commercial flight deck during low-visibility surface operations. Surface operations were simulated at the Memphis, TN (FAA identifier: KMEM) airfield at night with 500 Runway Visual Range (RVR) in a high-fidelity, full-motion simulator. Ten commercial airline flight crews evaluated the efficacy of various EV display locations as well as parallax and minification effects. The paper discusses qualitative and quantitative results of the simulation experiment, including the effect of EV display placement on visual attention, measured with non-obtrusive oculometry, and on pilot mental workload. The results demonstrated the potential of EV technology to enhance situation awareness, which depends on the ease of access and location of the displays. Implications and future directions are discussed.
This paper describes the approach of a sensor-based landing aid for helicopters in degraded visual conditions. The system concept presented employs a long-range, high-resolution ladar sensor that allows obstacles in the flight path and in the approach path to be identified, and landing site conditions such as slope, roughness, and precise position relative to the helicopter to be measured during a long final approach. All of these measurements are visualized for the pilot. Cueing is provided by 3D conformal symbology displayed in a head-tracked HMD, enhanced by 2D symbols for data that is perceived more easily through 2D symbols than through 3D cueing. All 3D conformal symbology is placed on the measured landing site surface, which is further visualized by a grid structure displaying landing site slope, roughness, and small obstacles. Because of the limited resolution of the HMD employed, a specific scheme for blending in the information during the approach is used. The interplay of in-flight and in-approach obstacle warning and CFIT warning symbology with this landing aid symbology is also investigated and evaluated, by way of example, for the NH90 helicopter, which already has obstacle warning and CFIT symbology based on a long-range, high-resolution ladar sensor implemented today. The paper further describes the results of simulator and flight tests performed with this system using a ladar sensor and a head-tracked helmet-mounted display system. In the simulator trials, a full model of the ladar sensor producing 3D measurement points was used, running the same algorithms used in the flight tests.
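As an illustration of the landing-site measurements mentioned above (slope and roughness), the following Python sketch fits a plane to a patch of ladar returns and reports the plane's tilt and the RMS residual about it. This is a minimal stand-in under the stated assumptions, not the algorithm of the actual system; the function name and return conventions are hypothetical.

```python
import numpy as np

def landing_site_metrics(points):
    """Estimate slope and roughness of a candidate landing site from ladar
    returns (illustrative sketch only).

    points : (N, 3) array of x, y, z ground returns over the site in metres,
             with z pointing up.
    Returns the slope in degrees and the RMS roughness in metres about the
    fitted plane.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Least-squares plane through the centroid via SVD of the centred points;
    # the singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1] / np.linalg.norm(vt[-1])
    slope_deg = np.degrees(np.arccos(abs(normal[2])))   # tilt from horizontal
    residuals = (pts - centroid) @ normal                # signed plane distances
    roughness_rms = np.sqrt(np.mean(residuals ** 2))
    return slope_deg, roughness_rms
```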
Piloting a rotorcraft in a Degraded Visual Environment (DVE) is a very complex task, and the evolution of rotorcraft missions tends to increase the probability of such degraded flight conditions (more night flights and all-weather flights, with brownout or whiteout phenomena…). When the direct view of the external situation is degraded, the avionics system can be of great help to the crew in recovering the lost visual references. The TopOwl® Head-Mounted Sight and Display (HMSD) system is particularly well adapted to such situations, allowing the pilot to remain "eyes-out" while viewing different information over a large field of view: a conformal enhanced image (EVS) coming from an on-board sensor; various 2D and 3D symbologies (flight, navigation, and mission-specific symbols); a conformal synthetic representation of the terrain (SVS); a night-vision image coming from the integrated image intensifier tubes; or a combination of these data, depending on the external conditions and the phase of flight, according to the pilot's choice.
One of the major causes of hazardous situations in aviation is a lack of pilot situational awareness. Common causes of degraded situational awareness are brownout and whiteout situations, low-level flights, and flights in DVE. In this paper, we propose Advanced Synthetic Vision (ASV), a modern situational awareness solution. ASV combines Synthetic Vision and Enhanced Vision in order to provide the pilot with the most up-to-date information without restricting the spatial coverage of the synthetic representation. The advantages over a common Enhanced Synthetic Vision System are the following: (1) ASV uses 3D ladar data instead of a 2D sensor; the 3D point cloud is classified in real time to distinguish between ground, wires, poles, and buildings. (2) The classified sensor data is fused with onboard database contents such as elevation or obstacles. The entire data fusion is performed in 3D, i.e., the output is a merged 3D scene rather than a blended 2D image. Once the sensor stops delivering returns due to occlusion, ASV switches to a pure database mode. (3) The merged data is passed to a 3D visualization module, which is fully configurable in order to support synthetic views on head-down displays as well as more abstract augmented representations on helmet-mounted displays. (4) The extendable design of ASV supports the graphical linking of functions such as a 3D landing aid, TAWS, or navigation aids.
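The classification step in item (1) above can be made concrete with a toy example. The sketch below performs only a coarse ground/obstacle split of a point cloud using a per-cell minimum-height ground estimate; the actual ASV classifier additionally separates wires, poles, and buildings, and its thresholds and method are not reproduced here, so treat every name and value below as an assumption.

```python
import numpy as np

def classify_ground_and_obstacles(points, cell=1.0, clearance=0.5):
    """Toy ground/obstacle split of a 3D ladar point cloud (illustration only).

    points : (N, 3) array of x, y, z returns in metres, z up.
    Returns a boolean mask that is True for points flagged as obstacles.
    """
    pts = np.asarray(points, dtype=float)
    # Grid the cloud in x/y and take the lowest return per cell as the
    # local ground height.
    ij = np.floor(pts[:, :2] / cell).astype(int)
    ground = {}
    for k, key in enumerate(map(tuple, ij)):
        z = pts[k, 2]
        if key not in ground or z < ground[key]:
            ground[key] = z
    ground_z = np.array([ground[tuple(c)] for c in ij])
    # Anything sufficiently above the local ground estimate is an obstacle.
    return pts[:, 2] - ground_z > clearance
```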
The OPAL obscurant-penetrating LiDAR was developed by Neptec and characterized in various degraded visual environments (DVE) over the past five years. Quantitative evaluations of obscurant penetration were performed using the Defence R&D Canada – Valcartier (DRDC Valcartier) instrumented aerosol chamber for obscurants such as dust and fog. Experiments were conducted with the sensor both at a standoff distance from the obscurants and totally engulfed in them. Field trials were also conducted to characterize the sensor in snow conditions and in smoke. Finally, the OPAL was mounted on a Bell 412 helicopter to characterize its dust-penetration capabilities in environments such as the Yuma Proving Ground. The paper provides a summary of the results of the OPAL evaluations, demonstrating it to be a true "see-through" obscurant-penetrating LiDAR, and explores commercial applications of the technology.
Sierra Nevada Corporation (SNC) has developed rotary and fixed wing millimeter wave radar enhanced vision systems.
The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship
landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested
in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable
detection during low-level flight. The Radar Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision
System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional
94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with
synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is
added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The
combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight
operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and
summarizes the recent flight test results.
Areté Associates recently developed and flight tested a next-generation low-latency near real-time dust-penetrating
(DUSPEN) imaging lidar system. These tests were accomplished for Naval Air Warfare Center (NAWC) Aircraft
Division (AD) 4.5.6 (EO/IR Sensor Division) under the Office of Naval Research (ONR) Future Naval Capability (FNC)
Helicopter Low-Level Operations (HELO) Product 2 program. Areté’s DUSPEN system captures full lidar waveforms
and uses sophisticated real-time detection and filtering algorithms to discriminate hard target returns from dust and other
obscurants. Downstream 3D image-processing methods are used to enhance pilot visualization of threat objects and
ground features during severe DVE conditions. This paper presents results from these recent flight tests in full brown-out
conditions at Yuma Proving Grounds (YPG) from a CH-53E Super Stallion helicopter platform.
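One way to picture the waveform-based discrimination described above is to treat the last significant peak in each received waveform as the hard-target return, with earlier returns attributed to dust backscatter. The Python sketch below implements that simplified rule; the thresholding scheme, parameter names, and the assumption that dust returns precede the hard target are illustrative and are not Areté's actual algorithm.

```python
import numpy as np

def last_hard_return(waveform, range_bins, noise_sigma_mult=5.0):
    """Pick the last significant return in a full lidar waveform as the
    hard-target range (toy stand-in for full-waveform DVE filtering).

    waveform   : 1-D array of received intensities per range bin.
    range_bins : 1-D array of the corresponding ranges in metres.
    """
    wf = np.asarray(waveform, dtype=float)
    # Robust noise estimate from the median absolute deviation.
    noise = np.median(np.abs(wf - np.median(wf))) * 1.4826
    threshold = np.median(wf) + noise_sigma_mult * noise
    above = np.flatnonzero(wf > threshold)
    if above.size == 0:
        return None                      # no hard return detected
    # Walk back to the start of the last contiguous above-threshold group
    # and report the range of its peak sample.
    last = above[-1]
    start = last
    while start > 0 and wf[start - 1] > threshold:
        start -= 1
    peak = start + int(np.argmax(wf[start:last + 1]))
    return float(range_bins[peak])
```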
Image registration is the process of transforming a data set from one coordinate system into another. There are two typical approaches to image registration: feature-point matching and area-similarity comparison. The feature-point matching approach, which uses points to establish the correspondence between two images, is relatively fast, but it requires feature extraction and parameter selection to create the feature points. Feature extraction involves derivatives, which are ill-posed operations and may lead to robustness issues. The area-similarity approach compares intensity patterns using a correlation metric such as normalized cross-correlation (NCC). Since it does not require feature extraction, it is simple and insensitive to noise. However, its computational cost is high; even when fast techniques such as the FFT are used to reduce the cost, the implementation remains time-consuming.

In this paper, we propose a combined diffusion-equation and normalized cross-correlation (NCC) method that performs robust image registration at low computational cost. We first apply the diffusion equation to two images received from two sensors (or the same sensor) and let both images evolve under it. Based on the characteristics of this evolution, we select a very small percentage of stable points in the first image and compute the normalized cross-correlation against the second image at each candidate transformation. The transformation with the highest NCC provides the parameters for registering the two images. The new method is resistant to noise, since the diffusion evolution reduces noise and only stable points are used in the NCC computation. Furthermore, the method is computationally efficient, since only a small percentage of pixels are involved in the transformation estimation. Finally, experiments on video motion estimation and image registration demonstrate that the new method estimates the registration transformation reliably in real time.
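To make the combined idea concrete, the sketch below uses Gaussian smoothing (the closed-form solution of the linear diffusion equation) as the evolution, picks the pixels that change least under it as "stable points", and then searches over pure translations by evaluating NCC only around those points. The original method is more general (arbitrary transformation parameters, a possibly nonlinear diffusion), so the choices of stability criterion, patch size, and search range here are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stable_points(img, sigma=2.0, frac=0.002):
    """Select a small fraction of 'stable' pixels: those that change least
    under Gaussian smoothing, used here as a stand-in for the diffusion
    evolution (illustrative criterion)."""
    diffused = gaussian_filter(img.astype(float), sigma)
    change = np.abs(diffused - img)
    n = max(1, int(frac * img.size))
    idx = np.argpartition(change.ravel(), n)[:n]
    return np.column_stack(np.unravel_index(idx, img.shape))

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def register_translation(img1, img2, patch=15, search=10):
    """Brute-force translation estimate: average NCC over the stable points
    of img1 for every candidate shift, returning the best (dy, dx)."""
    half = patch // 2
    pts = stable_points(img1)
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score, count = 0.0, 0
            for y, x in pts:
                y2, x2 = y + dy, x + dx
                if (half <= y < img1.shape[0] - half and half <= x < img1.shape[1] - half
                        and half <= y2 < img2.shape[0] - half and half <= x2 < img2.shape[1] - half):
                    score += ncc(img1[y - half:y + half + 1, x - half:x + half + 1],
                                 img2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1])
                    count += 1
            if count and score / count > best_score:
                best_score, best_shift = score / count, (dy, dx)
    return best_shift
```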
In this paper, we present an algorithm for recovering the original image from one that is blurred, due to the size of the aperture of the CMOS camera module, when capturing objects near the camera. We introduce the mathematical properties of the circulant matrix, which can be used to describe the point spread function (PSF), and propose a new algorithm based on this matrix. We present algorithms for both the one-dimensional and two-dimensional signal-processing cases. The proposed algorithms were validated by computer simulations of two-dimensional images synthesized by a CMOS camera model based on a pinhole camera model previously proposed by our research group.
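The key property being exploited is that a circulant matrix is diagonalized by the discrete Fourier transform, so a blur described by such a matrix can be inverted in the frequency domain. The following Python sketch shows this for the one-dimensional case with a small regularization term against near-zero frequencies; it illustrates the circulant-matrix idea rather than the authors' specific algorithm, and the function name and regularization choice are assumptions.

```python
import numpy as np

def deblur_circulant_1d(blurred, psf, eps=1e-3):
    """Recover a 1-D signal blurred by a circulant (circular-convolution) PSF.

    blurred : 1-D observed signal.
    psf     : 1-D point-spread function, same length as `blurred`
              (zero-padded and circularly centred as needed).
    eps     : regularization constant for a Wiener-like inverse filter.
    """
    B = np.fft.fft(blurred)
    H = np.fft.fft(psf)
    # Because the blur matrix is circulant, its eigenvalues are the DFT of
    # the PSF; the regularized inverse is a simple per-frequency division.
    X = B * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(X))
```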
A novel spatial-domain image enhancement algorithm is proposed in which the dynamic range of the scene illumination is compressed, from the human visual perspective, to improve the visual quality and visibility of digital images captured under degraded visual conditions. The algorithm takes an adaptive approach in which local image statistics, namely the local standard deviation and the local mean, are modified simultaneously through an intensity transform based on an "S"-shaped curve whose curvature parameter is determined adaptively, followed by a local contrast enhancement step. The performance of the algorithm is evaluated with a statistical visual measure, and visual comparisons of the proposed method with state-of-the-art enhancement algorithms are given.
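A toy version of such an adaptive S-curve transform is sketched below: each pixel is remapped through a logistic curve centred on its local mean, with the curve's steepness reduced where local contrast (standard deviation) is already high. The window size, gain, and adaptation rule are assumptions for illustration and do not reproduce the published algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_s_curve(img, window=31, gain=5.0):
    """Illustrative adaptive S-curve enhancement for a 2-D image in [0, 1]."""
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    # Local statistics over a sliding window.
    local_mean = uniform_filter(img, window)
    local_var = uniform_filter(img ** 2, window) - local_mean ** 2
    local_std = np.sqrt(np.maximum(local_var, 0.0))
    # Curvature adapts to local statistics: flat regions get a steeper curve.
    k = gain / (1.0 + 10.0 * local_std)
    out = 1.0 / (1.0 + np.exp(-k * (img - local_mean)))
    # Rescale to the full display range.
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```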
Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar, and laser radar) are mounted on DLR's research helicopter FHS (Flying Helicopter Simulator) to gather sensor data about the surrounding world. A high-performance computer-cluster architecture acquires and fuses all of the information into a single, comprehensive description of the outside situation. While the TV and IR cameras deliver images at frame rates of 25 Hz or 30 Hz, the ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or ladar sequences. Especially when the helicopter is flying at higher speed, it is very important to minimize obstacle detection time so that re-planning of the helicopter's mission can be initiated in a timely manner. Applying feature-extraction algorithms to the IR images, in combination with fusing the extracted features with ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the feature-extraction methods applied for moving-object detection, as well as data-fusion techniques for combining features from TV/IR images with ladar data.
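One elementary form of the TV/IR-to-ladar fusion mentioned above is to project the ladar points into the camera image and attach the range of the nearest projected point to each extracted 2-D feature. The sketch below does exactly that under simplifying assumptions (both sensors already expressed in the camera frame, a pinhole camera with intrinsic matrix K); all names, thresholds, and the nearest-projection rule are illustrative and are not DLR's actual fusion algorithm.

```python
import numpy as np

def attach_range_to_features(features_uv, ladar_xyz, K, max_px=3.0):
    """Associate 2-D IR feature detections with 3-D ladar points by
    projecting the ladar cloud into the image (illustrative sketch).

    features_uv : (M, 2) pixel coordinates of extracted IR features.
    ladar_xyz   : (N, 3) ladar points in the camera frame (z forward, metres).
    K           : 3x3 camera intrinsic matrix.
    Returns an (M,) array of ranges in metres (NaN where no point is close enough).
    """
    feats = np.asarray(features_uv, dtype=float)
    ranges = np.full(len(feats), np.nan)
    pts = np.asarray(ladar_xyz, dtype=float)
    pts = pts[pts[:, 2] > 0.1]               # keep points in front of the camera
    if len(pts) == 0:
        return ranges
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]           # pinhole projection to pixels
    for i, f in enumerate(feats):
        d = np.linalg.norm(uv - f, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_px:                    # accept only close projections
            ranges[i] = np.linalg.norm(pts[j])
    return ranges
```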