This PDF file contains the front matter associated with SPIE Proceedings volume 6559, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Synthetic Vision Systems (SVS) create images for display in the cockpit from the information contained in databases of
terrain, obstacles and cultural features like runways and taxiways, and the known own-ship position in space. Displays
are rendered egocentrically, from the point of view of the pilot. Certified synthetic vision systems, however, do not yet
qualify for operational credit in any domain, other than to provide enhanced situation awareness. It is not known at this
time whether the information provided by the system is sufficiently robust to substitute for natural vision in a specific
application. In this paper an operations concept is described for the use of SVS information during a precision instrument
approach in lieu of visual contact with a runway approach light system. It proposes an operation within the existing
framework of regulations, and identifies specific areas that may require additional research data to support certification
of the proposed operational credit. The larger purpose is to set out an example application and intended function which
will require the elaboration and resolution of operational and human performance concerns. To this end, issues in several
categories are identified.
Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude,
high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight
information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity monitoring to ensure the
validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide
traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS
technologies for commercial, business, and general aviation aircraft which have been shown to provide significant
improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain (CFIT) incidents and
accidents compared to current-generation cockpit technologies.
It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in
Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions
(VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and
SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or
Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The
"operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport
equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the
present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in
situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems,
and decision altitudes tested.
This paper describes flight tests of a Honeywell Synthetic Vision (SV) Primary Flight Display prototype system integrated with the Enhanced Ground Proximity Warning System (EGPWS). Terrain threat information from the EGPWS is displayed against a synthetic vision terrain background in coordinated 3D perspective-view and 2D lateral map formats for improved situation awareness. The flight-path-based display symbology provides unambiguous information to flight crews for recovery from, and avoidance of, threat areas. The flight tests further demonstrate that the SV-based display is an effective situation awareness tool that can prevent crew blunders in low-visibility situations and provide an additional means of breaking the accident chain that typically precedes controlled-flight-into-terrain (CFIT) accidents.
Synthetic vision systems (SVS) have been studied for some time as a means to improve pilots' situational awareness and
lower their workload. Early systems simply displayed a virtual outside view of terrain, obstacles, or airport elements as it
would be perceived through the cockpit windows in the absence of haze, fog, or any other factors impairing visibility. The
required digital terrain, obstacle, and airport databases have been developed and standardized by Jeppesen as part of the
NASA Aviation Safety Program.
Newer SVS displays have also introduced different kinds of flight guidance symbology to help pilots improve overall
flight precision. The method studied in this paper displays navigation procedures in the form of guidance channels. The
first releases of the described system used static channels, generated once at system startup or even offline. While this
approach is very resource-friendly for the avionics hardware, it does not satisfy users, who want the system to respond
dynamically to the current flight conditions.
Therefore, a new application has been developed that generates both the general channel trajectory and the channel
depiction in a fully dynamic way while the pilot flies a navigation procedure.
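As a rough illustration of what "fully dynamic" channel generation involves, the Python sketch below regenerates the visible channel frames each display cycle from the current aircraft position along the procedure's waypoints. It is not the system described in the paper; the flat local coordinates, channel dimensions, and all names are assumptions made for illustration.

```python
import numpy as np

def channel_frames(waypoints, aircraft_pos, spacing=200.0,
                   half_w=60.0, half_h=30.0, n_frames=20):
    """Regenerate 'tunnel-in-the-sky' channel frames ahead of the aircraft.

    waypoints    -- (N, 3) procedure path points in local metres (x east, y north, z up)
    aircraft_pos -- current own-ship position, shape (3,)
    Returns a list of (4, 3) arrays, each holding the corners of one channel rectangle.
    """
    wp = np.asarray(waypoints, dtype=float)
    segs = np.diff(wp, axis=0)
    lens = np.linalg.norm(segs, axis=1)
    s = np.concatenate([[0.0], np.cumsum(lens)])       # arc length at each waypoint
    # Dynamic part: start the channel at the path point nearest the aircraft.
    s0 = s[np.argmin(np.linalg.norm(wp - aircraft_pos, axis=1))]
    frames = []
    for k in range(n_frames):
        sk = s0 + k * spacing
        if sk >= s[-1]:
            break
        i = np.searchsorted(s, sk, side='right') - 1   # segment containing sk
        centre = wp[i] + (sk - s[i]) / lens[i] * segs[i]
        fwd = segs[i] / lens[i]                        # path direction (assumed non-vertical)
        right = np.cross(fwd, [0.0, 0.0, 1.0])
        right /= np.linalg.norm(right)
        up = np.cross(right, fwd)
        frames.append(np.array([centre - half_w*right - half_h*up,
                                centre + half_w*right - half_h*up,
                                centre + half_w*right + half_h*up,
                                centre - half_w*right + half_h*up]))
    return frames                                      # re-run every display cycle
```

Re-running this each cycle, rather than once at startup, is what lets the channel react to the current flight condition at the cost of a modest per-frame computation.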
We present an Integrated Multisensor Synthetic Imaging System (IMSIS) developed for low-visibility, low-level
operations, tailored to Army rotorcraft. IMSIS optimally avails itself of a variety of image-information
sources: FLIR, mm-Wave RADAR and synthetic imagery are all presented to the flying crew
in real time, on a fused display. Synthetic imagery is validated in real time by a 3D terrain sensing radar, to
ensure that the a priori stored database is valid, and to eliminate any possible aircraft positioning errors
with respect to the database. Extensive human factor evaluations were performed on the fused display
concept. All pilots agreed that IMSIS would be a valuable asset in reduced-visibility conditions and rated the validated
SVS display nearly as flyable as a good FLIR display. The pilots also indicated that the ability to select and fuse terrain
information sources was an important feature.
IMSIS increases situational awareness at night and in all weather conditions, considerably reduces pilot workload
compared to separately monitoring each sensor, and enhances low-level flight safety by updating the terrain in real time.
While specifically designed for helicopter low-level flight and navigation, it can also aid hover, touchdown, and landing
for both fixed- and rotary-wing platforms, as well as navigation in non-airborne domains.
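The abstract does not detail the validation algorithm, but the core idea (checking the stored database against live radar returns and removing positioning error with respect to the database) can be sketched in one dimension as a profile-correlation search. Everything below, including the sampling model and the residual interpretation, is an illustrative assumption:

```python
import numpy as np

def estimate_along_track_offset(radar_profile, dem_profile, max_shift=20):
    """Estimate the along-track misregistration (in samples) between a
    radar-sensed terrain elevation profile and the profile predicted from
    the stored database, by minimising the mean squared difference over
    candidate shifts.  A persistently large residual would suggest the
    stored database cannot be trusted at this location."""
    radar = np.asarray(radar_profile, float)
    dem = np.asarray(dem_profile, float)
    n = min(len(radar), len(dem))
    best_shift, best_err = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        i0, i1 = max(0, -shift), min(n, n - shift)     # overlap where dem[i+shift] is valid
        err = np.mean((radar[i0:i1] - dem[i0 + shift:i1 + shift]) ** 2)
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift, best_err
```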
Preventing runway incursions is considered a top safety priority for the National Transportation Safety Board and is a
growing problem among commercial air traffic at controlled airfields. This problem only increases in difficulty when the
weather and airfield conditions become severely degraded. Such is the case in this Air Force Research Laboratory
(AFRL) work, which focused on the decision making process of aircrew landing under near zero-zero weather at an
unimproved airfield. This research is a part of a larger demonstration effort using sensor technology to land in near zero-zero
weather at airfields that offer no or unreliable approach guidance. Using various head-up (HUD) and head-down
(HDD) display combinations that included the sensor technology, pilot participants worked through the decision of
whether the airfield was safe to land on or required a go-around. The runway was considered unsafe only if the boundary
of the runway was broken by an obstacle causing an incursion. A correct decision was one that allowed the aircrew to land
on a safe runway and to go around when an incursion was present. While going around is usually considered a safe
decision, in this case a false positive could have a negative mission impact by preventing subsequent landing attempts. In
this study we found a combination of display formats that provided the greatest performance without making significant
changes to an existing avionics suite.
Military helicopter operations are often constrained by environmental conditions including low light levels and poor
weather. Recent operational experience has also shown the difficulty presented by certain terrain when operating at low
altitude by day and night: for example, poor visual cues when flying over featureless terrain with low scene contrast, or
obscuration of vision caused by wind-blown and re-circulated dust at low level (brown-out). These types of conditions
can result in loss of spatial awareness and loss of precise control of the aircraft. Atmospheric obscurants such as fog,
cloud, rain and snow can similarly lead to hazardous situations. Day Night All Weather (DNAW) systems applied
research, sponsored by UK MOD, has developed a systematic, human-centred approach to understanding and
developing pilotage display systems for challenging environments. A prototype DNAW system has been developed
using an incremental flight test programme, leading to the flight assessment of a fully integrated pilotage display
solution, trial HAWKOWL, installed in a Sea King helicopter. The system comprises several sub-systems, including: a
multi-spectral sensor suite; image processing and fusion; head-down and head-tracked Display Night Vision Goggles;
onboard mission planning and route generation; precision navigation; dynamic flight path guidance; and conformal,
task-dependent symbology. A variety of qualitative and quantitative assessment techniques have been developed and applied
to determine the performance of the system and the capability it provides. This paper describes the approach taken in the
design, implementation and assessment of the system and identifies key results from the flight trial.
To enable safe use of Synthetic Vision Systems (SVS) at lower altitudes, real-time sensor measurements
are required to ensure the integrity of terrain and obstacle models stored in the onboard SVS and to detect
hazards that may have been omitted from the stored models. This paper discusses various aspects of using
X-band weather radar for terrain database integrity monitoring and terrain referenced navigation. Feature
extraction methods will be addressed to support the correlation process between the weather radar
measurements and the stored terrain databases. Furthermore, improved weather radar antenna models will
be discussed to more reliably perform the shadow detection and extraction (SHADE) functionality. In
support of the navigation function, methods will be introduced to estimate aircraft state information, such
as velocity, from the geometrical changes in the observed terrain imagery. The outputs of these methods
will be compared to the state estimates derived from Global Positioning System (GPS) and Inertial
Navigation System (INS) measurements. All methods discussed in this paper will be evaluated using flight
test data collected with a Gulfstream V in Reno, NV.
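As a hedged illustration of the navigation-support function (estimating aircraft state from geometrical changes in observed terrain imagery), the following sketch derives a ground-speed estimate from the pixel displacement of a central terrain patch between two consecutive frames. The template-matching scheme, patch size, and ground-sample-distance model are assumptions, not the paper's method:

```python
import numpy as np

def ground_speed_from_frames(img0, img1, gsd_m, dt_s, max_disp=32):
    """Estimate ground speed from the pixel displacement of a central 64x64
    terrain patch between two consecutive frames, using zero-mean
    cross-correlation.  gsd_m is metres per pixel at the imaged range."""
    h, w = img0.shape
    y0, x0 = h // 2 - 32, w // 2 - 32
    tmpl = img0[y0:y0 + 64, x0:x0 + 64].astype(float)
    tmpl -= tmpl.mean()
    best_score, best_dxy = -np.inf, (0, 0)
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            yy, xx = y0 + dy, x0 + dx
            if yy < 0 or xx < 0 or yy + 64 > h or xx + 64 > w:
                continue
            patch = img1[yy:yy + 64, xx:xx + 64].astype(float)
            score = np.sum(tmpl * (patch - patch.mean()))
            if score > best_score:
                best_score, best_dxy = score, (dx, dy)
    dx, dy = best_dxy
    return np.hypot(dx, dy) * gsd_m / dt_s             # speed over ground, m/s
```

An estimate of this kind is what would be compared against the GPS/INS-derived velocity in the integrity evaluation the paper describes.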
In its research project ADVISE-PRO (Advanced visual system for situation awareness enhancement - prototype,
2003-2006), which is presented in this contribution, DLR has combined elements of Enhanced Vision and Synthetic
Vision into one integrated system to allow all low-visibility operations independently of the infrastructure on the ground.
The core element of this system is the adequate fusion of all information that is available on-board. This fusion process is
organized in a hierarchical manner. The most important subsystems are a) the sensor based navigation which determines
the aircraft's position relative to the runway by automatically analyzing sensor data (MMW, IR, radar altimeter) without
using either (D)GPS or precise knowledge about the airport geometry, b) an integrity monitoring of navigation data
and terrain data which verifies on-board navigation data ((D)GPS + INS) with sensor data (MMW-Radar, IR-Sensor,
Radar altimeter) and airport / terrain databases, c) an obstacle detection system and finally d) a consistent description of
situation and respective HMI for the pilot.
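A minimal sketch of what subsystem (b), the integrity monitoring of navigation data, might reduce to is a consistency test between the two independent position sources. The data structure, threshold rule, and parameter k below are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class NavSolution:
    x: float                      # runway-relative position, metres
    y: float
    sigma: float                  # 1-sigma horizontal uncertainty, metres

def integrity_ok(gps_ins: NavSolution, sensor_nav: NavSolution, k: float = 5.0) -> bool:
    """Flag an integrity failure when the (D)GPS/INS position and the
    independent sensor-based position disagree by more than k combined
    sigmas.  Returns False when an alert should be raised."""
    dist = ((gps_ins.x - sensor_nav.x) ** 2 + (gps_ins.y - sensor_nav.y) ** 2) ** 0.5
    threshold = k * (gps_ins.sigma ** 2 + sensor_nav.sigma ** 2) ** 0.5
    return dist <= threshold
```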
We present a system for UAV automated landing that requires minimal landing site preparation, no additional
electronics, and no additional aircraft equipment of any kind. This is a Joint-UAV solution that will work equally well
for land-based aircraft and for shipboard recoveries. Our proposed system requires only a simple target that can be
permanently painted on a runway, laid out in a roll-up mat, or potentially denoted with chemical lights for night
operations. Its appearance is unique when seen from the optimal approach path, and from other angles its perspective
distortion indicates the necessary correction. By making continual adjustments based on this feedback, the plane can
land in a small area at the desired angle. We time the pre-touchdown flare using only a 2D visual reference. Assuming a
constant closing speed, we can estimate the time to contact and initiate a controlled flare at a predetermined interval.
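The flare-timing idea can be made concrete with the standard optical time-to-contact relation tau ≈ s / (ds/dt), where s is the target's apparent size in the image. The sketch below is an illustrative rendering of that relation under the constant-closing-speed assumption, not the authors' implementation; the threshold value is a placeholder:

```python
def time_to_contact(size_prev, size_now, dt):
    """Optical time-to-contact: tau ~= s / (ds/dt), where s is the landing
    target's apparent size in the image.  Valid under the constant
    closing-speed assumption; no range sensor is needed."""
    expansion_rate = (size_now - size_prev) / dt
    if expansion_rate <= 0.0:
        return float('inf')            # not closing on the target
    return size_now / expansion_rate

FLARE_TAU_S = 2.0                      # illustrative threshold, not the authors' value

def should_flare(size_prev, size_now, dt):
    """Trigger the pre-touchdown flare when tau falls below the preset value."""
    return time_to_contact(size_prev, size_now, dt) < FLARE_TAU_S
```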
The guidance information that is available to the UAV operator typically suffers from limitations of data update rate and
system latency. Even when using a flight director command display, the manual control task is considerably more
difficult compared to piloting a manned aircraft. Results from earlier research into perspective guidance displays show
that these displays provide performance benefits and suggest a reduction of the negative effects of system latency. The
current study has shown that, with limited data update rates and system latency, the use of a conformal sensor
overlay showing a perspective presentation of the trajectory constraints is consistently superior to the flight director
command display. The superiority becomes more pronounced with an increase in data latency and a decrease in update
rate. The fact that the perspective pathway overlay as used in this study can be implemented on any graphics system that
is capable of rendering a set of 2-D vectors makes it a viable candidate for upgrades to current systems.
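The claim that the overlay needs only 2-D vector rendering follows from the fact that the perspective pathway is just projected geometry. A minimal pinhole-projection sketch is given below; coordinate conventions and names are assumptions, and a real system would clip segments rather than drop vertices:

```python
import numpy as np

def pathway_segments(points_cam, f_px, cx, cy):
    """Project 3-D pathway vertices (camera frame: x right, y down, z forward)
    to 2-D display coordinates with a pinhole model, returning the line
    segments any basic vector-graphics system could draw.  For brevity this
    drops vertices behind the eye instead of clipping the segments."""
    pts = np.asarray(points_cam, float)
    pts = pts[pts[:, 2] > 1.0]                     # keep vertices in front of the eye
    u = f_px * pts[:, 0] / pts[:, 2] + cx
    v = f_px * pts[:, 1] / pts[:, 2] + cy
    verts = np.stack([u, v], axis=1)
    return [(verts[i], verts[i + 1]) for i in range(len(verts) - 1)]
```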
Experiments and flight tests have shown that a Head-Up Display (HUD) and a head-down, electronic moving map
(EMM) can be enhanced with Synthetic Vision for airport surface operations. While great success in ground operations
was demonstrated with a HUD, the research noted two major HUD limitations during ground operations: its
monochrome form and limited, fixed field of regard. Emerging Head-Worn Displays (HWDs) may offer a solution to
these limitations. HWDs are small, lightweight, full-color display devices that may be worn
without significant encumbrance to the user. By coupling the HWD with a head tracker, an unlimited field of regard may
be realized for commercial aviation applications. In this paper, the results of two ground simulation
experiments conducted at NASA Langley are summarized. The experiments evaluated the efficacy of head-worn
display applications of Synthetic Vision and Enhanced Vision technology to enhance transport aircraft surface
operations. The two studies tested a combined total of six display concepts: (1) paper charts with existing cockpit
displays; (2) a baseline consisting of existing cockpit displays, including a Class III electronic flight bag display of the
airport surface; (3) an advanced baseline that also included displayed traffic and routing information; (4) a modified
version of a HUD and EMM display demonstrated in previous research; (5) an unlimited field-of-regard, full-color,
head-tracked HWD with a conformal 3-D synthetic vision surface view; and (6) a fully integrated HWD concept. The fully integrated HWD
concept is a head-tracked, color, unlimited field-of-regard concept that provides a 3-D conformal synthetic view of the
airport surface integrated with advanced taxi route clearance, taxi precision guidance, and data-link capability. The
results of the experiments showed that the fully integrated HWD provided greater path performance compared to using
paper charts alone. Further, when comparing the HWD with the HUD concept, there were no differences in path
performance. In addition, the HWD and HUD concepts were rated the same, via paired comparisons, in terms of
situational awareness and workload. However, there were over twice as many taxi incursion events with the HUD as with
the HWD.
Displaying video on a head-up display from an Enhanced Vision System camera or Synthetic Vision System engine
presents some unique challenges not seen on conventional head-down flight deck displays. All information displayed on
the HUD has to be seen against a background that can vary from bright sunlight to a dark night sky. The video has to
include enough grayshade information to support visual identification of runway features and the image shown on the
HUD has to be visually aligned to the real world accurately enough to support low-visibility operations at airports. The
pilot needs to clearly see the image on the HUD but also needs to see the real world through the display when it can be
seen with the naked eye. In addition, the video display cannot interfere with the display of existing flight information
symbology.
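One way to picture the grayshade-versus-ambient constraint is a luminance mapping in which the video's peak drive level tracks the measured outside-scene brightness. The sketch below is purely illustrative; the contrast ratio, grayshade count, and luminance limits are assumed values, not figures from the paper:

```python
import numpy as np

def hud_video_luminance(frame, ambient_cd_m2, contrast=1.2, grayshades=32,
                        peak_min=1.0, peak_max=30000.0):
    """Map normalised sensor video (0..1) to HUD luminance commands so that
    the brightest video level tracks the outside-scene ambient luminance at
    a fixed contrast ratio, quantised to a set number of grayshades."""
    peak = np.clip(contrast * ambient_cd_m2, peak_min, peak_max)   # cd/m^2
    levels = np.round(frame * (grayshades - 1)) / (grayshades - 1)
    return levels * peak
```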
Distributed aperture sensor (DAS) systems can enhance the situational awareness of operators in both manned and
unmanned platforms. In such a system, images from multiple sensors must be registered and fused into a seamless
panoramic mosaic in real time, whilst being displayed with very low latency to an operator. This paper describes an
algorithm for solving the multiple-image alignment problem and an architecture that leverages the power of consumer
graphics processing units (GPU) to provide a live panoramic mosaic display. We also describe other developments
aimed at integrating high resolution imagery from an independently steerable fused TV/IR sensor into the mosaic,
panorama stabilisation and automatic target detection.
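The multiple-image alignment step can be sketched with a standard homography-based registration and blend. In the paper this stage runs on the GPU; the illustration below uses OpenCV CPU calls as stand-ins, and the simple 50/50 blend is an assumption rather than the authors' seam handling:

```python
import cv2
import numpy as np

def add_to_mosaic(mosaic, image, pts_image, pts_mosaic):
    """Register one sensor image into the panoramic mosaic from matched point
    pairs (RANSAC homography) and blend it into the overlap region."""
    H, _ = cv2.findHomography(np.float32(pts_image), np.float32(pts_mosaic),
                              cv2.RANSAC, 3.0)
    h, w = mosaic.shape[:2]
    warped = cv2.warpPerspective(image, H, (w, h))
    mask = cv2.warpPerspective(np.ones(image.shape[:2], np.uint8), H, (w, h)) > 0
    out = mosaic.copy()
    out[mask] = (0.5 * out[mask] + 0.5 * warped[mask]).astype(mosaic.dtype)
    return out
```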
Previous research has shown that the use of configural displays allows people to more easily detect changes in dynamic
processes for integration tasks, thereby enhancing operator performance, yet the benefit of configural displays for
operator situation awareness (SA) has yet to be assessed. To test whether or not the use of configural displays impacts
the formation of pilot SA, a computer-based study was undertaken using two presentation rates (500ms and 1000ms)
and three configural display formats (Mil-Std-1787 HUD, Dual-articulated (DA) HUD, and the Arc Segment Attitude
Reference (ASAR)) to present aircraft flight reference information to pilots. One of five questions were possible
following the removal of the display from the screen, a query about aircraft airspeed, altitude, flight path angle (climb or
dive) or bank angle. The aim of the study was to demonstrate the ability to provide an increase in operator SA by
utilizing emergent features in configural displays to increase cue saliency and thereby increase operator SA. The
analysis of pilots' recall of aircraft flight path angle (percent correct) showed that pilots were significantly more aware
of aircraft attitude with the ASAR than with either the MIL-STD 1787 or DA HUD formats. There was no difference
among displays for recall of actual flight path angle (RMS error). The results are discussed in terms of the use of
configural displays as a design approach in representing task goals to facilitate operator SA.
Of all incidents on the aerodrome surface, Runway Incursions, i.e. the incorrect presence of an aircraft on a runway, are
by far the most safety-critical, resulting in many fatalities if they lead to an accident. A lack of flight crew situational
awareness is almost always a causal factor in these occurrences, and like any Runway Incursion, the special case of
choosing a closed or unsuitable runway - including mistaking a taxiway for a runway - may have catastrophic consequences,
as the Singapore Airlines Flight SQ006 accident at Taipei in 2000 and, most recently, Comair Flight 5191,
tragically show. In other incidents, such as UPS Flight 896 at Denver in 2001 departing from a closed runway or China
Airlines Flight 11 taking off from a taxiway at Anchorage in 2002, a disaster was only avoided by mere luck.
This paper describes how the concept for an onboard Surface Movement Awareness and Alerting System (SMAAS) can
be applied to this special case and might help to prevent flight crews from taking off or landing on closed runways, unsuitable
runways or taxiways, and presents initial evaluation results. An airport moving map based on an ED-99A/DO-
272A compliant Aerodrome Mapping Database (AMDB) is used to visualize runway closures and other applicable airport
restrictions, based on NOTAM and D-ATIS data, to provide the crew with enhanced situational awareness in terms
of position and operational environment. If this is not sufficient to prevent a hazardous situation, e.g. in case the crew is
distracted, a tailored alerting concept consisting of both visual and aural alerts consistent with existing warning systems
catches the crew's attention.
For runway closures and restrictions, particularly those of a temporary nature, the key issue for both extended situational
awareness and alerting is how to get the corresponding data to the aircraft's avionics. Therefore, this paper also develops
the concept of a machine-readable electronic Pre-flight Information Bulletin (ePIB) to bring relevant NOTAM information
to the flight deck prior to the flight, with a possibility to receive updates via data link while the aircraft is airborne.
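To make the alerting concept concrete, the sketch below checks the surface the aircraft is aligned with against NOTAM-derived closure and suitability data. The data model, alert strings, and rules are illustrative assumptions, not the SMAAS implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Runway:
    ident: str
    length_m: float
    closed: bool                 # decoded upstream from NOTAM / D-ATIS data

def takeoff_alert(surface_ident: str, runways: dict,
                  required_length_m: float) -> Optional[str]:
    """Return an alert message if the crew is lined up on a closed runway,
    an unsuitable runway, or a taxiway; None if no alert is warranted."""
    rwy = runways.get(surface_ident)
    if rwy is None:
        return "ON TAXIWAY!"                         # surface is not a runway
    if rwy.closed:
        return f"RUNWAY {rwy.ident} CLOSED!"
    if rwy.length_m < required_length_m:
        return f"RUNWAY {rwy.ident} TOO SHORT!"
    return None
```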
We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision
System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window
visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple
bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR
systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in
the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels,
blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery
are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the
system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View,
resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of
the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification
authorities; this level of realism will become necessary as operational experience with EVS systems grows.
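A minimal sketch of the kind of electronic image-formation effects listed above (fixed-pattern noise, dead pixels, temporal noise) follows; the parameter values and two-stage structure are illustrative assumptions, not SIM-100 internals:

```python
import numpy as np

def make_fixed_pattern(shape, fpn_sigma=0.02, dead_frac=0.001, seed=42):
    """Precompute per-pixel gain (fixed-pattern noise) and a dead-pixel mask;
    a fixed seed keeps the pattern constant across frames."""
    rng = np.random.default_rng(seed)
    gain = 1.0 + fpn_sigma * rng.standard_normal(shape)
    dead = rng.random(shape) < dead_frac
    return gain, dead

def simulate_frame(clean, gain, dead, read_noise=0.01, rng=None):
    """Apply sensor artifacts to one clean rendered frame (values 0..1):
    fixed-pattern gain, temporal read noise, and dead pixels."""
    rng = rng or np.random.default_rng()
    img = clean * gain + read_noise * rng.standard_normal(clean.shape)
    img[dead] = 0.0                                # dead pixels read zero
    return np.clip(img, 0.0, 1.0)
```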
Military helicopter operations are often constrained by environmental conditions, including low light levels and poor
weather. Recent experience has also shown the difficulty presented by certain terrain when operating at low altitude by
day and night: for example, poor pilot cues over featureless terrain with low scene contrast, together with obscuration of
vision due to wind-blown and re-circulated dust at low level (brown-out). These sorts of conditions can result in loss of
spatial awareness and precise control of the aircraft. Atmospheric obscurants such as fog, cloud, rain and snow can
similarly lead to hazardous situations and reduced situational awareness.
Day Night All Weather (DNAW) systems applied research sponsored by UK Ministry of Defence (MoD) has developed
a multi-resolution real time Image Fusion system that has been flown as part of a wider flight trials programme
investigating increased situational awareness. Dual-band multi-resolution adaptive image fusion was performed in real-time
using imagery from a Thermal Imager and a Low Light TV, both co-bore sighted on a rotary wing trials aircraft. A
number of sorties were flown in a range of climatic and environmental conditions during both day and night. (Neutral
density filters were used on the Low Light TV during daytime sorties.) This paper reports on the results of the flight trial
evaluation and discusses the benefits offered by the use of Image Fusion in degraded visual environments.
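The abstract does not specify the adaptive fusion algorithm, but a common multi-resolution scheme it resembles is Laplacian-pyramid fusion with a choose-max detail rule. The sketch below illustrates that generic scheme, assuming co-registered, same-size grayscale inputs (e.g. thermal and low-light TV):

```python
import cv2
import numpy as np

def fuse_multires(a, b, levels=4):
    """Generic multi-resolution fusion sketch (not the trial's adaptive
    algorithm): build Laplacian pyramids of the two bands, keep the
    larger-magnitude detail coefficient at each level/pixel, reconstruct."""
    def lap_pyr(img):
        g, pyr = img.astype(np.float32), []
        for _ in range(levels):
            down = cv2.pyrDown(g)
            up = cv2.pyrUp(down, dstsize=(g.shape[1], g.shape[0]))
            pyr.append(g - up)                     # detail (Laplacian) level
            g = down
        pyr.append(g)                              # coarsest Gaussian level
        return pyr
    pa, pb = lap_pyr(a), lap_pyr(b)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]  # choose-max detail rule
    fused.append(0.5 * (pa[-1] + pb[-1]))          # average the base level
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```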
We propose a color transfer method to give fused multiband nighttime imagery a natural daytime color appearance in a
simple and efficient way. Instead of using the traditional lαβ space, the proposed method transfers the color distribution of
the target image (daylight color image) to the source image (fused multiband nighttime imagery) in a linear color space
named IUV. The transformation between RGB and IUV spaces is simpler than that between RGB and lαβ spaces;
moreover, the IUV space is more suitable for image fusion. The IUV transform can be extended into a general formalism.
We prove that color spaces conforming to this general IUV framework produce the same recoloring results as the IUV
space. Our experiments on infrared and visual images show that the IUV-based color transfer method works surprisingly
well for transferring natural color characteristics of daylight color images to false color fused multiband nighttime
imagery. We also demonstrate that this method can be successfully applied to a variety of images. The images generated
indicate the potential utility of IUV space in color image processing domains.
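The exact RGB-to-IUV matrix is not given in the abstract, so the sketch below substitutes a generic linear luminance/chrominance transform (BT.601 YUV) purely to illustrate the Reinhard-style per-channel statistics transfer that the method performs in a linear space:

```python
import numpy as np

# Stand-in linear matrices: the paper's IUV definition is not reproduced in
# the abstract, so BT.601 YUV is used here only to illustrate the idea.
RGB2IUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
IUV2RGB = np.linalg.inv(RGB2IUV)

def color_transfer(source_rgb, target_rgb):
    """Transfer the target's per-channel mean/std to the source in the
    linear luminance-chrominance space (statistics matching), then return
    to RGB.  Inputs are float arrays with values in 0..1."""
    src = source_rgb.reshape(-1, 3) @ RGB2IUV.T
    tgt = target_rgb.reshape(-1, 3) @ RGB2IUV.T
    out = (src - src.mean(0)) / (src.std(0) + 1e-6) * tgt.std(0) + tgt.mean(0)
    return np.clip(out @ IUV2RGB.T, 0.0, 1.0).reshape(source_rgb.shape)
```

Because the whole pipeline is linear apart from the final clip, the per-channel statistics match is cheap enough for real-time recoloring, which is the efficiency argument the abstract makes for IUV over lαβ.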
While there are many good ways to map sensed reality to two-dimensional displays, mapping non-physical and
possibilistic information can be challenging. The advent of faster-than-real-time systems allows the predictive and
possibilistic exploration of important factors that can affect the decision maker. Visualizing a compressed picture of the
past and of possible factors can assist the decision maker by summarizing information in a cognition-based model, thereby
reducing clutter and perhaps related decision times. Our proposed semantic bifurcated importance field visualization
(SBIFV) uses saccadic eye-motion models to partition the display into possibilistic and sensed data vertically, and
spatial and semantic data horizontally. Saccadic eye movement precedes and prepares decision makers before nearly every
directed action. Cognitive models of saccadic eye movement show that people prefer lateral to vertical saccadic
movement. Studies have suggested that saccades may be coupled to momentary problem-solving strategies. Also, the
central 1.5 degrees of the visual field has 100 times greater resolution than the peripheral field, so concentrating factors
can reduce unnecessary saccades. By packing information according to saccadic models, we can relate important decision
factors, reduce factor dimensionality, and present the dense summary dimensions of semantics and importance. Inter- and
intra-saccade ballistics within the SBIFV provide important clues on how semantic packing assists decision making.
Future directions for SBIFV are to make the visualization reactive and conformal to saccades, specializing targets to
ballistics, such as dynamically filtering and highlighting verbal targets for left saccades and spatial targets for right
saccades.
Conventional daylight video, and other forms of motion imagery, have an expanded role in communication and decision
making as sensor platforms (e.g., unoccupied aerial vehicles [UAVs]) proliferate. Video, of course, enables more
persons to become observers than does direct viewing, and presents a rapidly growing volume of content for those
observers to understand and integrate. However, knowing the identity of objects and gaining an awareness of situations
depicted in video can be challenging as the number of camera feeds increases, or as multiple decision makers rely on the
same content. Graphic additions to streaming video, spatially registered and appearing to be parts of the observed scene,
can draw attention to specific content, reduce uncertainty, increase awareness of evolving situations, and ultimately
produce a type of image-based communication that reduces the need for verbal interaction among observers. This paper
describes how streaming video can be enhanced for decision support using feature recognition and tracking; object
identification, graphic retrieval and positioning; and collaborative capabilities.
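A minimal sketch of the tracking-plus-registered-graphics idea follows, using a stock OpenCV tracker as a stand-in for the paper's feature recognition and tracking pipeline; the tracker choice, label text, and display loop are illustrative assumptions:

```python
import cv2

def annotate_stream(cap, bbox, label="VEHICLE 1"):
    """Track a designated object through streaming video and keep a text
    label registered to it, so the graphic appears attached to the scene.

    cap  -- cv2.VideoCapture source
    bbox -- initial (x, y, w, h) box around the object of interest
    """
    ok, frame = cap.read()
    tracker = cv2.TrackerCSRT_create()        # stock OpenCV tracker as stand-in
    tracker.init(frame, bbox)
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        if found:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("annotated", frame)
        if cv2.waitKey(1) == 27:              # Esc to quit
            break
```

Because the label travels with the tracked object rather than sitting at a fixed screen position, multiple observers of the same feed receive the same spatially registered cue, which is the communication benefit the paper emphasizes.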