Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 804205 (2011) https://doi.org/10.1117/12.884634
Night vision technology has experienced significant advances in the last two decades. Night vision goggles (NVGs) based on gallium arsenide (GaAs) continue to raise the bar for alternative technologies. Resolution, gain, and sensitivity have all improved; the image quality through these devices is nothing less than incredible. Panoramic NVGs and enhanced NVGs are examples of recent advances that increase warfighter capabilities. Even with these advances, alternative night vision devices such as solid-state indium gallium arsenide (InGaAs) focal plane arrays are under development for helmet-mounted imaging systems. The InGaAs imaging system offers advantages over existing NVGs. Two key advantages are: (1) the new system produces digital image data, and (2) the new system is sensitive to energy in the shortwave infrared (SWIR) spectrum. While it is tempting to contrast the performance of these digital systems with the existing NVGs, the advantage of different spectral detection bands leads to the conclusion that the technologies are less competitive and more synergistic. It is likely that, by the end of the decade, pilots will use multi-band devices in the cockpit. As such, flight decks will need to be compatible with both NVGs and SWIR imaging systems.
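The claimed synergy follows from the largely non-overlapping detection bands of the two technologies. A minimal sketch of that point, using representative band edges that are typical published values and not figures from this paper:

```python
def band_overlap(b1, b2):
    """Width of the overlap of two (lo, hi) spectral intervals; 0.0 if disjoint."""
    return max(0.0, min(b1[1], b2[1]) - max(b1[0], b2[0]))

# Representative response bands in micrometers (assumed typical values):
gaas_nvg = (0.6, 0.9)     # Gen III GaAs image-intensifier photocathode
ingaas_swir = (0.9, 1.7)  # standard InGaAs focal plane array

overlap_um = band_overlap(gaas_nvg, ingaas_swir)  # essentially zero: complementary bands
```

With near-zero spectral overlap, each device images scene energy the other cannot, which is consistent with the abstract's conclusion that the technologies are synergistic rather than competitive.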
The insertion of NVGs into aircraft during the late 1970s and early 1980s yielded many lessons learned concerning instrument compatibility with NVGs. These lessons ultimately resulted in specifications such as MIL-L-85762A and MIL-STD-3009, which are now used throughout industry to produce NVG-compatible illuminated instruments and displays for both military and civilian applications. Inserting a SWIR imaging device into a cockpit will require similar consideration. A project evaluating flight deck instrument compatibility with SWIR devices is currently ongoing; aspects of this evaluation are described in this paper. The project is sponsored by the Air Force Research Laboratory (AFRL).
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 804206 (2011) https://doi.org/10.1117/12.885536
This proposed display constructs different images within three layers viewable around a 180-degree arc. The display contains special particles suspended within its image space that, when excited by two different infrared lasers, illuminate to generate images. It consists of a first projector that launches a first infrared laser, forming sequential slices of a 2D image along the length and width of the image space, and a second projector that launches a second infrared laser, creating translational layers across the depth of the image space. This display can be utilized in a variety of applications, such as gaming and air traffic control.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 804208 (2011) https://doi.org/10.1117/12.888082
Combat land vehicles are small relative to the systems that they carry, yet these systems are rapidly increasing in complexity to provide needed improvements to situational awareness, vehicle management, and weapons systems. Processing loads have grown rapidly, driven by vehicle health, weapons, and self-protection requirements, and there are more display functions than ever. All must be accommodated in a limited space where electronics competes with weapons, ammunition, and crew comfort. In this paper we examine a unique system solution for vehicle computing and associated data display that provides system-level advantages from a compact COTS base at a cost that is compatible with Army vehicles. We examine the packaging, operational environment, processing, operator interface, and display design options, with a special focus on the trade-offs. Finally, we project current solutions into a future with expanded applications that exploit new display, materials, and processing technologies in a new, more flexible vehicle display.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 804209 (2011) https://doi.org/10.1117/12.886940
Photonica has developed a new display system, a hybrid combination of technologies, and demonstrated key performance desired by the industry. Photonica's innovation treats the display as a network of devices, scaling up optics rather than electronics and using multiple small electronic components in parallel to drive sectors of a composite fiber-optic screen, adding more pixels by combining many lower-resolution components. Video performance is improved by combining multiple sources per final pixel ("pixel signal processing") to increase final frame rate, contrast, brightness, digital 3D image processing, and color gamut, instead of exclusively trying to improve the performance of a single source per sub-pixel or pixel.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420A (2011) https://doi.org/10.1117/12.884597
A 3-D imaging technique is presented which pairs high-resolution night-vision cameras with GPS to increase the
capabilities of passive imaging surveillance. Camera models and GPS are used to derive a registered point cloud from
multiple night-vision images. These point clouds are used to generate 3-D scene models and extract real-world positions
of mission critical objects. Analysis shows accuracies rivaling laser scanning even in near-total darkness. The technique
has been tested on stereoscopic 3-D video collections as well. Because this technique does not rely on active laser emissions, it is more portable, less complex, less costly, and less detectable than laser scanning. This study investigates
close-range photogrammetry under night-vision lighting conditions using practical use-case examples of terrain
modeling, covert facility surveillance, and stand-off facial recognition. The examples serve as the context for discussion
of a standard processing workflow. Results include completed, geo-referenced 3-D models, assessments of related
accuracy and precision, and a discussion of future activities.
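The core geometric step in this kind of passive 3-D reconstruction, recovering a world point from two registered views, can be sketched as ray triangulation. The midpoint method below is a generic illustration of that step, not the authors' actual pipeline; camera centers and ray directions are assumed to come from the GPS-registered camera models:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two rays c + t*d.
    c1, c2: 3-D camera centers; d1, d2: ray directions (need not be unit length)."""
    dot = lambda u, v: sum(u[i] * v[i] for i in range(3))
    w = [c1[i] - c2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [c1[i] + t1 * d1[i] for i in range(3)]
    p2 = [c2[i] + t2 * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]

# Two cameras whose registered centers and image rays both see the same target:
cam1, ray1 = [0.0, 0.0, 0.0], [5.0, 5.0, 20.0]
cam2, ray2 = [10.0, 0.0, 0.0], [-5.0, 5.0, 20.0]
world_point = triangulate_midpoint(cam1, ray1, cam2, ray2)
```

Repeating this over many matched image features yields the registered point cloud the abstract describes; real pipelines add feature matching, outlier rejection, and bundle adjustment.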
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420B (2011) https://doi.org/10.1117/12.886520
AMOLED microdisplays continue to show improvement in resolution and optical performance, enhancing their appeal for a broad range of near-eye applications such as night vision, simulation and training, situational awareness, augmented reality, medical imaging, and mobile video entertainment and gaming. eMagin's latest development of an HDTV+ resolution technology integrates an OLED pixel of 3.2 × 9.6 microns on a 0.18-micron CMOS backplane to deliver significant new functionality as well as the capability to implement a 1920×1200 microdisplay in a 0.86" diagonal area. In addition to the conventional matrix addressing circuitry, the HDTV+ display includes a very low-power, low-voltage differential signaling (LVDS) serialized interface to minimize cable and connector size as well as electromagnetic interference (EMI), an on-chip set of look-up tables for digital gamma correction, and a novel pulse-width modulation (PWM) scheme that, together with the standard analog control, provides a total dimming range of 0.05 cd/m² to 2000 cd/m² in the monochrome version. The PWM function also enables an impulse drive mode of operation that significantly reduces motion artifacts in high-speed scene changes. An internal 10-bit DAC ensures that a full 256 gamma-corrected gray levels are available across the entire dimming range, resulting in a measured dynamic range exceeding 20 bits. This device has been successfully tested for operation at frame rates ranging from 30 Hz up to 85 Hz. This paper describes the operational features and detailed optical and electrical test results for the new AMOLED WUXGA-resolution microdisplay.
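The quoted figures are self-consistent: combining the 0.05 to 2000 cd/m² dimming range with 256 gray levels at every setting implies a smallest addressable step of roughly 0.05/256 cd/m² at the dimmest setting, hence about 23 bits between that step and full brightness, in line with the stated "exceeding 20 bits". A quick check, assuming for simplicity that the 256 levels span the full luminance of each dimming setting:

```python
import math

max_lum = 2000.0    # cd/m^2, brightest dimming setting (from the abstract)
min_lum = 0.05      # cd/m^2, dimmest setting (from the abstract)
gray_levels = 256   # gray levels available at every setting

# Smallest step at the dimmest setting (linear spacing assumed for this estimate):
smallest_step = min_lum / gray_levels                 # ~2e-4 cd/m^2
dynamic_range_bits = math.log2(max_lum / smallest_step)  # ~23.3 bits
```

This is only an order-of-magnitude sanity check; the paper's measured dynamic range reflects the actual gamma-corrected level spacing.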
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420C (2011) https://doi.org/10.1117/12.887079
Space is at a premium in vehicle turrets. Reducing the footprint of displays inside turrets frees up space for the warfighter. Traditional military ruggedized flat panel displays cannot reside flush with the curved turret wall and consume more space than their advertised size. The lack of turret space also makes balancing human factors difficult. To better meet warfighter needs, alternatives and incremental upgrades to the flat panel displays in turrets were compiled. Each alternative technology was assessed against the constraints of a turret. Benefits, issues, and predictions regarding implementation are summarized. Viable alternatives are being developed into suitable options.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420D (2011) https://doi.org/10.1117/12.887580
Today's modeling software for infrared and fused systems ignores display performance characteristics and their impact on overall system performance. Although the implications of sensor performance and image processing for system performance are well understood, the impacts of display image quality on man-portable system performance are neglected in system-level analysis software such as NVTherm. In addition, production test methodologies for fielded thermal systems often utilize a composite video output signal to characterize thermal camera performance but fail to characterize the impacts of display performance at room temperature and over the complete operating temperature range.
This paper characterizes several key display parameters of active matrix liquid crystal display (AMLCD) and organic light-emitting diode (OLED) microdisplays that are in volume production for night vision applications, and examines their effects on the performance of infrared and fused imaging systems. We present test data on contrast, gray-scale rendition, and fixed-pattern noise for these displays over ranges of luminance and temperature, evaluating the impacts on system-level Minimum Resolvable Temperature (MRT). We conclude that the performance of thermal and fused systems can be significantly degraded depending on the display technology implemented, and that the system impacts of display performance can no longer be ignored by the community at large. The data indicate that modeling software such as NVTherm should be upgraded to include display performance parameters and their impacts on overall system-level performance.
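The mechanism the authors describe can be illustrated with a toy NVTherm-style relation in which MRT scales with spatial frequency divided by the end-to-end MTF, and cascaded component MTFs multiply, so any display MTF below unity inflates system MRT. The Gaussian MTF shapes, cutoff frequencies, and constant below are illustrative assumptions, not measured data from the paper:

```python
import math

def gaussian_mtf(f, f_c):
    """Toy MTF model: exp(-(f/f_c)^2) for spatial frequency f (cyc/mrad)."""
    return math.exp(-(f / f_c) ** 2)

def system_mrt(f, sensor_fc, display_fc, k=0.05):
    """Toy MRT ~ k * f / MTF_system; component MTFs cascade multiplicatively."""
    mtf_sys = gaussian_mtf(f, sensor_fc) * gaussian_mtf(f, display_fc)
    return k * f / mtf_sys

f = 4.0  # spatial frequency of interest, cyc/mrad (assumed)
mrt_good_display = system_mrt(f, sensor_fc=6.0, display_fc=12.0)  # sharp display
mrt_poor_display = system_mrt(f, sensor_fc=6.0, display_fc=5.0)   # soft display
```

Even with an identical sensor, the poorer display raises the modeled MRT, which is the system-level degradation the paper argues analysis tools should capture.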
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420E (2011) https://doi.org/10.1117/12.883726
The purpose of this article is to analyze, based on current technology, the factors that have been implemented in HUD systems, in order to establish their functions and the advantages and disadvantages of these systems, and to compare their relevance when such a system is implemented in an automobile. To fulfill this objective, an optical and perception analysis was carried out on an instrumental setup with characteristics common to any automobile, making it possible to apply a large number of theoretical and practical considerations. Finally, recommendations, considerations, and conclusions are given, all focused on a proposal for the way these systems can be approached.
Human Factors Considerations for Display Systems Engineering
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420F (2011) https://doi.org/10.1117/12.885051
The current state and trajectory of development for display technologies supporting information acquisition, analysis and
dissemination lends a broad informational infrastructure to operators of complex systems. The amount of information
available threatens to outstrip the perceptual-cognitive capacities of operators, thus limiting their ability to effectively
interact with targeted technologies. Therefore, a critical step in designing complex display systems is to find an
appropriate match between capabilities, operational needs, and human ability to utilize complex information. The present
work examines a set of evaluation parameters that were developed to facilitate the design of systems to support a specific
military need; that is, the capacity to support the achievement and maintenance of real-time 360° situational awareness
(SA) across a range of complex military environments. The focal point of this evaluation is the reciprocity native to advanced engineering and human factors practices, with a specific emphasis on aligning the operator-system-environment fit. That is, the objective is to assess parameters for evaluation of 360° SA display systems that are suitable for military operations in tactical platforms across a broad range of current and potential operational environments. The
approach is centered on five "families" of parameters, including vehicle sensors, data transmission, in-vehicle displays,
intelligent automation, and neuroergonomic considerations. Parameters are examined under the assumption that displays
designed to conform to natural neurocognitive processing will enhance and stabilize Soldier-system performance and,
ultimately, unleash the human's potential to actively achieve and maintain the awareness necessary to enhance lethality
and survivability within modern and future operational contexts.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420G (2011) https://doi.org/10.1117/12.885134
In the process of developing new technologies for displaying 360° visual data supporting Local Area Awareness (LAA)
in complex environments (e.g. tactical military environments), one important, though often overlooked, area is system
evaluation. Without an accurate and reliable evaluation, it is impossible to determine which elements of the new display
are useful and which need further development. Evaluating a system properly requires two types of tests: one for testing
capabilities (e.g. given a display, what types of threats can be detected and identified?), and another for probing whether
a given display configuration is useful (e.g. will the human operator use this more complex interface appropriately in the
real world?). While established methodologies exist for the former, the latter often appears as a much less tractable
problem. This is primarily because of the difficulties with modeling the complexity of the real world in a simulated
environment. This paper presents a methodology for architecting a distributed simulation to support evaluation of a 360°
LAA display system for usefulness to human participants within virtual environments. The evaluation that leveraged the
methodology ultimately reported several unexpected results due to the effectiveness of the evaluation; for example, the
experiment discovered a much greater "keyhole effect" than expected, where participants focused almost entirely on the
forward 180°, even when presented with imagery covering the full 360°. Such results demonstrate the utility of the
methodology, particularly for developing evaluations that discover unexpected aspects of operational use in complex
environments.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420I (2011) https://doi.org/10.1117/12.887102
Head- or helmet-mounted displays (HMDs) give the warfighter in a tracked vehicle a means to maintain visual access to systems information in a high-vibration environment. The high vibration and unique environment of military tracked and turreted vehicles impact the ability to distinctly see certain information on an HMD, especially small fonts or graphics and information that requires long fixation (staring) rather than a brief or peripheral glance. Military and commercial use of HMDs was compiled from market research, market trends, and user feedback. Lessons learned from previous military and commercial HMD products were used to determine the feasibility of HMD use in the high-vibration, unique environments of tracked vehicles. The results are summarized into factors that determine the HMD features which must be specified for successful implementation.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420J (2011) https://doi.org/10.1117/12.883641
Head- or helmet-mounted displays (HMDs) give the warfighter in a tracked vehicle a means to maintain visual access to systems information in a high-vibration environment. The high vibration and unique environment of military tracked and turreted vehicles impact the ability to distinctly see certain information on an HMD, especially small fonts or graphics and information that requires long fixation (staring) rather than a brief or peripheral glance. Military and commercial use of HMDs was compiled from market research, market trends, and user feedback. Lessons learned from previous military and commercial HMD products were used to determine the feasibility of HMD use in the high-vibration, unique environments of tracked vehicles. The results are summarized into factors that determine the HMD features which must be specified for successful implementation.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420K (2011) https://doi.org/10.1117/12.884102
Light-emitting diodes (LEDs) have realized substantial advancements over the past twelve years since Rockwell Collins
began designing LED backlights, resulting in performance improvement and cost reduction for avionics displays.
Display and backlight packaging approaches have evolved to address the challenges associated with these LED
improvements and the backlight emitter count reductions that followed. The objective of this paper is to discuss the
backlight and display packaging design adaptations that helped avionics displays benefit from these LED improvements.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420L (2011) https://doi.org/10.1117/12.884261
ARINC 818 is defined as a point-to-point video link that is used to drive cockpit displays in both military and commercial aerospace applications. In addition to the ARINC 818 video link to a display, a command and control link such as MIL-STD-1553 is often needed to carry bezel button or other configuration and control data. Although ARINC 818 was envisioned as a video link, its high speed, low latency, and high reliability make it ideal to carry both video and data. A bi-directional implementation of ARINC 818 provides ample bandwidth and messaging capability to eliminate the MIL-STD-1553 or ARINC 429 data interface with a single fiber or fiber pair. This paper examines the architecture and messaging structure required to include an ARINC 818 return path so that a separate data path is eliminated.
A bi-directional ARINC 818 architecture is developed that maintains the 100% quality of service required for the video path and includes sufficient bandwidth to replace the low-speed copper data interface. Details are provided on how to utilize the networking capabilities inherent in ARINC 818 to easily enable bi-directional command and control.
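The feasibility argument above is essentially a bandwidth budget: at common ARINC 818 link rates, the video payload leaves ample headroom for the low-rate traffic that MIL-STD-1553 (1 Mb/s maximum) or ARINC 429 (100 kb/s high-speed) would otherwise carry. The link rate, video format, and overhead factor below are representative assumptions for illustration, not figures from the paper:

```python
# Representative numbers (assumptions): a 2.125 Gb/s Fibre Channel link rate,
# 8b/10b line coding, and an XGA-class cockpit video stream.
link_rate_bps = 2.125e9
payload_rate_bps = link_rate_bps * 8 / 10   # usable bits after 8b/10b encoding

video_bps = 1024 * 768 * 24 * 60            # 1024x768 pixels, 24 bpp, 60 Hz
overhead = 1.10                             # ~10% container/header overhead (assumed)

headroom_bps = payload_rate_bps - video_bps * overhead
mil_std_1553_bps = 1e6                      # 1553 bus maximum data rate
```

Under these assumptions the leftover capacity exceeds the entire 1553 bus rate by orders of magnitude, which is why a return path on the same fiber can absorb the command and control traffic without touching the video path's quality of service.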
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420M (2011) https://doi.org/10.1117/12.884910
In addition to military radios, modern warfighters carry cell phones, GPS devices, computers, and night-vision aids, all
of which require electrical cables and connectors for data and power transmission. Currently each electrical device
operates via independent cables using conventional cable and connector technology. Conventional cables are stiff and
difficult to integrate into a soldier-worn garment. Conventional connectors are tall and heavy, as they were designed to ensure secure connections to bulkhead-type panels and, being tall, represent significant snag hazards in soldier-worn applications. Physical Optics Corporation has designed a new, lightweight, low-profile electrical connector that is more suitable for body-worn applications and operates much like a standard garment snap. When these connectors are mated, the combined height is less than 0.3 in., a significant reduction from the 2.5 in. average height of conventional connectors. Electrical connections can be made with one hand (gloved or bare) and blindly (without looking).
Furthermore, POC's connectors are integrated into systems that distribute data or power from a central location on the
soldier's vest, reducing the length and weight of the cables necessary to interconnect various mission-critical electronic
systems. The result is a lightweight power/data distribution system offering significant advantages over conventional
electrical connectors in soldier-worn applications.
Steven P. Scheiner, Dina A. Khan, Alexander L. Marecki, David A. Berman, Dana Carberry
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420N (2011) https://doi.org/10.1117/12.887015
One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). Basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will therefore require that a high-density array of information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., with low latency). Advances in display and sensor technologies are providing never-before-seen opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple operators and crew. This paper examines the systems and software engineering efforts required to overcome these challenges and addresses the development of an affordable, integrated digital video architecture. The approaches evaluated give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420O (2011) https://doi.org/10.1117/12.887108
With the ever-increasing dependency on technology in theater, the amount of information that the warfighter must monitor and process increases as well. With such a large surge of data, the method of information portrayal is critical. Gaps exist between electro-optic sensor information and the optimal display on which to view that information. An assessment was completed by Naval Surface Warfare Center (NSWC) Crane Division's Electro-Optic Technology Division to capture the military display technology gaps for many DoD electro-optic (EO) systems. The results have been compiled along with predictions of when, or if, these gaps will be filled based on commercial market trends.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420P (2011) https://doi.org/10.1117/12.887278
State-of-the-art mobile computing is designed to withstand variable rugged environments. Specific platforms include mobile phones, GPS devices, tablets, netbooks, and laptops that are used by the general public and, increasingly, by dismounted military users.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420Q (2011) https://doi.org/10.1117/12.883036
NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as an adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughput equivalent to that normally achieved during Visual Flight Rules (VFR) operations, with equivalent or better safety, in all weather and visibility conditions including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to assess the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision system concepts. The first experiment evaluated the use of a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. A second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD).
Two experiments were conducted, one in a simulated San Francisco airport (KSFO) approach operation and the other in simulated Chicago O'Hare surface operations, evaluating enhanced/synthetic vision and head-worn display technologies for NextGen operations. While flying a closely-spaced parallel approach to KSFO under low-visibility conditions, pilots rated the HWD equivalent, in terms of situational awareness (SA) and mental workload, to the out-the-window condition under unlimited visibility, as compared to a head-down enhanced vision system. There were no differences between the three display concepts in terms of traffic spacing and distance or pilot decision-making to land or go around. For the KORD experiment, the visibility condition was not a factor in pilots' ratings of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420S (2011) https://doi.org/10.1117/12.883797
The increased prevalence of Closed Circuit Television (CCTV) systems has resulted in the need to view multiple simultaneous camera feeds. In many cases, however, a single sensor unit with a wide field of view can greatly enhance the system operator's situational awareness by providing a single continuous panoramic image. This paper reports on advances that Waterfall Solutions Ltd (WS) has made in the field of wide area surveillance systems and introduces a low-profile, wide field-of-view sensor system, and associated processing, which provides solutions to both of these problems.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420T (2011) https://doi.org/10.1117/12.885902
Synthetic Vision System and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low-visibility operations in the terminal area environment with efficiency equivalent to visual operations. To meet this potential, research is
needed for effective technology development and implementation of regulatory and design guidance to support
introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System
(NextGen) operations.
A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use
of SVS/EFVS in NextGen low visibility ground (taxi) operations and approach/landing operations. Twelve crews flew
approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the
potential for EFVS operations in visibility as low as 1000 ft runway visual range (RVR) and for SVS to enable lower decision heights (DH) than can be flown today. Expanding the EFVS visual segment from the DH to the runway in visibilities as low as 1000 ft RVR appears viable, as touchdown performance was excellent with no workload penalties noted for the EFVS concept tested. A DH as low as 150 ft, and possibly reduced visibility minima, by virtue of SVS equipage also appears viable when implemented on a Head-Up Display, but the landing data suggest further study for head-down implementations.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420U (2011) https://doi.org/10.1117/12.885960
Today, pilots have to obtain required information from a number of different sources, such as airport/SID/STAR/approach or enroute charts (or their electronic representations), printouts such as the flight plan or a weather briefing, and updates via voice communications. The flight crew is required to mentally combine all of this information. This situation will become even more difficult to cope with in the SESAR/NextGen world, with dynamic changes of the trajectory (flight plan) and more frequent updates of weather, NOTAMs, and other information requiring a higher degree of automation and better information presentation.
To address these issues, lower the pilot's workload, and increase situational awareness, a concept is presented in which all required information is provided through one application. Depending on the phase of flight (taxi-in/taxi-out, departure, enroute, arrival, approach), the application selects the currently required information and provides a seamless representation for the crew. The challenge is to provide the right information to the crew at the right time (e.g., significant weather moving into the path of the flight plan).
The focus of this paper is on the components of the new application related to ground operations. These include an enhanced, AMM-like view with integrated taxi-routing support, graphical and textual display of chart notes (e.g., wingspan restrictions or taxiway closures), and updates of such information by automatic inclusion of digital NOTAMs.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420V (2011) https://doi.org/10.1117/12.882721
Resolution is one of the key parameters characterizing the imaging capability of a sensor. One approach to determining the resolution of a sensor/display system is to use a resolution target pattern to find the smallest "resolved" element, which typically requires a human in the loop to make the assessment. This paper compares the results of a
software approach to generate an effective resolution value for a sensor with human vision results using the same
images. Landolt Cs were selected as the resolution target, which were imaged at multiple distances from multiple
sensors. The images were analyzed using the software to determine the orientation of the C at each distance, resulting in
a probability of correct orientation detection curve as a function of distance. Probability of correct orientation detection
as a function of distance was also obtained directly from subjects that viewed the imagery. These curves were then used
to generate "resolution" values for the sensor using the software results and the subject results. Resolution results for
both the software and the participants were obtained for four different spectral band sensors as well as for fused images
from two pairs of sensors.
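The procedure described in this abstract reduces to finding the distance at which the probability-of-correct-orientation curve crosses a chosen criterion. As a minimal sketch (not the authors' software; it assumes simple linear interpolation between measured points rather than a fitted psychometric function, and a 4-alternative orientation task with chance level 0.25):

```python
def criterion_distance(distances, p_correct, criterion=0.625):
    """Return the distance at which the probability of correctly
    reporting the Landolt-C orientation falls to the criterion.
    distances: ascending; p_correct: decreasing with distance.
    The default criterion sits midway between chance (0.25 for a
    4-alternative orientation task) and perfect performance."""
    pairs = list(zip(distances, p_correct))
    for (d0, p0), (d1, p1) in zip(pairs, pairs[1:]):
        if p0 >= criterion >= p1:
            if p0 == p1:
                return d0
            # linear interpolation between the bracketing points
            return d0 + (d1 - d0) * (criterion - p0) / (p1 - p0)
    raise ValueError("criterion not bracketed by the measured data")
```

The same routine can be applied to the software-derived and the subject-derived detection curves, making the two "resolution" values directly comparable.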
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420W (2011) https://doi.org/10.1117/12.883683
The DLR project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites) is devoted to
demonstrating and evaluating the characteristics of sensors for helicopter operations in degraded visual environments.
Millimeter wave radar is one of the many sensors considered for use in brown-out. It delivers a lower angular resolution than other sensors; however, it may provide the best dust-penetration capability. In cooperation with the NRC, flight tests on a Bell 205 were conducted to gather sensor data from a 35 GHz pencil-beam radar for terrain mapping, obstacle detection, and dust penetration. In this paper, preliminary results from the flight trials at NRC are presented and the radar's general capabilities are described. Furthermore, insight is provided into the concept of multi-sensor fusion as attempted in the ALLFlight project.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420X (2011) https://doi.org/10.1117/12.883799
Our understanding of sensory processing in animals has reached the stage where we can exploit neurobiological
principles in commercial systems. In human vision, one brain structure that offers insight into how we might detect
anomalies in real-time imaging is the superior colliculus (SC). The SC is a small structure that rapidly orients our eyes
to a movement, sound, or touch that it detects, even when the stimulus is on a small scale; think of a camouflaged movement or the rustle of leaves. This automatic orientation allows us to prioritize the use of our eyes to raise
awareness of a potential threat, such as a predator approaching stealthily. In this paper we describe the application of a
neural network model of the SC to the detection of anomalies in panoramic imaging. The neural approach consists of a
mosaic of topographic maps that are each trained using competitive Hebbian learning to rapidly detect image features of
a pre-defined shape and scale. What makes this approach interesting is the ability of the competition between neurons to
automatically filter noise, yet with the capability of generalizing the desired shape and scale. We will present the results
of this technique applied to the real-time detection of obscured targets in visible-band panoramic CCTV images. Using background subtraction to highlight potential movement, the technique correctly identifies targets as narrow as 3 pixels while filtering small-scale noise.
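The two ingredients named in this abstract, background subtraction to highlight movement and competition between units tuned to a pre-defined shape and scale, can be illustrated with a deliberately simplified sketch. Here a correlation with a fixed template stands in for the trained Hebbian weights, and the winner-take-all step is a local-maximum test; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def detect_anomalies(frame, background, template, threshold=0.5):
    """Highlight movement by background subtraction, then let units
    tuned to a fixed shape/scale compete: only local winners above
    threshold fire. frame, background: 2-D grayscale arrays;
    template: small 2-D kernel encoding the target shape."""
    diff = np.abs(frame - background)
    # each unit's response is the correlation of its weights (here a
    # fixed template, standing in for a trained map) with its patch
    th, tw = template.shape
    H, W = diff.shape
    resp = np.zeros((H - th + 1, W - tw + 1))
    for y in range(resp.shape[0]):
        for x in range(resp.shape[1]):
            resp[y, x] = np.sum(diff[y:y+th, x:x+tw] * template)
    # winner-take-all: a unit fires only if it beats its 3x3
    # neighbourhood, which filters small-scale noise automatically
    winners = []
    for y in range(1, resp.shape[0] - 1):
        for x in range(1, resp.shape[1] - 1):
            if resp[y, x] > threshold and \
               resp[y, x] == resp[y-1:y+2, x-1:x+2].max():
                winners.append((y, x))
    return winners
```

The competition step is what gives the approach its noise robustness: isolated bright pixels rarely win against a neighbour that better matches the target's shape and scale.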
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420Y (2011) https://doi.org/10.1117/12.883807
Real-time Image Registration is a key processing requirement of Waterfall Solutions' image fusion system, Ad-FIRE,
which combines the attributes of high resolution visible imagery with the spectral response of low resolution thermal
sensors in a single composite image. Implementing image fusion at video frame rates typically requires a high-bandwidth video processing capability which, within a standard CPU-type processing architecture, necessitates bulky, high-power components. Field Programmable Gate Arrays (FPGAs) offer the prospect of low power and heat dissipation combined with highly efficient processing architectures for use in portable, battery-powered, passively cooled applications, such as Waterfall Solutions' hand-held or helmet-mounted Ad-FIRE system.
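As an illustration of the kind of operation such a pipeline performs (this is not WS's Ad-FIRE algorithm), the following sketch upsamples a low-resolution thermal frame to the visible sensor's grid, with nearest-neighbour resampling standing in for a real registration step, and alpha-blends the two channels:

```python
import numpy as np

def fuse(visible, thermal, alpha=0.5):
    """Naive fusion sketch: map the low-resolution thermal image onto
    the visible sensor's pixel grid by nearest-neighbour resampling
    (a stand-in for true image registration), then alpha-blend."""
    vh, vw = visible.shape
    th, tw = thermal.shape
    ys = np.arange(vh) * th // vh   # row lookup into the thermal grid
    xs = np.arange(vw) * tw // vw   # column lookup
    thermal_up = thermal[np.ix_(ys, xs)]
    return alpha * visible + (1.0 - alpha) * thermal_up
```

On an FPGA, the per-pixel independence of this blend is exactly what makes the computation amenable to a deeply pipelined, low-power implementation.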
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 80420Z (2011) https://doi.org/10.1117/12.886714
Although attentional tunneling has been known as a phenomenon since at least the late 1970s, it remains an area of high research interest, since it bears on current and future applications of head-up and head-down displays. For example, it is still not fully answered to what degree highly dynamic scenarios influence the pilot's ability to keep up with routine tasks and, vice versa, when and whether dynamic scene changes stay unnoticed under high workload. In
order to further investigate attentional tunneling a generic experimentation environment was set up. The core of the
environment is DLR's flexible sensor simulation suite (F3S). This simulation software can be installed on specialized
simulation platforms, for example a Vision Station, as well as on standard workstations and can be tuned to a simple
view simulation with different levels of realism. It allows for a full and dynamic control of experimental scenarios, for
example possible changes in the environment. For larger scenarios several platforms can be coupled to enable the
investigation of team situations. As one of its key features the set-up includes a full eye-tracking solution that is further
capable of recording dynamic areas of interest. In a first experiment with a student sample, F3S was used as a simple view simulation combined with synthetic approach scenarios. Subjects were asked to detect changes while flying highway-in-the-sky approaches with a head-up display. At the same time, eye gaze positions were tracked. This novel approach to the investigation of attentional tunneling shows that an environmental change, even though visually perceived, is not necessarily cognitively processed at the same time.
Proceedings Volume Display Technologies and Applications for Defense, Security, and Avionics V; and Enhanced and Synthetic Vision 2011, 804210 (2011) https://doi.org/10.1117/12.887672
Passive millimeter wave (PMMW) video holds great promise given its ability to see targets and obstacles through fog, smoke, and rain. However, current imagers produce undesirable complex noise: a mixture of fast shot (snow-like) noise and a slower-forming circular fixed pattern. Shot noise can be removed by a simple gain-style filter, but this can blur objects in the scene. To alleviate this, we measure the amount of Bayesian
surprise in videos. Bayesian surprise is feature change in time which is abrupt, but cannot be accounted for as shot noise.
Surprise is used to attenuate the shot noise filter in locations of high surprise. Since high Bayesian surprise in videos is
very salient to observers, this reduces blurring, particularly in places where people visually attend. Fixed pattern noise is removed after the shot noise using a combination of non-uniformity correction (NUC) and Eigen Image Wavelet Transformation. The combination allows for online removal of time-varying fixed pattern noise even when background motion is absent. It also allows for online adaptation to differing intensities of fixed pattern noise. The fixed pattern and shot noise filters are efficient, allowing for real-time processing of PMMW video. We show several
examples of PMMW video with complex noise that is much cleaner as a result of the noise removal. Processed video
clearly shows cars, houses, trees and utility poles at 20 frames per second.
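The surprise-gated filtering idea can be sketched as follows. This is not the authors' implementation: a running per-pixel Gaussian stands in for the full Bayesian model, the squared deviation in units of the running variance stands in for the KL-divergence surprise measure, and `lr` and `k` are illustrative parameters:

```python
import numpy as np

def surprise_filter(frames, lr=0.1, k=4.0):
    """Per-pixel surprise-gated temporal filter. Each pixel keeps a
    running Gaussian (mean, variance); "surprise" is how far the new
    observation deviates from that model. The temporal smoothing that
    suppresses shot noise is relaxed where surprise is high, so
    genuine scene changes are not blurred away."""
    mean = frames[0].astype(float)
    var = np.full_like(mean, 1.0)
    out = [mean.copy()]
    for f in frames[1:]:
        f = f.astype(float)
        surprise = (f - mean) ** 2 / (var + 1e-6)
        # blend weight -> 1 where surprise is large: trust the frame
        w = np.clip(surprise / k, lr, 1.0)
        mean = (1 - w) * mean + w * f
        var = (1 - lr) * var + lr * (f - mean) ** 2
        out.append(mean.copy())
    return out
```

An isolated shot-noise flash produces only a single-frame deviation and is averaged away, while a persistent change keeps generating surprise until the model, and the output, snap to the new value.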