A well-established approach to developing and testing unmanned aerial vehicles is the simulation of vehicle and environment in commercial game engines. Simulated sensors add valuable capabilities to such setups. This paper presents a millimeter wave radar implementation for the AirSim plugin in Unreal Engine. To obtain the radar response, we use the Unreal Engine and AirSim rendering outputs for surface normals, semantic segmentation of the objects in the scene, and depth distance from the camera. In particular, we calculate the radar cross section for each object in the scene separately, which allows us to assign different material characteristics to different entities. To compute the power return, we take into account the atmospheric attenuation of the signal, based on the radar wavelength, the antenna gain, and the transmitted power. For greater realism, we add noise at different stages of the simulation. Future work to improve the usability and performance of the simulator is presented.
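The per-pixel power computation could look like the following minimal sketch, assuming the three rendered buffers arrive as numpy arrays; the function name, the per-class RCS table, and all parameter values are illustrative rather than part of the actual AirSim implementation:

```python
import numpy as np

def radar_return(depth, normals, seg_ids, rcs_table,
                 p_t=1.0, gain=1000.0, wavelength=0.0039,
                 alpha_db_km=0.3, noise_std=1e-18):
    """Per-pixel received power from the rendered buffers (sketch).

    depth     -- (H, W) slant range to the surface [m] (depth pass)
    normals   -- (H, W, 3) unit surface normals in camera space (normals pass)
    seg_ids   -- (H, W) integer object class per pixel (segmentation pass)
    rcs_table -- 1D array: material RCS factor per class id (assumed input)
    """
    # Incidence term: with camera-space normals, the z component gives the
    # cosine of the angle between view ray and surface (sign convention varies).
    cos_theta = np.clip(np.abs(normals[..., 2]), 0.0, 1.0)
    # Per-object RCS: material factor from the segmentation class times incidence.
    sigma = rcs_table[seg_ids] * cos_theta
    # Two-way atmospheric attenuation over the slant range (alpha in dB/km).
    atten = 10.0 ** (-2.0 * alpha_db_km * (depth / 1000.0) / 10.0)
    # Radar range equation: P_r = P_t G^2 lambda^2 sigma / ((4 pi)^3 R^4).
    r4 = np.maximum(depth, 1e-3) ** 4
    p_r = p_t * gain**2 * wavelength**2 * sigma / ((4.0 * np.pi) ** 3 * r4)
    # Noise stage, as mentioned above: simple additive receiver noise.
    return p_r * atten + np.random.default_rng().normal(0.0, noise_std, depth.shape)
```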
Operating a helicopter in offshore wind parks with degraded visual environments from clouds or fog can endanger crew and material due to the presence of unseen obstacles. Utilizing on-board sensors such as LIDAR or radar, one can sense and record potentially dangerous obstacles. One major challenge is to display the resulting raw sensor data in a way that the crew, especially the pilot, can make use of it without being distracted from their actual task. Augmented reality and mixed reality applications play an important role here. By displaying the data in a see-through helmet-mounted display (HMD), the pilot can be made aware of obstacles that are currently obscured by degraded visual conditions or even by parts of the helicopter. This can be accomplished in a single HMD; no attention sharing between the outside view and a head-down instrument is necessary. The German Aerospace Center (DLR) continuously tests and evaluates both flight-proof and consumer-grade HMDs. One particularly widely known system is the Microsoft HoloLens. DLR will integrate this low-cost HMD into its experimental helicopter ACT/FHS. As a first step toward this goal, a Microsoft HoloLens was integrated into DLR's Air Vehicle Simulator (AVES). The integration process is detailed. The simulation capabilities are described, especially for conformal, open-loop LIDAR sensor data. Furthermore, first concepts of the display format are shown, and strengths and drawbacks of the HoloLens in a cockpit environment are discussed.
Increasing the pilot's situational awareness is a major goal for the design of next-generation aircraft cockpits. A fundamental problem is posed by the pilot's out-the-window view, which is often degraded by adverse weather, darkness, or the aircraft structure itself. A common approach to this problem is to generate an enhanced model of the surroundings via aircraft-mounted sensors and databases containing terrain and obstacle information. In the helicopter domain, the resulting picture of the environment is then presented to the pilot either via a panel-mounted display or via a see-through head-worn display. We investigate a third method of information display. The concept, called Virtual Cockpit, applies a non-see-through head-worn display. With such a virtual reality display, the advantages of established synthetic and enhanced vision systems can be combined while existing limitations can be overcome. In addition to a theoretical discussion of advantages and drawbacks, two practical implementation examples of this concept are shown for helicopter offshore operations. Two human factors studies were conducted in a simulation environment based on the Unity game engine. They demonstrate the general potential of the Virtual Cockpit to become a candidate for a future cockpit in the long term.
Current augmented reality (AR) and virtual reality (VR) displays are targeted at entertainment and home education. However, these headsets can also be used for prototyping and developing display concepts for aviation. In previous papers we demonstrated helmet-mounted enhanced and synthetic vision system (ESVS) displays implemented on commercially available VR headsets. One of the most widely used engines for developing VR and AR applications is the Unity game engine. While it supports a broad range of display hardware, it can be challenging to integrate legacy ESVS software, since its main purpose is the fast development of virtual worlds. To avoid a complete rewrite of such displays, we demonstrate techniques for integrating legacy software in Unity. In detail, we show how render plugins or texture buffers can be used to display existing ESVS output in a Unity project, and we discuss the advantages and drawbacks of these different approaches. Further, we detail problems that arise when the source software is written for a different platform, for example when integrating OpenGL displays in a DirectX environment. While the demonstrated techniques are implemented and tested with the Unity game engine, they can be applied to other game and render engines as well.
The usage of conformal symbology in color head-worn displays (HWDs) opens up a range of new possibilities on modern flight decks. The capability of color augmentation seems especially useful for low flights in degraded visual environments. Helicopter flights in these conditions, including brownout by swirling dust or sand particles, can often lead to spatial disorientation (SD) and result in a significant number of controlled-flight-into-terrain (CFIT) accidents. While first-generation color-capable conformal displays are deployed, practical guidelines for the use of color in these see-through interfaces are yet to be established. A literature survey is carried out to analyze the available knowledge on color use in conformal displays and to identify established methodologies for human-factors experimentation in this domain. Firstly, the key human factors involved in color HWDs are outlined, including hardware design aspects as well as perceptual and attentional aspects. Secondly, research on color perception is mapped out, focusing on investigations of luminance contrast requirements, modeling of color space blending, and development of color correction solutions. Thirdly, application-based research on colored conformal symbology is reviewed, including several simulations and flight experiments. The analysis shows that established luminance contrast requirements need to be validated and that performance effects of colored HWD symbology need more objective measurement. Finally, practical recommendations are made for further research. This literature study has thus established a theoretical framework for future experimental efforts in colored conformal symbology. The Institute of Flight Guidance of the German Aerospace Center (DLR) anticipates conducting experiments within this framework.
The recent evolution of cockpit design has moved beyond the established glass cockpit into new directions. Among them is the virtual enhancement of cockpits through augmented reality (AR) and virtual reality (VR) displays. Helmet-mounted see-through displays are well known in aviation, but opaque VR displays are also of increasing interest. This technology enables the pilot to use virtual instrumentation as an add-on to the real cockpit; even a totally virtualized instrumentation is possible. Furthermore, VR technology allows fast prototyping of, and pilot training in, cockpit environments that are still under development, before a single real instrument is built. We show how commercial off-the-shelf VR hardware can be used to build a prototyping environment. We demonstrate advantages and challenges of using software engines originally built for the games industry. We describe our own integration concept, which re-uses as much of our own software as possible and allows integration with minimal parallel development.
In the past couple of years, research on display content for helicopter operations has headed in a new direction. The goals already reached could evolve into a paradigm change for information visualization. Technology advancements allow implementing three-dimensional, conformal content on a helmet-mounted see-through device. This superimposed imagery inherits the same optical flow as the environment and is supposed to ease switching between display information and environmental cues. The concept is neither pathbreaking nor new, but it has not been successfully established in aviation yet. Nevertheless, there are certainly some advantages to expect, at least from the perspective of a human-centered system design. In the following pages, these next-generation displays will be presented and discussed with a focus on human factors. Beginning with a recall of some human-factors research facts, an experiment comparing the former two-dimensional research displays will be presented. Before introducing the DLR conformal symbol set and the three experiments on an innovative drift indication, related research activities toward conformal symbol sets will be addressed.
A head-worn display combined with accurate head-tracking allows one to show synthetically generated symbols in such a way that they appear as part of the real world. Depending on the specific research context, different terms have been used for this ability to show display elements as parts of the outside world, including contact analog, scene linked, augmented reality, and outside conformal. While the famous highway in the sky was one of the first applications in avionics, over the years more and more conformal counterparts have been devised for aircraft-related instruments. Among them are routing information, navigation aids, specialized landing displays, obstacle warnings, drift indicators, and many more. Conformal displays have been developed for more than 40 years. We present a review of some results, as well as a look ahead at research trends for the coming years. We suggest that naturalism is not the best choice for the design of conformal displays; instead, more abstract representations often yield better pilot acceptance.
Contemporary helmet mounted displays integrate high-resolution display units together with precise head-tracking solutions. This combination offers the opportunity to show symbols in a conformal way. Conformality here means that a hazard symbol is linked to the outside scenery. Thus, a pilot intuitively understands the connection between the symbol and its corresponding terrain feature, even if the feature is not fully visible due to degraded visual conditions. To accomplish this purpose the symbol has to be sufficiently noticeable in terms of size and brightness. However, this gives rise to the danger that parts of the outside scenery are occluded by the symbol. Furthermore, symbols should not clutter the display, in order not to distract the pilot. We present a solution framework of highlighting obstacles by symbols that balance low occlusion against noticeability. Our concept allows including different representations for individual classes of obstacles in a unified way. We detail the implementation of the display symbols. Finally, we present results of a first acceptance test with pilots.
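As an illustration of how such a framework might trade occlusion against noticeability, the sketch below scales symbol size and opacity with distance per obstacle class; the classes, parameter values, and function are invented for this example and do not reproduce the symbol set described above:

```python
# Illustrative per-class display parameters (invented for this sketch):
# a legibility floor keeps symbols noticeable, while an alpha ceiling and
# a distance fade limit how much outside scenery each symbol occludes.
CLASS_PARAMS = {
    "wire":     dict(base_size=24.0, min_size=12.0, max_alpha=0.9),
    "mast":     dict(base_size=32.0, min_size=10.0, max_alpha=0.7),
    "building": dict(base_size=48.0, min_size=8.0,  max_alpha=0.5),
}

def symbol_appearance(obstacle_class, distance_m, ref_range_m=800.0):
    """Screen size [px] and opacity for one obstacle symbol (sketch)."""
    p = CLASS_PARAMS[obstacle_class]
    d = max(distance_m, 1.0)
    # Perspective shrink with distance, but never below the legibility floor.
    size = max(p["min_size"], p["base_size"] * ref_range_m / d)
    # Fade distant symbols so they occlude less of the see-through view.
    alpha = p["max_alpha"] * min(1.0, ref_range_m / d)
    return size, alpha
```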
Modern Enhanced and Synthetic Vision Systems (ESVS) usually incorporate complex 3D displays, for example terrain visualizations with color-coded altitude, obstacle representations that change their level of detail based on distance, semi-transparent overlays, and dynamic labels. All of these elements can be conveniently implemented using a modern scene graph library. OpenSceneGraph offers such a data structure. Furthermore, OpenSceneGraph includes broad support for industry-standard file formats, so 3D data and models from other applications can be used. OpenSceneGraph has a large user community and is driven by open source development. Thus a selection of visualization techniques is readily available, and solutions for common problems can often be found in the community's discussion groups. On the other hand, documentation is sometimes outdated or nonexistent. We investigate which ESVS applications can be realized using OpenSceneGraph and on which platforms this is possible. Furthermore, we take a look at technical and license limitations.
The primary usefulness of helicopters shows in missions where regular aircraft cannot be used, especially in HEMS (Helicopter Emergency Medical Services) operations. This might be due to requirements for landing in unprepared areas without dedicated runway structures, and an extended flexibility to serve more than one previously unprepared target. One example of such missions is search and rescue operations. An important task in such a mission is to locate a proper landing spot near the mission target. Usually, the pilot would have to evaluate possible landing sites by themselves, which can be time-intensive, fuel-costly, and generally impossible when operating in degraded visual environments. We present a method for pre-selecting a list of possible landing sites. After specifying the intended size, orientation, and geometry of the site, a choice of possibilities is presented to the pilot that can be ordered by means of wind direction, terrain constraints like maximal slope and roughness, and proximity to a mission target. The possible choices are calculated automatically either from a pre-existing terrain database or from sensor data collected during earlier missions, e.g., by radar or laser sensors. Additional data like water-body maps and topological information can be taken into account to avoid landing in dangerous areas under adverse visual conditions. In case of an emergency turnaround, the list can be re-ordered to present alternative sites to the pilot. We outline the principal algorithm for selecting possible landing sites, and we present examples of calculated lists.
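A minimal sketch of the selection and ordering step might look as follows, assuming the terrain is given as a regular elevation grid; all thresholds, weights, and names are illustrative and do not reproduce the actual algorithm:

```python
import numpy as np

def rank_landing_sites(elev, cell=1.0, site=16, max_slope_deg=7.0,
                       max_roughness=0.15, target=(0.0, 0.0)):
    """Return candidate sites as (score, row, col), best first (sketch).

    elev -- (H, W) terrain heights [m] from a database or recorded scans
    cell -- grid resolution [m]; site -- square site edge length in cells
    """
    gy, gx = np.gradient(elev, cell)                 # local terrain gradients
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))  # slope angle per cell
    candidates = []
    for r in range(0, elev.shape[0] - site + 1, site):
        for c in range(0, elev.shape[1] - site + 1, site):
            patch = elev[r:r + site, c:c + site]
            max_slope = slope[r:r + site, c:c + site].max()
            roughness = patch.std()                  # simple roughness proxy
            if max_slope > max_slope_deg or roughness > max_roughness:
                continue                             # hard constraint violated
            dist = np.hypot(r * cell - target[0], c * cell - target[1])
            # Lower score is better: close to target, flat, and smooth.
            score = dist + 50.0 * max_slope + 200.0 * roughness
            candidates.append((score, r, c))
    return sorted(candidates)
```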
Landing under adverse weather conditions can be challenging, even if the airfields are well known to the pilots. This is true for civil as well as military aviation. Within the scope of this paper we concentrate especially on fog conditions. The work has been conducted within the project ALICIA, a research and development project co-funded by the European Commission under the Seventh Framework Programme. ALICIA aims at developing new and scalable cockpit applications which can extend the operation of aircraft in degraded conditions: All Conditions Operations. One of the systems developed is a head-up display that can show generated symbology together with a raster-mode infrared image. We detail how we implemented a real-time capable simulation of a combined short-wave and long-wave infrared image for landing. A major challenge was to integrate several already existing simulation solutions, e.g., for visual simulation and sensors, with the required databases. For the simulations, DLR's in-house sensor simulation framework F3S was used, together with a commercially available airport model that had to be heavily modified in order to provide realistic infrared data. Special effort was invested in a realistic impression of runway lighting under foggy conditions. We present results and sketch further improvements for future simulations.
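The visibility of a runway light through fog can be approximated with Allard's law for point lights; the snippet below is a sketch under that standard model, with a detection threshold chosen purely for illustration:

```python
import math

def light_illuminance(intensity_cd, range_m, mor_m):
    """Allard's law: illuminance at the observer of a point light in fog.

    intensity_cd -- light intensity [cd]
    range_m      -- slant range to the light [m]
    mor_m        -- meteorological optical range [m]; by Koschmieder's
                    relation the extinction coefficient is about 3 / MOR.
    """
    sigma = 3.0 / mor_m
    return intensity_cd * math.exp(-sigma * range_m) / range_m**2

# Render a light point only if it exceeds a detection threshold [lux];
# the threshold value here is purely illustrative.
visible = light_illuminance(10000.0, 1200.0, mor_m=800.0) > 1e-6
```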
Recent years have seen a rise in sophisticated navigational positioning techniques. Starting from classical GPS, differential GPS, ground-based augmentation, and raw data submission have opened possibilities for high-precision lateral positioning far beyond what was thinkable before. This yields new perspectives for technologies like ACAS/TCAS by enabling last-minute lateral avoidance as a supplement to the established vertical avoidance. Working together with Ohio University's Avionics Department, DLR has developed and tested a set of displays for situational awareness and lateral last-minute avoidance in a collision situation, implementing some state-of-the-art ideas in collision avoidance. The displays include the possibility to foresee the hazard zone of a possible intruder and thus avoid that zone early. The displays were integrated into Ohio University's experimental airplane, and a flight experiment was conducted as a first evaluation of their applicability. The tests were carried out in fall 2012. We will present the principal architecture of the displays and detail the implementation on the flight carrier. Furthermore, we will give first results of the displays' performance.
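The hazard zone display could be driven by a simple lateral prediction like the sketch below, in which an intruder's protected zone grows with lookahead time to cover maneuver uncertainty; this model and all constants are illustrative, not the implemented logic:

```python
import numpy as np

def hazard_zone(pos, vel, t_look=60.0, step=5.0, protected_m=150.0):
    """Approximate an intruder's lateral hazard zone as a union of disks.

    pos, vel -- 2D intruder position [m] and velocity [m/s]
    Straight-line prediction; the disk radius grows with lookahead time
    to account for possible maneuvers of the intruder.
    """
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    disks = []
    for t in np.arange(0.0, t_look + step, step):
        center = pos + vel * t                       # predicted position
        radius = protected_m + 0.2 * np.linalg.norm(vel) * t
        disks.append((center, radius))
    return disks
```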
Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar, and laser radar) are mounted onto DLR's research helicopter FHS (flying helicopter simulator) to gather sensor data of the surrounding world. A high-performance computer cluster acquires and fuses all the information into one single comprehensive description of the outside situation. While both TV and IR cameras deliver images at frame rates of 25 Hz or 30 Hz, Ladar and mmW radar provide georeferenced sensor data at only 2 Hz or less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or Ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate a re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, combined with fusion of the extracted features and Ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as the data fusion techniques for combining features from TV/IR and Ladar data.
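The fusion step can be pictured as projecting the georeferenced Ladar returns into the IR image and gating them against the extracted image features; the following is a sketch under a pinhole camera model, with invented names and a fixed gating radius:

```python
import numpy as np

def project_ladar(points_cam, fx, fy, cx, cy):
    """Project Ladar points (given in IR-camera coordinates) into the image."""
    z = points_cam[:, 2]
    keep = z > 0.0                          # keep points in front of the camera
    u = fx * points_cam[keep, 0] / z[keep] + cx
    v = fy * points_cam[keep, 1] / z[keep] + cy
    return np.stack([u, v], axis=1), z[keep]

def fuse_features(features_uv, ladar_uv, ladar_range, gate_px=8.0):
    """Attach a range to each image feature from the nearest Ladar return."""
    fused = []
    for f in np.asarray(features_uv, float):
        d = np.linalg.norm(ladar_uv - f, axis=1)
        i = int(d.argmin())
        if d[i] < gate_px:                  # gating rejects implausible pairs
            fused.append((f, float(ladar_range[i])))
    return fused
```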
Flying in degraded visual environment is an extremely challenging task for a helicopter pilot. The loss of the outside visual reference causes impaired situation awareness, high workload, and spatial disorientation, leading to incidents like obstacle or ground hits. DLR is working on identifying ways to reduce this problem by providing the pilot with additional information from fused sensor data. To this end, different display design solutions were developed. In a first study, the design focused on the use of a synthetic head-down display, considering different representations for obstacles, color coding, and terrain features. Results show a subjective preference for the most detailed obstacle display, while objective results reveal better performance for the slightly less detailed display. In a second study, symbology for a helmet-mounted display was designed and evaluated in a part-task simulation. Design considerations focused on different obstacle representations as well as attentional and perceptual aspects associated with the use of helmet-mounted displays. Results are consistent with the first experiment, indicating that the subjectively favored display does not necessarily yield the best detection performance. However, when additional tasks have to be performed, the level of clutter seems to impair the ability to respond correctly to secondary tasks. Thus the favored display type nonetheless seems to be the most promising solution, since it is accompanied by the overall best objective results, integrating both the detection of obstacles and the ability to perform additional tasks.
Project ALLFlight is DLR's initiative to diminish the problem of piloting helicopters in degraded visual conditions. The problem arises whenever dust or snow is stirred up during landing (brownout/whiteout), effectively blocking the crew's vision of the landing site. A possible solution comprises the use of sensors that are able to look through the dust cloud. As part of the project, display symbologies are being developed to enable the pilot to make use of the rather abstract and noisy sensor data. In a first stage, sensor data from very different sensors are fused. This step contains a classification of points into ground points and obstacle points. In a second step, the result is augmented with ground databases and depicted in a synthetic head-down display. Regarding the design, several variations in symbology are considered, including variations in color coding, continuous or non-continuous terrain displays, and different obstacle representations. In this paper we present the basic techniques used for obstacle and ground separation. We choose a set of possibilities for the pilot display and detail the implementation. Furthermore, we present a pilot study, including a human factors assessment with focus on usability and pilot acceptance.
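A very reduced version of the ground/obstacle separation could look like this grid filter, in which the lowest return per cell approximates the local ground surface; the cell size and clearance threshold are illustrative:

```python
import numpy as np

def classify_points(points, cell=2.0, clearance=0.5):
    """Split fused sensor points into ground and obstacle points (sketch).

    points -- (N, 3) array of x, y, z [m], z up
    The lowest return in each grid cell approximates the local ground
    height; points sufficiently above it are labeled as obstacles.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)   # cell index per point
    ground = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):
        ground[key] = min(ground.get(key, np.inf), z)
    floor = np.array([ground[tuple(k)] for k in ij])
    obstacle = points[:, 2] > floor + clearance
    return points[~obstacle], points[obstacle]
```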
Although attentional tunneling has been known as a phenomenon since at least the late 1970s, it is still an area of high research interest, since it bears connections to current and future applications in head-up and head-down displays. For example, it is still not fully answered to what degree highly dynamic scenarios influence the pilot's ability to keep up with routine tasks, and, vice versa, when and whether dynamic scene changes stay unnoticed under high workload. In order to further investigate attentional tunneling, a generic experimentation environment was set up. The core of the environment is DLR's flexible sensor simulation suite (F3S). This simulation software can be installed on specialized simulation platforms, for example a Vision Station, as well as on standard workstations, and can be tuned to a simple view simulation with different levels of realism. It allows full and dynamic control of experimental scenarios, for example changes in the environment. For larger scenarios, several platforms can be coupled to enable the investigation of team situations. As one of its key features, the setup includes a full eye-tracking solution that is furthermore capable of recording dynamic areas of interest. In a first experiment with a student sample, F3S was used as a simple view simulation combined with synthetic approach scenarios. Subjects were asked to detect changes while flying highway-in-the-sky approaches with a head-up display. At the same time, eye gaze positions were tracked. This novel approach to the investigation of attentional tunneling can show that an environmental change, even though visually perceived, is not necessarily cognitively processed at the same time.
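Evaluating gaze against dynamic areas of interest amounts to testing each gaze sample against per-frame AOI geometry; a minimal sketch with axis-aligned rectangles (the actual tooling may well use arbitrary shapes) could be:

```python
import numpy as np

def hits_per_aoi(gaze, aoi_tracks):
    """Count gaze samples falling inside moving areas of interest (sketch).

    gaze       -- (T, 2) gaze position per frame [px]
    aoi_tracks -- dict: name -> (T, 4) per-frame rectangles (x, y, w, h),
                  so an AOI can move with the scene from frame to frame
    """
    counts = {}
    for name, r in aoi_tracks.items():
        inside = ((gaze[:, 0] >= r[:, 0]) & (gaze[:, 0] <= r[:, 0] + r[:, 2]) &
                  (gaze[:, 1] >= r[:, 1]) & (gaze[:, 1] <= r[:, 1] + r[:, 3]))
        counts[name] = int(inside.sum())
    return counts
```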
When used in conjunction with helmet-mounted displays, stereo camera views can provide invaluable advantages, for example in aviation. One of the most common setups is to mount cameras on both sides of the pilot's helmet. However, since these cameras possess a larger baseline than the eyes, distances to perceived objects are misinterpreted by the pilot. This may cause irritation, even sickness, when combined with enhanced displays. Even in the best case, the magnified disparity may lead to exaggerated distance estimations. In this paper, simple computations are presented that can correct hyperstereopsis on the fly. With the availability of fast computer hardware, carrying out these computations in real time comes into reach. Furthermore, we sketch a series of experiments to evaluate the effectiveness of our approach.
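The core of such a correction follows from the pinhole stereo relation d = f B / Z: scaling disparities by the ratio of eye to camera baseline restores the geometry the eyes expect. The sketch below shows this scaling plus a naive nearest-neighbor re-warp; it is an illustration only, and a real-time implementation would handle occlusions and holes more carefully:

```python
import numpy as np

def corrected_disparity(disparity_px, cam_baseline_m=0.30, eye_baseline_m=0.065):
    """Rescale a disparity map from camera to eye geometry (d = f * B / Z)."""
    return disparity_px * (eye_baseline_m / cam_baseline_m)

def rewarp_right(right, disparity_px, corrected_px):
    """Shift right-image pixels so the pair shows the corrected disparity."""
    h, w = right.shape[:2]
    out = np.zeros_like(right)
    cols = np.arange(w)
    for r in range(h):
        # Move each pixel by the disparity difference (nearest neighbor);
        # the positive shift reduces the exaggerated depth cue.
        shift = np.round(disparity_px[r] - corrected_px[r]).astype(int)
        out[r, np.clip(cols + shift, 0, w - 1)] = right[r, cols]
    return out
```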
DLR's Institute of Flight Guidance is involved in many projects dealing with the development of new concepts for flight procedures and pilot assistance functions. This especially includes the topic of enhanced vision (EVS), where processed data from radar and infrared sensors is utilized to augment the pilot's vision. For evaluating these concepts, extensive flight testing has been conducted and results have been published over the last years. Now, DLR has combined its expertise in the field of high-performance sensor simulation on the one hand with the visual simulation for its generic cockpit simulator on the other. The simulation of imaging radar, lidar, infrared, and similar sensors is based mainly on high-performance functions of modern computer graphics hardware (vertex and fragment shaders). The direct combination of these functions with the outside-vision software, which is now based on exactly the same terrain and object geometry, delivers sensor data that correlate perfectly with the visual channel. This combined simulation environment will be the basis for various evaluation trials in the near future, including simulation trials for fixed-wing and rotary-wing aircraft. The paper presents the implemented software and hardware architecture of the cockpit's visual simulator and its coupling to the sensor simulation test suite. First results of recently conducted simulation experiments, including the evaluation of newly proposed flight procedures that apply EVS technology, are given.
Since their introduction by Kohonen, Self-Organizing Maps (SOMs) have been used in various forms for purposes of surface reconstruction. They offer robust and fast approximations of manifold data from unstructured input points while being fairly easy to implement. On the other hand, SOMs have certain disadvantages when used in a setting where sparse, reliable, and spatially unbounded data occur. For example, airborne Lidar sensors generate a continuous stream of point data while flying above terrain. We introduce modifications of the SOM's data structure to adapt it to unbounded data. Furthermore, we introduce a new variation of the learning rule, called rapid learning, that is suitable for sparse but rather reliable data. We demonstrate examples where the surroundings of an aircraft can be reconstructed in almost real time.
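For reference, a standard online SOM update looks like the sketch below; the rapid-learning variation mentioned above modifies this rule, so only the classical form is shown here, with illustrative parameters:

```python
import numpy as np

def som_update(weights, sample, lr=0.5, radius=2.0):
    """One online update of a 2D SOM approximating a scanned surface.

    weights -- (H, W, 3) node positions in space; sample -- one sensor point.
    The best-matching unit and its grid neighborhood move toward the
    sample, so the map gradually conforms to the terrain surface.
    """
    sample = np.asarray(sample, float)
    d = np.linalg.norm(weights - sample, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)     # best-matching unit
    ii, jj = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
    h = np.exp(-((ii - bi)**2 + (jj - bj)**2) / (2.0 * radius**2))
    weights += lr * h[..., None] * (sample - weights)  # neighborhood pull
    return weights
```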
Radar simulation involves the computation of a radar response based on the terrain's normalized radar cross section (RCS). In the past, different models have been proposed for the normalized RCS. While accurate in most cases, they lack intuitive handling. We present a novel approach for computing the mean normalized radar cross section for use in millimeter wave radar simulations, based on Phong lighting. This allows us to model radar power return in an intuitive way using categories of diffuse and specular reflection. The model is computationally more efficient than previous approaches while using only a few parameters. Furthermore, we give example setups for different types of terrain. We show that our technique can accurately model data output from other approaches as well as real-world data.
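The flavor of such a model can be conveyed by a Phong-style backscatter function with a Lambertian diffuse term and a specular lobe around normal incidence; the parameter values below are illustrative examples, not the calibrated terrain setups from the paper:

```python
import numpy as np

def phong_rcs(theta_deg, k_d=0.1, k_s=0.8, m=20.0):
    """Phong-style normalized RCS sigma0 as a function of incidence angle.

    theta_deg -- incidence angle from the surface normal [deg]
    k_d, k_s  -- diffuse and specular reflection strengths
    m         -- specular exponent controlling the width of the lobe
    """
    theta = np.radians(theta_deg)
    return k_d * np.cos(theta) + k_s * np.cos(theta) ** m

# Illustrative material setups: rough terrain scatters mostly diffusely,
# smooth surfaces show a narrow specular peak near normal incidence.
angles = np.linspace(0.0, 80.0, 9)
grass = phong_rcs(angles, k_d=0.08, k_s=0.2, m=4.0)
road = phong_rcs(angles, k_d=0.02, k_s=0.9, m=50.0)
```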
Extending previous work by Doehler and Bollmeyer, we describe a new implementation of an imaging radar simulator. Our approach is based on modern computer graphics hardware, making heavy use of recent technologies like vertex and fragment shaders. Furthermore, to achieve a nearly realistic image, we generate radar shadows by implementing shadow-map techniques on the programmable graphics hardware. The particular implementation is tailored to imitate millimeter wave (MMW) radar but could easily be extended to other types of radar systems.
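The shadow-map test itself is a per-sample depth comparison against a first-hit range image rendered from the radar antenna; the sketch below expresses that shader logic on the CPU, with a simplified angular mapping and illustrative parameters:

```python
import numpy as np

def radar_shadowed(points, depth_map, fov=np.pi / 3.0, eps=0.5):
    """Shadow-map occlusion test for radar samples (CPU sketch of the shader).

    points    -- (N, 3) surface samples in the radar's coordinate frame
    depth_map -- (H, W) first-hit ranges rendered from the antenna position
    A sample lies in radar shadow if something closer already intercepts
    its line of sight, i.e. its range exceeds the stored depth there.
    """
    h, w = depth_map.shape
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 0], points[:, 2])        # azimuth angle
    el = np.arctan2(points[:, 1], points[:, 2])        # elevation angle
    u = np.round((az / fov + 0.5) * (w - 1)).astype(int)
    v = np.round((el / fov + 0.5) * (h - 1)).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    shadowed = np.ones(len(rng), dtype=bool)           # outside view: shadowed
    shadowed[inside] = rng[inside] > depth_map[v[inside], u[inside]] + eps
    return shadowed
```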