Displays supporting stereoscopic viewing and head-coupled motion parallax can enhance human perception of 3D
surfaces and 3D networks, but less so of volumetric data. Volumetric data is characterized by a heavy presence of
transparency, occlusion, and highly ambiguous spatial structure. Many rendering and visualization algorithms and
interactive techniques that enhance perception of volume data have been developed, and their effectiveness has
been evaluated. However, how VR display technologies affect perception of volume data is less well studied. Therefore,
we conducted two formal experiments on how various display conditions affect a participant's depth-perception accuracy
for a volumetric dataset. Our results show effects of VR displays on human depth-perception accuracy for volumetric
data. We discuss the implications of these findings for designing volumetric data visualization tools that use VR displays.
In addition, we compare our results to previous work on 3D networks and discuss possible reasons for, and implications
of, the differing results.
This paper presents the concept, working prototype, and design space of a two-handed, hybrid spatial user interface for minimally immersive desktop VR targeted at multi-dimensional visualizations. The user interface supports dual button balls (6DOF isotonic controllers with multiple buttons) which automatically switch between 6DOF mode (xyz + yaw, pitch, roll) and planar-3DOF mode (xy + yaw) upon contacting the desktop. The mode switch automatically changes a button ball's visual representation between a 3D cursor and a mouse-like 2D cursor while also switching the available user interaction techniques (ITs) between 3D and 2D ITs. Further, the small form factor of the button ball allows the user to engage in 2D multi-touch or 3D gestures without releasing and re-acquiring the device. We call the device and hybrid interface the HyFinBall interface, which is an abbreviation for 'Hybrid Finger Ball.' We describe the user interface (hardware and software), the design space, as well as preliminary results of a formal user study. This is done in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations and interactions.
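The automatic mode switch described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the class and method names (`ButtonBall`, `on_pose_update`) and the contact threshold are assumptions.

```python
# Hypothetical sketch of HyFinBall-style automatic mode switching between
# 6DOF (free space) and planar-3DOF (desktop contact). Names and the contact
# threshold are illustrative assumptions, not from the paper.
from dataclasses import dataclass

DESK_CONTACT_EPS = 0.005  # meters above the desktop counted as contact (assumed)

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

class ButtonBall:
    """Tracks one controller, degrading 6DOF input to planar 3DOF on contact."""

    def __init__(self):
        self.mode = "6DOF"   # "6DOF" pairs with a 3D cursor,
        self.cursor = "3D"   # "3DOF" with a mouse-like 2D cursor

    def on_pose_update(self, pose: Pose) -> Pose:
        # Contact with the desktop switches the device into planar mode.
        if pose.z <= DESK_CONTACT_EPS:
            self.mode, self.cursor = "3DOF", "2D"
            # Planar mode keeps only x, y, and yaw; z, pitch, roll are clamped.
            return Pose(pose.x, pose.y, 0.0, pose.yaw, 0.0, 0.0)
        self.mode, self.cursor = "6DOF", "3D"
        return pose
```

In a real system the same switch would also swap the active interaction-technique set, as the abstract describes.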
In time critical visual analytic environments collaboration between multiple expert users allows for rapid knowledge discovery and facilitates the sharing of insight. New collaborative display technologies, such as multi-touch tables, have shown great promise as the medium for such collaborations to take place. However, under such new technologies, traditional selection techniques, having been developed for mouse and keyboard interfaces, become inconvenient, inefficient, and in some cases, obsolete. We present selection techniques for multi-touch environments that allow for the natural and efficient selection of complex regions-of-interest within a hierarchical geospatial environment, as well as methods for refining and organizing these selections. The intuitive nature of the touch-based interaction permits new users to quickly grasp complex controls, while the consideration for collaboration coordinates the actions of multiple users simultaneously within the same environment. As an example, we apply our simple gestures and actions mimicking real-world tactile behaviors to increase the usefulness and efficacy of an existing urban growth simulation in a traditional GIS-like environment. However, our techniques are general enough to be applied across a wide range of geospatial analytical applications for both domestic security and military use.
We present the framework for a battlefield change detection system that allows military analysts to coordinate and utilize
live collection of airborne LIDAR range data in a highly interactive visual interface. The system consists of three major
components. First, an adaptive, self-maintaining model of the battlefield selectively incorporates the minority of new
data it deems significant while discarding the redundant majority. Second, the interactive interface presents the analyst
with only the minute portion of the data the system deems relevant, provides tools to facilitate the decision-making
process, and adjusts its behavior to reflect the analyst's objectives. Finally, the cycle is completed by the generation of a
goal map for the LIDAR collection hardware that indicates which areas should be sampled next to best advance
the change detection task. Altogether, the system empowers analysts to make sense of a deluge of
measurements by extracting the salient features and continually refining its definitions of relevance.
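The "incorporate the significant minority, discard the redundant majority" step can be sketched as a simple significance filter. This is an illustrative sketch under assumed simplifications (a flat grid model and a fixed elevation-change threshold), not the paper's algorithm.

```python
# Illustrative sketch of significance filtering for incoming LIDAR samples.
# The grid-cell model and fixed threshold are assumptions for this example;
# the actual system adapts its definition of relevance over time.
class ElevationModel:
    def __init__(self, threshold: float = 0.5):
        self.cells = {}             # (i, j) grid cell -> last accepted elevation
        self.threshold = threshold  # meters of change deemed significant (assumed)

    def ingest(self, cell, elevation):
        """Return True if the sample changes the model, False if redundant."""
        old = self.cells.get(cell)
        if old is None or abs(elevation - old) >= self.threshold:
            self.cells[cell] = elevation  # significant: update model, flag analyst
            return True
        return False                      # redundant: discard
```

The fraction of samples returning True is exactly the "minute portion" that would reach the analyst's interface.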
Most terrain models are created based on a sampling of real-world terrain, and are represented using linearly-interpolated
surfaces such as triangulated irregular networks or digital elevation models. The existing methods for the creation of
such models and representations of real-world terrain lack a crucial analytical consideration of factors such as the errors
introduced during sampling and geological variations between sample points. We present a volumetric representation of
real-world terrain in which the volume encapsulates both sampling errors and geological variations and dynamically
changes size based on such errors and variations. We define this volume using an octree, and demonstrate that when
used within applications such as line-of-sight, the calculations are guaranteed to be within a user-defined confidence
level of the real-world terrain.
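A line-of-sight query over such an error-bounded terrain can be sketched with interval reasoning: visibility is guaranteed only when the sight line clears every cell's upper elevation bound, and blockage is guaranteed when it falls below a lower bound. This sketch uses a flat list of per-step bounds in place of the paper's octree traversal; the function name and three-way verdict are assumptions.

```python
# Illustrative sketch of interval-based line-of-sight over error-bounded
# terrain. Each (lo, hi) pair stands in for one traversed octree cell's
# elevation bounds along the sight line; names are assumptions.
def line_of_sight(profile, h0, h1):
    """profile: per-step (lo, hi) terrain elevation bounds along the ray.
    h0, h1: observer and target heights.
    Returns 'visible', 'blocked', or 'uncertain'."""
    n = len(profile)
    verdict = "visible"
    for i, (lo, hi) in enumerate(profile):
        t = (i + 1) / (n + 1)          # fraction of the way observer -> target
        ray_h = h0 + t * (h1 - h0)     # sight-line height at this step
        if ray_h <= lo:
            return "blocked"           # below even the lowest surface bound
        if ray_h <= hi:
            verdict = "uncertain"      # inside the error volume
    return verdict
```

The 'uncertain' band is where the user-defined confidence level would come into play: tightening the (lo, hi) bounds shrinks it.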
A data organization, scalable structure, and multiresolution visualization approach is described for precision markup modeling in a global geospatial environment. The global environment supports interactive visual navigation from global overviews to details on the ground at the resolution of inches or less. This is a difference in scale of 10 orders of magnitude or more. To efficiently handle details over this range of scales while providing accurate placement of objects, a set of nested coordinate systems is used, which always refers, through a series of transformations, to the fundamental world coordinate system (with its origin at the center of the earth). This coordinate structure supports multi-resolution models of imagery, terrain, vector data, buildings, moving objects, and other geospatial data. Thus objects that are static or moving on the terrain can be displayed without inaccurate positioning or jumping due to coordinate round-off. Examples of high resolution images, 3D objects, and terrain-following annotations are shown.
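The round-off problem that the nested coordinate systems solve can be demonstrated in a few lines: single-precision arithmetic at earth-centered magnitudes cannot resolve inch-scale offsets, while a nested frame (a double-precision origin plus a small local offset) can. This is a one-dimensional illustration of the idea, not the paper's transform chain.

```python
# Demonstrates why nested frames are needed: float32 loses millimeter detail
# at earth-radius magnitudes, but preserves it as a small local offset.
# NestedFrame is an illustrative name, not from the paper.
import numpy as np

EARTH_RADIUS = 6.378e6  # meters, approximate

def to_float32_global(p):
    """Naive approach: cast the full earth-centered coordinate to float32."""
    return np.float32(p)

class NestedFrame:
    def __init__(self, origin):
        self.origin = float(origin)  # frame origin kept in double precision

    def local(self, p):
        # Subtract in double precision first; the small residual then
        # survives the cast to float32 with sub-millimeter resolution.
        return np.float32(p - self.origin)
```

Chaining several such frames (globe, region, site, object) gives the series of transformations back to the earth-centered world coordinate system described above.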
Over the past several years there has been a broad effort towards realizing the Digital Earth, which involves the digitization of all earth-related data and the organization of these data into common repositories for wide access. Recently the idea has been proposed to go beyond these first steps and produce a Visual Earth, where a main goal is a comprehensive visual query and data exploration system. Such a system could significantly widen access to Digital Earth data and improve its use. It could provide a common framework and a common picture for the disparate types of data available now and contemplated in the future. In particular, much future data will stream in continuously from a variety of ubiquitous, online sensors, such as weather sensors, traffic sensors, pollution gauges, and many others. The Visual Earth will be especially suited to the organization and display of these dynamic data. This paper lays the foundation and discusses first efforts towards building the Visual Earth. It shows that the goal of interactive visualization requires consideration of the whole process, including data organization, query, preparation for rendering, and display. Indeed, visual query offers a set of guiding principles for the integrated organization, retrieval, and presentation of all types of geospatial data. These include terrain elevation and imagery data, buildings and urban models, maps and geographic information, geologic features, land cover and vegetation, dynamic atmospheric phenomena, and other types of data.
Gaining a detailed and thorough understanding of the modern battle space is vital to the success of any military operation. Military commanders have access to significant quantities of information which originates from disparate and occasionally conflicting sources and systems. Combining this information into a single, coherent view of the environment can be extremely difficult, error-prone, and time-consuming. In this paper we describe the Naval Research Laboratory's Virtual Reality Responsive Workbench (VRRWB) and Dragon software system, which together address the problem of battle space visualization. The VRRWB is a stereoscopic 3D interactive graphics system which allows multiple participants to interact in a shared virtual environment and physical space. A graphical representation of the battle space, including the terrain and the military assets which lie on it, is displayed on a projection table. Using a six-degree-of-freedom tracked joystick, the user navigates through the environment and interacts, via selection and querying, with the represented assets and the terrain. The system has been successfully deployed in the Hunter Warrior Advanced Warfighting Exercise and the Joint Countermine ACTD Demonstration One. In this paper we describe the system and its capabilities in detail, discuss its performance in these two operations, and describe the lessons learned.