Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual
environments and high-resolution tiled display walls. This paper outlines the design and implementation of the
CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is its ability to let users simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive-stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 megapixels (in stereoscopic 3D) or 74 megapixels in 2D, at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph, and VTK applications.
Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods
described here combine high-resolution photography with surround vision and full stereo view in an immersive
environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the
StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also
the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically
created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately
20 feet, or at the object of major interest.
A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces
the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking,
even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo
panoramas created with this acquisition and display technique can be applied without modification to a large array of VR
devices having different screen arrangements and different VR libraries.
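As a minimal sketch of the display geometry described above, the following helper (a hypothetical function; the 20-foot radius and interocular distance are illustrative values consistent with the text) computes the horizontal angular offset that rotates each image sphere so the left and right panoramas are separated by the viewer's interocular distance:

```python
import math

def panorama_eye_transforms(radius_ft=20.0, iod_ft=0.21):
    """Horizontal angular offsets (radians) applied to the left and
    right panorama spheres so that, seen from the sphere's center,
    they are displaced by the interocular distance.
    Hypothetical helper; values are illustrative, not from the paper.
    """
    half_angle = math.atan2(iod_ft / 2.0, radius_ft)
    # left image rotates one way, right image the other
    return -half_angle, +half_angle
```

Because the offset is purely horizontal, the same pair of rotations serves every viewing direction, which is consistent with the text's claim of correct stereo separation even when looking up or down.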
Autostereoscopy (AS) is an increasingly valuable virtual reality (VR) display technology; indeed, the IS&T / SPIE
Electronic Imaging Conference has seen rapid growth in the number and scope of AS papers in recent years. The first
Varrier paper appeared at SPIE in 2001, and much has changed since then. What began as a single-panel prototype has
grown to a full-scale VR autostereo display system, with a variety of form factors, features, and options. Varrier is a
barrier strip AS display system that qualifies as a true VR display, offering a head-tracked ortho-stereo first person
interactive VR experience without the need for glasses or other gear to be worn by the user.
Since Varrier's inception, new algorithmic and systemic developments have produced performance and quality
improvements. Visual acuity has increased by a factor of 1.4X with new fine-resolution barrier strip linescreens and
computational algorithms that support variable sub-pixel resolutions. Performance has improved by a factor of 3X using
a new GPU shader-based sub-pixel algorithm that accomplishes in one pass what previously required three passes. The
Varrier modulation algorithm that began as a computationally expensive task is now no more costly than conventional
stereoscopic rendering. Interactive rendering rates of 60 Hz are now possible in Varrier for complex scene geometry on
the order of 100K vertices, and performance is GPU-bound, hence it is expected to continue improving with future graphics hardware.
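The barrier-strip modulation idea behind the sub-pixel algorithm can be sketched in a simplified 2-D model (assumed geometry and parameter values; the real shader also accounts for linescreen tilt and the distinct positions of the R, G, and B sub-pixels):

```python
def visible_through_barrier(x_sub, eye_x, eye_z,
                            gap=0.006, pitch=0.001, duty=0.25, shift=0.0):
    """True if the sub-pixel at panel coordinate x_sub (metres) is seen
    by an eye at (eye_x, eye_z) through a transparent slit of the line
    screen, which sits `gap` metres in front of the panel.
    Simplified 2-D sketch; all parameter values are illustrative.
    """
    # intersect the eye->subpixel ray with the barrier plane
    t = gap / eye_z                       # fraction of panel-to-eye distance
    x_barrier = x_sub + (eye_x - x_sub) * t
    phase = ((x_barrier - shift) / pitch) % 1.0
    return phase < duty                   # inside a transparent slit?
```

Running this test per sub-pixel, once per eye, decides which eye's image each sub-pixel should carry; doing it in a single fragment-shader pass is what the 3X speedup above refers to.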
Head tracking is accomplished with a neural network camera-based tracking system developed at EVL for Varrier.
Multiple cameras capture subjects at 120 Hz and the neural network recognizes known faces from a database and tracks
them in 3D space. New faces are trained and added to the database in a matter of minutes, and accuracy is comparable
to commercially available tracking systems.
Varrier supports a variety of VR applications, including visualization of polygonal, ray traced, and volume rendered
data. Both AS movie playback of pre-rendered stereo frames and interactive manipulation of 3D models are supported.
Local as well as distributed computation is employed in various applications. Long-distance collaboration has been
demonstrated with AS teleconferencing in Varrier. A variety of application domains such as art, medicine, and science
have been exhibited, and Varrier exists in a variety of form factors from large tiled installations to smaller desktop
forms to fit a variety of space and budget constraints.
Newest developments include the use of a dynamic parallax barrier that affords features that were inconceivable with a static barrier.
The development of a reliable untethered interactive virtual environment has long been a goal of the VR community. Several nonmagnetic tracking systems have been developed in recent years based on optical, acoustic, and mechanical solutions. However, an inexpensive, effective, and unobtrusive tracking solution remains elusive. This paper presents a camera-based three-dimensional hand-tracking system implemented in the PARIS augmented reality environment and used to drive a demonstration application.
This paper describes a cost-effective, real-time (640x480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted at auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected by correlation links and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.
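A toy sketch of the decision stage, assuming the usual LAMSTAR structure of one SOM per input segment with correlation-link weights summed over the winning neurons (all names, weights, and data here are illustrative, not the paper's implementation):

```python
def bmu(weights, vector):
    """Index of the best-matching unit: the SOM neuron whose weight
    vector has the smallest squared Euclidean distance to the input."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], vector)))

def face_score(segments, som_layers, link_weights):
    """Sum the correlation-link weights of the winning neuron in each
    segment's SOM layer; a face is declared if the total clears a
    threshold. Toy model of a LAMSTAR-style decision."""
    return sum(link_weights[k][bmu(som_layers[k], seg)]
               for k, seg in enumerate(segments))
```

The redundancy mentioned in the abstract comes from summing over many segments: a few segments voting wrongly do not flip the overall face/no-face decision.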
The goal of this research is to develop a head-tracked, stereo virtual reality system utilizing plasma or LCD panels. This paper describes a head-tracked barrier auto-stereographic method that is optimized for real-time interactive virtual reality systems. In this method, a virtual barrier screen is created simulating the physical barrier screen, and placed in the virtual world in front of the projection plane. An off-axis perspective projection of this barrier screen, combined with the rest of the virtual world, is projected from at least two viewpoints corresponding to the eye positions of the head-tracked viewer. During the rendering process, the simulated barrier screen effectively casts shadows on the projection plane. Since the different projection points cast shadows at different angles, the different viewpoints are spatially separated on the projection plane. These spatially separated images are projected into the viewer's space at different angles by the physical barrier screen. The flexibility of this computational process allows more complicated barrier screens than the parallel opaque lines typically used in barrier-strip auto-stereography. In addition, this method supports the focusing and steering of images for a user's given viewpoint, and allows for very wide angles of view. This method can produce an effective panel-based auto-stereo virtual reality system.
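The shadow-casting step can be sketched in 2-D: a virtual barrier point sitting `barrier_z` in front of the projection plane (z = 0) is projected from each eye position, and the differing shadow positions are what spatially separate the two eye images on the plane (a simplified model with illustrative names and values):

```python
def barrier_shadow(bx, barrier_z, eye_x, eye_z):
    """X-coordinate where the shadow of a virtual barrier point
    (bx, barrier_z) lands on the projection plane z = 0 when 'lit'
    from an eye at (eye_x, eye_z). Simplified 2-D sketch."""
    s = eye_z / (eye_z - barrier_z)   # ray parameter where z reaches 0
    return eye_x + (bx - eye_x) * s
```

With eyes at x = ±0.03 m and a barrier 6 mm in front of the plane at 0.6 m viewing distance, the two shadows of the same barrier point land about 0.6 mm apart, and the physical barrier then redirects those stripes back toward the matching eyes.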
In this paper we discuss issues involved in creating art and cultural heritage projects in Virtual Reality, with particular reference to one interactive narrative, 'The Thing Growing'. In the first section we briefly discuss the potential of VR as a medium for the production of art and the interpretation of culture. In the second section we describe 'The Thing Growing' project. In the third section we discuss building an interactive narrative in VR using XP, an authoring system we designed to simplify the process of producing projects in VR. In the fourth section we discuss some issues involved in presenting art and cultural heritage projects in VR.
This paper describes an architecture for virtual reality software which transparently supports a number of physical display systems and stereoscopic methods. Accurate, viewer-centered perspective projections are calculated, and graphics display options are set, automatically and independently of application code. The design is intended to allow greater portability of applications between different VR devices.
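The viewer-centered projection such an architecture must compute is commonly expressed as a generalized perspective projection from the tracked eye position and the screen's corner positions; the sketch below follows that standard formulation, not necessarily this paper's exact code:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])
def norm(a):
    m = dot(a, a) ** 0.5
    return tuple(x / m for x in a)

def offaxis_frustum(pa, pb, pc, pe, near):
    """Frustum extents (l, r, b, t) at the near plane for a tracked eye
    pe and a screen given by its lower-left (pa), lower-right (pb), and
    upper-left (pc) corners -- the generalized perspective projection
    used for viewer-centered rendering."""
    vr, vu = norm(sub(pb, pa)), norm(sub(pc, pa))
    vn = norm(cross(vr, vu))            # screen normal, toward the eye
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(va, vn)                    # eye-to-screen distance
    k = near / d
    return (dot(vr, va) * k, dot(vr, vb) * k,
            dot(vu, va) * k, dot(vu, vc) * k)
```

For an eye centered on the screen the frustum comes out symmetric; moving the eye sideways skews the extents, which is exactly the off-axis behavior a head-tracked display needs.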
Tele-Immersion is the combination of collaborative virtual reality and audio/video teleconferencing. With a new generation of high-speed international networks and high-end virtual reality devices spread around the world, effective trans-oceanic tele-immersive collaboration is now possible. But in order to make these shared virtual environments more convenient workspaces, a new generation of desktop display technology is needed.
The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) specializes in virtual reality (VR) and scientific visualization research. EVL is a major potential beneficiary of the guaranteed latency and bandwidth promised by cell-switched networking technology, as the current shared Internet lines are inadequate for doing VR-to-VR, VR-to-supercomputer, and VR-to-supercomputer-to-VR research. EVL's computer scientists are working with their colleagues at Argonne National Laboratory (ANL) and the National Center for Supercomputing Applications (NCSA) to develop an infrastructure that enables computational scientists to apply VR, networking, and scalable computing to problem solving. ATM and other optical networking schemes usher in a whole new era of sustainable high-speed networking capable of supporting telecollaboration among computational scientists and computer scientists in the immersive visual/multi-sensory domain.
This paper discusses (1) a new proofing method of printing Cibachrome from monochrome films, producing higher saturation,
spatial resolution, and dimensional stability; (2) experiments with inks and materials for mass printing of barrier-strip and
lenticular autostereograms; (3) photographic enlargement and barrier-strip scaling; (4) techniques for combining photographs of
real objects with computer backgrounds; and (5) the mathematics of projection and interleaving cylindrical (non-planar)
autostereograms producing up to a 360° viewing angle.