Molecular structure data obtained from experimental measurements or theoretical calculations are often very hard to interpret. Typical sources of such data are X-ray diffraction techniques, NMR techniques and quantum mechanical calculations. The resulting 3D data, such as electron density maps or atom positions, are complex objects that require sophisticated visualization methods. In the first part of this article we discuss several data interpretation problems for which stereoscopic visualization is strongly recommended. In the second part, we give an overview of existing chemical software supporting stereoscopic visualization. We illustrate the methods necessary for implementing stereoscopic visualization using the development of our MCE code, software we created for the interpretation of X-ray diffraction data and quantum mechanical calculations. Based on our practical experience, we conclude by summarizing the requirements for an ergonomically comfortable working environment for the everyday use of stereoscopic visualization in chemical structure analysis.
The present paper presents hands-on results from the use of a large-screen stereoscopic installation to train technicians in maintenance tasks on large machinery for a leading mechanical-engineering firm. Such machinery, deployed by the company in remote locations around the world, needs complex and lengthy maintenance procedures, performed periodically by teams of highly trained technicians. The firm organizes continuous training classes for a large number of its technicians, using qualified trainers and continuously updated machinery documentation, which results in long and expensive periods of technician inactivity. Classes involve training on assembly and disassembly operations for the company's complex mechanical products and were traditionally based on video documentation, 2D mechanical drawings and live demonstrations on real equipment. In an attempt to improve this process, the firm equipped some of its training centers with large stereoscopic projection facilities and dedicated software, introducing real-time stereo rendering of CAD models for virtual disassembly/assembly sequences. The firm then investigated the potential benefits of the new methodology compared with the traditional one. The present article gives an overview of the technological framework used and outlines the results of this comparison, performed over a period of six months.
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These environments provide more immersive and spatially realistic presentations of the proposed designs. This paper presents the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper discusses several architectural and engineering design visualizations we have produced.
This paper briefly describes part of the experience gathered in more than 10 years of stereoscopic movie production, some of the most common problems encountered, and the solutions, with varying degrees of success, that we applied to those problems. Our work is mainly focused on the entertainment market: theme parks, museums, and other culture-related venues and events. For our movies, we have been forced to develop our own devices to permit correct stereo shooting (stereoscopic rigs) and real-time stereo monitoring, and to solve problems found with conventional film editing, compositing and postproduction software. Here, we discuss stereo lighting, monitoring, special effects, image integration (using dummies and more), stereo-camera parameters, and other general 3-D movie production aspects.
This paper describes our experience making a short stereoscopic
movie visualizing the development of structure in the universe
during the 13.7 billion years from the Big Bang to the present day.
Aimed at a general audience for the Royal Society's 2005 Summer
Science Exhibition, the movie illustrates how the latest
cosmological theories based on dark matter and dark energy are
capable of producing structures as complex as spiral galaxies and
allows the viewer to directly compare observations from the real
universe with theoretical results. 3D is an inherent feature of the
cosmology data sets and stereoscopic visualization provides a
natural way to present the images to the viewer, in addition to
allowing researchers to visualize these vast, complex data sets.
The movie was presented using passive, linearly polarized projection onto a 2 m wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges encountered during the stereoscopic production process: 1) controlling the depth presentation, 2) editing the stereoscopic sequences, and 3) generating compressed movies in display-specific formats.
We conclude that generating high-quality stereoscopic movie content with desktop tools and equipment is feasible. It requires careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data: the result is significantly increased impact and better understanding of complex 3D scenes.
Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally invasive fashion. However, the performance of such surgery, with its possibilities and limitations, is difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams at standard and HDTV resolution that can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students.
Material and methods: From an earlier study we chose two clips from each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses, and laser chordectomy for carcinoma of the larynx). This material was supplemented by 23 clips of a cochlear implantation, edited specifically for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of the left and right images were performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. The material was then converted into a streaming 3-D video format, so that the video clips could be presented in a file type that does not depend on a television signal such as PAL or NTSC.
Twenty-five fourth-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 fourth-year students, who were shown the material monoscopically on a conventional laptop, served as controls.
Results: All participants noted that the additional depth information helped in understanding the relationships of anatomical structures, even though none had hands-on experience with ear, nose and throat operations before or during the course. The monoscopic group generally estimated resection depth at much lower values than in reality. Although this was also the case for some participants in the stereoscopic group, their estimation of depth features reflected the enhanced depth impression provided by stereoscopy.
Conclusion: Following this first implementation of stereoscopic video teaching, medical students inexperienced with ENT surgical procedures were able to reproduce depth information, and therefore anatomically complex structures, to a greater extent. Besides extending video teaching to junior doctors, the next evaluation step will address its effect on the learning curve during the surgical training program.
Anatomic considerations are important for stent graft selection in the treatment of abdominal aortic aneurysms (AAA): they determine whether treatment can proceed and help customize the stent. Current systems for AAA stent insertion planning based on pre-operative CT and MR of the patient do not provide an intuitive interface for viewing the resulting measurements against the pre-operative CT/MR. Subsequent modification of the measurements is frequently needed when the automatic algorithms are inaccurate, yet 3D editing is difficult to achieve because of the limitations of monoscopic displays and 2D interfaces. In this paper, we present a system for automatic AAA measurement and interactive 3D editing. The strength of this approach is that the resulting measurements can be reviewed and edited interactively in the 3D context of the volumetric rendering of the aorta, so that the relationships between the measurements and the aorta are clearly perceived. This understanding is facilitated by stereoscopic rendering, which makes it possible to see the transparent vessel and its corresponding measurements all in one image.
There is often insufficient access to patients and linear accelerator treatment rooms to train radiotherapy students. An
alternative approach is for some training to use a hybrid virtual environment (HVE) that simulates an actual
radiotherapy treatment machine controlled with the actual machine's handheld control pendant. A study of training
using such an HVE is presented for "skin apposition" treatment, where the patient couch and radiotherapy equipment are positioned so that the radiation beam strikes the skin perpendicularly. The HVE developed comprises a virtual treatment room with a linear accelerator modelled from laser scan data, and a virtual patient. A genuine linear accelerator handheld control "pendant" provided the user interface to the virtual linear accelerator. The virtual patient, based on the Visible Human female dataset and complete with rectangular markings for a range of different treatment sites,
provided a range of treatment scenarios. Students were trained in groups with the virtual world being displayed
stereoscopically on a large work-wall. A study of 42 students was conducted to evaluate learning. 93% of students
perceived an improvement in their understanding of this treatment using the HVE and 69% found the control system to
be easy to master.
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for blur phenomena, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing
simultaneous acquisition of left and right images, and offering resolution comparable to a dual chip stereoscopic camera with low to medium resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and
contributes to surgeon performance. An additional advantage of single chip stereo sensors is improvement of tolerance to electronic signal noise.
Perception and Performance: Stereoscopic Human Factors
Stereoscopic display produces an enhanced game-playing experience for the user. However, this experience might be affected by eye strain symptoms produced by the convergence-accommodation conflict in the visual system. In this study we measured the level of sickness symptoms in a mobile stereoscopic game-playing situation. Our results showed that playing a mobile game with an autostereoscopic display did not cause eye strain that differed from the eye strain caused by ordinary mobile device usage. The results suggest that with sufficiently small disparities a mobile stereoscopic display can be used to achieve a comfortable user experience. We also found links between experienced sickness symptoms and background variables. Firstly, our results indicated that females reported higher symptom levels than males. Secondly, we showed that participants with higher susceptibility to motion sickness reported higher sickness levels in the experiment. Thirdly, we showed that participants with weaker computer skills or a less enthusiastic attitude towards new technology had significantly more sickness symptoms than the other participants.
It is well known that some viewers experience visual discomfort when looking at stereoscopic displays. One of the factors that can give rise to visual discomfort is the presence of large horizontal disparities. The relationship between excessive horizontal disparity and visual comfort has been well documented for the case in which disparity magnitude does not change across space and time, e.g. for objects in still images. Much less is known about the case in which
disparity magnitude varies over time, e.g., objects moving in depth at some velocity. In this study, we investigated the relationship between binocular disparity, object motion and visual comfort using computer-generated stereoscopic video sequences. Specifically, viewers were asked to rate the visual comfort of stereoscopic sequences that had objects moving periodically back and forth in depth. These sequences varied with respect to the number, size, position in depth, and velocity of movement of the objects in the scene. The results indicate that change in disparity magnitude over time might be more important in determining visual comfort than the absolute magnitude of the disparity per se. The results also suggest that rapid switches between crossed and uncrossed disparities might negatively affect visual comfort.
Autostereoscopic displays offer users the unique ability to view 3-dimensional (3D) imagery without special eyewear or headgear. However, the user's head must be within limited "eye boxes" or "viewing zones". Little research has evaluated these viewing zones from a human-in-the-loop, subjective perspective. In the first study, twelve participants evaluated the quality and amount of perceived 3D in images. We manipulated distance from the observer, viewing angle, and stimuli to characterize the perceptual viewing zones. The data were correlated with objective measures to investigate the degree of concurrence between the objective and subjective measures. In a second study we investigated the benefit of generating stimuli that take advantage of monocular depth cues. The purpose of this study was to determine whether one could develop optimal stimuli that would give rise to the greatest 3D effect at off-axis viewing angles. Twelve participants evaluated the quality of depth perception of various stimuli, each made up of one monocular depth cue (i.e., linear perspective, occlusion, haze, size, texture, and horizon). Viewing zone analysis is discussed in terms of optimal viewing distances and viewing angles. Stimuli properties are discussed in terms of image complexity and depth cues present.
We describe a set of experiments that compare a 2D CRT, shutter glasses and autostereoscopic displays; measure user preference for different tasks on different displays; measure the effect of previous user experience on interaction performance for new tasks; and measure the effect of constraining the user's hand motion and hand-eye coordination. In these tests, we used interactive object selection and manipulation tasks with standard scalable configurations of 3D block objects. We also used a 3D depth-matching test in which subjects are instructed to align two objects located next to each other on the display to the same depth plane. New subjects tested with the hands-out-of-field-of-view constraint performed more efficiently with glasses than with autostereoscopic displays, meaning they were able to match the objects with less movement. This constraint affected females more negatively than males. From the results of the depth test, we note that previous subjects on average performed better than new subjects: they had more correct results and finished the test faster. The depth test showed that glasses are preferred to autostereoscopic displays in a task that involves only stereoscopic depth.
A common cause of asthenopia is viewing objects from a short distance, as is the case when working at a VDT (Visual Display Terminal). In general, recovery from asthenopia, especially accommodative asthenopia, is aided by looking into the distance. The authors have developed a stereoscopic 3-D display with dynamic optical correction that may reduce asthenopia. The display does this by reducing the discrepancy between accommodation and convergence, thereby presenting images as if they were actually in the distance. The results of visual acuity tests given before and after presenting stereoscopic 3-D images with this display show a tendency towards less asthenopia. In this study, the authors developed a refraction feedback function that makes the viewer's distance vision more effective when viewing stereoscopic 3-D images on this display. Using this function, refraction is fed back during viewing and the viewer gradually acquires distance vision. The results of the study suggest that stereoscopic 3-D images are more effective than 2-D images for recovery from asthenopia.
We propose a dual-resolution foveated stereoscopic display built from commodity projectors and computers. The technique is aimed at improving the visibility of fine details of 3D models in computer-generated imagery: it projects a high-resolution stereoscopic inset (or fovea, by analogy with biological vision) that is registered in image space with the overall stereoscopic display. A specific issue that must be addressed is the perceptual conflict between the apparent depth of the natural boundary of the projected inset (visible due to changes in color, brightness, and resolution) and that of the underlying scene being displayed. We solve this problem by assigning points for display to either the low-resolution display or the high-resolution inset in a perceptually consistent manner. The computations are performed as a post-process, are independent of the complexity of the model, and are guaranteed to yield a correct stereoscopic view. The system can accommodate approximately aligned projectors through image warping applied as part of the rendering pipeline. The method for boundary adjustment is discussed along with implementation details and applications of the technique to the visualization of highly detailed 3-D models of environments and sites.
Stereo projection using interference filters is an advanced wavelength-multiplexing approach that specifically takes into account the nature of the human eye, which is characterized by three types of color receptors. Accordingly, the filters used to code the image information for the left-eye and right-eye images have three narrow bands each. In the present paper the current status of the interference filter technique for stereo imaging is outlined.
Geometric differences between the left and right images are known to be a major cause of eye fatigue in stereoscopic systems. We developed a real-time stereoscopic error corrector that can adjust the vertical errors, the disparity, and the size (field of view) errors of HD moving pictures on VCR tape. The main idea of this system is to extract and use only the common area of both images by cropping the left and right images independently. For this system, we developed real-time HD scaling hardware and stereoscopic error-correction software. We tested the system with video streams taken by our HD stereoscopic camera and confirmed that it reduces the effort and time needed to correct stereoscopic errors compared to other methods. Using the same hardware, we also developed a real-time zoom-convergence interlocked controller for an HD parallel-axis stereoscopic camera. Because it needs no motors to adjust parallax, convergence can be controlled smoothly while remaining locked to the zoom.
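As a concrete illustration of the common-area cropping idea (a minimal sketch, not the authors' hardware implementation; the sign conventions for the measured errors are assumptions):

```python
import numpy as np

def crop_common_area(left, right, dy, dx):
    """Keep only the area common to both views by cropping each image
    independently: dy is the measured vertical misalignment between the
    views, dx a horizontal offset used to adjust the overall disparity."""
    h, w = left.shape[:2]
    top, bot = max(dy, 0), max(-dy, 0)
    lft, rgt = max(dx, 0), max(-dx, 0)
    left_c = left[top:h - bot, lft:w - rgt]
    right_c = right[bot:h - top, rgt:w - lft]
    return left_c, right_c  # both crops are (h-|dy|) x (w-|dx|)

# e.g. right view sits 4 px lower and a 12 px parallax shift is wanted
l, r = crop_common_area(np.zeros((1080, 1920, 3)), np.zeros((1080, 1920, 3)), 4, -12)
```

Because only crops change, no pixel values are resampled; the hardware scaler then brings the common area back to full HD size.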
A device that keeps two camcorders permanently synchronized has been developed. The device uses the camcorders' LANC (CONTROL-L) inputs for synchronization. It enables two camcorders to be controlled simultaneously via built-in buttons, an external LANC remote controller, and/or a PC over a serial (RS-232) link. Since the device requires LANC inputs on camcorders or ACC inputs on still cameras, it can be used with some camcorders produced by Sony and Canon and some still cameras produced by Sony. The device initially synchronizes the camcorders or still cameras by applying arbitrarily delayed power-up pulses to the LANC (ACC) inputs. Then, on user demand, the camcorders can be permanently synchronized (valid only for some camcorders produced by Sony). The effectiveness of the proposed device is demonstrated by several experiments on three types of camcorders (DCR-TRV900E, HDR-HC1, HVR-Z1U) and one type of still camera (DSC-V1). The electronic schematics, PCB layouts, firmware and communication programs are freely available (under the GPL licence).
Emerging 3-D displays show several views of the scene simultaneously. Direct transmission of a selection of these views is impractical, because different types of displays support different numbers of views and the decoder has to interpolate the intermediate views. The transmission of multiview image information can be simplified by transmitting only the texture data for the central view and a corresponding depth map. In addition to coding the texture data, this technique requires efficient coding of the depth maps. Since the depth map represents the scene geometry and thereby governs the 3-D perception of the scene, sharp edges corresponding to object boundaries should be preserved. We propose an algorithm that models depth maps using piecewise-linear functions (platelets). To adapt to varying scene detail, we employ a quadtree decomposition that divides the image into blocks of variable size, each block being approximated by one platelet. In order to preserve sharp object boundaries, the support area of each platelet is adapted to the object boundary. The subdivision of the quadtree and the selection of the platelet type are optimized such that a global rate-distortion trade-off is realized. Experimental results show that the described method can improve the resulting picture quality after compression of depth maps by 1-3 dB when compared to a JPEG-2000 encoder.
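To make the platelet idea concrete, here is a minimal sketch of a quadtree decomposition in which each block is approximated by one linear platelet. It substitutes a fixed error threshold for the paper's rate-distortion optimization and omits the boundary-adaptive platelet shapes; square power-of-two image sizes are assumed:

```python
import numpy as np

def fit_platelet(block):
    """Least-squares fit of a linear function z = a*x + b*y + c over a block."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)

def platelet_quadtree(depth, x, y, size, out, max_err=2.0, min_size=4):
    """Approximate a block with one platelet; split into four while the
    approximation error stays too high (depth edges force fine blocks)."""
    block = depth[y:y + size, x:x + size].astype(float)
    approx = fit_platelet(block)
    if size <= min_size or np.abs(approx - block).max() <= max_err:
        out[y:y + size, x:x + size] = approx
        return
    half = size // 2
    for oy in (0, half):
        for ox in (0, half):
            platelet_quadtree(depth, x + ox, y + oy, half, out, max_err, min_size)

depth = np.tile(np.linspace(0, 255, 256), (256, 1))  # toy ramp depth map
recon = np.empty_like(depth)
platelet_quadtree(depth, 0, 0, 256, recon)
```

A smooth depth ramp like this toy example is captured by a single platelet, while a depth discontinuity drives the quadtree down to small blocks along the boundary, which is exactly what keeps object edges sharp after compression.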
For multiview auto-stereoscopic 3D displays, available stereo content needs to be converted to multiview content. In this paper we present a method to efficiently synthesize new views based on the two existing views from the stereo input. This method can be implemented in real-time and is also capable of handling uncalibrated stereo input. Good performance is shown compared to state-of-the-art disparity estimation algorithms and view rendering methods.
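The abstract leaves the rendering step at a high level; a common baseline (a sketch of the general idea, not necessarily the authors' method) forward-warps one input view along a scaled disparity map, assuming a dense disparity estimate is already available:

```python
import numpy as np

def synthesize_view(left, disp, alpha):
    """Render a virtual view a fraction alpha of the way from the left view
    (alpha=0) to the right view (alpha=1) by forward-warping pixels along
    the per-pixel left-to-right disparity (sign convention illustrative)."""
    h, w = disp.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - alpha * disp[y, x]))
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
                filled[y, xt] = True
    for y in range(h):           # crude occlusion-hole filling: propagate
        for x in range(1, w):    # the nearest filled pixel from the left
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

Repeating this for several values of alpha yields the N views a multiview display needs from a single stereo pair; a real-time implementation would vectorize the warp and handle occlusion ordering more carefully.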
Among the various autostereoscopic display systems, the lenticular display is one of the most popular because of its easy manufacturability. For an N-view lenticular display, N view images are regularly sub-sampled and interleaved to produce a 3D image. A lenticular system provides the best quality only when the viewer is located at a pre-determined optimal viewing distance and the lenticular sheet is precisely aligned on the LCD pixel array. In our previous work, we proposed an algorithm to compensate for changes in the viewer's position and for lenticular misalignment. However, since that algorithm carries a considerable computational burden, we propose a new fast multiplexing algorithm. To improve processing speed, we introduce a mapping table instead of directly evaluating the complex equations. In contrast to the previous algorithm, the proposed one makes real-time compensation possible without degrading image quality.
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Multiview autostereoscopic displays are now in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen: his left eye receives a signal different from the one his right eye gets, which, provided the signals have been properly processed, gives the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format suited to rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video, not only its color is given but also, e.g., its distance to the camera. In this paper we provide a theoretical framework for the parallactic transformations that relate captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and deals properly with occlusions. For improved perceived resolution, we take the relative positions of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high-quality images.
Image- and video-based rendering technologies are receiving growing attention due to their photo-realistic free-viewpoint rendering capability. However, two major limitations are ghosting and blurring due to their sampling-based mechanism. Scene geometry, which supports the selection of accurate sampling positions, can be obtained with a global method (an approximate depth plane) or a local method (disparity estimation). This paper focuses on the local method, since it can yield more accurate rendering quality without a large number of cameras. Local scene geometry presents two difficulties: limited geometrical density and uncovered areas containing hidden information. Both are serious drawbacks when reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in the scene geometry, while anisotropic diffusion prevents the filtering from blurring the visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space, and nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low sample density are efficiently removed by the isotropic filtering, and edge blurring is avoided by the anisotropic method in the same process. Because sampling gaps differ in size, the resampling condition is defined by considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling with a large scale is then iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
A head-tracked display could be made from a two-view autostereoscopic display where head-tracking allows the display to swap the two views when the eyes move from viewing zone to viewing zone. Variations in human interpupillary distance mean that this basic two-view version will not work well for the significant minority of the population whose interpupillary distance differs significantly from the average. Woodgate et al. proposed, in 1997, that a three-view system would work well. Analysis of an idealized version of their proposal shows that it does work well for the vast majority of the population. However, most multi-view, multi-lobe autostereoscopic displays have drawbacks that mean such a system would, in practice, be unacceptable because of the inter-view dark zones generated by the inter-pixel dark zones of the underlying display technology. Variations of such displays have been developed that remove the inter-view dark zones by allowing adjacent views to overlap one another: the views appear to blend smoothly from one to the next at the expense of a little blurring. Such displays need at least five viewing zones to accommodate the majority of the adult population with head-tracking, and at least six viewing zones to accommodate everyone.
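The basic two-view head-tracked swap that this analysis starts from can be sketched as follows (an idealized model that assumes the viewer's interpupillary distance equals the zone width, which is precisely the assumption the abstract shows to fail for part of the population):

```python
def assign_views(left_eye_x, zone_width):
    """Two-view head-tracked display: viewing zones of width zone_width
    alternate across the viewing plane. When tracking shows the left eye
    has crossed into an odd-numbered zone, the two views are swapped so
    each eye keeps receiving the correct image."""
    zone = int(left_eye_x // zone_width)
    return ("R", "L") if zone % 2 else ("L", "R")

# as the head slides sideways the views swap once per zone width
print([assign_views(x * 0.01, 0.065) for x in range(0, 20, 6)])
```

A viewer whose eye separation differs much from zone_width ends up with one eye straddling a zone boundary, which is why three or more (and, with view blending, five or six) zones are needed in practice.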
The principle of construction of a unique autostereoscopic 3D LCD wall is considered. This glasses-free 3D LCD wall presents high-quality stereo images to many users simultaneously. The technical characteristics of the 3D LCD wall are compared with the corresponding parameters of a multiview 3D projection wall. The general equation for the evaluation of the multiview stereo image on this 3D LCD wall is presented and all of its parameters are analysed. We introduce fundamental matrices that contain the information about the contribution of the different views to every subpixel. The properties of these matrices, as well as their use for the evaluation of the stereo image, are considered. The problem of the adjustment of the stereoscopic image on the 3D LCD wall is also discussed and different types of adjustment are considered; for some of them the corresponding equations are given. The presented approach may also be applied to the case of a multiview autostereoscopic 3D plasma wall.
We have developed a flatbed-type autostereoscopic display system showing continuous motion parallax as an extended form of a one-dimensional integral imaging (1D-II) display system. The 1D-II display architecture is suitable for both flatbed and upright configurations. We have also designed an image format specification for encoding 1D-II data. In this parallax image array format, two (or more) viewpoint images whose viewpoint numbers are separated by a constant are paired, and all of the paired images are combined to obtain an image the same size as the elemental image array. With this format, 3-D image quality is hardly degraded by lossy codecs. The conversion from this format to the elemental image array is simple and does not depend on changes in the viewing distance or the associated changes in camera number. Decoding and conversion speeds are sufficiently high owing to the use of middleware based on DirectX.
Display technology has made big advances in recent years. Displays are flat, offer high resolution, and are bright, fast and almost free of flicker. Apart from new technologies that make displays still more affordable, the major direction of development is turning to applications, especially TV. In this consolidation process new features are sought. Still lacking is 3D display capability, a shortcoming most obvious when viewing is compared to real-world scenes.
In recent decades many new 3D technologies have been proposed and developed, but only a few have reached the commercial market. A breakthrough into the mass market has been prevented for technical as well as commercial reasons. The most natural viewing is provided by holography; unfortunately, its technical challenges are so demanding that the 3D research community turned to stereoscopic technology, which has been known for more than a century. Many technologies have been proposed, and the shutter technique has already matured into a commercial product. But the mass market requires 3D viewing without additional viewing aids. Currently, autostereoscopic 3D displays still cannot meet the quality standard and comfort of today's 2D displays. In our opinion, 3D displays should first of all match all of today's 2D demands and additionally be capable of displaying 3D.
We present the HoloVizio system design and give an overview of Holografika's approach to 3D displaying. The patented HoloVizio technology uses a specially arranged array of optical modules and a holographic screen. Each point of the holographic screen emits light beams of different color and intensity in various directions. The light beams generated in the optical modules hit the screen points at various angles, and the holographic screen performs the optical transformation necessary to compose these beams into a perfectly continuous 3D view. With proper software control, light beams leaving the pixels propagate in multiple directions, as if they were emitted from points of 3D objects at fixed spatial locations. We show that direction-selective light emission is a general requirement for every 3D system and provide quantitative data on the FOV and on the angular resolution, which determine the field of depth of the displays and affect the total number of light beams necessary for high-end 3D displaying. We present results with the 10 Mpixel desktop display and the 50 Mpixel large-scale system. We cover the real-time control issues of high pixel-count systems with the HoloVizio software environment and describe concrete 3D applications developed in the framework of European projects.
When a 3D display system is used for remote manipulation, the special glasses needed to view the 3D image disturb the manipulation, so an autostereoscopic display is preferable for remote manipulation work. However, the eye-position area from which an autostereoscopic display shows the 3D image is generally narrow. We constructed a 3D display system that solves these problems. In this system: 1) stereoscopic images displayed on a special LCD are projected onto a large concave mirror by a projection lens; 2) a viewing-zone-limiting aperture is set between the projection lens and the concave mirror; and 3) the real image of the aperture plane is formed by the concave mirror at a certain position in empty space, and that position is the viewing zone. By putting both eyes at this position and looking at the concave mirror plane, the observer can see the stereoscopic image without glasses. To expand the area in which the observer can observe the 3D image, we proposed and constructed a system that tracks the viewing zone to the observer's detected eye position. The observer can move not only horizontally and vertically, accommodated by rotating the concave mirror, but also forwards and backwards, accommodated by moving the viewing-zone-limiting aperture.
A ray-based cylindrical display is proposed that allows multiple viewers to see 3D images from a 360-degree horizontal arc without wearing 3-D glasses. This technique uses a cylindrical parallax barrier and a one-dimensional light source array constructed from such semiconductor light sources as LEDs aligned in a vertical line. The light source array rotates along the inside of the cylindrical parallax barrier, and the intensity of each light is synchronously modulated with the rotation.
Since this technique is based on the parallax panoramagram, the density of rays is limited by diffraction at the parallax barrier. To solve this problem, we employed a revolving parallax barrier. We have developed two prototype displays, both of which showed 3D images with a strong sense of presence. The newer one can display color images 200 mm in diameter, making it suitable for displaying real objects such as a human head. We therefore acquired ray-space data using a video camera rotating around an object and successfully reconstructed the object on the prototype display. In this paper, we describe the details of the system and discuss the ray control method used to reconstruct objects from ray-space data.
The high-density directional display, originally developed to realize a natural 3D display, is not only a 3D display but also a high-appearance display. The appearances of objects, such as glare and transparency, are the results of the reflection and refraction of rays. The faithful reproduction of such appearances is impossible using conventional 2D displays because rays diffuse at the display screen. The high-density directional display precisely controls the horizontal ray directions so that it can reproduce the appearances of objects. The fidelity of the reproduction depends on the ray angle sampling pitch, which is determined by considering the human eye imaging system. In the present study a high-appearance display was constructed with a resolution of 640×400, emitting rays in 72 different horizontal directions at an angle pitch of 0.38°. Two 72-directional displays were combined, each consisting of a high-resolution LCD panel (3,840×2,400) and a slanted lenticular sheet. The two images produced by the displays were superimposed by a half mirror. A slit array was placed at the focal plane of each display's lenticular sheet to reduce horizontal image crosstalk in the combined image. Impression analysis shows that the high-appearance display provides more faithful object appearance and a greater sense of presence than conventional 2D displays.
In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to present natural 3D images whose viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt a volumetric display method only for edge drawing, while adopting a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge parts of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. Conventional stereo-matching techniques can give robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since the stereoscopic approach is used for the flat areas. With this system, many users can simultaneously view natural 3D objects at a consistent position and posture. A simple optometric experiment using a refractometer suggests that the proposed method can produce 3-D images without the contradiction between binocular convergence and focal accommodation.
We describe techniques for stereo panoramic image capture and rendering that are part of a personal panoramic virtual environment system. We examine the use of stereo shutter glasses and several recently developed autostereoscopic (AS) displays to improve the sense of immersion. The stereo panorama pair is created by stitching strips sampled from images captured with a swing-camera panoramic imaging system. We apply Peleg's disparity adjustment algorithm to the generated stereo panorama to achieve large disparity (horizontal parallax) for faraway scenes and smaller disparity for closer scenes for stereo perception. Unfortunately, vertical parallax effects still occur in the stereo panorama, causing display artifacts and problems with human stereo fusion.
To overcome these problems, we first present a general image capture model, specify geometrical parameters, and describe the panorama generating process. We then describe an efficient stitching algorithm that corrects dynamic exposure variation and removes moving objects without manual selection of ground-truth images. We present expressions for the horizontal and vertical parallax, describe parallax measurement techniques, and develop an adaptive vertical and horizontal parallax control algorithm for rendering in different viewing directions. We present a simple subjective test of stereo panoramas rendered on AS and other stereo displays, and discuss the relative quality of each.
We propose a spherical layout for a camera array system when shooting images for use in Integral Videography (IV). IV is an autostereoscopic video technique based on Integral Photography (IP) and is one of the preferred autostereoscopic display techniques; the many studies on autostereoscopic displays based on this technique indicate its potential advantages. Other camera arrays have been studied, but they addressed other issues, such as acquiring high-resolution images, capturing a light field, or creating content for non-IV-based autostereoscopic displays. Moreover, IV displays images with high stereoscopic resolution when objects are displayed close to the display; as a consequence, we have to capture high-resolution images in close vicinity to the display. We constructed the spherical camera array using 30 cameras arranged in a 6 by 5 array. Adjacent cameras had an angular difference of 6 degrees, and each camera was pointed toward the center of the sphere. The cameras can capture movies synchronously, and their resolution is 640 by 480. With this system, we verified the effectiveness of the proposed camera layout, captured IP images, and displayed real autostereoscopic images.
When designing a system capable of capturing and displaying three-dimensional (3-D) moving images in real time by the integral imaging (II) method, one challenge is to eliminate pseudoscopic images. To overcome this problem, we propose a simple system with an array of three convex lenses. This paper first describes, by geometrical optics, the lateral magnification of the elemental optics and the expansion of an elemental image, confirming that the elemental optics satisfies the conditions under which pseudoscopic images are avoided. In the II method, adjacent elemental images must not overlap, a condition also satisfied by the proposed optical system. Next, the paper describes an experiment carried out to acquire and display 3-D images. The real-time system we have constructed comprises an elemental optics array with 54(H) x 59(V) elements, a CCD camera to capture the group of elemental images created by the lens array, and a liquid crystal panel to display these images. The experimental results confirm that the system produces orthoscopic images in real time and so is effective for real-time application of the II method.
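As a reminder of the geometrical optics involved (the textbook thin-lens relations, not the paper's specific derivation; the symbols are our own), an elemental lens of focal length f images an object plane at distance a to an image plane at distance b with lateral magnification m, and non-overlap of adjacent elemental images requires the magnified elemental image of width w to stay within the lens pitch p:

    \frac{1}{a} + \frac{1}{b} = \frac{1}{f}, \qquad
    m = -\frac{b}{a}, \qquad
    |m|\, w \le p .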
In spite of significant improvements in the three-dimensional (3D) display field, the commercialization of a 3D-only display system has not yet been achieved. The display market is dominated by high-performance two-dimensional (2D) flat panel displays (FPDs), and the start of high-definition (HD) broadcasting is accelerating the golden age of HD FPDs. A 3D display system therefore needs to be able to display 2D images with high quality. In this paper, two different 3D-2D convertible methods based on integral imaging are compared and categorized by application. One method uses a point light source array, a polymer-dispersed liquid crystal and a single display panel; the other adopts two display panels and a lens array. The former system is suitable for mobile applications, while the latter suits home applications such as monitors and TVs.
We evaluate a new method for computing color anaglyphs based on uniform approximation in CIE color space. The method depends on the spectral distribution of the monitor primaries and the transmission functions of the filters in the viewing glasses. We compare the results of this method with several other methods that have been proposed for computing anaglyphs. Computing the color at a given pixel of the anaglyph image requires solving a linear program. We exploit computational properties of the simplex algorithm to reduce computation time by 72 to 89 percent. After computing the color at a pixel, a depth-first search collects all neighboring pixels of similar color so that a simple matrix-vector multiplication can be applied to them. We also parallelize the algorithm, implement it on a cluster environment, and discuss the effects of different data-division schemes.
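The per-pixel linear program can be sketched as a Chebyshev (uniform) approximation problem: minimize the largest deviation between the achieved and target CIE coordinates for the two eyes. In the sketch below, the 6x3 matrix A (monitor primaries as seen through each filter) and the 6-vector b (target CIE coordinates for the left and right eye) stand in for the measured spectral data and are assumptions on our part.

    import numpy as np
    from scipy.optimize import linprog

    def anaglyph_pixel(A, b):
        # Chebyshev approximation as a linear program:
        #   minimise t  subject to  -t <= (A x - b)_i <= t,  0 <= x <= 1,
        # where x is the anaglyph RGB value for this pixel.
        n = A.shape[1]
        c = np.zeros(n + 1)
        c[-1] = 1.0
        ones = np.ones((A.shape[0], 1))
        A_ub = np.block([[A, -ones], [-A, -ones]])
        b_ub = np.concatenate([b, -b])
        bounds = [(0.0, 1.0)] * n + [(0.0, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:n]

Solving one such program per pixel is expensive, which is why exploiting the simplex algorithm's incremental behavior and reusing solutions across similarly colored neighbors (the depth-first search mentioned above) pays off.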
This paper discusses technology breakthroughs that help solve the difficulties that have hindered the popularity of 3D stereo; we name the result the 3DHiVision (3DHV) system solution. With advances in technology, modern projection systems and stereoscopic LCD panels have made it possible for many more people to enjoy 3D stereo video in a broader range of applications. However, the key limitations to more mainstream use of 3D video have been the availability of 3D content and the cost and complexity of 3D video production, content management and playback systems. Despite the ready availability of modern PC-based video production tools, advances in projection technology and greatly increased interest in 3D applications, the 3D video industry remains stagnant and small in scale, because the cost of producing and playing back high-quality 3D video has remained prohibitive. We have addressed these difficulties with a complete end-to-end 3DHV video system based on an embedded PC platform, which significantly reduces the cost and complexity of creating museum-quality 3D video. With this achievement, professional filmmakers and amateurs alike will be able to easily create, distribute and play back 3D video content. The HD-Renderer is the central component of the 3DHV solution line: highly efficient software capable of decrypting, decoding, dynamically adjusting parallax and rendering HD video content up to 1920x1080x2x30p in real time on an embedded PC (for theaters) or any home PC (for the mainstream) with at least a 3.0 GHz P4 CPU and GeForce 6600GT GPU. At the time of writing, 1280x720x2x30p content can be played with ease on a notebook with a 1.7 GHz P4 Mobile CPU and GeForce 6200 GPU.
That animation created using CG modeling and animation tools is inherently three-dimensional is well known. In the mid-to-late nineties, IMAX Corporation began actively exploring CG animated features as a possible source of economically viable content for its rapidly growing network of stereoscopic IMAX® 3D theatres. The journey from there to the spectacular success of the IMAX® 3D version of The Polar Express is an interesting mix of technical, creative and production challenges. For example, 3D animations often have 2D elements and include many sequences whose framing, composition and lens choices a stereographer would have avoided had 3D been part of the recipe at the outset. And of course the decision to ask for a second set of deliverables from an already stressed production takes nerve. The talk will cover several of these issues and explain why the unique viewing experience enabled by the wide-angle geometry of IMAX® 3D theatres makes it worth all the pain.
We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen; each projector constantly projects the images captured by its corresponding camera onto the screen, and the screen rotates around an in-plane vertical axis at sufficient speed that it faces each projector in sequence.
Since the Lambertian surfaces of the screen are covered with light-collimating plastic films whose vertical louver patterns select the appropriate light rays, a viewer observes only the image from the projector located in the same direction as the viewer.
Thus, the view of an object depends on the viewer's head position. We evaluated the system by displaying both objects and human figures and confirmed that it can reproduce light fields with horizontal parallax, showing video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.
We studied a new 3-D display that uses two stereoscopic displays in place of the two 2-D displays of a depth-fused 3-D (DFD) display. We found that two 3-D images of the same shape, displayed at different depths by the two stereoscopic displays, fuse into one 3-D image when viewed as overlapping. Moreover, we found that the perceived depth of the fused 3-D image depends on both the luminance ratio of the two 3-D images and their original perceived depths. This paper presents simulation results for the perceived depth of the fused 3-D image on the new display. We applied a model in which the human visual system perceives the fused image through a low-pass filter, the same model used for a conventional DFD display. The simulation results revealed that the perceived depth of the fused image changes with both the luminance ratio of the two 3-D images and their original perceived depths, as in the subjective test results, and that the low-pass filter model accurately describes the perception of a 3-D image on our display.
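A commonly used first-order description of DFD depth perception (a simplification on our part, not necessarily the authors' low-pass filter model) places the perceived depth z_p at the luminance-weighted average of the two image depths z_f and z_r, with L_f and L_r the front and rear luminances:

    z_p \approx \frac{L_f\, z_f + L_r\, z_r}{L_f + L_r} .

In the display studied here, z_f and z_r are themselves perceived (stereoscopic) depths rather than physical panel positions.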
PolarScreens, in collaboration with MacNaughton Inc., has developed a stereoscopic display with the unique advantage of presenting two images without any multiplexing, i.e., without time sharing or pixel sharing between the left and right eyes, which reduces resolution or brightness and is subject to crosstalk. PolarScreens avoids this by stacking two LCD panels.
Instead of using an LCD to block light to a specific eye, PolarScreens uses the second LCD to add extra information to each photon using a polar coordinate transformation algorithm. The first LCD controls the total pixel intensity and the second controls the left-eye/right-eye distribution ratio. This is the only technology where one photon carries the information for both eyes.
The theoretical concept was proven in 1996. At the time, many of the required technologies were inadequate for a commercial product: LCDs were slow, had very small apertures and were very expensive; electronics were too slow for real-time transformation; and micro-optical technologies were in their infancy. The project was periodically reactivated to re-evaluate its feasibility, and in 2002 these technologies were judged mature enough to restart it. Since then PolarScreens has worked on improving the technology and has built many prototypes of sizes ranging from 15 in. to 19 in. As a result, it is now possible to manufacture a very high quality stereoscopic monitor based on PolarScreens technology at a reasonable price.
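One plausible reading of this division of labour, offered as our assumption (it is consistent with Malus's law, but it is not PolarScreens' disclosed algorithm): if the first panel sets the total intensity I and the second rotates the polarization by an angle \theta relative to the left-eye analyzer, then

    I_L = I \cos^2\theta, \qquad I_R = I \sin^2\theta,

so any desired pair (I_L, I_R) is reached with I = I_L + I_R and \theta = \arctan\sqrt{I_R / I_L}.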
We present a novel walk-through 3D display based on the patented FogScreen, an "immaterial" indoor 2D projection screen that enables high-quality projected images in free space. We extend the basic 2D FogScreen setup in three major ways. First, we use head tracking to provide correct perspective rendering for a single user. Second, we add support for multiple types of stereoscopic imagery. Third, we present the front and back views of the graphics content on the two sides of the FogScreen, so that the viewer can cross the screen to see the content from the back. The result is a wall-sized, immaterial display that creates an engaging 3D visual experience.
A workstation for testing the efficacy of stereographic displays for applications in radiology has been developed and is currently being tested on lung CT exams acquired for lung cancer screening. The system exploits pre-staged rendering to achieve real-time dynamic display of slabs, whose thickness, axial position, rendering method, brightness and contrast are interactively controlled by viewers. Stereo presentation is achieved by either frame-swapping or cross-polarized images. The system enables viewers to toggle between alternative renderings, such as distance-weighted maximum-intensity-projection ray casting, which is often optimal for detecting small features, and distance-weighted averaging ray casting, for characterizing features once detected. A reporting mechanism allows viewers to use a stereo cursor to measure and mark the 3D locations of specific features of interest, after which a pop-up dialog box appears for entering findings. The system's impact on performance is being tested on chest CT exams for lung cancer screening. Radiologists' subjective assessments have been solicited for other kinds of 3D exams (e.g., breast MRI) and their responses have been positive. Objective estimates of changes in performance and efficiency, however, must await the conclusion of our study.
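The two rendering modes can be sketched schematically as follows (our illustration of the general idea; the workstation's pre-staged renderer and its distance weighting are more elaborate, and the linear falloff below is an assumption).

    import numpy as np

    def render_slab(slab, weights=None, mode="mip"):
        # slab: (depth, rows, cols) stack of CT slices in the current slab.
        if weights is None:
            weights = np.linspace(1.0, 0.5, slab.shape[0])  # assumed falloff
        w = weights[:, None, None]
        if mode == "mip":
            # Distance-weighted maximum intensity projection: good for
            # detecting small, bright features.
            return (slab * w).max(axis=0)
        # Distance-weighted averaging: good for characterizing features.
        return (slab * w).sum(axis=0) / weights.sum()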
This paper describes the trial development of an ergonomic evaluation system for stereoscopic video production. The purpose of the system is to quantify the parallax distribution of stereoscopic images and to evaluate their viewing safety and comfort. The authors processed the images to extract the optical flow between the right and left images. The reference values for safety and comfort were obtained from two subjective evaluation experiments and from precedent studies. This paper reports the results of the experiments and the development of a prototype evaluation system.
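A minimal sketch of the parallax extraction step (our illustration using OpenCV's Farneback dense optical flow; the authors' estimator and their comfort thresholds are not reproduced here, and the file names are placeholders):

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Dense flow from left to right view: the horizontal component
    # approximates screen parallax; the vertical component should be ~0.
    flow = cv2.calcOpticalFlowFarneback(left, right, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    parallax = flow[..., 0]

    # Parallax distribution to be compared against safety/comfort limits.
    hist, bin_edges = np.histogram(parallax, bins=64)
    print("parallax range:", parallax.min(), parallax.max())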
Integral imaging suffers from a limited viewing angle. This paper describes a wide-viewing-angle 3D display system using a holographic optical element (HOE) lens array. The display consists of a flat HOE lens array and a projector. The axis of each elemental HOE lens is eccentric, and since the axes of all elemental lenses converge, the flat HOE lens array acts as a virtual curved lens array; this gives the display a wide viewing angle. In a conventional integral imaging system, each elemental lens has a corresponding area on the display panel, and to prevent image flipping, any elemental image exceeding its corresponding area is discarded, limiting the number of elemental images and hence the viewing angle. In the proposed system, because the HOE lens array is flat and the light rays from the projector are parallel, elemental images do not exceed their corresponding areas and no flipped images are observed. The configuration of the display is also simple. The principle of the proposed system is explained and experimental results are presented.
A method of producing depth maps for depth-image-based rendering (DIBR) of stereoscopic views is proposed and tested. The method is based on depth-from-defocus techniques: it uses two original images, one with the camera focused at a near point and the other at a far point in the captured scene, to produce depth maps from the blur at edge locations. It is assumed that the level of blur at an edge reflects its distance from the focused distance. For each image, the level of blur at edges in local regions is estimated by determining the optimal scale for edge detection based on a luminance gradient. An Edge-Depth map is then obtained by evaluating differences in blur between corresponding regions in the two images. An additional process fills the regions of the Edge-Depth map that have no depth values, producing a Filled-Depth map. A group of viewers assessed the depth quality of a representative set of stereoscopic images produced by DIBR using the two types of depth maps. The stereoscopic images generated with the Filled-Depth and Edge-Depth maps received depth quality ratings higher than those of their monoscopic, two-dimensional counterparts. Images rendered using the Filled-Depth maps, but not the Edge-Depth maps, received depth quality ratings equal to those produced with actual, full depth maps. A hypothesis as to how the proposed method might be improved is discussed.
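The blur estimation step might be sketched as follows (our illustration: per-pixel selection of the Gaussian scale with the strongest scale-normalised gradient response, which tracks edge blur; the paper's estimator and its region handling are more involved, and the scale set is an assumption).

    import numpy as np
    from scipy import ndimage

    def edge_blur_scale(img, scales=(1, 2, 3, 4, 6, 8)):
        # For each pixel, pick the scale maximising the normalised
        # luminance-gradient response; at an edge this reflects its blur.
        responses = np.stack([
            s * ndimage.gaussian_gradient_magnitude(img.astype(float), sigma=s)
            for s in scales])
        return np.array(scales)[responses.argmax(axis=0)]

    # Depth cue at edges: difference of blur between the near-focused and
    # far-focused shots (sign indicates which side of the focus an edge is).
    # depth_cue = edge_blur_scale(near_img) - edge_blur_scale(far_img)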
We present a depth map-based disparity estimation algorithm using a multi-view and depth camera system. When many objects are arranged in 3D space over a long depth range, the disparity search range must be large enough to find all correspondences. In this case, traditional disparity estimation algorithms that use a fixed search range often produce mismatches when pixels along the epipolar line have similar color distributions and textures. To reduce the probability of mismatch and to save computation time, we propose a novel depth map-based disparity estimation algorithm that uses a depth map captured by the depth camera to set both the midpoint and the width of the disparity search range adaptively. The proposed algorithm first converts the depth map into disparities for the stereo image pair to be matched, using calibrated camera parameters. Next, we set the disparity search range for each pixel based on the converted disparity. Finally, we estimate a disparity for each pixel between the stereo images. Simulation results with various test data sets demonstrated that the proposed algorithm outperforms the other algorithms in terms of smoothness, global quality and computation time.
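The adaptive search range can be sketched as follows (our illustration; f, baseline and the margin of plus-or-minus 8 pixels are placeholders for the calibrated parameters, and rectified cameras are assumed).

    import numpy as np

    def disparity_bounds(depth_map, f, baseline, margin=8):
        # Depth-camera reading -> predicted disparity (d = f * B / Z),
        # then a small per-pixel window around it instead of one fixed
        # global search range.
        predicted = f * baseline / np.maximum(depth_map, 1e-6)
        d_min = np.maximum(predicted - margin, 0.0)
        d_max = predicted + margin
        return d_min, d_max   # per-pixel bounds for the block matcher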
We report on a multiview autostereoscopic display with double-sided reflecting scanning micromirrors. In existing multiview stereoscopic displays there is a trade-off between resolution and the number of views. To overcome this, we propose projecting time-series pixel data into discrete directions using scanning micromirrors. Because the number of view angles depends on the number of pixel data projected in one cycle of the scan, the resolution and the number of view angles can be increased independently. Double-sided reflecting micromirrors actuated by both an external magnetic force and the Lorentz force were designed and fabricated using MEMS (Micro Electro Mechanical Systems) technology. The fabricated micromirrors are 450 μm x 520 μm in size, and their characteristics, such as the range of movement and the resonance frequency, were measured. The micromirrors were then integrated with a microlens array, pinhole arrays and an LED matrix to construct a prototype of the multiview autostereoscopic display, and the relationship between view angle and light intensity was measured. The validity of the proposed method was demonstrated by the light intensity distribution of the prototype.
Recently, a floating display system based on integral imaging (InIm) was proposed. Though such a system can provide moving pictures with a strong sense of depth, its expressible depth range is limited because that of the underlying InIm is limited. In this paper, the expressible depth range of the floating display system is analyzed, building on an analysis of the expressible depth range of InIm itself. A depth-enhanced floating display system based on InIm is then proposed, in which the lens array of the InIm is placed at the focal plane of the floating lens. An additional benefit is that the seams of the lens array become less distinct, since they too lie at the focal plane of the floating lens. However, the size of an object changes when it lies away from the overall central depth plane, so the size of objects in the elemental images must be rescaled to display a correct three-dimensional image.
The 3D sprite technique is proposed, which enables rapid updating of 3D images on a lenticular-type 3D display. The technique was developed for the 72-directional display, which consists of a WQUXGA (3,840×2,400) LCD panel and a slanted lenticular sheet and displays a large number of directional images in 72 different horizontal directions with nearly parallel rays. When a slanted lenticular sheet is used, image interpolation is required in the image interlacing process. The time required to update the whole 3D image is about 0.5 s using a PC (Pentium 4, 2.4 GHz). The 3D sprites were implemented in software. The developed software can display 40, 12, and 4 sprites at video rate (30 Hz) for sprite sizes of 8×8, 16×16, and 32×32, respectively. The 3D sprite technique developed in the present study has the following features: (a) it supports three different data types (2D type, 3D type, and 360° type); (b) variable sprite sizes (8×8, 16×16, 32×32, etc.); (c) scaling of sprites depending on their z-coordinates; and (d) correct occlusion of sprites depending on z-coordinate. The 3D sprite technique is combined with a fingertip detection system to construct a virtual reality system in which the 3D image is interactively manipulated by the fingertip.
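The interlacing step assigns each sub-pixel to one of the 72 directional images; a simplified van Berkel-style mapping is sketched below (our illustration, assuming the lenticular pitch spans n_views sub-pixels; the actual slant and offset depend on the panel and the sheet, and the fractional intermediate values are what make the interpolation mentioned above necessary).

    import numpy as np

    def view_index(x_sub, y, n_views=72, slant=1.0 / 6.0, offset=0.0):
        # x_sub counts RGB sub-pixels along a row, y counts rows.
        return int(np.floor(x_sub + offset + 3 * y * slant)) % n_views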
An analysis and an optical model for the polarization optics of a 3D display system based on two LCD monitors in an "open book" configuration and a beam combiner (BC) are presented. Calculations of the angle-of-incidence (AOI) distributions for likely display-observer configurations and an estimate of the range of AOI for which the BC should be optimized are reported. Experimental data for commercial BCs are presented and analyzed in the [R_S, T_P] unit square. A first-order model is developed to calculate the effect of the BC's optical properties on the stereo channels' brightness balance, the system's light efficiency and the crosstalk. The model predicts that a significant reduction of the crosstalk can be achieved by a uniform rotation of the analyzers.
In this paper, we describe a new 3D adapter system with a lens unit interposed between the capturing lens and the adapter housing that alternately passes right and left video images of an object. The lens unit has its entrance pupil point formed outside the unit, a magnification of 1:1, and a plurality of symmetrically arranged lenses for reversing the video images. This makes it possible to capture video images with wide picture angles without increasing the size of the adapter housing, and to prevent distortion in the resulting video images composed of the integrated right and left images of the object.
Experimental results show that the conventional 3D adapter system has a standard deviation of 3.92 pixels along the x axis and 2.92 pixels along the y axis, whereas the proposed camera system has standard deviations of 1.11 pixels (x) and 0.39 pixels (y). Thus, the pixel errors of the proposed system are smaller than those of the conventional system.
We have devised a new and efficient method of zoom-convergence interlocked control for a moving parallel-axes stereoscopic camera system. We set up a simple and smart algorithm of our own, based on the basic geometry of the stereoscopic camera system. Instead of building a look-up table by measuring the zoom value and the convergence at each step, we use the lens data sheet available from the lens manufacturer, securing both accuracy and convenience without any measurement.
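The underlying geometry can be sketched in a few lines (our illustration of a generic toed-in rig, not the authors' exact algorithm; the baseline and subject distance are example values, with the distance coming from the lens data sheet at the current zoom/focus setting).

    import math

    def convergence_angle(baseline_mm, subject_distance_mm):
        # Each camera of a parallel-axes rig toes in by atan((b/2) / d)
        # to converge on a subject at distance d.
        return 2 * math.degrees(
            math.atan((baseline_mm / 2) / subject_distance_mm))

    print(convergence_angle(65, 2000))   # 65 mm baseline, 2 m subject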
For the rendering of detailed virtual environments, trade-offs must be made between image quality and rendering time. An immersive virtual reality experience demands high frame rates at the best achievable image quality. Continuous level-of-detail (cLoD) triangle meshes provide a continuous spectrum of detail for a triangle mesh that can be used to create view-dependent approximations of the environment in real time. This enables rendering with a constant number of triangles and thus constant frame rates. Normally, the construction of such cLoD mesh representations loses all texture information of the original mesh. To overcome this problem, a parameter domain can be created onto which the surface properties (colour, texture, normal) are mapped; these properties can then be mapped back onto arbitrary approximations of the original mesh. The parameter domain is often a simplified version of the mesh being parameterised. The achievable simplification is therefore limited by the domain mesh, which has to map the surface of the original mesh with the least possible stretch. In this paper, a hierarchical domain mesh is presented that scales between very coarse domain meshes and good property mapping.
One of the problems arising in virtual reality (VR) applications is the correction of image distortion and blending for curved screens with single or multiple projectors. This problem can be solved with special circuitry inside the projectors or with dedicated PC-based image correction boxes. In this study we propose our own image correction algorithm based on reverse (back) ray tracing, tested on the Cybersphere [1] setup. We propose a software implementation of the algorithm, which allows it to be used with any program rather than being tied to a particular technology such as OpenGL, as other image correction algorithms are. The algorithm can also be used for distributed image rendering and projection, such as Chromium-based setups.
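The core of the back (reverse) ray tracing step can be sketched as follows (our illustration for a spherical screen centred at the origin; the radius and the assumption that every projector ray hits the screen are ours).

    import numpy as np

    def warp_pixel(projector_pos, pixel_dir, viewer_pos, radius=2.5):
        # Follow the projector ray to the screen surface, then ask where
        # that surface point lies in the viewer's ideal image.
        o = np.asarray(projector_pos, float)
        d = np.asarray(pixel_dir, float)
        d = d / np.linalg.norm(d)
        b = np.dot(o, d)
        disc = b * b - (np.dot(o, o) - radius ** 2)  # assumed >= 0 (ray hits)
        t = -b + np.sqrt(disc)                       # far intersection
        hit = o + t * d                              # point on the screen
        v = hit - np.asarray(viewer_pos, float)
        return v / np.linalg.norm(v)  # look-up direction in the source image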
A 3D workflow pipeline is presented for high dynamic range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments versus objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that supports both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the dedicated non-graphical input control device can be eliminated and replaced by wireless devices such as PDAs, smart phones, Tablet PCs, portable gaming consoles and Pocket PCs.
This paper presents an industrial application of VR that has been integrated as a core component of a virtual technical support system. The problems that often impede conventional technical support, and the ways in which a virtual technical support system can overcome them, are discussed. Field engineers can use the system to access improved information and knowledge through their laptop computers while on the job, thereby making the most of scarce investment resources. When used in synergy, VR, multimedia, coordinated multiple views and knowledge-based technologies can be shown to significantly reduce technical support costs. Initial results are presented showing the effectiveness and benefits of such a system for field engineer support in the water and ventilation hygiene industry.
Virtual Reality (VR) is regarded as a high-end user-computer interface that involves real-time simulation and interaction through multiple sensorial channels. It is assumed that VR will reshape the interfaces between users and computer technology by offering new approaches for the communication of information, the visualisation of processes and the creative expression of ideas. VR applications in construction have a relatively long history, but their success stories are not often heard.
In this paper, the authors explore how much further the construction industry could be supported by new three-dimensional (3D) VR technologies in different construction processes. Design information in the construction industry is discussed first, followed by a detailed analysis of construction processes. A questionnaire survey was conducted, and its results are presented and discussed. As an investigation into the application of 3D VR technologies in the context of construction processes, the benefits and challenges of current and potential applications of 3D VR in the construction industry are identified. The study also reveals the strengths and weaknesses of 3D VR technology applications in construction processes. Suggestions and future work are also provided.
The laparoscopic technique for performing abdominal surgery requires a very high degree of skill in the medical practitioner. Much interest has been focused on using computer graphics to provide simulators for training surgeons. Unfortunately, these tend to be complex and very expensive, which limits availability and restricts the length of time over which individuals can practice their skills. With computer game technology able to provide the graphics required for a surgical simulator, the cost does not have to be high. However, graphics alone cannot serve as a training simulator: human interface hardware, the equivalent of the force-feedback joystick for a flight simulator game, is required to complete the system. This paper presents a design for a very low cost device to address this vital issue. The design encompasses the mechanical construction, the electronic interfaces and the software protocols needed to mimic a laparoscopic surgical set-up. The surgeon thus has the capability of practicing two-handed procedures with the possibility of force feedback, and the force feedback and collision detection algorithms allow surgeons to practice realistic operating theatre procedures with a good degree of authenticity.
This paper discusses steps towards a methodology for creating perceptual shifts in virtual reality (VR) environments. A perceptual shift is the cognitive recognition of having experienced something extra-marginal, on the boundaries of normal awareness, outside of conditioned attenuation. Definitions of perceptual shifts demonstrate a historical tradition of wonder at devices, and various categories of sensory and optical illusions are analyzed. Neuroscience and cognitive science attempt to explain perceptual shifts through biological and perceptual mechanisms. This paper explores perspective, illusion and projection to situate an artistic process in terms of perceptual shifts. Most VR environments rely on a single perceptual shift, while enormous potential for further perceptual shifts in VR remains. Examples of artworks and VR environments are used to develop and present this idea.
As virtual/augmented reality evolves, the need becomes apparent for spaces that respond to structures independent of three-dimensional spatial constraints. The visual medium of computer graphics may also challenge these self-imposed constraints. If one can get used to how projection affects 3D objects in two dimensions, it may also be possible to compose a situation in which to get used to the variations that occur while moving through higher dimensions. The presented application is an enveloping landscape of concave and convex forms, determined by the orientation and displacement of the user in relation to a grid made of tesseracts (four-dimensional cubes). The interface accepts input from three-dimensional and four-dimensional transformations and smoothly displays such interactions in real time. The motion of the user becomes the graphic element, while the higher-dimensional grid references the user's position relative to it. The user learns how motion inputs affect the grid, recognizing a correlation between the input and the transformations. Mapping information to complex grids in virtual reality is valuable for engineers, artists and users in general, because navigation can be internalized like a dance pattern, further engaging us to maneuver through space in order to know and experience it.
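A minimal sketch of the 4D machinery (our illustration; the application's actual input mapping is richer): rotate points in one of the six independent rotation planes of R^4, then perspective-project onto 3D, in analogy with the familiar 3D-to-2D pinhole projection.

    import numpy as np

    def rotate_xw(points4, angle):
        # Rotation in the x-w plane of R^4.
        c, s = np.cos(angle), np.sin(angle)
        R = np.eye(4)
        R[0, 0], R[0, 3], R[3, 0], R[3, 3] = c, -s, s, c
        return points4 @ R.T

    def project_to_3d(points4, eye_w=3.0):
        # Perspective projection from w = eye_w onto the w = 0 hyperplane.
        scale = eye_w / (eye_w - points4[:, 3:4])
        return points4[:, :3] * scale

    # The 16 vertices of a tesseract.
    verts = np.array([[x, y, z, w]
                      for x in (-1, 1) for y in (-1, 1)
                      for z in (-1, 1) for w in (-1, 1)], float)
    cells3d = project_to_3d(rotate_xw(verts, 0.3))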
This paper will explore how the aesthetics of the virtual world affects, transforms, and enhances the immersive emotional experience of the user. What we see and what we do upon entering the virtual environment influences our feelings, mental state, physiological changes and sensibility. To create a unique virtual experience the important component to design is the beauty of the virtual world based on the aesthetics of the graphical objects such as textures, models, animation, and special effects. The aesthetic potency of the images that comprise the virtual environment can make the immersive experience much stronger and more compelling. The aesthetic qualities of the virtual world as born out through images and graphics can influence the user's state of mind. Particular changes and effects on the user can be induced through the application of techniques derived from the research fields of psychology, anthropology, biology, color theory, education, art therapy, music, and art history. Many contemporary artists and developers derive much inspiration for their work from their experience with traditional arts such as painting, sculpture, design, architecture and music. This knowledge helps them create a higher quality of images and stereo graphics in the virtual world. The understanding of the close relation between the aesthetic quality of the virtual environment and the resulting human perception is the key to developing an impressive virtual experience.
Virtual reality has been in the public eye for nearly forty years. Its early promise was vast: worlds we could visit and live in, if we could bend the technology to our desires. Progress was made, but along the way the original directions and challenges of fully immersive VR took a back seat to more ubiquitous technology such as games that provided many of the same functions. What was lost in this transition was the potential for VR to become a stage for encounters that are meaningful, those experiences that tap into what it means to be human. This paper describes examples of such experiences using VR technology and puts forward several avenues of thought concerning how we might reinvigorate these types of VR explorations.
The authors propose an inexpensive human interface for the teleoperation of mobile robots that presents a perspective-transformed image of a virtual 3D screen on a standard PC display. Conventional teleoperation systems for mobile robots have used multiple screens for multiple cameras, or a curved screen for a wide-view camera, both of which are expensive solutions intended only for professional use. We adopt a single standard PC display as the operator's display system to make the system affordable to all PC users. To make angular locations perceivable on a 2D display, we propose showing on the flat screen a perspective-transformed image of a virtual 180-degree cylindrical screen. The image shown on the 2D screen thereby preserves the angular information of the remote place, which helps the operator grasp the angular locations of the objects in the image. The results of our experiments indicate that perspective-transformed images of the cylindrical screen give the operator a better understanding of the remote world, enabling easier and more intuitive teleoperation.
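The mapping can be sketched as follows (our illustration, with an assumed 90-degree display field of view and unit cylinder radius): a point on the virtual cylinder at angle theta from straight ahead is perspective-projected onto the flat screen, so equal angular steps land at screen positions that preserve the angular cue.

    import numpy as np

    def cylinder_to_screen(theta, height, fov=np.radians(90), radius=1.0):
        # Viewer at the cylinder axis; valid while the point is in front
        # of the viewer (|theta| < pi/2).
        p = np.array([radius * np.sin(theta), height, radius * np.cos(theta)])
        x = p[0] / p[2] / np.tan(fov / 2)   # normalised screen coordinates
        y = p[1] / p[2] / np.tan(fov / 2)
        return x, y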
We describe an interactive software simulator that assists with the design of multi-camera setups for applications such as image-based virtual reality, three-dimensional reconstruction from still or video imagery, surveillance, etc. Instead of automating the camera placement process, our goal is to assist a user by means of a simulator that supports interactive placement and manipulation of multiple cameras within a pre-modeled three-dimensional environment. It provides a real-time 3D rendering of the environment, depicting the exact coverage of each camera (including indications of occluded and overlap regions) and the effective spatial resolution on the surfaces. The simulator can also indicate the dynamic coverage of pan-tilt-zoom cameras using "traces" to highlight areas that are reachable within a user-selectable interval. We describe the simulator, its underlying "engine" and its interface, and we show an example multi-camera setup for remote 3D medical consultation, including preliminary 3D reconstruction results.
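A single camera's coverage test might look like the following sketch (our simplification: a symmetric conical field of view and a range limit; the simulator additionally tests occlusion against scene geometry and computes per-surface resolution).

    import numpy as np

    def covered(point, cam_pos, cam_dir, fov=np.radians(60), max_range=10.0):
        v = np.asarray(point, float) - np.asarray(cam_pos, float)
        dist = np.linalg.norm(v)
        if dist == 0 or dist > max_range:
            return False
        d = np.asarray(cam_dir, float)
        cos_angle = np.dot(v / dist, d / np.linalg.norm(d))
        return cos_angle >= np.cos(fov / 2)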
This paper examines historical audio applications used to provide real-time immersive sound for CAVE™ environments and discusses their relative strengths and weaknesses. We examine and explain the issues of providing spatialized sound immersion in real-time virtual environments (VEs), some problems with currently used sound servers, and a set of requirements for an "ideal" sound server. We present the initial configuration of a new cross-platform sound server solution that uses open source software and the Open Sound Control (OSC) specification to create real-time spatialized audio for CAVE applications, specifically Ygdrasil (Yg) environments. The application, aNother Sound Server (NSS), establishes an application programming interface (API) using OSC, a logical server layer implemented in Python, and an audio engine using SuperCollider (SC). We discuss the spatialization implementation and other features. Finally, we document the Synthecology project, which premiered at WIRED NEXTFEST 2005 and was the first VE to use NSS. We also discuss various techniques that enhance presence in networked VEs, as well as possible and planned extensions of NSS.
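Client-side use of such a server might look like the sketch below (the OSC address patterns and arguments are invented for illustration; the actual NSS API is defined by the server, and SuperCollider's default port is assumed).

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 57120)   # assumed host and port

    # Hypothetical messages: load a sound, place it in the CAVE's
    # coordinate frame, and start playback.
    client.send_message("/nss/load", ["birds", "birds.wav"])
    client.send_message("/nss/position", ["birds", 1.5, 0.0, -2.0])  # x, y, z
    client.send_message("/nss/play", ["birds"])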