We propose an integral three-dimensional (3D) display system with a wide viewing zone and depth range using a time-division display and eye-tracking technology. In the proposed system, the optical viewing zone (OVZ) is narrowed to a size that only covers an eye to increase the light ray density using a lens array with a long focal length. In addition, a system with low crosstalk with respect to the viewer's movement is constructed by forming a combined OVZ (COVZ) that covers both eyes through a time-division display. Further, an eye-tracking directional backlight is used to dynamically control the COVZ and realize a wide system viewing zone (SVZ). The luminance unevenness is reduced by partially overlapping two OVZs. The combined OVZs form a COVZ with an angle ∼1.6 times larger than that of a single OVZ, and an SVZ of 81.4 deg horizontally and 47.6 deg vertically was achieved using the eye-tracking technology. A comparison of three display systems (the conventional system, our previously developed system, and the proposed system) confirmed that the proposed system reproduces 3D images over a wider depth range than the other two.
KEYWORDS: 3D acquisition, 3D displays, Photography, Optical resolution, 3D metrology, Image resolution, 3D image processing, Optical engineering, 3D vision, Visualization
We experimentally verified depth perception and the accommodation-convergence conflict in viewing integral photography. For comparison, the same measurements were performed with binocular stereoscopic images and real objects. First, depth perception in viewing an integral three-dimensional (3D) target was measured at three display resolutions: 153, 229, and 458 ppi. Depth perception was found to depend on the display resolution; a statistical test at a significance level of 5% showed that the recognized depth perception ranges were 180, 240, and 330 mm at 153, 229, and 458 ppi, respectively. An analysis in terms of image resolution suggested that depth perception occurred above 1.0 cpd. The accommodation and convergence responses in viewing an integral 3D target displayed on a 3D display with 458 ppi were measured using a PowerRef 3. Evaluating the results with a multiple comparison test, we found that 6 of the 10 observers had no accommodation-convergence conflict when viewing the integral 3D target either inside or outside the depth of field. In conclusion, integral photography can provide a natural 3D image that looks like a real object.
We studied an integral three-dimensional (3D) TV based on integral photography to develop a new form of broadcasting that provides a strong sense of presence. The integral 3D TV can display natural 3D images that have motion parallax in the horizontal and vertical directions. However, a large number of pixels are required to obtain superior 3D images. To improve image quality, we applied ultra-high-definition video technologies to an integral 3D TV system. Furthermore, we are developing several methods for combining multiple cameras and display devices to improve the quality of integral 3D images.
KEYWORDS: Cameras, 3D modeling, 3D image processing, 3D displays, Robotics, Integral imaging, Image resolution, Image processing, Zoom lenses, Stereoscopy
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork comparable to that of current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
We propose a method for arranging multiple projectors in parallel using an image-processing technique and for enlarging the viewing zone in an integral three-dimensional image display. We have developed a method to correct the projection distortion precisely using an image-processing technique combining projective and affine transformations. To combine the multiple viewing zones formed by each projector continuously and smoothly, we also devised a technique that provides accurate adjustment by generating the elemental images of a computer graphics model at high speed. We constructed a prototype device using four projectors equivalent to 4K resolution and realized a viewing zone with measured viewing angles of 49.2 deg horizontally and 45.2 deg vertically. Compared with the use of only one projector, the prototype device expanded the viewing angles by approximately two times in both the horizontal and vertical directions.
A compact integral three-dimensional (3D) imaging device for capturing high resolution 3D images has been developed that positions the lens array and image sensor close together. Unlike the conventional scheme, where a camera lens is used to project the elemental images generated by the lens array onto the image sensor, the developed device combines the lens array and image sensor into one unit and makes no use of a camera lens. In order to capture high resolution 3D images, a high resolution imaging sensor and a lens array composed of many elemental lenses are required, and in an experimental setup, a CMOS image sensor circuit patterned with multiple exposures and a multiple lens array were used. Two types of optics were implemented for controlling the depth of 3D images. The first type was a convex lens that is suitable for compressing a relatively large object space, and the second was an afocal lens array that is suitable for capturing a relatively small object space without depth distortion. The objects captured with the imaging device and depth control optics were reconstructed as 3D images by using display equipment consisting of a liquid crystal panel and a lens array. The reconstructed images were found to have appropriate motion parallax.
A three-dimensional (3D) capture system based on integral imaging with an enhanced viewing zone achieved using a camera array was developed. The viewing angle of the 3D image can be enlarged depending on the number of cameras comprising the camera array. The 3D image was captured using seven high-definition cameras and converted for display on a 3D display system with a 4K LCD panel, and it was confirmed that the viewing angle of the 3D image can be enlarged by a factor of 2.5 compared with that of a single camera.
The quality of the integral 3D images created by a 3D imaging system was improved by combining multiple LCDs to utilize a greater number of pixels than is possible with one LCD. A prototype of the display device was constructed using four HD LCDs. An integral photography (IP) image displayed by the prototype is four times larger than that reconstructed by a single display. The pixel pitch of the HD display used is 55.5 μm, and the number of elemental lenses is 212 horizontally and 119 vertically. The 3D image pixel count is 25,228, and the viewing angle is 28°. Since this method is extensible, it is possible to display an integral 3D image of higher quality by increasing the number of LCDs. This integral 3D display structure allows the whole device to be made thinner than a projector-based display system. It is therefore expected to be applied to home television in the future.
Dr. Fumio Okano, a well-known pioneer and innovator of three-dimensional (3D) displays, passed away on 26 November 2013 in Kanagawa, Japan, at the age of 61. Okano joined Japan Broadcasting Corporation (NHK) in Tokyo in 1978. In 1981, he began researching high-definition television (HDTV) cameras, HDTV systems, ultrahigh-definition television systems, and 3D televisions at NHK Science and Technology Research Laboratories. His publications have been frequently cited by other researchers. Okano served eight years as chair of the annual SPIE conference on Three-Dimensional Imaging, Visualization, and Display and another four years as co-chair. Okano's leadership in this field will be greatly missed and he will be remembered for his enduring contributions and innovations in the field of 3D displays. This paper is a summary of the career of Fumio Okano, as well as a tribute to that career and its lasting legacy.
KEYWORDS: 3D displays, LCDs, 3D image processing, Stereoscopic displays, Eye, 3D metrology, Image resolution, Holography, Photography, 3D image reconstruction
We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, each under both binocular and monocular viewing. The equipment comprised an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses with a focal length of 3 mm and a diameter of 1 mm, arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the display, on the display panel, and 5, 10, 15, and 30 cm behind it. Under the real object display condition, the target was displayed on the 3D display panel, and the display itself was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those for the real object under both viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.
KEYWORDS: 3D image processing, Imaging systems, Integral imaging, 3D displays, 3D image reconstruction, LCDs, 3D vision, Staring arrays, Multichannel imaging systems, Image quality
We developed a three-dimensional (3-D) imaging system with an enlarged horizontal viewing angle for integral imaging that uses our previously proposed method for controlling the ratio of the horizontal to vertical viewing angles by tilting the lens array used in a conventional integral imaging system. This ratio depends on the tilt angle of the lens array. We conducted an experiment to capture and display 3-D images and confirmed the validity of the proposed system.
Integral 3D television based on integral imaging requires huge amounts of information. Earlier, we built an integral 3D television using Super Hi-Vision (SHV) technology, with 7680 pixels horizontally and 4320 pixels vertically. Here we report on an improvement of image quality achieved by developing a new video system with an equivalent of 8000 scan lines and using it for integral 3D television. We conducted experiments to evaluate the resolution of 3D images using this prototype equipment and showed that the pixel-offset method eliminates the aliasing produced by the full-resolution SHV video equipment. As a result, we confirmed that the new prototype can generate 3D images with a depth range approximately twice that of integral 3D television using full-resolution SHV.
We present a method of changing the ratio of the horizontal to vertical viewing angles by rotating the lens array in integral imaging. We arranged elemental images whose width and height are not equal to the pitch of an elemental lens, keeping the total number of pixels in each elemental image invariant. Additionally, we rotated the lens array to avoid overlap between the specially shaped elemental images. We enlarged the horizontal viewing angle by arranging elemental images whose width is larger than both their height and the pitch of the elemental lens. We investigated the arrangement of these images and found that rotating the lens array changed the ratio of the horizontal to vertical viewing angles.
An integral 3DTV system needs high-density elemental images to increase the reconstructed 3D image's resolution, viewing zone, and depth representability. The dual-green pixel-offset method, which uses two green channels of images, is a means of achieving ultra-high-resolution imagery. We propose a precise and easy method for detecting the pixel-offset distance when a lens array is mounted in front of the integral imaging display. In this method, pattern luminance distributions based on sinusoidal waves are displayed on each green-channel panel. The difference between the phases (amount of phase variation) of these patterns is conserved when the patterns are sampled and transformed to a lower frequency by aliasing with the lens array. This allows the pixel-offset distance of the display panel to be measured in a magnified state. The contrast and the amount of phase variation of the pattern vary in opposite directions with respect to the pattern frequency, creating a trade-off. We derived a way to find the optimal spatial frequency of the pattern by regarding the product of the contrast and the amount of phase variation of the patterns as an indicator of accuracy. We also evaluated the pixel-offset detection method in an experiment with the developed display system. The results demonstrate that the resolution characteristics of the projected image were refined. We believe that this method can be used to improve the resolution characteristics in the depth direction of integral imaging.
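The phase-conservation idea above can be sketched numerically: the phase of each displayed sinusoidal pattern is recoverable by correlating the sampled waveform with a complex exponential at the known pattern frequency, and the difference of the two phases is the quantity the method tracks. The function name and all numeric values below are hypothetical illustrations, not parameters from the paper.

```python
import numpy as np

def estimate_phase(samples, freq, n):
    """Estimate the phase of a sinusoid of known frequency `freq`
    (in cycles per record) from `n` uniform samples by correlating
    with a complex exponential at that frequency."""
    k = np.arange(n)
    c = np.sum(samples * np.exp(-2j * np.pi * freq * k / n))
    return np.angle(c)

# Two green-channel test patterns with a known phase offset
# (synthetic stand-ins for the displayed luminance distributions).
n, freq, true_offset = 512, 8.0, 0.6  # offset in radians
k = np.arange(n)
g1 = 0.5 + 0.5 * np.cos(2 * np.pi * freq * k / n)
g2 = 0.5 + 0.5 * np.cos(2 * np.pi * freq * k / n + true_offset)

measured = estimate_phase(g2, freq, n) - estimate_phase(g1, freq, n)
print(round(measured, 3))  # → 0.6, the injected offset
```

Because the frequency is an integer number of cycles per record, the correlation isolates the pattern's phase exactly; in the paper's setting the same phase difference survives the aliasing introduced by the lens array.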
We present a method of generating stereoscopic images from moving pictures captured by a single high-definition television camera mounted on the Japanese lunar orbiter Kaguya (Selenological and Engineering Explorer, SELENE). Since objects in the moving pictures appear to move vertically, a vertical disparity arises from the time offset within the sequence. This vertical disparity is converted into horizontal disparity by rotating the images by 90 degrees, so the rotated images can be used as the left- and right-eye images of a stereoscopic pair. However, this causes spatial distortion resulting from the axi-asymmetrical positions of the corresponding left and right cameras. We reduced this distortion by adding a depth map obtained by assuming that the lunar surface is spherical. We confirmed that the correction method provides more acceptable views of the Moon.
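The 90-degree rotation trick can be illustrated with a toy example: a purely vertical image shift between two frames becomes a purely horizontal shift once both frames are rotated, which is what makes the rotated pair usable as left and right views. The frame contents and shift amount below are synthetic stand-ins, not Kaguya data.

```python
import numpy as np

# A toy frame and a later frame in which the scene appears shifted
# vertically by `k` pixels (standing in for the orbiter's motion).
rng = np.random.default_rng(0)
frame_t0 = rng.random((120, 160))
k = 7
frame_t1 = np.roll(frame_t0, k, axis=0)   # purely vertical disparity

# Rotating both frames by 90 degrees turns the vertical offset into a
# horizontal one, so the pair can serve as left/right stereo views.
left = np.rot90(frame_t0)
right = np.rot90(frame_t1)

# The disparity between `left` and `right` now lies along the row axis.
assert np.allclose(right, np.roll(left, k, axis=1))
```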
KEYWORDS: Modulation transfer functions, Spatial frequencies, 3D image processing, Image processing, Integral imaging, Image restoration, Modulation, 3D image reconstruction, Imaging systems, 3D displays
An integral imaging system uses a lens array to capture an object and display a three-dimensional (3-D) image of that object. In principle, the 3-D image is generated at the depth position of the object, but for an object located away from the lens array in the depth direction, the modulation transfer function (MTF) of the integral imaging system is degraded. In this paper, we propose a method that uses pupil modulation and depth-control processing to alleviate this MTF degradation. First, to reduce the changes in the MTF caused by differences in depth when capturing the object, we use a pupil-modulated elemental lens to obtain an elemental image. Next, we use a filter having characteristics opposite those of the MTF characteristics of the pupil-modulated elemental lens to restore the degraded image. Finally, we apply depth-control processing to the restored elemental image to generate a reconstructed image near the lens array. This method can alleviate the MTF degradation of the integral imaging system when an object is located at a distance from the lens array. We also show results of computer simulations that demonstrate the effectiveness of the proposed method.
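The restoration step, applying a filter with characteristics opposite those of the degraded MTF, can be sketched in one dimension. Here a Gaussian blur stands in for the depth-dependent MTF (an assumption for illustration; the paper's pupil-modulated MTF differs), and a regularized inverse filter undoes most of the degradation.

```python
import numpy as np

n = 256
x = np.zeros(n)
x[60:90] = 1.0          # a simple test object (top-hat edge pattern)

# Stand-in for the depth-dependent MTF: a Gaussian blur kernel.
t = np.arange(n) - n // 2
h = np.exp(-(t / 4.0) ** 2)
h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))   # kernel centered at index 0

blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))

# Restoration filter with characteristics opposite the MTF
# (regularized inverse, to keep the division well behaved).
eps = 1e-3
G = np.conj(H) / (np.abs(H) ** 2 + eps)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * G))

# The restored signal is closer to the original than the blurred one.
err_blur = np.linalg.norm(blurred - x)
err_rest = np.linalg.norm(restored - x)
assert err_rest < err_blur
```

The regularization constant `eps` prevents noise amplification where the MTF is near zero; without it, a plain inverse filter would blow up at high spatial frequencies.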
We geometrically analyzed the light rays incident on the pupil of an observer in integral 3D imaging. We derived the object depth range in which the image projected on the retina is not sampled by the lens array, as well as the depth range in which the observer's eye focuses on the reconstructed 3D images. These depth ranges depend on the pitch of an elemental lens constituting the lens array, the diameter of the observer's pupil, and the viewing distance. Further, we clarified that even when the eye cannot focus on the reconstructed 3D image, the influence of the inconsistency between accommodation and convergence can be alleviated.
KEYWORDS: Spatial frequencies, Image processing, Integral imaging, Image filtering, Imaging arrays, Process control, 3D image processing, 3D displays, Linear filtering, 3D image reconstruction
In integral imaging, lens arrays are used to capture the image of the object and display the three-dimensional (3-D) image. In principle, the 3-D image is reconstructed at the position where the object was. We have previously proposed a method for controlling the depth position of the reconstructed image by applying numerical processing to the captured image information. First, the rays from the object are regenerated numerically using information captured from the actual object together with a first virtual lens array. Next, the regenerated rays are used to generate 3-D information corresponding to a prescribed depth position by arranging a second virtual lens array. In this paper, we clarify the spatial frequency relationship between the object and the depth-controlled reconstructed image, and we propose filter characteristics that can be used to avoid aliasing. We also report on experiments in which we confirm the effectiveness of the proposed filter.
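The role of an anti-aliasing filter can be illustrated with a one-dimensional resampling toy model: without a low-pass prefilter, a component above the new Nyquist frequency folds into the pass band and corrupts the result, while a brick-wall prefilter leaves only the recoverable low-frequency content. All frequencies and sizes below are hypothetical, not the paper's filter design.

```python
import numpy as np

n, d = 1024, 4                  # record length and resampling factor
new_nyquist = n // d // 2       # highest frequency the coarser grid can carry
k = np.arange(n)

# A low-frequency component plus one well above the new Nyquist limit.
low = np.sin(2 * np.pi * 20 * k / n)
high = np.sin(2 * np.pi * 300 * k / n)
x = low + high

# Resampling without a prefilter folds the 300-cycle term into the
# pass band, where it masquerades as low-frequency content.
aliased = x[::d]

# Brick-wall low-pass prefilter: drop every component at or above the
# new Nyquist frequency before resampling.
X = np.fft.rfft(x)
X[new_nyquist:] = 0
prefiltered = np.fft.irfft(X, n)[::d]

# The prefiltered result matches the cleanly resampled low component;
# the unfiltered result does not.
m = np.arange(n // d)
expected = np.sin(2 * np.pi * 20 * m / (n // d))
assert np.allclose(prefiltered, expected, atol=1e-8)
assert not np.allclose(aliased, expected, atol=1e-2)
```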
KEYWORDS: 3D image processing, 3D image reconstruction, Image processing, Integral imaging, 3D displays, Data processing, Displays, 3D acquisition, Process control, Imaging systems
We propose a method for controlling the depth of three-dimensional (3-D) images by processing the captured elemental image data in an integral imaging system. Incoherent light reflected from 3-D objects propagates through a lens array and is captured as a first elemental image by a capturing device. First, the electric-field distribution in an arbitrary plane is generated using the first elemental image data and a second lens array; this computer-generated electric-field distribution is referred to as the "intermediate image." Next, a third lens array is assumed, and the elemental images of the intermediate image formed by the third lens array are calculated. Finally, to reconstruct the 3-D images, we use a conventional integral imaging display system. The depth of the reconstructed images can be controlled according to the distance from the second lens array to the third lens array. Experimental results showed that the depth of the 3-D image was arbitrarily controlled by the proposed method.
KEYWORDS: Distortion, 3D image processing, 3D image reconstruction, 3D displays, Video, Image quality, Signal processing, Imaging arrays, Projection systems, Video processing
An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electronically compensates for the distortion in the elemental images in an EHR projection-type integral 3-D system. With this processor, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.
The integral method is one of the ideal means of forming 3D spatial images that look like real objects. However, it requires an extremely high-resolution device to achieve sufficient resolution and a wide viewing angle. The authors have been examining integral 3D television systems based on the Super Hi-Vision (SHV) system, which uses ultrahigh-definition LCOS (D-ILA) devices. This paper describes the experimental integral 3D display and approaches to improving the quality of the elemental images, which are projected behind the lens array, by decreasing blur and improving registration accuracy. The display panels are four D-ILA chips (4096 × 2160 pixels), used for R, B, G1, and G2 (pixel-offset method). The optics of the R/B projector and the G1/G2 projector are accurately aligned by a half mirror, and the elemental images are formed on a 22-inch screen. The diffuser of the screen is a thin LC film with sufficient resolution and a homogeneous visual field. The lens array consists of newly developed short-focus lenses that enable a wide viewing angle for multiple viewers. A drastic improvement in 3D image quality has been achieved together with the electronic distortion correction technique.
The integral method enables observers to see 3D images like real objects. It requires extremely high resolution for both the capture and display stages. We present an experimental 3D television system based on the integral method using an extremely high-resolution video system. The video system has 4,000 scanning lines using the diagonal offset method for two green channels. The number of elemental lenses in the lens array is 140 (vertical) × 182 (horizontal). The viewing zone angle is wider than 20 degrees in practice. This television system can capture 3D objects and provides full-color, full-parallax 3D images in real time.
A lens array consisting of microlenses is normally used when capturing or displaying three-dimensional (3-D) images of a subject with the integral photography (IP) method. Elemental images produced through many microlenses (elemental lenses) are acquired during capture, and spatial images are produced from the elemental images and the lens array during display. In this case, geometric distortions and chromatic aberrations of the elemental lenses cause the reconstructed images to deteriorate. Capture and display have been performed with planar mirrors instead of elemental lenses to avoid these causes of deterioration; however, a structure using a planar mirror produces a pseudoscopic image in which the depth is reversed. In this paper, we confirm that an orthoscopic image can be produced by the combined use of a planar mirror array and a gradient-index lens array. In addition, we propose a method of avoiding pseudoscopic images by using different mirror arrays in the capturing equipment and the display setup, and we confirm its effects through experiments.
The visual-resolution characteristics of an array, comprising many elemental afocal optical units, for an optical viewer are investigated. First, it is confirmed by wave optics that lightwaves exiting the array will converge, forming a three-dimensional optical image. Next, it is shown that the convergence point and resolution characteristics depend on the angular magnification of the afocal unit. When the magnification is 1.0, the lightwaves focus on the convergence point, and the resolution characteristics depend only on the diffraction of the afocal units. At magnifications other than 1.0, the lightwaves do not focus on the convergence point, decreasing the resolution as a result. To clarify this result quantitatively, we determined the viewing distances at which the alignment of the afocal units is not perceptible when an optical image formed by the array is viewed by an observer, and we calculated the modulation transfer functions normalized by the viewing distance.
We propose an optical viewer based on the integral method. The viewer, composed of two GRIN lens arrays with a diffuser in between, can form observable three-dimensional images of objects. The length of the elemental GRIN lenses that constitute the arrays is three quarters of the cycle of the meandering ray path on the input side and one quarter of the cycle on the output side. By substituting an image intensifier for the diffuser as an optical amplifier, we were able to observe 3-D images of objects placed in a dark space without using a camera or display equipment. Preliminary experimental results proved that the viewer produces observable three-dimensional images. We also describe its visual resolution characteristics.
When designing a system capable of capturing and displaying three-dimensional (3-D) moving images in real time by the integral imaging (II) method, one challenge is to eliminate pseudoscopic images. To overcome this problem, we propose a simple system with an array of three convex lenses. This paper first describes, by geometrical optics, the lateral magnification of the elemental optics and the expansion of an elemental image, confirming that the elemental optics satisfies the conditions under which pseudoscopic images can be avoided. In the II method, adjacent elemental images must not overlap, a condition also satisfied by the proposed optical system. Next, the paper describes an experiment carried out to acquire and display 3-D images. The real-time system we constructed comprises an elemental optics array with 54(H) x 59(V) elements, a CCD camera to capture the group of elemental images created by the lens array, and a liquid crystal panel to display these images. The experimental results confirm that the system produces orthoscopic images in real time and is thus effective for real-time application of the II method.
The authors describe visual resolution characteristics of an array comprising many afocal optical units. It is shown by wave optics that light beams exiting the array will converge, forming a three-dimensional optical image. It is also clarified that the converging point and resolution are dependent on the angular magnification of the afocal unit. When the magnification is 1.0, the optical wave focuses on the converging point, and the resolution is dependent only on the diffraction of the afocal unit. If the magnification is not 1.0, the optical wave does not focus on the converging point, affecting the resolution as a result. Based on this, we have obtained viewing distances and object distances at which the effects on the resolution by diffraction or defocusing are not perceptible when an optical image formed by the array is viewed by an observer.
An afocal lens array is proposed to form three-dimensional (3D) images. The array, which is composed of many afocal optical units, can form an image whose depth position is dependent on the angular magnification of the unit. The point at which an image is formed by the whole array differs from that at which an image is formed by a single afocal unit, except when the angular magnification is 1.0. In particular, when the angular magnification has a negative value, the optical image has a negative longitudinal magnification, i.e., it is a 3D image with inverted depth. When used for integral imaging, the array can control the depth position and avoid pseudoscopic images with reversed depth.
Integral photography (IP) or integral imaging is a way to create natural-looking three-dimensional (3-D) images with full parallax. Integral three-dimensional television (integral 3-D TV) uses a method that electronically presents 3-D images in real time based on this IP method. The key component is a lens array comprising many micro-lenses for shooting and displaying. We have developed a prototype device with about 18,000 lenses using a super-high-definition camera with 2,000 scanning lines. Positional errors of these high-precision lenses as well as the camera's lenses will cause distortions in the elemental image, which directly affect the quality of the 3-D image and the viewing area. We have devised a way to compensate for such geometrical position errors and used it for the integral 3-D TV prototype, resulting in an improvement in both viewing zone and picture quality.
This paper describes a new means of controlling the depth positions of 3-D images for integral photography. A GRIN lens array is set in front of a lens array for image capturing. The length of each elemental GRIN lens composing the array is half of one period of the optical path. The GRIN lens array also avoids pseudoscopic effects with reversed depth. The depth position of the 3-D images is controlled by adjusting the distance between the GRIN lens array and the capturing lens array, thus producing 3-D images without depth distortion.
In an integral three-dimensional television (integral 3-D TV) system, 3-D images are reconstructed by integrating the light beams from elemental images captured by a pickup system. 160(H) x 118(V) elemental images are used for reconstruction in this system. We use a camera with 2000 scanning lines for the pickup system and a high-resolution liquid crystal display for the display system and have achieved an integral 3-D TV system with approximately 3000(H) x 2000(V) effective pixels. Comparisons with the theoretical resolution and viewing angle are performed, and it is shown that the resolution and viewing angle of the 3-D images are improved by about 2 times and 1.5 times, respectively, compared with the previous system. The accuracy of alignment of the microlenses is another factor that should be considered in an integral 3-D TV system. If the lens array of the pickup or display system is not aligned accurately, positional errors of the elemental images may occur, causing the 3-D image to be reconstructed at an incorrect position. The relation between positional errors of the elemental images and the reconstructed image is also shown. As a result, 3-D images reconstructed far from the lens array are greatly influenced by such positional errors.
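The sensitivity of distant reconstruction points to elemental-image positional errors can be sketched with a thin-lens ray argument (notation ours, not taken from the paper): with a gap $g$ between an elemental image and its lens, a lateral displacement $\delta$ of the elemental image tilts each reconstructed ray, shifting a point reconstructed at distance $z$ from the lens array:

```latex
% Hypothetical thin-lens sketch; g, \delta, z are our own notation.
\Delta\theta \approx \frac{\delta}{g},
\qquad
\Delta x \approx z \,\Delta\theta = \frac{z\,\delta}{g}
```

The lateral shift grows linearly with $z$, consistent with the observation that 3-D images reconstructed far from the lens array are the most affected by such positional errors.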
This paper proposes a new function of the two-dimensional lens array that is composed of many gradient-index lenses. The length of the lenses is an odd-integer multiple of the half period of the optical path. The array produces pseudoscopic three-dimensional (3D) images with reversed depth. Two lens arrays are positioned at a suitable distance so that orthoscopic 3D images with correct depth are formed in front of the lens arrays. The combined array captures, transmits and displays 3D images without other devices. A diffuser or opto-electronic amplifier can be inserted at the specific plane within the lens array.
A three-dimensional video system based on integral photography using a micro-lens array is described. Four problems with the system and technical means to solve them are proposed. In particular, resolution characteristics, including those of the pickup and display stages, are described in detail. The experimental system, using a television camera and a liquid crystal display (LCD), provides full-color, autostereoscopic 3-D images with full parallax in real time.
A real-time three-dimensional (3-D) pickup and display setup called a Real-time IP system is proposed. In this system, erected real images of an object are formed by a GRIN lens array as elemental images and are directly shot by a television camera. The video signal of the group of elemental images is transmitted to a display device that combines a liquid crystal panel display and a convex micro-lens array, producing a color 3-D image in real time. Full-color, autostereoscopic 3-D images with full parallax can be observed. We confirmed the feasibility of the 3-D television system.