The light field concept can correctly and completely describe the distribution of rays in 3D space within the theory of geometrical optics. However, the amount of data involved is huge and not easy to capture or process. Though light field 3D displays are almost ideal in principle, they are impractical given the huge number of pixels required. To compress the data, we proposed the visually equivalent light field (VELF), which exploits a characteristic of human vision. Though several cameras are needed, VELF can be captured by a camera array. Reconstructing the ray distribution involves linear blending, but the calculation is so simple that it can be realized optically in the VELF3D display. The display produces high image quality because its high pixel usage efficiency overcomes the tradeoff between resolution and the directional density of rays. In this paper, we summarize the relationship between the characteristics of human vision and VELF. We introduce the VELF3D display, which consists of a horizontal RGB-stripe LCD panel and a parallax barrier whose spacing is almost the same as the pixel pitch. Though it is similar to a conventional parallax-barrier autostereoscopic 3D display, it can reproduce rays that are correct for human vision. Its smooth and exact motion parallax induces a strong sense of presence, and its resolution is high enough to display characters. Head tracking allows the viewing zone to be greatly expanded while maintaining smooth motion parallax. Since image capture and display are both very simple, VELF is suitable for real-time live-action applications with high image quality.
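The linear blending used to reconstruct rays between cameras amounts to picking the two nearest cameras in the array and weighting them by the viewpoint's fractional position. A minimal sketch of that weight computation (the function name and the assumption of a horizontal, roughly equally spaced array are illustrative, not taken from the paper):

```python
import numpy as np

def blend_weights(viewpoint_x, camera_xs):
    """Return the indices of the two nearest cameras and their linear
    blend weights for a viewpoint on the horizontal camera line.

    Illustrative sketch: 'camera_xs' is assumed to be a sorted list of
    horizontal camera positions; viewpoints outside the array are
    clamped to the nearest camera pair.
    """
    camera_xs = np.asarray(camera_xs, dtype=float)
    # Index of the left camera of the bracketing pair.
    i = int(np.clip(np.searchsorted(camera_xs, viewpoint_x) - 1,
                    0, len(camera_xs) - 2))
    # Fractional position between the two cameras -> blend weights.
    t = (viewpoint_x - camera_xs[i]) / (camera_xs[i + 1] - camera_xs[i])
    t = float(np.clip(t, 0.0, 1.0))
    return i, i + 1, 1.0 - t, t
```

Because the weights are linear in the viewpoint position, the same computation can be realized optically, which is what makes the display-side reconstruction so simple.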
KEYWORDS: Cameras, Visualization, Video, 3D image processing, 3D displays, Prototyping, Video acceleration, Linear filtering, Image processing, 3D modeling
We present a real-time viewpoint image generation method. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies for achieving a high sense of reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images usually requires advanced image processing that is too heavy for real-time, low-latency use. In this paper, we propose a real-time viewpoint image generation method that simply blends multiple camera images taken at equal horizontal intervals, with convergence obtained from approximate information about an object's depth. An image generated from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image can still be generated by region splitting. We built a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD video. Users can see their individual viewpoint image for left-and-right and back-and-forth movement relative to the screen. Our algorithm is very simple and is promising as a means of achieving video communication with high reality.
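The core per-frame operation described above is a weighted average of the two nearest camera images, which the DFD effect makes the viewer perceive as an intermediate viewpoint. A minimal sketch, assuming aligned same-size images and a blend parameter alpha in [0, 1] derived from the viewpoint position (the function name is hypothetical, not the authors' API):

```python
import numpy as np

def intermediate_view(img_left, img_right, alpha):
    """Linearly blend the two nearest camera images.

    alpha = 0 reproduces the left camera image, alpha = 1 the right;
    intermediate values are perceived as intermediate viewpoints via
    the depth-fused 3D (DFD) effect. Illustrative sketch only; the
    paper's full pipeline also handles convergence from approximate
    depth and region splitting for off-line viewpoints.
    """
    img_left = img_left.astype(np.float32)
    img_right = img_right.astype(np.float32)
    blended = (1.0 - alpha) * img_left + alpha * img_right
    return np.clip(blended, 0, 255).astype(np.uint8)
```

Since this is a single per-pixel multiply-add with no disparity estimation, it is cheap enough for the real-time full-frame stereo HD operation the abstract reports.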
KEYWORDS: 3D displays, Projection systems, Scattering, LCDs, Light scattering, 3D image processing, Visualization, Polarization, 3D volumetric displays, Head
A new depth-fused 3-D (DFD) display for multiple users is presented. A DFD display, which consists of a stack of layered screens, is expected to be a visually comfortable 3-D display because it can satisfy not only binocular disparity, convergence, and accommodation, but also motion parallax for a small observer displacement. However, the display cannot be observed from an oblique angle due to image doubling caused by the layered screen structure, so it is applicable only to single-observer use. In this paper, we present a multi-viewing-zone DFD display using a stack of a see-through screen and a multi-viewing-zone 2-D display. We used a film that causes polarization-selective scattering as the front screen, and an anisotropic scattering film as the rear screen. The front screen was illuminated by one projector and displayed an image at all viewing angles. The rear screen was illuminated by multiple projectors from different directions. The images displayed on the rear screen were arranged to overlap well for each viewing direction, creating multiple viewing zones without image doubling. This design is promising for a large-area 3-D display that does not require special glasses, because it uses projection and has a simple structure.
A new type of holographic polymer dispersed liquid crystal (HPDLC) device in which liquid-crystal (LC) alignment is controlled by polymer layers has been developed. An LC containing light-curable prepolymer was placed between two substrates with anti-parallel alignment layers and cured using the interferential fringes of Ar+ laser light. Photopolymerization occurred at the peaks of the interferential fringes, forming polymer networks, while LC layers formed at the nodes of the fringes. The LC molecules contained in the polymer layers were fixed so that they were aligned parallel to the substrates. Therefore, the polymer layers could control the LC molecules in the LC layers and act as alignment layers. This alignment-controlled HPDLC device has the potential to make effective use of the refractive index anisotropy of the LC. Its fundamental operation has been confirmed experimentally.