Open Access
13 June 2018
Special Section Guest Editorial: Light Field and Holographic Displays: New Trends in 3-D Imaging and Visualization
Jung-Young Son, Michael T. Eismann, Sumio Yano, Jose Manuel Rodriguez-Ramos, Yi-Pei Huang
Abstract
This guest editorial introduces the special section on Light Field and Holographic Displays.

The ultimate goal of 3-D imaging is to recreate the natural viewing condition through a display. This goal remains far from being realized, even though the first 3-D imaging device was introduced 180 years ago. The delay is mainly due to the absence of displays specialized for 3-D imaging, which is also why the market for 3-D imaging remains limited despite its ability to provide a sense of depth.

Current 3-D imaging is built on display panels designed for planar images. These panels have not allowed 3-D imaging to realize its full potential because their pixels are too large and their resolution too low to support multiview 3-D viewing of sufficient quality. The most serious problem is a narrow focusable depth range, i.e., the range within which the viewers' accommodation and convergence are not in conflict with each other. This range is approximately ±0.3 D (diopters) around the optimum viewing distance of the imaging system. It represents the depth of field (DOF) of the viewers' eyes and exists only close to the display panel or screen of the imaging system.

Because of this narrow focusable depth range, most multiview 3-D imaging systems suffer from accommodation-convergence conflict (ACC), the main cause of eye fatigue and the symptoms derived from it. ACC is induced when the convergence of a viewer's two eyes does not match the accommodation of each eye. This mismatch degrades the main depth cues of multiview 3-D imaging: binocular and motion parallax.

The second problem is the presence of an optimum viewing distance and viewing zone in current multiview 3-D imaging. This zone restricts viewing positions to a narrow space along the depth direction. Since the restriction arises from the optical principle of depth perception in the imaging, it is considered inherent, but there is still room to extend the zone. It should be extended as much as possible to give viewers more freedom of movement and to allow more simultaneous viewers.

Other problems include image distortion, chromatic distortion, image noise, and low individual image resolution. However, the recent introduction of flat panel displays with ultrahigh definition (UHD) resolution may help address the above problems.

Light field (LF) and electro-holographic (EH) imaging have been introduced to resolve the ACC problem by extending the focusable depth range beyond the diopter range allowed by the DOF of the viewers' eyes. Although the name "light field" has been attached to many 3-D displays developed for computer graphics, the two types of imaging are essentially new names for multiview 3-D imaging that provides continuous parallax and for holographic imaging that displays a hologram electronically on a display panel/chip. Hence, it is not difficult to imagine that current flat or curved display panels for planar imaging can also serve as the main display panels for both LF and EH imaging, and that the three types of imaging could be displayed simultaneously on a single panel.

The strong market preference for organic light-emitting diode (OLED) displays over liquid crystal displays (LCDs), however, makes this prediction somewhat vulnerable, because combining the three imaging types on an OLED display panel will require extra effort compared with LCDs. An OLED display is an active display, and the light from each pixel is incoherent; EH imaging, which requires coherent light sources, can hardly be realized on an OLED display. In addition, current curved or flat display panels are not optimized for LF and EH imaging in terms of pixel size, pixel resolution and regularity, and panel size: the pixels are too large and the resolution too low, especially for OLEDs.

Moreover, the regularity of the pixel alignment in the panel/chip induces light losses, and chip sizes are too small for EH imaging. The smallest pixel size used for EH imaging was 1 μm2, but the chip size was only 1 mm2, too small to display a resolvable holographic image. A display chip of approximately 37×21 mm2, with a pixel size of 4.8×4.8 μm2 and a resolution of 7680×4320, has also been used to display a hologram, but its viewing zone angle was too small for viewing the reconstructed image. For this reason, display chips have been built with a size of 50×50 mm2 and a pixel size of 1 μm2 for EH imaging, and with 11k resolution and a pixel size of nearly 11×11 μm2 for LF imaging.
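The link between pixel pitch and viewing zone angle follows from the grating equation: a pixelated holographic panel diffracts light into a cone whose half-angle satisfies sin θ = λ/(2p), where p is the pixel pitch. The sketch below is an illustrative calculation, not taken from the editorial; the 633-nm red laser wavelength is an assumption. It shows why the 4.8-μm-pitch chip yields only a few degrees of viewing zone, while the 1-μm pixels motivate the chips built for EH imaging:

```python
import math

def viewing_zone_angle_deg(pixel_pitch_um, wavelength_nm=633.0):
    """Full viewing-zone angle of a pixelated holographic display,
    estimated from the grating equation sin(theta) = lambda / (2 * pitch)."""
    wavelength_um = wavelength_nm / 1000.0
    theta_max = math.asin(wavelength_um / (2.0 * pixel_pitch_um))
    return 2.0 * math.degrees(theta_max)

# 4.8-um pixels (the 7680x4320 chip): only a few degrees of viewing zone
print(round(viewing_zone_angle_deg(4.8), 1))   # 7.6 deg
# 1-um pixels: a much wider zone, hence the push toward smaller pixels
print(round(viewing_zone_angle_deg(1.0), 1))   # 36.9 deg
```

The roughly fivefold gain in viewing zone angle is the reason the smaller pixel pitch is pursued despite the fabrication difficulty.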

These developments will fulfill two essential 3-D display requirements derived from the natural viewing of a scene/object in its surrounding environment: 1) displaying an image of the scene/object demagnified equally in all three dimensions to the size of the active surface of the display panel, and 2) providing continuous parallax in all directions. The first requirement means that, apart from size, there should be no differences between the displayed image and the real object/scene, and that restrictions such as viewing angle, position, distance, and the image space where the perceived image can be located should be relaxed. These are the essential ingredients for creating an immersive atmosphere with 3-D displays. The second requirement means that the perceived image should have the nature of a spatial image so that viewers do not suffer from ACC; this is essential for interacting with a 3-D image. Furthermore, the disparity between the multiview images in the image set should be very small to provide continuous parallax; such image sets can be obtained with a plenoptic camera or an aperture-sharing camera.

LF and EH displays have the greatest potential to meet these requirements; however, they require advances in optical components such as viewing zone forming optics (VZFO) and display panels. For light field displays, the resolving power of the VZFO will be the bottleneck. Each elemental lens in the VZFO should resolve all pixels and subpixels in the pixel cell/elemental image beneath it; pixels the lens cannot resolve are lost, reducing the total number of resolved view images. If the width of the viewing region for each image at a viewing position is assumed to be 1.5 mm, to satisfy the supermultiview condition that defines a light field display, the required pixel size is 5 μm when the viewing distance from the VZFO is 300× the elemental lens's focal length. To resolve all the different view images loaded on the display panel, the elemental lens must therefore resolve features of at least 5 μm, which corresponds to the focused beam size of the 8-element compound objective lens used in high-end mobile phones. In comparison, a typical microlens or lenticular array used as the VZFO in current contact-type 3-D displays is only a single lens and, therefore, may not be able to resolve this size.
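The 5-μm figure follows from simple magnification geometry: each elemental lens projects its pixel cell to the viewing plane with a magnification roughly equal to the ratio of the viewing distance to the lens's focal length, so the pixel pitch must be the per-view width divided by that ratio. A minimal sketch of this arithmetic (the function name and interface are illustrative, not from the editorial):

```python
def required_pixel_size_um(view_width_mm, distance_to_focal_ratio):
    """Pixel pitch needed so that each view occupies view_width_mm at the
    viewing plane, when the elemental lens magnifies its pixel cell by
    (viewing distance / focal length) = distance_to_focal_ratio."""
    return view_width_mm * 1000.0 / distance_to_focal_ratio

# 1.5-mm view width, viewing distance = 300x the elemental lens focal length
print(required_pixel_size_um(1.5, 300))  # 5.0 um
```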

The natural viewing condition is realized by the supermultiview condition in LF imaging, which requires the pupil of each of the viewer's eyes to receive at least two images simultaneously without any overlap between them. It has been observed that this extends the DOF beyond that for a stereo image. For example, when a viewer watches a stereo image, the focusable depth range where he/she can accommodate and converge his/her eyes on the image extends to ±0.3 D without ACC. The D represents the diopter value 1/d, where d is the viewer's distance in meters from the screen/panel. At a viewing distance of 750 mm (1/0.75≈1.33 D), ±0.3 D means the focusable depth range extends from 1.03 D to 1.63 D. These diopter values correspond to a distance range of 613 mm to 971 mm, i.e., viewers can accommodate and converge their eyes on the image up to 137 mm in front of and 221 mm behind the panel or screen. This DOF value is a characteristic of human eyes, but when the number of images projected simultaneously to each eye increases to 2, 3, 4, and more, the DOF increases further with the number of images.
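The diopter arithmetic above can be sketched as follows. The function is illustrative, not from the editorial; note that the quoted 613/971 mm figures round 1.333 D down to 1.33 D, so the exact computation differs by a few millimeters:

```python
def focusable_range_mm(viewing_distance_mm, dof_diopters=0.3):
    """Focusable depth range around a screen at the given viewing distance,
    using the +/-0.3 D depth of field of the human eye."""
    d0 = 1000.0 / viewing_distance_mm    # screen distance in diopters
    near = 1000.0 / (d0 + dof_diopters)  # nearest focusable distance (mm)
    far = 1000.0 / (d0 - dof_diopters)   # farthest focusable distance (mm)
    return near, far

near, far = focusable_range_mm(750.0)
print(round(near), round(far))  # 612 968 (editorial's 613/971 use 1.33 D rounded)
print(round(750 - near), round(far - 750))  # depth in front of / behind the screen
```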

In this special section, recent advances in 3-D imaging technologies regarding light field and holographic displays are presented by many well-known researchers in the field. Many of the papers address the major challenges in multiview 3-D displays outlined in this introduction and expand on solution approaches aimed at providing the quality of 3-D displays necessary to significantly increase market demand.

Biography

Jung-Young Son earned his bachelor of science degree in avionics from the Civil Aviation University of Korea in 1973, and master of science in electronic engineering and PhD in engineering science with a major in optics from the University of Tennessee, Knoxville, USA, in 1982 and 1985, respectively. He also earned a doctor of technical science degree from National Technical University of Ukraine “Kiev Polytechnic Institute” in 2011. He is currently a chair professor in the Biomedical Engineering Department of Konyang University, Nonsan, Chungnam, Korea, an SPIE fellow, and an Academician of Technological Sciences of Ukraine since 2008. He has chaired the SPIE Three-Dimensional Display and Visualization Conference since 2005 with Prof. Bahram Javidi of Univ. of Connecticut. He also serves as an associate editor of Optical Engineering. His main research interest is 3-D imaging, including electroholography.

Michael Eismann is the chief scientist of the Sensors Directorate of the Air Force Research Laboratory (AFRL), adjunct professor in the Engineering Physics Department of the Air Force Institute of Technology, and Editor-in-Chief of Optical Engineering. He received a BS in physics from Thomas Moore College in 1985, MS in electrical engineering from the Georgia Institute of Technology in 1987, and a PhD in electro-optics from the University of Dayton in 2004. He is a Fellow of AFRL and SPIE with research interests in hyperspectral remote sensing, infrared imaging, and statistical signal processing.

Sumio Yano is a professor at Shimane University, Matsue, Japan. He received his BS in engineering and DrEng from The University of Electro-Communications, Tokyo. He was a research member involved in the development of HDTV and the human vision system at Japan Broadcasting Corporation (NHK). He was also a member of the development team of the stereoscopic HDTV system at NHK. He worked on the human vision system at Advanced Telecommunications Research Institute International, Kyoto, and developed a light field display system at the National Institute of Information and Communications Technology, Tokyo. His research interest is the human vision system and its application to image systems.

Jose Manuel Rodriguez-Ramos received his BS in astrophysics in 1990 and PhD in physics in 1997 at the University of La Laguna (ULL), Spain. He has been a research and postdoctoral fellow at the IAC and an associate professor and vice dean of the Engineering Faculty at the ULL. Presently, he is CEO of Wooptix, a technology company in the portfolio of Intel Capital, specializing in image processing and wavefront phase acquisition. His research interests include adaptive optics, 3-D reconstruction through turbulent media, and light field acquisition using mobile devices.

Yi-Pai Huang is a professor in the Department of Photonics, the associate vice president of R&D, and the director of the Display Center at National Chiao Tung University. His research interests include liquid crystal devices, AR/VR and 3-D technology, and display optics. He was awarded the SID Peter Brody Prize, the Google Research Award, and the SPIE Fumio Okano Best 3D Paper Prize, and has received the Society for Information Display (SID) distinguished paper award seven times. He is also currently the Honorable Director of the 3D Interaction & Display Association (3DIDA) in Taiwan and the Chairman of the SID Taipei chapter.

© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE)
Jung-Young Son, Michael T. Eismann, Sumio Yano, Jose Manuel Rodrigues Ramos, and Yi-Pei Huang "Special Section Guest Editorial: Light Field and Holographic Displays: New Trends in 3-D Imaging and Visualization," Optical Engineering 57(6), 061601 (13 June 2018). https://doi.org/10.1117/1.OE.57.6.061601
Published: 13 June 2018
KEYWORDS
3D displays, stereoscopy, LCDs, holography, 3D image processing, visualization, 3D visualizations