Integral three-dimensional display system with wide viewing zone and depth range using time-division display and eye-tracking technology
Abstract

We propose an integral three-dimensional (3D) display system with a wide viewing zone and depth range that uses a time-division display and eye-tracking technology. In the proposed system, the optical viewing zone (OVZ) is narrowed to a size that covers only one eye to increase the light ray density using a lens array with a long focal length. In addition, a system with low crosstalk with respect to the viewer's movement is constructed by forming a combined OVZ (COVZ) that covers both eyes through a time-division display. Further, an eye-tracking directional backlight is used to dynamically control the COVZ and realize a wide system viewing zone (SVZ). Luminance unevenness is reduced by partially overlapping the two OVZs. The combination of OVZs formed a COVZ with an angle ∼1.6 times that of a single OVZ, and an SVZ of 81.4 deg and 47.6 deg in the horizontal and vertical directions, respectively, was achieved using the eye-tracking technology. A comparison of three display systems (i.e., the conventional system, our previously developed system, and the currently proposed system) confirmed that the depth range of the 3D images in the proposed system is wider than that of the other systems.

1. Introduction

The integral three-dimensional (3D) display is a 3D imaging system based on integral photography, proposed by Lippmann.1 This promising 3D display technique can potentially be applied to various fields and has been actively researched because it enables viewing of natural full-parallax images without special glasses.2–11 However, the viewing zone, spatial resolution, and depth range of the reconstructed 3D images are limited because the integral 3D display must reconstruct a large amount of light ray information with a finite number of pixels.12,13 Therefore, a system design that efficiently increases the light ray density is necessary for displaying 3D images with a wide viewing zone, high spatial resolution, and wide depth range.

To date, various design methods for the viewing zones of the integral 3D display system have been reported. As shown in Fig. 1, they are classified into three types: optical viewing zone (OVZ), combined OVZ (COVZ), and system viewing zone (SVZ). OVZ is determined by the design of the elemental image array (EIA) and lens array, as shown in Fig. 1(a). Generally, the OVZ angle θovz is expressed as

Eq. (1)

$\theta_{\mathrm{ovz}} = 2\arctan\left(\frac{e}{2f}\right)$,
where e is the width of the elemental image and f is the focal length of the lens array. There is a trade-off between the OVZ angle and the light ray density: narrowing the OVZ increases the light ray density, resulting in a wider depth range. Furthermore, when the EIA is displayed with diffused light, side lobe OVZs are formed by light passing through adjacent lenses, in addition to the main lobe OVZ formed by light passing through the corresponding lens. In contrast, the COVZ is formed by combining multiple OVZs using directional light from different directions, as shown in Fig. 1(b); because directional light produces an OVZ without side lobes, multiple such OVZs can be stitched into a COVZ. Finally, the SVZ is formed by generating the EIA in real time and dynamically controlling the OVZ, as shown in Fig. 1(c). This is achieved by deriving the viewer's eye position using eye-tracking technology, generating the EIA for each frame, and forming an OVZ based on the eye position of the viewer.
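To make Eq. (1) concrete, here is a minimal Python sketch; the example values (elemental image width e = 0.5 mm, focal length f = 1.0 mm) are those of the conventional design listed later in Table 2.

```python
import math

def ovz_angle_deg(e_mm: float, f_mm: float) -> float:
    """Eq. (1): OVZ angle theta_ovz = 2 * arctan(e / (2f))."""
    return math.degrees(2 * math.atan(e_mm / (2 * f_mm)))

# Elemental image width 0.5 mm and focal length 1.0 mm (the conventional
# design of Sec. 4.2) give an OVZ of ~28.1 deg:
print(f"{ovz_angle_deg(0.5, 1.0):.1f} deg")
```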

Fig. 1 Classification of viewing zones in the integral 3D display systems: (a) OVZ, (b) COVZ, and (c) SVZ.

Thus, the viewing zone and light ray density can be enhanced through both the optical and system-level design of viewing zone formation. We therefore propose a method that forms both a COVZ and an SVZ using a time-division display and eye-tracking technology, realizing a wide viewing zone and depth range. Furthermore, we construct a prototype system for displaying integral 3D images with a wide viewing zone and depth range by applying this novel viewing zone design, and we evaluate its characteristics.

2. Related Work

2.1. 3D Display System that Forms COVZ

Previous studies have reported methods that use multiple projectors to form a COVZ.14–16 A collimating lens placed in front of each projector gives directivity to the projected light, which then passes through the lens array to form an OVZ without side lobes. Projecting from a different angle with another projector forms an additional side-lobe-free OVZ whose position is determined by the projection angle. Combining multiple OVZs in this way forms a COVZ and expands the viewing zone.

A method of forming a COVZ via a time-division display using a transmissive liquid-crystal display (LCD) and a directional backlight was proposed. Yang et al. constructed a directional backlight using a light-emitting diode (LED) array and formed a COVZ with a time-division display.17 Further, Liu et al. formed 40 deg OVZs in three directions using multidirectional backlight units and displayed a 3D image with a wide viewing zone of 120 deg in the horizontal direction using a lenticular lens.18

Because the aforementioned methods use projectors or LED arrays, the COVZ is static. In addition, they are primarily aimed at expanding the viewing zone by forming the COVZ and do not consider narrowing the OVZ to increase the light ray density.

2.2. 3D Display System that Forms SVZ

We previously proposed a method to achieve a wide viewing zone by generating the EIA in real time based on the eye position of the viewer, dynamically controlling the OVZ, and thereby forming an SVZ.19 Furthermore, the light ray density was increased by narrowing the OVZ to a single viewer using a lens array with a long focal length, which expanded the depth range. In addition, unlike a binocular system, the eye-tracking integral 3D system forms an OVZ with margins outside both eyes, which suppresses the effect of system latency and reduces the crosstalk caused by the viewer's movement. Accordingly, we constructed a system using a lens design that widens the OVZ horizontally and showed that an OVZ margin of 101 mm or more is robust to the viewer's movement. In the previous design, increasing the light ray density to expand the depth range requires a lens array with an even longer focal length, but the resulting OVZ can then cover only one eye.

To solve this problem, in this paper we propose a method that further expands the depth range by forming both the COVZ and SVZ using a time-division display and eye-tracking technology, while maintaining the wide viewing zone of the previous design and its robustness to the viewer's movement.

3. Proposed Method

The light ray density is low in the conventional integral 3D display, which consists only of a lens array and a display, because a wide OVZ is formed so that multiple viewers can see the image simultaneously, as shown in Fig. 2(a). Therefore, in our previously developed design,19 the audience was limited to a single viewer, the focal length of the lens array was lengthened to increase the light ray density, and eye-tracking technology was used to dynamically move the OVZ. As a result, a wide SVZ was realized, as shown in Fig. 2(b). In addition, by designing a lens array arrangement that forms a horizontally widened OVZ, it is possible to leave a margin in the OVZ around both eyes and reduce the occurrence of crosstalk due to system latency when the viewer moves. However, the OVZ cannot be narrowed further in that design because it must cover both eyes. Therefore, in this study, we propose a method that narrows the OVZ further and increases the light ray density.

Fig. 2 Comparison of the system configuration and OVZ (COVZ) at the viewing position using the (a) conventional, (b) previous,19 and (c) proposed designs.

In the proposed method, an OVZ covering a single eye is formed, as shown in Fig. 2(c). The OVZs for the left and right eyes are then alternately switched using the directional backlight and time-division display to form a COVZ large enough to cover both eyes. As a result, a COVZ of the same size as in the previous design is formed, which makes the system robust, with low crosstalk, to the viewer's movement.

To form a COVZ using the time-division display, the side lobe OVZs must be removed. Therefore, we built a directional backlight composed of a point light source and a convex lens. In addition, the direction of the light must change dynamically with the eye position. Accordingly, the point light source is displayed as an image on a display panel, and the light ray direction is changed dynamically by shifting the position of the point light source according to the eye position. The proposed design realizes a display with a higher light ray density than the previous design, which formed an OVZ covering both eyes. Moreover, because a COVZ of the same size as in the previous design is formed, crosstalk with respect to the viewer's movement remains low. Furthermore, a wide SVZ is realized because the COVZ can be moved dynamically. Table 1 compares the technical features of the conventional, previous, and proposed designs; a minimal sketch of the per-frame switching follows the table.

Table 1

Comparison of the technical features of the conventional, previous, and proposed designs.

                        | Conventional design | Previous design19 | Proposed design
Time-division display   | No                  | No                | Yes
Eye-tracking technology | No                  | Yes               | Yes
Light ray density       | Low                 | Middle            | High
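To make the per-frame switching concrete, the sketch below alternates the targeted eye each frame and computes a point light source offset under a simplified thin-lens model; the assumption that the source panel lies near the focal plane of the 200 mm Fresnel lens described in Sec. 4.1, and the 65 mm interpupillary distance, are our illustrative choices, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Eye:
    x_mm: float  # horizontal offset from the display axis
    z_mm: float  # viewing distance

def point_source_offset_mm(eye: Eye, f_backlight_mm: float = 200.0) -> float:
    """Lateral offset of the point light source on the backlight LCD.

    Thin-lens sketch: with the source panel near the focal plane of the
    Fresnel lens, an offset of -f * (x / z) steers the collimated beam
    toward an eye at angle arctan(x / z) from the display axis."""
    return -f_backlight_mm * (eye.x_mm / eye.z_mm)

# Alternate the targeted eye every frame (60 Hz time-division display);
# the left- and right-eye OVZs formed in turn combine into the COVZ.
left, right = Eye(-32.5, 700.0), Eye(+32.5, 700.0)  # assumed 65 mm IPD
for frame in range(4):
    eye = left if frame % 2 == 0 else right
    print(f"frame {frame}: source offset {point_source_offset_mm(eye):+.2f} mm")
```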

Using the horizontal and vertical eye-detectable angles of the eye-tracking device, θtrack,h and θtrack,v, the horizontal and vertical SVZ angles, θsvz,h and θsvz,v, are expressed by the following equations:

Eq. (2)

$\theta_{\mathrm{svz},h} \approx \theta_{\mathrm{track},h} + \theta_{\mathrm{covz},h} - \theta_{\mathrm{ipd}}$,

Eq. (3)

$\theta_{\mathrm{svz},v} \approx \theta_{\mathrm{track},v} + \theta_{\mathrm{covz},v}$,
where θcovz,h and θcovz,v are the horizontal and vertical COVZ angles, respectively.

The angle θipd formed by the interpupillary distance dipd and viewing distance L is given by

Eq. (4)

$\theta_{\mathrm{ipd}} = 2\arctan\left(\frac{d_{\mathrm{ipd}}}{2L}\right)$.
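As a worked example, treating Eq. (2) as an equality and assuming a typical interpupillary distance of 65 mm at the 700 mm viewing distance used in the prototype, Eq. (4) gives θipd ≈ 5.3 deg; combined with the 21.4 deg COVZ reported in Sec. 5.2, the measured 81.4 deg horizontal SVZ would then correspond to an effective eye-detectable angle of roughly 65 deg.

```python
import math

def ipd_angle_deg(d_ipd_mm: float, L_mm: float) -> float:
    """Eq. (4): angle subtended by the interpupillary distance."""
    return math.degrees(2 * math.atan(d_ipd_mm / (2 * L_mm)))

def svz_h_deg(track_h_deg: float, covz_h_deg: float,
              d_ipd_mm: float = 65.0, L_mm: float = 700.0) -> float:
    """Eq. (2), treated as an equality, for the horizontal SVZ angle."""
    return track_h_deg + covz_h_deg - ipd_angle_deg(d_ipd_mm, L_mm)

print(f"theta_ipd = {ipd_angle_deg(65.0, 700.0):.1f} deg")  # ~5.3 deg
# Effective eye-detectable angle implied by the measured 81.4 deg SVZ
# and the 21.4 deg COVZ of Sec. 5.2:
print(f"{81.4 + ipd_angle_deg(65.0, 700.0) - 21.4:.1f} deg")  # ~65.3 deg
print(f"{svz_h_deg(65.3, 21.4):.1f} deg")                     # ~81.4 deg
```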

4. Construction of the Prototype System

4.1. System Configuration

Figure 3 shows the configuration of the system for generating the point light source image and the EIA based on the eye position; the time-division display for each eye is performed alternately. The prototype of the proposed system was constructed using an eye-tracking directional backlight, an LCD for the EIA display, a lens array, a camera for eye tracking, and a PC for image generation (Figs. 3 and 4). The eye-tracking directional backlight was produced using a convex Fresnel lens with a focal length of 200 mm and an LCD for displaying the point light source image. The refresh rate of both LCDs (for the point light source image and the EIA) was 60 Hz, and they were synchronized by outputting the images from the same graphics board. A camera (Logicool C922) was used for capturing the viewer, and Dlib,20 a machine learning library, was employed for estimating the 3D positions of the eyes. The camera operates at 60 frames per second (fps) with a 640 × 360 resolution and a 78 deg lens angle of view. The estimation error of the eye position was suppressed below 10 mm in all directions. A high-pixel-density LCD manufactured by Sharp, with a pixel density of 537 pixels per inch (ppi) and a resolution of 2560 × 1440, was used to display the EIA.
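For reference, a generic sketch of eye-position estimation with Dlib is shown below; the 68-point landmark model and the pinhole depth estimate from an assumed 65 mm interpupillary distance are our illustrative choices, as the paper does not detail this part of the pipeline.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard 68-point landmark model (an assumption; the paper only cites Dlib).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_positions(gray_frame: np.ndarray, fx_px: float, d_ipd_mm: float = 65.0):
    """Estimate 2D eye centers and a pinhole-model viewing distance.

    Landmarks 36-41 and 42-47 outline the two eye regions in the
    68-point model; depth follows from the interpupillary pixel distance.
    """
    faces = detector(gray_frame)
    if not faces:
        return None
    pts = predictor(gray_frame, faces[0])
    eye_a = np.mean([(pts.part(i).x, pts.part(i).y) for i in range(36, 42)], axis=0)
    eye_b = np.mean([(pts.part(i).x, pts.part(i).y) for i in range(42, 48)], axis=0)
    z_mm = fx_px * d_ipd_mm / np.linalg.norm(eye_b - eye_a)
    return eye_a, eye_b, z_mm
```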

Fig. 3 System configuration for the time-division display based on eye position: (a) left eye and (b) right eye.

Fig. 4 Prototype of the proposed design.

4.2. Lens Array Design

Lens arrays corresponding to the conventional, previous, and proposed designs described in Sec. 3 were designed and fabricated to compare the display performance of the three systems. Table 2 shows the specifications of the lens designs. Lens arrays with a lens pitch of 0.5 mm were used for all designs, and the focal lengths were 1.0, 2.0, and 3.0 mm, respectively. In proportion to the focal length, the light ray density of the previous and proposed designs was two and three times that of the conventional design in both the horizontal and vertical directions, widening the depth range. Further, to reduce the crosstalk caused by the viewer's movement, a square-structure lens array rotated by 45 deg was used in the proposed design to form a horizontally widened OVZ. The aspect ratio of the OVZ is set to 2:1 (H:V) by arranging the lens array and generating the corresponding EIA, as shown in Fig. 5.19,21 A sufficient OVZ margin, similar to the previous design, is achieved by forming the COVZ via the time-division display, ensuring a system that is robust to the viewer's movement. Section 5.3 presents the evaluation of the differences in depth range obtained by applying the corresponding lens arrays to the three display systems shown in Fig. 2.

Table 2

Specifications of the lens arrays.

             | Conventional design     | Previous design19           | Proposed design
Lens pitch   | 0.5 mm                  | 0.5 mm                      | 0.5 mm
Focal length | 1.0 mm                  | 2.0 mm                      | 3.0 mm
Arrangement  | Square (0 deg rotation) | Honeycomb (30 deg rotation) | Square (45 deg rotation)
OVZ (H × V)  | 28.1 deg × 28.1 deg     | 24.4 deg × 7.2 deg          | 13.4 deg × 6.7 deg
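The OVZ values in Table 2 follow from Eq. (1) once effective elemental-image widths are assigned to each arrangement. The sketch below assumes widths of √3p × p/2 for the 30 deg rotated honeycomb and √2p × p/√2 for the 45 deg rotated square; these factors are our geometric reading of the arrangements, chosen because they reproduce the tabulated angles.

```python
import math

def ovz_deg(e_mm: float, f_mm: float) -> float:
    """Eq. (1): OVZ angle for elemental-image width e and focal length f."""
    return math.degrees(2 * math.atan(e_mm / (2 * f_mm)))

p = 0.5  # lens pitch (mm), common to all three designs
designs = {
    "conventional (square, 0 deg)": (p,                p,                1.0),
    "previous (honeycomb, 30 deg)": (p * math.sqrt(3), p / 2,            2.0),
    "proposed (square, 45 deg)":    (p * math.sqrt(2), p / math.sqrt(2), 3.0),
}
for name, (e_h, e_v, f) in designs.items():
    print(f"{name}: {ovz_deg(e_h, f):.1f} deg x {ovz_deg(e_v, f):.1f} deg")
# -> 28.1 x 28.1, 24.4 x 7.2, and 13.4 x 6.7 deg, as in Table 2
```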

Fig. 5 (a) Lens array arrangement of the proposed design. (b) Magnified image of the corresponding EIA.

4.3. Real-Time Rendering of the Point Light Source Image and EIA

A point light source image for the directional backlight and an EIA were generated after analyzing the camera image and estimating the eye position. Figure 6 shows the rendering pipeline for the point light source image and EIA. Real-time rendering was performed using a PC with an Intel Core i9-10980XE 3.00 GHz CPU, 128 GB of memory, and an NVIDIA GeForce RTX 2080 Ti GPU. The point light source image is rendered using a fragment shader by calculating, from the eye position parameters, the position at which the white point light source is displayed. Meanwhile, the EIA is rendered using a method that generates an EIA at high speed through parallel processing on the GPU.22–24 The multiviewpoint images of a 3D model generated by a virtual camera array are stored in a texture array, and the EIA is generated in real time by performing pixel mapping using a fragment shader. Rendering was performed at 60 fps, in synchronization with the refresh rate of the display, using a virtual camera array of 14 horizontal and 7 vertical cameras. The displays for the point light source image and EIA were synchronized using the same graphics board (NVIDIA GeForce RTX 2080 Ti). Both images were rendered using the same program in the Unity game engine with output at 60 fps. This rendering process was performed every frame such that the OVZs were alternately formed for the left and right eyes.
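As an illustration of the pixel-mapping step, the following CPU sketch in Python/NumPy mirrors what the fragment shader computes per pixel. It assumes a square, unrotated lens array with one view per pixel offset inside each elemental image, which simplifies the prototype's 45 deg rotated arrangement and 14 × 7 camera array.

```python
import numpy as np

def map_views_to_eia(views: np.ndarray, lens_px: int) -> np.ndarray:
    """Map a (Vy, Vx, H, W, 3) stack of multiview images to an EIA.

    Simplified CPU version of the fragment-shader pixel mapping for a
    square, unrotated lens array: the pixel at offset (dy, dx) inside
    each elemental image samples view (dy, dx) at that elemental
    image's lens coordinate. Assumes Vy = Vx = lens_px.
    """
    vy, vx, h, w, _ = views.shape
    assert vy == lens_px and vx == lens_px
    eia = np.zeros((h * lens_px, w * lens_px, 3), dtype=views.dtype)
    for dy in range(lens_px):
        for dx in range(lens_px):
            # View indices are mirrored within each elemental image
            # because the lens inverts ray directions (conventions vary).
            eia[dy::lens_px, dx::lens_px] = views[vy - 1 - dy, vx - 1 - dx]
    return eia

# Example: 4x4 views of a 64x64-pixel scene -> 256x256-pixel EIA.
views = np.random.rand(4, 4, 64, 64, 3).astype(np.float32)
print(map_views_to_eia(views, 4).shape)  # (256, 256, 3)
```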

Fig. 6 Rendering pipeline for the point light source image and EIA.

5. Evaluation Experiments and Results

5.1. Reduction of Luminance Unevenness

Luminance unevenness was observed in each OVZ of the prototype system. It was caused by a combination of factors, including light diffusion in the LCD, lens aberration, the size of the point light source, and optical system placement error; the luminance was highest in the central part and gradually decreased toward the periphery. Therefore, the luminance was measured using a luminance meter (TOPCON BM-7) at 1 deg intervals at a viewing distance of 700 mm, moving the meter horizontally while keeping it aimed at the center of the display, as shown in Fig. 7. The luminance was measured with the point light source of the directional backlight switched among three states: ON for only the left eye, ON for only the right eye, and ON for both. A white image was input to the LCD for the EIA.

Fig. 7 Setup for the luminance measurement.

Figure 8(a) shows the luminance profile when the position of the point light source was set to completely separate the OVZs for the left and right eyes. The horizontal and vertical axes represent the viewing angle and luminance, respectively. The luminance drops in the central part of the COVZ, and the luminance unevenness is large when completely separated. Therefore, the light for the left eye and right eye is partially overlapped in the central part to reduce the luminance unevenness. The two OVZs were combined to overlap at approximately half of the peak luminance of each OVZ. Figure 8(b) shows the measurement results when partially overlapped. The luminance is smoothed and the luminance unevenness is reduced in the COVZ by partially overlapping the light for the left and right eyes. The reduction index value Ir of the luminance unevenness is expressed as

Eq. (5)

$I_r = \frac{L_{b,\mathrm{max}} - L_{b,\mathrm{min}}}{L_{a,\mathrm{max}} - L_{a,\mathrm{min}}}$,
where La,max and La,min are the maximum and minimum luminance in the COVZ when the OVZs are completely separated, respectively, and Lb,max and Lb,min are the maximum and minimum luminance in the COVZ when they are partially overlapped, respectively. By this index, the luminance unevenness was confirmed to be reduced to 48%. In particular, the central part of the COVZ becomes brighter when the unevenness is suppressed by this method, which allows the viewer to view 3D images naturally, without darkening, even while moving. When the OVZs are completely separated, the luminance falls to zero at the center of the COVZ, as shown in Fig. 8(a), so no contrast can be obtained there. In two-dimensional flat panel displays, a certain level of contrast is maintained over the entire viewing angle; we therefore consider that maintaining a certain level of contrast over the entire COVZ is similarly desirable for a 3D display. When the OVZs are partially overlapped, as shown in Fig. 8(b), a certain level of contrast is obtained over the entire COVZ, so this configuration is suitable for a 3D display. As a future study, the effect of light diffusion must be reduced to further suppress the luminance unevenness.
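As a worked example of Eq. (5), the luminance values below are hypothetical, chosen only to reproduce the reported 48% result; the measured profiles are those of Fig. 8.

```python
def reduction_index(la_max: float, la_min: float,
                    lb_max: float, lb_min: float) -> float:
    """Eq. (5): luminance spread after overlap / spread before overlap."""
    return (lb_max - lb_min) / (la_max - la_min)

# Hypothetical values (cd/m^2): before overlap the COVZ luminance swings
# from a peak of 100 to 0 at the center; after overlap from 108 to 60.
print(reduction_index(la_max=100.0, la_min=0.0,
                      lb_max=108.0, lb_min=60.0))  # 0.48 -> reduced to 48%
```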

Fig. 8 Luminance profiles when (a) OVZs are completely separated and (b) OVZs are partially overlapped.

5.2. Viewing Zones

First, we conducted an evaluation experiment on the OVZ and COVZ of the proposed system. In the experiment, the aspect ratio of the OVZ was set to 2:1 (H:V) by rotating the square-structure lens array by 45 deg.19 In the proposed method, the OVZs for the left and right eyes were combined using the time-division display. As discussed in Sec. 5.1, the OVZs were partially overlapped and combined to reduce the luminance unevenness at the connecting part. Figure 9 shows the difference in the 3D images at both ends of the OVZ (COVZ), with and without the time-division display, when viewed from the center of the display. The horizontal OVZ without the time-division display is 13.4 deg [Fig. 9(a)], and the COVZ with the time-division display is 21.4 deg [Fig. 9(b)], an expansion of approximately 1.6 times. As a result, a COVZ margin of 101 mm was realized outside both eyes, and a system with low crosstalk was obtained for viewer movement at speeds of 340 mm/s or less.19

Fig. 9 Reconstructed integral 3D images at both ends of (a) the OVZ when time-division display is not performed and (b) the COVZ when time-division display is performed.

Subsequently, an evaluation experiment was conducted on the SVZ by applying an eye-tracking technology in addition to a time-division display. Figure 10 shows an integral 3D image viewed from different viewpoints. An SVZ angle of 81.4 deg in the horizontal direction and 47.6 deg in the vertical direction was achieved. The integral 3D images could be continuously observed within the range in which the viewer’s eye could be detected by the camera.

Fig. 10 Reconstructed integral 3D images viewed from different viewpoints in the horizontal (Video 1) and vertical (Video 2) directions when the SVZ is formed using the eye-tracking technology (Video 1, MP4, 4.8 MB [https://doi.org/10.1117/1.OE.61.1.013103.1]; Video 2, MP4, 4.5 MB [https://doi.org/10.1117/1.OE.61.1.013103.2]).

In the experiment, we used a camera with a 78 deg lens angle of view for eye tracking. The SVZ can be further expanded by widening the angle of view of the camera. However, a wide-angle lens significantly distorts the periphery of the image, which would necessitate eliminating the effect of lens distortion to estimate the eye position accurately.
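In that case, the distortion could be removed with a standard undistortion step before landmark detection; a sketch with placeholder calibration values (in practice obtained with, e.g., cv2.calibrateCamera and a checkerboard) is given below.

```python
import cv2
import numpy as np

# Placeholder intrinsics for a 640x360 camera and example distortion
# coefficients; real values must come from a camera calibration.
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 180.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort_frame(frame: np.ndarray) -> np.ndarray:
    """Remove radial/tangential lens distortion before eye tracking."""
    return cv2.undistort(frame, K, dist)
```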

5.3. Spatial Frequency Characteristics

We evaluated the spatial frequency characteristics of the integral 3D image experimentally using the three types of lens arrays shown in Table 2 that were applied to the three types of display systems (i.e., conventional, previous, and proposed designs) shown in Fig. 2. The LCD for the EIA described in Sec. 4.1 was used in all systems.

First, we performed a simulation of the spatial frequency characteristics for each design. In an integral 3D display, the upper-limit spatial frequency γ is expressed using the viewing spatial frequency β and the Nyquist frequency βn as follows12:

Eq. (6)

$\beta = \frac{f}{2p}\cdot\frac{L - z}{|z|}$,

Eq. (7)

$\beta_n = \frac{L}{2P_l}$,

Eq. (8)

$\gamma = \min[\beta, \beta_n]$,
where L is the viewing distance, z is the distance from the lens array to the display position of the 3D image (the front is positive), f is the focal length of the lens array, p is the pixel pitch, and Pl is the lens pitch. Figure 11 shows the simulation results of the spatial frequency characteristics of the integral 3D image for each design. In the figure, the horizontal axis represents the distance of the reconstructed image from the lens array, and the vertical axis represents the upper-limit spatial frequency in the horizontal direction. As shown in Table 2, the lens pitch is the same for all designs; however, the Nyquist frequencies βn differ owing to the different lens arrangements: square (0 deg rotation), honeycomb (30 deg rotation), and square (45 deg rotation). For all designs, the viewing distance was set to 700 mm. The light ray density increased in proportion to the focal length, which was 1.0, 2.0, and 3.0 mm for the conventional, previous, and proposed lens arrays, respectively. To compare the depth ranges of the three designs at the same spatial frequency, we compared them at 12.2 cycles per degree (cpd), the Nyquist frequency of the conventional design and the lowest among the three. As a result, the depth ranges of the 3D image at 12.2 cpd were 21.2, 42.4, and 63.6 mm for the conventional, previous, and proposed designs, respectively.
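These depth ranges can be reproduced from Eqs. (6)–(8). The sketch below solves Eq. (6) for the front and rear depths at which β falls to 12.2 cpd, using the 537 ppi pixel pitch and 700 mm viewing distance of Sec. 4.1; note that Eqs. (6) and (7) yield cycles per radian, hence the cpd conversion factor.

```python
import math

L = 700.0              # viewing distance (mm)
P_l = 0.5              # lens pitch (mm)
p = 25.4 / 537         # pixel pitch (mm) of the 537 ppi LCD
CPD = math.pi / 180.0  # converts cycles/radian to cycles/degree

def beta_cpd(f_mm: float, z_mm: float) -> float:
    """Eq. (6): viewing spatial frequency of a 3D image at depth z."""
    return (f_mm / (2 * p)) * (L - z_mm) / abs(z_mm) * CPD

def depth_range_mm(f_mm: float, target_cpd: float = 12.2) -> float:
    """Total depth over which beta stays above target_cpd.

    Solving Eq. (6) for z gives (L - z)/|z| = c on the viewer side
    (z > 0) and (L + |z|)/|z| = c behind the array (z < 0)."""
    c = 2 * p * (target_cpd / CPD) / f_mm
    return L / (c + 1) + L / (c - 1)

for f in (1.0, 2.0, 3.0):
    print(f"f = {f:.1f} mm: depth range = {depth_range_mm(f):.1f} mm")
# -> 21.2, 42.4, and 63.6 mm, matching the values quoted above
print(f"{beta_cpd(3.0, 60.0):.1f} cpd")  # ~5.9 cpd at z = 60 mm (cf. Sec. 5.3)
```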

Fig. 11 Upper-limit spatial frequency characteristics of the conventional, previous, and proposed designs.

The reconstructed images of the resolution chart at positions of 60, 30, 0, −30, and −60 mm (the front is positive) from the lens array are shown in Fig. 12 to evaluate the depth range of the 3D image. They were captured from the center position. Unlike in the conventional and previous designs, a time-division display was performed in the proposed design. The lower part of Fig. 12 shows magnified views of the reconstructed images displayed at 60 mm from the lens array. Aliasing occurs at a spatial frequency of ∼2.0 cpd in the conventional design, ∼3.9 cpd in the previous design, and ∼5.9 cpd in the proposed design. These results are consistent with the simulation graph shown in Fig. 11, confirming that the spatial frequency characteristics of the integral 3D images obtained with the proposed design are superior to those of the conventional and previous designs.

Fig. 12 Reconstructed integral 3D images of the resolution chart at different depths in the conventional, previous, and proposed designs.

6. Conclusion

In this study, we proposed an integral 3D display system using an eye-tracking directional backlight and a time-division display. We used a lens array with a longer focal length than in the previous design to increase the light ray density. We constructed a system with low crosstalk with respect to the viewer's movement by combining the OVZs for the left and right eyes using the time-division display to form the COVZ. Furthermore, a wide SVZ of 81.4 deg horizontally and 47.6 deg vertically was realized by dynamically controlling the COVZ using eye-tracking technology. The results show that the luminance unevenness in the COVZ can be reduced by partially overlapping and combining the OVZs. In addition, we evaluated the spatial frequency characteristics of the three display systems (i.e., the conventional, previous, and proposed designs) using the three corresponding lens arrays and confirmed that the depth range of the 3D images is widest in the proposed design. This method efficiently realizes a wide viewing zone, a wide depth range, and low crosstalk, indicating its potential for practical integral 3D displays. A limitation of the current prototype is that the luminance of the reconstructed image is insufficient for viewing because a general LCD is used as the display for the eye-tracking directional backlight. As a future study, we will consider using a display with higher luminance for the eye-tracking directional backlight to brighten the 3D image.

Acknowledgments

The authors declare no conflicts of interest.

References

1. M. G. Lippmann, "Épreuves réversibles donnant la sensation du relief," J. Phys. Theor. Appl. 7(1), 821–825 (1908). https://doi.org/10.1051/jphystap:019080070082100

2. F. Okano et al., "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36(7), 1598–1603 (1997). https://doi.org/10.1364/AO.36.001598

3. H. Arimoto and B. Javidi, "Integral three-dimensional imaging with digital reconstruction," Opt. Lett. 26(3), 157–159 (2001). https://doi.org/10.1364/OL.26.000157

4. H. Liao et al., "High-quality integral videography using a multiprojector," Opt. Express 12(6), 1067–1076 (2004). https://doi.org/10.1364/OPEX.12.001067

5. B. Javidi, I. Moon, and S. Yeom, "Three-dimensional identification of biological microorganism using integral imaging," Opt. Express 14(25), 12096–12108 (2006). https://doi.org/10.1364/OE.14.012096

6. R. Martínez-Cuenca et al., "Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system," Opt. Express 15(24), 16255–16260 (2007). https://doi.org/10.1364/OE.15.016255

7. J. Arai et al., "Integral three-dimensional television with video system using pixel-offset method," Opt. Express 21(3), 3474–3485 (2013). https://doi.org/10.1364/OE.21.003474

8. S.-G. Park et al., "Recent issues on integral imaging and its applications," J. Inf. Disp. 15(1), 37–46 (2014). https://doi.org/10.1080/15980316.2013.867906

9. N. Okaichi et al., "Integral 3D display using multiple LCD panels and multi-image combining optical system," Opt. Express 25(3), 2805–2817 (2017). https://doi.org/10.1364/OE.25.002805

10. M. Martínez-Corral and B. Javidi, "Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems," Adv. Opt. Photonics 10(3), 512–566 (2018). https://doi.org/10.1364/AOP.10.000512

11. B. Javidi et al., "Roadmap on 3D integral imaging: sensing, processing, and display," Opt. Express 28(22), 32266–32293 (2020). https://doi.org/10.1364/OE.402193

12. H. Hoshino, F. Okano, and I. Yuyama, "Analysis of resolution limitation of integral photography," J. Opt. Soc. Am. A 15(8), 2059–2065 (1998). https://doi.org/10.1364/JOSAA.15.002059

13. A. Stern, Y. Yitzhaky, and B. Javidi, "Perceivable light fields: matching the requirements between the human visual system and autostereoscopic 3-D displays," Proc. IEEE 102(10), 1571–1587 (2014). https://doi.org/10.1109/JPROC.2014.2348938

14. M. Ashraful Alam et al., "Viewing-angle-enhanced integral imaging display system using a time-multiplexed two-directional sequential projection scheme and a DEIGR algorithm," IEEE Photonics J. 7(1), 6900214 (2015). https://doi.org/10.1109/JPHOT.2015.2396904

15. N. Okaichi et al., "Continuous combination of viewing zones in integral three-dimensional display using multiple projectors," Opt. Eng. 57(6), 061611 (2018). https://doi.org/10.1117/1.OE.57.6.061611

16. H. Watanabe et al., "Pixel-density and viewing-angle enhanced integral 3D display with parallel projection of multiple UHD elemental images," Opt. Express 28(17), 24731–24746 (2020). https://doi.org/10.1364/OE.397647

17. L. Yang et al., "Viewing-angle and viewing-resolution enhanced integral imaging based on time-multiplexed lens stitching," Opt. Express 27(11), 15679–15692 (2019). https://doi.org/10.1364/OE.27.015679

18. B. Liu et al., "Time-multiplexed light field display with 120-degree wide viewing angle," Opt. Express 27(24), 35728–35739 (2019). https://doi.org/10.1364/OE.27.035728

19. N. Okaichi et al., "Design of optical viewing zone suitable for eye-tracking integral 3D display," OSA Contin. 4(5), 1415–1429 (2021). https://doi.org/10.1364/OSAC.421086

20. D. E. King, "Dlib-ml: a machine learning toolkit," J. Mach. Learn. Res. 10, 1755–1758 (2009). https://doi.org/10.1145/1577069.1755843

21. M. Miura et al., "Integral imaging system with enlarged horizontal viewing angle," Proc. SPIE 8384, 83840O (2012). https://doi.org/10.1117/12.921388

22. S. W. Min et al., "Enhanced image mapping algorithm for computer-generated integral imaging system," Jpn. J. Appl. Phys. 45(28), L744–L747 (2006). https://doi.org/10.1143/JJAP.45.L744

23. B. N. R. Lee et al., "Design and implementation of a fast integral image rendering method," in Proc. ICEC 2006, 135–140 (2006). https://doi.org/10.1007/11872320_16

24. K. S. Park, S. W. Min, and Y. Cho, "Viewpoint vector rendering for efficient elemental image generation," IEICE Trans. Inf. Syst. E90-D(1), 233–241 (2007). https://doi.org/10.1093/ietisy/e90-1.1.233

Biography

Naoto Okaichi received his MS degree in complexity science and engineering from the University of Tokyo, Tokyo, Japan, in 2008. In 2008, he joined the Japan Broadcasting Corporation (NHK), Tokyo. Since 2012, he has been with the NHK Science & Technology Research Laboratories, where he has been engaged in research on 3D display systems. He is currently working toward his PhD in interdisciplinary information studies at the University of Tokyo, Tokyo, Japan.

Hisayuki Sasaki received his BS degree in engineering systems and his MS degree in engineering mechanics from the University of Tsukuba, Tsukuba, Japan, in 1999 and 2001, respectively, and his PhD in information sciences from Tohoku University, Sendai, Japan, in 2018. In 2001, he joined the Japan Broadcasting Corporation (NHK) and worked at Akita Station. Since 2006, he has been engaged in research on 3D imaging systems at the NHK Science & Technology Research Laboratories. He was seconded to the National Institute of Information and Communications Technology (NICT) as a research expert from 2012 to 2016.

Masanori Kano received his BS and MS degrees from Sophia University, Tokyo, Japan, in 2007 and 2009, respectively. In 2009, he joined the Japan Broadcasting Corporation (NHK), Tokyo, Japan. Since 2013, he has been working at the NHK Science & Technology Research Laboratories. His research interests include camera calibration, 3D image processing, and 3D displays.

Jun Arai received his BS and MS degrees and his PhD in applied physics from Waseda University, Tokyo, Japan, in 1993, 1995, and 2005, respectively. In 1995, he joined the Science & Technology Research Laboratories of the Japan Broadcasting Corporation (NHK), Tokyo, Japan. Since then, he has been working on 3D imaging systems. He is a fellow of SPIE.

Masahiro Kawakita received his BS and MS degrees in physics from Kyushu University, Japan, in 1988 and 1990, respectively, and his PhD in electronic engineering from the University of Tokyo, Japan, in 2005. In 1990, he joined the Japan Broadcasting Corporation (NHK), Tokyo, Japan. From 1993 to 2021, he was with the NHK Science & Technology Research Laboratories, where he researched applications of liquid-crystal devices and optically addressed spatial modulators, 3D TV cameras, and display systems. Since 2021, he has been a professor in the Department of Media Science, Graduate School and Faculty of Information Science and Technology, Osaka Institute of Technology.

Takeshi Naemura received his PhD in electrical engineering from the University of Tokyo, Japan, in 1997. He is currently a professor at the Interfaculty Initiative in Information Studies at the University of Tokyo. He was a visiting assistant professor of computer science at Stanford University, United States, supported by the Japan Society for Promotion of Science (JSPS) Postdoctoral Fellowships for Research Abroad, from 2000 to 2002. His current research interests include virtual reality and human interfaces. He is a member of ITE, IEICE, and ACM.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Naoto Okaichi, Hisayuki Sasaki, Masanori Kano, Jun Arai, Masahiro Kawakita, and Takeshi Naemura "Integral three-dimensional display system with wide viewing zone and depth range using time-division display and eye-tracking technology," Optical Engineering 61(1), 013103 (13 January 2022). https://doi.org/10.1117/1.OE.61.1.013103
Received: 30 August 2021; Accepted: 28 December 2021; Published: 13 January 2022
KEYWORDS: 3D displays; Eye; 3D image processing; LCDs; Light sources; Spatial frequencies; Cameras