Although head-up displays (HUDs) have already been installed in some commercial vehicles, their application to augmented reality (AR) is limited by their narrow field of view (FoV) and fixed virtual-image distance. Matching the depth of AR information to real objects across a wide FoV is a key requirement for AR HUDs to provide a safe driving experience. However, current approaches based on two-plane virtual images and computer-generated holography suffer from partial depth control and high computational complexity, respectively, which makes them unsuitable for fast-moving vehicles. To bridge this gap, we propose a light-field-based 3D display technology with eye-tracking. We begin by matching the HUD optics with the view formation of the light-field display. First, we design mirrors that deliver high-quality virtual images with an FoV of 10 × 5° over a total eyebox of 140 × 120 mm and compensate for the curved windshield shape. Next, we define the procedure that translates the driver's eye position, obtained via eye-tracking, to the plane of the light-field display views. We further implement a lenticular-lens design and the corresponding sub-pixel-allocation-based rendering, for which we construct a simplified model that substitutes for the freeform mirror optics. Finally, we present a prototype that delivers the desired image quality, 3D image depth of up to 100 m, and a crosstalk level below 1.5%. Our findings indicate that such 3D HUDs can become the mainstream technology for AR HUDs.
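To make the eye-tracked sub-pixel allocation concrete, the following Python sketch maps a tracked eye position to the view plane and assigns each RGB sub-pixel to a view. It is an illustration, not the paper's implementation: the single-magnification stand-in for the simplified freeform-mirror model, the van-Berkel-style slanted-lenticular allocation, and every constant except the 140 mm eyebox width are assumptions.

```python
import numpy as np

# Hypothetical parameters -- the paper does not publish these values.
N_VIEWS = 2             # eye-tracked stereo light field (assumed)
LENS_PITCH_SUBPX = 6.0  # lenticular pitch in sub-pixel units (assumed)
SLANT = 1.0 / 3.0       # lens slant in sub-pixels per row (assumed)
EYEBOX_W_MM = 140.0     # eyebox width, from the abstract

def eye_to_view_phase(eye_x_mm, magnification=1.0):
    """Map the tracked eye x-position (mm, eyebox coordinates) to a phase
    offset on the view plane.  The real system folds the freeform-mirror
    optics into a simplified model; here a single lateral magnification
    factor stands in for that model."""
    x_view = eye_x_mm * magnification
    return (x_view / EYEBOX_W_MM) * LENS_PITCH_SUBPX

def subpixel_view_map(rows, cols, phase):
    """Assign every RGB sub-pixel to a view index with a van-Berkel-style
    slanted-lenticular formula, shifted by the eye-dependent phase."""
    r = np.arange(rows)[:, None, None]
    c = np.arange(cols)[None, :, None]
    s = np.arange(3)[None, None, :]          # R, G, B sub-pixel offsets
    u = (3 * c + s + SLANT * r + phase) % LENS_PITCH_SUBPX
    return (u / LENS_PITCH_SUBPX * N_VIEWS).astype(int)

# Recompute the allocation map whenever the tracked eye moves.
view_map = subpixel_view_map(1080, 1920, eye_to_view_phase(eye_x_mm=-20.0))
```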
Light-field displays are strong candidates among glasses-free 3D displays because they can show real 3D images without reducing image resolution. They reproduce the dense ray fields needed for natural 3D images by using a large number of projectors; however, in multi-projection light-field displays, compensation is critical because each projector differs in its characteristics and mounting position. In this paper, we present an enhanced 55-inch, 100-Mpixel multi-projection 3D display consisting of 96 micro-projectors for immersive, natural 3D viewing in medical and educational applications. To achieve enhanced image quality, color- and brightness-uniformity compensation methods are combined with an improved projector configuration and a real-time calibration process for projector alignment. For color-uniformity compensation, the image projected by each projector is captured by a camera placed in front of the screen, the pixel counts of the RGB color intensities in each captured image are analyzed, and the RGB intensity distributions are adjusted using the respective maximum values of the RGB color intensities. For brightness-uniformity compensation, each light-field ray emitted from a screen pixel is modeled by a radial basis function, and compensating weights for each screen pixel are calculated and transferred to the projection images via the mapping between screen and projector coordinates. Finally, brightness-compensated images are rendered for each projector. Consequently, the display shows improved color and brightness uniformity and consistently high 3D image quality.
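The two compensation steps lend themselves to a short sketch. The Python below is a minimal, hedged illustration rather than the authors' implementation: `per_channel_gains`, `brightness_weight_map`, the Gaussian kernel, and all parameter values are assumptions; the abstract states only that the RGB distributions are rescaled by their per-channel maxima and that each ray is modeled by a radial basis function.

```python
import numpy as np

def per_channel_gains(captured_rgb):
    """Color-uniformity step (a hedged reading of the abstract): scale each
    projector's R, G, and B channels so their measured peaks agree, using
    the respective per-channel maxima of the captured test image."""
    peaks = captured_rgb.reshape(-1, 3).max(axis=0).astype(float)
    return peaks.max() / np.maximum(peaks, 1e-6)      # gains (gR, gG, gB)

def brightness_weight_map(measured_luma, centers, sigma=40.0):
    """Brightness-uniformity step: model screen luminance as a normalized
    sum of Gaussian radial basis functions centered on sampled ray
    footprints, then return per-pixel compensating weights that pull every
    pixel toward the dimmest modeled level.  `centers` and `sigma` are
    illustrative; the paper states only that each light-field ray is
    modeled by a radial basis function."""
    h, w = measured_luma.shape
    yy, xx = np.mgrid[0:h, 0:w]
    model = np.zeros((h, w))
    norm = np.zeros((h, w))
    for cy, cx in centers:
        phi = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        model += measured_luma[cy, cx] * phi
        norm += phi
    model /= np.maximum(norm, 1e-6)
    target = model.min()                              # dimmest modeled level
    return np.clip(target / np.maximum(model, 1e-6), 0.0, 1.0)

# Example: compensate a synthetic hotspot image sampled on a coarse grid.
luma = 0.5 + 0.5 * np.fromfunction(
    lambda y, x: np.exp(-((y - 240) ** 2 + (x - 320) ** 2) / (2 * 120.0 ** 2)),
    (480, 640))
grid = [(y, x) for y in range(40, 480, 80) for x in range(40, 640, 80)]
weights = brightness_weight_map(luma, grid)
```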
In this paper, an inversion-free subpixel rendering method that uses eye tracking in a multiview display is proposed. A multiview display causes an inversion problem when one eye of the user falls in the main viewing region and the other eye falls in a side region. In the proposed method, the subpixel values are rendered adaptively according to the user's eye position to solve the inversion problem. In addition, to enhance the 3D resolution without color artifacts, a subpixel rendering algorithm based on subpixel area weighting, rather than whole-pixel values, is proposed. In the experiments, 36-view images were displayed via active subpixel rendering with the eye-tracking system on a four-view display.
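A minimal sketch of the idea follows, assuming a four-view slanted-lenticular geometry and eye positions already translated into view coordinates; the soft blending weight is an illustrative stand-in for the paper's subpixel-area weights, and all constants are assumptions.

```python
import numpy as np

# Hypothetical four-view panel geometry; the paper's pitch, slant, and
# viewing distance are not reproduced here.
N_VIEWS, PITCH, SLANT = 4, 4.0, 1.0 / 3.0

def subpixel_phase(rows, cols):
    """Continuous view phase of every RGB sub-pixel under the lens array."""
    r = np.arange(rows)[:, None, None]
    c = np.arange(cols)[None, :, None]
    s = np.arange(3)[None, None, :]
    return ((3 * c + s + SLANT * r) % PITCH) / PITCH * N_VIEWS

def render(stereo, left_phase, right_phase, width=1.0):
    """Inversion-free rendering sketch: each sub-pixel takes the left- or
    right-eye image depending on which tracked eye its ray lands nearer,
    with a soft weight for rays falling between the eyes, so the stereo
    pair never swaps when the viewer crosses into a side region.
    `stereo` is (2, H, W, 3): the left and right source images."""
    p = subpixel_phase(*stereo.shape[1:3])
    dl = np.minimum(np.abs(p - left_phase), N_VIEWS - np.abs(p - left_phase))
    dr = np.minimum(np.abs(p - right_phase), N_VIEWS - np.abs(p - right_phase))
    w = np.clip((dr - dl) / (2.0 * width) + 0.5, 0.0, 1.0)  # 1 -> left eye
    return w * stereo[0] + (1.0 - w) * stereo[1]

# Example: weave a random stereo pair for eyes tracked at view phases 1 and 3.
panel = render(np.random.rand(2, 270, 480, 3), left_phase=1.0, right_phase=3.0)
```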