Multi-color holograms rely on simultaneous illumination from multiple light sources. These multi-color holograms can utilize light sources more efficiently than conventional single-color holograms and can improve the dynamic range of holographic displays. In this letter, we introduce AutoColor, the first learned method for estimating the optimal light source powers required for illuminating multi-color holograms. For this purpose, we establish the first multi-color hologram dataset using synthetic images and their depth information. We generate these synthetic images with a pipeline combining generative, large language, and monocular depth estimation models. Finally, we train our learned model on our dataset and experimentally demonstrate that AutoColor significantly decreases the number of iterations required to optimize multi-color holograms from more than 1000 to 70 without compromising image quality.
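To make the idea of learned power estimation concrete, the sketch below shows one plausible shape for such a model: a small convolutional network that maps a target RGB image to three normalized light-source powers and is trained against powers obtained from a full optimization. All class, variable, and hyperparameter names here are hypothetical illustrations, not the actual AutoColor architecture or training recipe.

```python
# Hypothetical sketch of a learned light-source power estimator (not the
# actual AutoColor model): a small CNN predicts three normalized powers
# from a target RGB image and is supervised by reference powers.
import torch
import torch.nn as nn

class PowerEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 3), nn.Sigmoid())

    def forward(self, target_image):
        # Returns normalized powers for the red, green, and blue sources.
        return self.head(self.features(target_image))

model = PowerEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# Placeholder data: RGB targets and reference powers found by full optimization.
target_images = torch.rand(8, 3, 256, 256)
reference_powers = torch.rand(8, 3)
for step in range(100):  # short illustrative training loop
    optimizer.zero_grad()
    loss = criterion(model(target_images), reference_powers)
    loss.backward()
    optimizer.step()
```

A predictor of this kind supplies a good starting point for the hologram optimization, which is what allows the iteration count to drop so sharply.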
This poster presentation was prepared for the Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) IV Conference at the SPIE AR | VR | MR 2023 Symposium.
In this work, we developed a wearable, head-mounted device that automatically calculates a precise Relative Afferent Pupillary Defect (RAPD) score for a patient. The device consists of two RGB LEDs, two infrared cameras, and one microcontroller. In the RAPD test, parameters such as the LED on-off durations, brightness level, and color of the light can be controlled by the user. Upon data acquisition, a computational unit processes the data, calculates the RAPD score, and visualizes the test results with a user-friendly interface. Multiprocessing methods are used in the GUI to optimize the processing pipeline. We have shown that our head-worn instrument is easy to use, fast, and suitable for early diagnosis and screening of various ophthalmic and neurological conditions such as RAPD, glaucoma, asymmetric glaucoma, and anisocoria.
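For illustration, the snippet below shows one plausible way to turn pupil diameters measured by the two infrared cameras into a relative score: compare the fractional constriction of each eye under the swinging-light stimulus and take a log ratio. The function names, data layout, and scoring formula are assumptions for this sketch, not the device's actual implementation.

```python
# Illustrative scoring sketch only; the device's actual method may differ.
import numpy as np

def constriction_ratio(baseline_diameter, stimulated_diameter):
    """Fractional pupil constriction for one eye under its light stimulus."""
    return (baseline_diameter - stimulated_diameter) / baseline_diameter

def rapd_score(left_trace, right_trace):
    """Compare constriction of the two eyes; values far from 0 suggest a defect.

    left_trace / right_trace: dicts with 'baseline' and 'stimulated' pupil
    diameters (e.g., in millimetres) measured during the swinging-light test.
    """
    left = constriction_ratio(left_trace['baseline'], left_trace['stimulated'])
    right = constriction_ratio(right_trace['baseline'], right_trace['stimulated'])
    # Log ratio is symmetric around zero: positive means the right eye responds less.
    return np.log(left / right)

print(rapd_score({'baseline': 6.0, 'stimulated': 4.2},
                 {'baseline': 6.1, 'stimulated': 5.0}))
```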
Cataract is a common ophthalmic disease in which a cloudy area forms in the lens of the eye, requiring surgical removal and replacement of the eye's lens. Careful selection of the intraocular lens (IOL) is critical for the patient's post-surgery satisfaction. Although there are various types of IOLs on the market with different properties, it is challenging for the patient to imagine how they will perceive the world after the surgery. We propose a novel holographic vision simulator that utilizes non-cataractous regions of the eye lens to allow cataract patients to experience post-operative visual acuity before surgery. Computer-generated holography display technology makes it possible to shape and steer the light beam through the relatively clear areas of the patient's lens. Another challenge in cataract surgery is matching the right patient with the right IOL. To evaluate various IOLs, we developed an artificial human eye composed of a scleral lens, a glass retina, an iris, and a replaceable IOL holder. Next, we tested different IOLs (monofocal and multifocal) by capturing real-world scenes to demonstrate visual artifacts. Then, the artificial eye was integrated into the benchtop holographic simulator to evaluate various IOLs using different light sources and holographic content.
We introduce an open-source toolkit for simulating optics and visual perception. The toolkit offers differentiable functions that ease optimization in design tasks. In addition, this toolkit supports applications spanning from calculating holograms for holographic displays to foveation in computer graphics. We believe this toolkit offers a gateway to removing overheads in scientific research on next-generation displays.
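To give a flavor of the differentiable workflows such a toolkit enables, the sketch below optimizes a phase-only hologram whose far-field (Fourier) reconstruction matches a target image, using plain PyTorch rather than the toolkit's own API; the toolkit's propagation and perception functions would replace the few hand-written lines here. The target, learning rate, and iteration count are placeholders.

```python
# Minimal differentiable hologram-optimization sketch in plain PyTorch,
# assuming far-field (Fraunhofer) propagation via a 2D FFT.
import torch

target = torch.rand(256, 256)                      # placeholder target intensity
phase = torch.zeros_like(target, requires_grad=True)
optimizer = torch.optim.Adam([phase], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    field = torch.exp(1j * phase)                  # phase-only hologram plane
    reconstruction = torch.fft.fftshift(torch.fft.fft2(field))
    intensity = reconstruction.abs() ** 2
    intensity = intensity / intensity.max()        # normalize before comparison
    loss = torch.nn.functional.mse_loss(intensity, target)
    loss.backward()                                # gradients flow through the FFT
    optimizer.step()
```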
KEYWORDS: Perceptual learning, Human vision and color perception, Computer generated holography, Visualization, Holograms, Holography, 3D displays, Prototyping, Optical simulations, Image quality, 3D acquisition, Image restoration
Computer-Generated Holography (CGH) promises to deliver genuine, high-quality visuals at any depth. We argue that combining CGH with perceptually guided graphics can soon lead to practical holographic display systems that deliver perceptually realistic images. We propose a new CGH method called metameric varifocal holograms. Our CGH method generates images only at a user's focus plane, while the displayed images are statistically correct and indistinguishable from the actual targets across peripheral vision (metamers). Thus, a user observing our holograms perceives high-quality visuals at their gaze location, while the rest of the image remains statistically correct across the periphery. We demonstrate our differentiable CGH optimization pipeline on modern GPUs, and we support our findings with a display prototype. Our method paves the way towards realistic visuals free from classical CGH problems, such as speckle noise or poor visual quality.
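The core idea of gaze-contingent optimization can be illustrated with a deliberately simplified loss: pixel-accurate matching inside a foveal region and local-statistics matching (here, only blurred means) in the periphery. This is a hypothetical stand-in for the paper's metameric loss, which uses a much richer set of image statistics; all names and parameters below are illustrative.

```python
# Simplified foveated loss sketch (not the actual metameric loss).
import torch
import torch.nn.functional as F

def foveated_loss(reconstruction, target, fovea_mask, blur_kernel_size=17):
    # Foveal term: pixel-accurate match where the user is looking.
    foveal = F.mse_loss(reconstruction * fovea_mask, target * fovea_mask)
    # Peripheral term: match local averages instead of exact pixels.
    pad = blur_kernel_size // 2
    kernel = torch.ones(1, 1, blur_kernel_size, blur_kernel_size) / blur_kernel_size**2
    blur = lambda x: F.conv2d(x.unsqueeze(0).unsqueeze(0), kernel, padding=pad)[0, 0]
    periphery = 1.0 - fovea_mask
    peripheral = F.mse_loss(blur(reconstruction * periphery), blur(target * periphery))
    return foveal + peripheral

# Example usage with a circular foveal mask around an assumed gaze point.
h, w = 256, 256
ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
fovea_mask = (((ys - 128) ** 2 + (xs - 128) ** 2) < 40 ** 2).float()
loss = foveated_loss(torch.rand(h, w), torch.rand(h, w), fovea_mask)
```

Because such a loss is differentiable end to end, it can drop directly into a gradient-descent CGH optimization loop like the one sketched earlier.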