We present an all-optical image denoiser based on spatially engineered diffractive layers. Following a one-time training process performed on a computer, this analog processor, composed of fabricated passive layers, achieves real-time image denoising by processing input images at the speed of light and synthesizing the denoised results within its output field-of-view, completely bypassing digital processing. Remarkably, these designs achieve output diffraction efficiencies of up to 40% while maintaining excellent denoising performance. The effectiveness of this diffractive image denoiser was experimentally validated in the terahertz part of the spectrum, where a 3D-fabricated denoiser spanning fewer than 250 wavelengths along the optical axis successfully removed salt-only noise from intensity images.
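To convey the underlying design principle, the sketch below shows how such diffractive layers can be trained in simulation: learnable phase-only layers separated by angular spectrum free-space propagation are optimized so that a salt-noised input intensity maps to a clean image at the output plane. All physical parameters here (grid size, pixel pitch, wavelength, layer spacing, layer count) are illustrative placeholders, not the values used in the paper.

```python
# Minimal sketch of training a diffractive denoiser: learnable phase-only
# layers separated by free-space propagation, optimized so that a noisy
# input field maps to a clean intensity at the output plane.
# All physical values below are illustrative, not the paper's.
import torch

N, pitch, wavelength = 64, 400e-6, 0.75e-3   # grid size, pixel pitch (m), THz-band wavelength (m)
z, num_layers = 20e-3, 4                     # layer spacing (m), number of diffractive layers

# Angular spectrum transfer function for one propagation step.
fx = torch.fft.fftfreq(N, d=pitch)
FX, FY = torch.meshgrid(fx, fx, indexing="ij")
arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
H = torch.exp(2j * torch.pi * z * torch.sqrt(torch.clamp(arg, min=0.0)))

def propagate(u):
    return torch.fft.ifft2(torch.fft.fft2(u) * H)

phases = [torch.zeros(N, N, requires_grad=True) for _ in range(num_layers)]
opt = torch.optim.Adam(phases, lr=0.05)

clean = (torch.rand(N, N) > 0.5).float()                             # toy binary target image
noisy = torch.clamp(clean + (torch.rand(N, N) > 0.9).float(), 0, 1)  # salt noise added

for step in range(200):
    u = noisy.to(torch.complex64)             # amplitude-encoded input field
    for p in phases:
        u = propagate(u) * torch.exp(1j * p)  # propagate, then phase-modulate
    out = propagate(u).abs() ** 2             # output-plane intensity
    loss = torch.nn.functional.mse_loss(out / out.mean(), clean / clean.mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a fabricated device, optimized phase values of this kind are typically converted into per-pixel material thicknesses for 3D printing, though the exact fabrication mapping is not detailed in this abstract.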
Multi-color holograms rely on simultaneous illumination from multiple light sources. These holograms can use light sources more efficiently than conventional single-color holograms and can improve the dynamic range of holographic displays. In this letter, we introduce AutoColor, the first learned method for estimating the optimal light source powers required to illuminate multi-color holograms. For this purpose, we establish the first multi-color hologram dataset using synthetic images and their depth information, generated with a contemporary pipeline that combines generative, large language, and monocular depth estimation models. Finally, we train our learned model on this dataset and experimentally demonstrate that AutoColor reduces the number of iterations required to optimize multi-color holograms from more than 1000 to 70 without compromising image quality.
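The abstract does not describe AutoColor's network architecture, so the following is a hypothetical stand-in illustrating the general idea of a learned power estimator: a small convolutional network that regresses normalized per-primary light source powers directly from a target image. The class name, layer sizes, and output convention are all assumptions, not the paper's design.

```python
# Hypothetical sketch in the spirit of AutoColor: a small CNN that regresses
# per-channel light source powers from a target RGB image. The actual
# AutoColor architecture, loss, and training procedure are not reproduced here.
import torch
import torch.nn as nn

class PowerEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 3), nn.Sigmoid())

    def forward(self, image):
        # Returns three normalized source powers in [0, 1], one per color primary.
        return self.head(self.features(image))

model = PowerEstimator()
powers = model(torch.rand(1, 3, 256, 256))  # shape (1, 3): one power per primary
```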
This poster presentation was prepared for the Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) IV Conference at the SPIE AR | VR | MR 2023 Symposium.
We introduce an open-source toolkit for simulating optics and visual perception. The toolkit offers differentiable functions that ease optimization in design problems. In addition, it supports applications ranging from calculating holograms for holographic displays to foveation in computer graphics. We believe this toolkit offers a gateway to reducing overheads in scientific research on next-generation displays.
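As an illustration of the differentiable-optimization pattern such a toolkit enables, the minimal sketch below recovers a phase-only hologram whose far-field reconstruction matches a target image using plain gradient descent. It deliberately uses only standard PyTorch rather than the toolkit's own functions, which are not named in this abstract.

```python
# Minimal sketch of differentiable hologram optimization: learn a phase-only
# hologram whose Fourier-plane reconstruction matches a target intensity.
import torch

target = torch.rand(128, 128)                      # placeholder target intensity
phase = torch.zeros(128, 128, requires_grad=True)  # phase-only hologram to learn
opt = torch.optim.Adam([phase], lr=0.1)

for _ in range(300):
    field = torch.exp(1j * phase)                  # unit-amplitude SLM field
    recon = torch.fft.fftshift(torch.fft.fft2(field)).abs() ** 2
    loss = torch.nn.functional.mse_loss(recon / recon.mean(), target / target.mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because every step is differentiable, the same loop extends naturally to multi-plane propagation or perceptual loss functions by swapping out the forward model or the loss term.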
KEYWORDS: Perceptual learning, Human vision and color perception, Computer generated holography, Visualization, Holograms, Holography, 3D displays, Prototyping, Optical simulations, Image quality, 3D acquisition, Image restoration
Computer-Generated Holography (CGH) promises to deliver genuine, high-quality visuals at any depth. We argue that combining CGH with perceptually guided graphics can soon lead to practical holographic display systems that deliver perceptually realistic images. We propose a new CGH method called metameric varifocal holograms. Our method generates images only at a user's focus plane, while the displayed images remain statistically correct and indistinguishable from the actual targets across peripheral vision (metamers). A user observing our holograms therefore perceives high-quality visuals at their gaze location, while the periphery stays statistically consistent with the target. We demonstrate our differentiable CGH optimization pipeline on modern GPUs, and we support our findings with a display prototype. Our method paves the way towards realistic visuals free from classical CGH problems, such as speckle noise and poor visual quality.
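The true metameric loss matches local image statistics over peripheral pooling regions; as a much-simplified stand-in, the sketch below merely relaxes the reconstruction error with eccentricity from the gaze point, capturing the spirit of optimizing strictly at the fovea and loosely in the periphery. The falloff constant and normalized-coordinate convention are illustrative assumptions, not the paper's formulation.

```python
# Simplified stand-in for a perceptually guided CGH loss: down-weight
# reconstruction error with eccentricity from the gaze point. A faithful
# metameric loss would instead match pooled local statistics in the periphery.
import torch

def gaze_weighted_loss(recon, target, gaze=(0.5, 0.5), falloff=4.0):
    """MSE weighted to be strict at the gaze point and lenient in the periphery."""
    h, w = target.shape
    ys = torch.linspace(0, 1, h).unsqueeze(1).expand(h, w)
    xs = torch.linspace(0, 1, w).unsqueeze(0).expand(h, w)
    ecc = torch.sqrt((ys - gaze[0]) ** 2 + (xs - gaze[1]) ** 2)
    weight = torch.exp(-falloff * ecc)  # peaks at the gaze location
    return (weight * (recon - target) ** 2).mean()

# Example: plug into the hologram optimization loop above with a tracked gaze.
# loss = gaze_weighted_loss(recon, target, gaze=(0.4, 0.6))
```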
Augmented Reality (AR) near-eye displays promise new human-computer interactions that can positively impact people's lives. However, the current generation of AR near-eye displays fails to provide ergonomic solutions that counter design trade-offs such as form factor, weight, computational requirements, and battery life. These trade-offs are significant obstacles on the path towards an all-day usable near-eye display. We argue that removing active components from the near-eye display altogether could be a key to resolving these trade-off related issues. We propose the beaming display [1], a new near-eye display system that uses a projector and an all-passive wearable headset. In our proposal, we project images from a distance onto a passive wearable near-eye display while tracking the location of that display. This presentation covers the latest version of our prototype and discusses potential future directions for beaming displays.