Learning-based computer-generated holography (CGH) has great potential for real-time, multi-depth holographic displays. However, most existing algorithms use only the amplitude of the target image as the dataset to simplify training, and they do not adequately incorporate the angular spectrum method (ASM), which can compute multiplanar attributes, into the neural network. Here, we propose a multi-depth diffraction model-driven neural network (MD-Holo). MD-Holo uses the weights of a pre-trained ResNet34 to initialize the encoder stage of the complex-amplitude generating network so that it extracts more general features. Motion-blurred, Gaussian-filtered, lens-blurred, and low-pass-filtered images are added to the training data to accommodate a wider range of inputs. Compared with using the DIV2K super-resolution dataset alone, the augmented dataset enables both the generation of high-fidelity super-resolution images and generalization to a wider variety of images. Simulations and optical experiments show that MD-Holo reconstructs multi-depth images with high quality and fewer artifacts.
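The angular spectrum method mentioned above propagates a complex field between planes by filtering its spatial-frequency spectrum with a transfer function, which is what allows a diffraction model-driven network to supervise reconstructions at multiple depths. The sketch below is a generic, minimal NumPy implementation of ASM free-space propagation, not the paper's code; the function name and parameters are illustrative.

```python
import numpy as np

def asm_propagate(field, wavelength, z, pitch):
    """Propagate a complex field by distance z (meters) with the
    angular spectrum method.

    field      -- 2-D complex array sampled on a grid with spacing `pitch`
    wavelength -- optical wavelength in meters
    z          -- propagation distance in meters
    pitch      -- pixel pitch in meters
    """
    ny, nx = field.shape
    # Spatial-frequency grids matching NumPy's FFT sample ordering.
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the square root in the ASM transfer function;
    # negative values correspond to evanescent waves, which are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(1j * 2.0 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    # Filter the angular spectrum and transform back to the spatial domain.
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because every operation is an FFT or an elementwise product, this layer is differentiable, which is why ASM can be embedded directly in a neural network and evaluated at several distances z to drive multi-depth reconstruction losses.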