Unidirectional imagers form images of input objects only in one direction, e.g., from field-of-view (FOV) A to FOV B, while blocking image formation in the reverse direction, from FOV B to FOV A. Here, we report unidirectional imaging under spatially partially coherent light and demonstrate high-quality, power-efficient imaging only in the forward direction (A → B), while image formation in the backward direction (B → A) is distorted and suffers low power efficiency. Our reciprocal design features a set of spatially engineered linear diffractive layers that are statistically optimized for partially coherent illumination with a given phase correlation length. Our analyses reveal that when illuminated by a partially coherent beam with a correlation length of ≥∼1.5λ, where λ is the wavelength of light, diffractive unidirectional imagers achieve robust performance, exhibiting the desired asymmetry in imaging performance between the forward and backward directions. A partially coherent unidirectional imager designed for a smaller correlation length of <1.5λ still supports unidirectional image transmission, but with a reduced figure of merit. These partially coherent diffractive unidirectional imagers are compact (axially spanning <75λ), polarization-independent, and compatible with various types of illumination sources, making them well-suited for applications in asymmetric visual information processing and communication.
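As an illustration of the illumination model underlying this design, the following minimal Python sketch (not the authors' code; the grid size, pixel pitch, object, and phase statistics are placeholder assumptions) emulates spatially partially coherent light by generating random phase screens with a prescribed correlation length and averaging the output intensity over many realizations.

```python
# Hedged sketch: statistical emulation of partially coherent illumination.
import numpy as np

def random_phase_screen(n, pixel_pitch, corr_length, rng):
    """n x n random phase screen whose phase is correlated over roughly corr_length."""
    white = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fx)
    lowpass = np.exp(-(np.pi * corr_length) ** 2 * (FX ** 2 + FY ** 2))
    phase = np.real(np.fft.ifft2(np.fft.fft2(white) * lowpass))
    phase *= 2 * np.pi / phase.std()               # strong (2*pi rms) phase fluctuations
    return np.exp(1j * phase)

rng = np.random.default_rng(0)
n, pitch, wavelength = 128, 0.5, 1.0               # pitch and correlation length in units of lambda
obj = np.zeros((n, n)); obj[48:80, 48:80] = 1.0    # simple amplitude object (placeholder)

avg_intensity = np.zeros((n, n))
for _ in range(200):                               # average over illumination realizations
    illum = random_phase_screen(n, pitch, 1.5 * wavelength, rng)
    far_field = np.fft.fftshift(np.fft.fft2(obj * illum))   # stand-in for the diffractive optics
    avg_intensity += np.abs(far_field) ** 2
avg_intensity /= 200
```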
We demonstrate a reconfigurable diffractive deep neural network (termed R-D2NN) with a single physical model performing a large set of unique permutation operations between the input and output fields-of-view by rotating different layers within the diffractive network. We numerically demonstrate the efficacy of R-D2NN by accurately approximating 256 distinct permutation matrices using 4 rotatable diffractive layers. We also experimentally validated this reconfigurable diffractive network concept using terahertz radiation and 3D-printed diffractive layers, achieving high concordance with our numerical simulations. The reconfigurable design of R-D2NN provides scalability, high computing speed, and efficient use of materials within a single fabricated model.
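To make the configuration counting concrete, the short Python sketch below (assumed geometry, not the authors' code) shows how four rotatable phase-only layers, each allowed four in-plane orientations, expose 4^4 = 256 distinct layer configurations from a single fabricated model.

```python
# Hedged sketch: indexing rotation states of four phase-only diffractive layers.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
layers = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(4)]   # phase-only layer maps

def configuration(rotation_state):
    """Stack of layer phase maps for one rotation state, e.g. (0, 1, 3, 2) quarter-turns."""
    return [np.rot90(layer, k) for layer, k in zip(layers, rotation_state)]

all_states = list(product(range(4), repeat=4))
print(len(all_states))              # 256 unique configurations from one physical model
stack = configuration(all_states[37])
```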
We introduce a deep learning-based approach utilizing pyramid sampling for the automated classification of HER2 status in immunohistochemically (IHC) stained breast cancer tissue images. Pyramid sampling analyzes features across multiple scales of the IHC-stained tissue, keeping the computational load manageable while addressing HER2 expression heterogeneity by capturing both detailed cellular features and the broader tissue architecture. Applied to 523 core images, the model achieved a classification accuracy of 85.47%, demonstrating robustness to staining variability and tissue heterogeneity, which could improve the accuracy and timeliness of breast cancer treatment planning.
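As a hedged illustration of pyramid sampling (the patch sizes, scales, and resizing step below are assumptions, not the trained model's exact parameters), the following Python sketch extracts concentric crops of increasing extent around the same location and resizes them to a common patch size, so that fine cellular detail and broader tissue context enter the classifier together.

```python
# Hedged sketch: multi-scale (pyramid) patch extraction around one tissue location.
import numpy as np
from PIL import Image

def pyramid_patches(image, center, base=256, scales=(1, 2, 4), out_size=256):
    """Return crops covering base*scale pixels around `center`, each resized to out_size."""
    cy, cx = center
    patches = []
    for s in scales:
        half = base * s // 2
        crop = image[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
        patches.append(np.array(Image.fromarray(crop).resize((out_size, out_size))))
    return np.stack(patches)        # shape: (num_scales, out_size, out_size)

tissue = np.random.randint(0, 255, (2048, 2048), dtype=np.uint8)  # stand-in for an IHC core
stack = pyramid_patches(tissue, center=(1024, 1024))
```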
We present subwavelength imaging of amplitude- and phase-encoded objects based on a solid-immersion diffractive processor designed through deep learning. Subwavelength features of the objects are resolved by a jointly optimized diffractive encoder and decoder pair. We experimentally demonstrated the subwavelength imaging performance of solid-immersion diffractive processors using terahertz radiation and achieved all-optical reconstruction of subwavelength phase features of objects (with linewidths of ~λ/3.4, where λ is the wavelength) by transforming them into magnified intensity images at the output field-of-view. Solid-immersion diffractive processors would provide cost-effective and compact solutions for applications in bioimaging, sensing, and material inspection, among others.
We present a method for accurately performing complex-valued linear transformations with a diffractive deep neural network (D2NN) under spatially incoherent illumination. By employing 'mosaicing' and 'demosaicing' techniques, complex data are encoded into optical intensity patterns for all-optical diffractive processing and then decoded back into the complex domain at the output aperture. This framework not only enhances the capabilities of D2NNs for visual computing tasks but also opens up new avenues for applications such as image encryption under natural light, demonstrating the potential of diffractive optical networks for modern visual information processing needs.
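A minimal Python sketch of one possible mosaicing scheme is shown below (the exact encoding used in this work is not reproduced here; the 2x2 super-pixel layout is an assumption): each complex pixel is split into four non-negative values, and demosaicing at the output aperture reverses the mapping.

```python
# Hedged sketch: 'mosaicing' a complex image into non-negative intensity values and back.
import numpy as np

def mosaic(z):
    h, w = z.shape
    m = np.zeros((2 * h, 2 * w))
    m[0::2, 0::2] = np.maximum(z.real, 0)    # +Re channel
    m[0::2, 1::2] = np.maximum(-z.real, 0)   # -Re channel
    m[1::2, 0::2] = np.maximum(z.imag, 0)    # +Im channel
    m[1::2, 1::2] = np.maximum(-z.imag, 0)   # -Im channel
    return m

def demosaic(m):
    return (m[0::2, 0::2] - m[0::2, 1::2]) + 1j * (m[1::2, 0::2] - m[1::2, 1::2])

z = np.exp(1j * np.linspace(0, 2 * np.pi, 16)).reshape(4, 4)
assert np.allclose(demosaic(mosaic(z)), z)   # encoding/decoding round-trip is lossless
```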
We present a diffractive network (D2NN) design that all-optically performs distinct transformations for different input data classes. This class-specific transformation D2NN processes the input optical field and generates an output optical field whose amplitude or intensity closely approximates the transformed/encrypted version of the input, using a transformation matrix specific to the corresponding data class. The original information can be recovered only by applying the decryption key assigned to the corresponding data class to the diffractive network's output field-of-view. The efficacy of the presented class-specific image encryption framework was validated both numerically and experimentally, at 1550 nm and 0.75 mm wavelengths.
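The key-based recovery described above can be summarized with the following Python sketch (the matrix sizes and the use of a pseudo-inverse as the decryption key are illustrative assumptions): each data class is assigned its own transformation matrix, and only the key matching the input's class recovers the original information from the network's output.

```python
# Hedged sketch: class-specific transformation (encryption) and key-based recovery.
import numpy as np

rng = np.random.default_rng(2)
n = 64                                                   # vectorized image size (assumed)
T = {c: rng.standard_normal((n, n)) for c in range(3)}   # class-specific transformation matrices
keys = {c: np.linalg.pinv(T[c]) for c in range(3)}       # class-specific decryption keys

x = rng.random(n)                      # an input belonging to class 1
encrypted = T[1] @ x                   # what the diffractive output approximates for class 1
recovered_ok = keys[1] @ encrypted     # matching key -> faithful recovery
recovered_bad = keys[0] @ encrypted    # wrong class key -> garbled output
assert np.allclose(recovered_ok, x)
```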
Fluorescence lifetime imaging microscopy (FLIM) measures the fluorescence lifetimes of fluorescent probes to investigate molecular interactions. However, conventional FLIM systems often require extensive, time-consuming scanning. To address this challenge, we developed a computational imaging technique called light field tomographic FLIM (LIFT-FLIM). Our approach acquires volumetric fluorescence lifetime images in a highly data-efficient manner, significantly reducing the number of scanning steps. We demonstrated LIFT-FLIM using a single-photon avalanche diode array on various biological systems. Additionally, we extended the approach to spectral FLIM and demonstrated high-content multiplexed imaging of lung organoids. LIFT-FLIM can open new avenues in biomedical research.
We report deep learning-based design of diffractive all-optical processors for performing arbitrary linear transformations of optical intensity under spatially incoherent illumination. We show that a diffractive optical processor can approximate an arbitrary linear intensity transformation under spatially incoherent illumination with a negligible error if it has a sufficient number of optimizable phase-only diffractive features distributed over its diffractive surfaces. Our analysis and design framework could open up new avenues in designing incoherent imaging systems with an arbitrary set of spatially-varying point-spread functions (PSFs). Moreover, this framework can also be extended to design task-specific all-optical visual information processors under natural illumination.
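The underlying imaging model can be illustrated in a few lines of Python (pixel counts and PSFs below are placeholders): under spatially incoherent light, each input pixel contributes its own non-negative point-spread function to the output, so the output intensity is a non-negative linear transformation of the input intensity.

```python
# Hedged sketch: incoherent imaging as a non-negative linear map on intensities.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 25                        # input / output pixel counts (assumed sizes)
psfs = rng.random((n_in, n_out))            # one non-negative (flattened) PSF per input pixel
input_intensity = rng.random(n_in)

output_intensity = input_intensity @ psfs   # linear intensity transformation, I_out = A @ I_in
assert np.all(output_intensity >= 0)        # non-negativity is inherent to intensity imaging
```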
As an optical processor, a diffractive deep neural network (D2NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing, completing its tasks at the speed of light propagation through thin optical layers. With sufficient degrees of freedom, D2NNs can perform arbitrary complex-valued linear transformations using spatially coherent light. Similarly, D2NNs can also perform arbitrary linear intensity transformations with spatially incoherent illumination; however, under spatially incoherent light, these transformations are nonnegative, acting on diffraction-limited optical intensity patterns at the input field-of-view. Here, we expand the use of spatially incoherent D2NNs to complex-valued information processing for executing arbitrary complex-valued linear transformations using spatially incoherent light. Through simulations, we show that as the number of optimized diffractive features increases beyond a threshold dictated by the product of the input and output space-bandwidth products, a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and be used for all-optical image encryption using incoherent illumination. These findings are important for the all-optical processing of information under natural light using various forms of diffractive surface-based optical processors.
We present data class-specific transformation diffractive networks that all-optically perform different preassigned transformations for different input data classes. The visual information encoded in the amplitude, phase, or intensity channel of the input field is all-optically processed and transformed/encrypted by the diffractive network. The amplitude or intensity of the resulting field approximates the transformed/encrypted input information using the transformation matrix specifically assigned for that data class. We experimentally validated this class-specific transformation framework by designing and fabricating two diffractive networks at 1550 nm and 0.75 mm wavelengths. The presented framework provides a fast, secure, and energy-efficient solution to data encryption applications.
We report a novel few-shot transfer learning scheme based on a convolutional recurrent neural network architecture, which was used for holographic image reconstruction. Without sacrificing the hologram reconstruction accuracy and quality, this few-shot transfer learning scheme effectively reduced the number of trainable parameters during the transfer learning process by ~90% and improved the convergence speed by 2.5-fold over baseline models. This method can be applied to other deep learning-based computational microscopy and holographic imaging tasks, and facilitates the transfer learning of models to new types of samples with minimal training time and data.
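The parameter-reduction idea can be illustrated with a short PyTorch sketch (the stand-in model below is a plain convolutional stack, not the convolutional recurrent architecture used in this work): most pretrained weights are frozen and only a small subset remains trainable during transfer learning.

```python
# Hedged sketch: freezing most pretrained parameters for few-shot transfer learning.
import torch
import torch.nn as nn

pretrained = nn.Sequential(                     # stand-in for a pretrained reconstruction net
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),
)

for p in pretrained.parameters():               # freeze everything...
    p.requires_grad = False
for p in pretrained[-1].parameters():           # ...except the last layer, which is fine-tuned
    p.requires_grad = True

trainable = sum(p.numel() for p in pretrained.parameters() if p.requires_grad)
total = sum(p.numel() for p in pretrained.parameters())
print(f"trainable fraction: {trainable / total:.1%}")
```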
We present a deep learning-based framework to virtually transfer images of H&E-stained tissue to other stain types using cascaded deep neural networks. This method, termed C-DNN, was trained in a cascaded manner: label-free autofluorescence images were fed to the first generator as input and transformed into H&E stained images. These virtually stained H&E images were then transformed into Periodic acid–Schiff (PAS) stain by the second generator. We trained and tested C-DNN on kidney needle-core biopsy tissue, and its output images showed better color accuracy and higher contrast on various histological features compared to other stain transfer models.
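A minimal PyTorch sketch of the cascaded arrangement is given below (the toy generators are placeholders for the full image-to-image networks): the first generator maps the label-free autofluorescence input to a virtual H&E image, and its output feeds a second generator that produces the virtual PAS image.

```python
# Hedged sketch: cascading two image-to-image generators (autofluorescence -> H&E -> PAS).
import torch
import torch.nn as nn

def toy_generator(in_ch, out_ch):               # stand-in for a full image-to-image generator
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_ch, 3, padding=1))

g_autofluorescence_to_he = toy_generator(1, 3)
g_he_to_pas = toy_generator(3, 3)

autofluorescence = torch.rand(1, 1, 256, 256)   # label-free input image (placeholder data)
virtual_he = g_autofluorescence_to_he(autofluorescence)
virtual_pas = g_he_to_pas(virtual_he)           # cascaded stain-to-stain transformation
```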
Deep learning-based microscopic imaging methods commonly have limited generalization to new types of samples, requiring diverse training image data. Here we report a few-shot transfer learning framework for hologram reconstruction that can rapidly generalize to new types of samples using only small amounts of training data. The effectiveness of this method was validated on small image datasets of prostate and salivary gland tissue sections unseen by the network before. Compared to baseline models trained from scratch, our approach achieved ~2.5-fold convergence speed acceleration, ~20% training time reduction per epoch, and improved image reconstruction quality.
We report a virtual image refocusing framework for fluorescence microscopy, which extends the imaging depth-of-field by ~20-fold and provides improved lateral resolution. This method utilizes point-spread function (PSF) engineering and a cascaded convolutional neural network model, termed W-Net. We tested this W-Net architecture by imaging 50 nm fluorescent nanobeads at various defocus distances using a double-helix PSF, demonstrating a ~20-fold improvement in image depth-of-field over conventional wide-field microscopy. The W-Net architecture can be used to develop deep learning-based image reconstruction and computational microscopy techniques that utilize engineered PSFs, and can significantly improve the spatial resolution and throughput of fluorescence microscopy.
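For intuition, the following Python sketch models a double-helix PSF analytically (the two-Gaussian-lobe form, rotation rate, and lobe separation are assumptions, not the engineered PSF used here): the axial position of an emitter is encoded in the in-plane rotation angle of two lobes, which is the cue a network such as W-Net can exploit for virtual refocusing.

```python
# Hedged sketch: an analytic two-lobe model of a double-helix PSF vs. defocus.
import numpy as np

def double_helix_psf(size, defocus, rot_per_um=np.deg2rad(10), sep=4.0, sigma=1.5):
    """Two Gaussian lobes whose orientation rotates with defocus (defocus in micrometers)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    theta = rot_per_um * defocus
    cx, cy = sep * np.cos(theta), sep * np.sin(theta)
    lobe1 = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    lobe2 = np.exp(-((x + cx) ** 2 + (y + cy) ** 2) / (2 * sigma ** 2))
    psf = lobe1 + lobe2
    return psf / psf.sum()

psf_in_focus = double_helix_psf(32, defocus=0.0)
psf_defocused = double_helix_psf(32, defocus=5.0)   # lobes rotated by ~50 degrees
```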
Holographic imaging plays an essential role in label-free microscopy techniques, and the retrieval of the phase information of a specimen is vital for image reconstruction in holography. Here, we demonstrate recurrent neural network (RNN)-based holographic imaging methods that simultaneously perform autofocusing and holographic image reconstruction from multiple holograms captured at different sample-to-sensor distances. The acquired input holograms are individually back-propagated to a common axial plane without any phase retrieval and then fed into a trained RNN, which reveals phase-retrieved and autofocused images of the unknown samples at its output. As an alternative design, we also employed a dilated convolution in our RNN to demonstrate an end-to-end phase recovery and autofocusing framework without the need for an initial back-propagation step. The efficacy of these RNN-based hologram reconstruction methods was blindly demonstrated using human lung tissue sections and Papanicolaou (Pap) smears. These methods constitute the first demonstration of the use of RNNs for holographic imaging and phase recovery, and would find applications in label-free microscopy and sensing, among other fields.
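The initial back-propagation step referred to above can be sketched in a few lines of Python using the angular spectrum method (the wavelength, pixel pitch, and propagation distance below are placeholder values): each hologram is numerically propagated to a common axial plane, without any phase retrieval, before entering the trained RNN.

```python
# Hedged sketch: angular spectrum free-space (back-)propagation of a hologram.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Free-space propagation of a complex field by a distance z (same units as wavelength)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)          # evanescent components are discarded
    return np.fft.ifft2(np.fft.fft2(field) * H)

hologram = np.random.rand(512, 512)              # measured intensity (placeholder data)
# Back-propagate by the (negative) sample-to-sensor distance to reach the common plane.
back_propagated = angular_spectrum_propagate(np.sqrt(hologram), 0.53e-6, 1.85e-6, -300e-6)
```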
We report a deep learning-based virtual image refocusing method that utilizes double-helix point-spread-function (DH-PSF) engineering and a cascaded neural network model, termed W-Net. This method can virtually refocus a defocused fluorescence image onto an arbitrary axial plane within the sample volume, simultaneously enhancing the imaging depth-of-field and lateral resolution. We demonstrated the efficacy of our method by imaging fluorescent nano-beads at various defocus distances, and also quantified the nano-particle localization performance achieved with our virtually refocused images, demonstrating a ~20-fold improvement in image depth-of-field over wide-field microscopy, enabled by the combination of DH-PSF and W-Net inference.