Recent years have witnessed rapid growth in microscopic techniques for high-speed, large-scale imaging of biological systems. Biological structures are hierarchical and dynamic, which poses challenges in balancing field of view, imaging depth, resolution, speed, and SNR. In this talk, I will first introduce our recent efforts in developing mesoscopic oblique plane microscopy (Meso-OPM) to allow flexible tuning of the imaging scale and resolution, and then discuss the challenges and opportunities for computational augmentation to improve image quality, SNR, and general image formation.
We recently developed a stand-alone single-impulse photoacoustic computed tomography (SIP-PACT) system that integrates high spatiotemporal resolution, deep penetration, and full-view fidelity, as well as anatomical, dynamic, and functional contrasts. To better reveal detailed features inside the body, we developed a half-time, dual-speed-of-sound (dual-SOS) universal back-projection algorithm to compensate for the first-order effect of acoustic inhomogeneity. However, this dual-SOS reconstruction method requires human intervention to tune the reconstruction parameters. We therefore developed a smart reconstruction method based on machine learning that produces anatomical images of nearly the same quality. By localizing single dyed droplets, the spatial resolution of SIP-PACT has been improved six-fold in vivo, albeit at the cost of temporal resolution. Deep learning accelerates the droplet localization process, improving the temporal resolution by almost 20-fold.
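The delay-and-sum back-projection at the core of such a pipeline can be sketched in a few lines. The toy below assumes a single uniform speed of sound, an idealized ring array, and a synthetic point absorber; the actual half-time dual-SOS algorithm and system parameters are more involved.

```python
import numpy as np

# Minimal 2D delay-and-sum back-projection for photoacoustic computed
# tomography. A point absorber produces a pulse on each channel at the
# time-of-flight to that detector; back-projection sums, for every pixel,
# the samples at the corresponding time-of-flight across all detectors.

c = 1500.0                     # speed of sound in water, m/s (assumed uniform)
fs = 40e6                      # sampling rate, Hz
n_det, n_t = 128, 2000

angles = 2 * np.pi * np.arange(n_det) / n_det
det_xy = 0.025 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 25 mm ring

src = np.array([0.005, 0.0])   # synthetic point absorber at (5 mm, 0)
sino = np.zeros((n_det, n_t))
tof = np.linalg.norm(det_xy - src, axis=1) / c
sino[np.arange(n_det), np.round(tof * fs).astype(int)] = 1.0

grid = np.linspace(-0.01, 0.01, 101)
xx, yy = np.meshgrid(grid, grid, indexing="ij")
img = np.zeros_like(xx)
for d in range(n_det):
    dist = np.hypot(xx - det_xy[d, 0], yy - det_xy[d, 1])
    idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_t - 1)
    img += sino[d, idx]

peak = np.unravel_index(np.argmax(img), img.shape)
print(grid[peak[0]], grid[peak[1]])   # reconstruction peaks near the source
```

A dual-SOS variant would use different sound speeds inside and outside the tissue boundary when computing each time-of-flight.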
This conference presentation was prepared for SPIE BiOS, 2024.
We present the Fourier Light Field Camera Array Microscope (FL-CAM) for high-throughput, single-snapshot 3D imaging of mesoscale samples, particularly freely moving model organisms. The FL-CAM replaces the micro-lens array of typical light field systems with a synchronized array of 48 independent imaging systems. It captures multi-perspective images over a 3.1 cm × 4.1 cm field of view with a 38° angular range at up to 200 frames per second, and uses a physics-supervised machine learning algorithm, which accounts for the unique distortion patterns created by the cascade of lenses, to achieve 3D visualization.
Miniaturized endoscopes offer great potential for biomedical imaging applications. However, conventional fiber-optic endoscopes require lens systems, which are not suitable for real-time 3D imaging. Instead, we utilize a diffuser to passively encode incoherent 3D objects into 2D speckle patterns. For computational image reconstruction beyond the optical memory effect in a spatially varying system, a physics-informed neural network is employed. For the first time, we demonstrate single-shot 3D incoherent fiber imaging with keyhole access at video rate. The diffuser fiber endoscope can be applied to fluorescence imaging, which is promising for in vivo deep-brain diagnostics with cellular resolution.
In this talk, I will discuss several projects in my lab at the confluence of optics, sensors, and artificial intelligence. In particular, I will provide examples of how co-designing sensors, optics, and AI algorithms results in superior performance for imaging systems. I will present three example projects: (1) how on-chip computation can allow us to realize high-resolution flash LIDARs, (2) how novel diffractive and meta-optical elements allow us to realize imaging systems with novel functionalities and form factors, and (3) how emerging neural representations, along with high-resolution spatial light modulators, can allow us to image through thick scattering media without the need for guidestars. I will use these projects to argue that we should look at the three computational blocks within an imaging system (optics, sensors, and algorithms) together, and that co-designing them can result in significant performance improvements over the state of the art.
Traditional hyperspectral imaging systems employ scanning or down-sampling along each dimension, leading to prolonged acquisition times or reduced resolution. To remedy this problem, we propose two spectral imaging systems: Hyperspectral Light Field Tomography (Hyper-LIFT) and Tunable Image Projection Spectrometry (TIPS). I will discuss the design and applications of these systems.
Spectral imaging holds great promise for the non-invasive diagnosis of retinal diseases. However, to acquire a spectral datacube, conventional spectral cameras require extensive scanning, leading to prolonged acquisition. They are therefore inapplicable to retinal imaging because of rapid eye movement. To address this problem, we built a coded aperture snapshot spectral imaging fundus camera, which captures a large spectral datacube in a single exposure. Moreover, to reconstruct a high-resolution image, we developed a robust deep unfolding algorithm that uses a state-of-the-art spectral transformer in the denoising network. We demonstrated the performance of the system through various experiments, including imaging standard targets, an eye phantom, and the human retina in vivo.
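The coded-aperture snapshot measurement can be sketched with a toy forward model: each spectral band is masked by the aperture code, sheared by dispersion, and summed on the 2D detector. The sizes, random binary code, and unit-pixel shear per band are assumptions for illustration; a deep unfolding algorithm inverts an operator of this form.

```python
import numpy as np

# Toy CASSI forward model: mask every band with the coded aperture, shear
# each band by its index (modeling the disperser), and integrate on the
# detector, which is widened by the total shear.

rng = np.random.default_rng(0)
H, W, L = 32, 32, 8                                # spatial size, number of bands
cube = rng.random((H, W, L))                       # hypothetical spectral datacube
mask = (rng.random((H, W)) > 0.5).astype(float)    # binary coded aperture

meas = np.zeros((H, W + L - 1))                    # detector widened by the shear
for l in range(L):
    meas[:, l:l + W] += cube[:, :, l] * mask       # mask, shear, integrate

print(meas.shape)
```

Reconstruction then amounts to inverting this linear operator with a learned prior.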
Machine learning operators, such as neural networks, are universal function approximators—albeit, in practice, their generalization ability depends on the quality of the training data and the algorithm designer’s wisdom in choosing a particular operator form, i.e., how well it matches the function at hand. Scientific machine learning is a class of methods that constrain the neural network operator by forcing its output to match time-series data from a partially known dynamical model, e.g., an ordinary or partial differential equation. In this talk, we make the case for regularizing optical image measurements using this approach. Applications are expected in processes with high-complexity constitutive relationships, such as pharmaceutical and cell manufacturing, plant biology, and ecology.
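As a minimal illustration of this regularization idea, the sketch below denoises noisy time-series measurements (e.g., intensities extracted from images) by penalizing disagreement with a partially known model dx/dt = -kx. The estimate itself is the unknown here, rather than a trained neural operator, which keeps the problem linear; all constants are hypothetical.

```python
import numpy as np

# Dynamics-regularized estimation: minimize ||x - y||^2 + lam * ||D x||^2,
# where D encodes the discretized model x_{t+1} - (1 - k*dt) * x_t = 0.
# The closed-form solution comes from the normal equations.

rng = np.random.default_rng(1)
k, dt, n = 0.5, 0.1, 100
t = dt * np.arange(n)
truth = np.exp(-k * t)
y = truth + 0.05 * rng.standard_normal(n)       # noisy observations

D = np.zeros((n - 1, n))                        # forward-difference dynamics operator
D[np.arange(n - 1), np.arange(n - 1)] = -(1 - k * dt)
D[np.arange(n - 1), np.arange(1, n)] = 1.0

lam = 50.0
x_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# The dynamics-regularized estimate should track the truth better than
# the raw measurements do.
print(np.abs(x_hat - truth).mean(), np.abs(y - truth).mean())
```

In the scientific-ML setting, the same penalty would instead constrain the output of a neural network trained on the measurements.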
We present a high-resolution, wide field-of-view (FOV) computational microscope that employs an array of image sensors with gaps between them and a diffractive optical element (DOE) in the Fourier plane. The sensor array consists of a 6 × 8 array of 13-megapixel sensors (~0.6 gigapixels total), spanning a 5 cm × 6.6 cm region with a ~22% fill factor. To fill in the inter-sensor gaps without scanning, we introduce a DOE at the pupil that generates a distributed PSF, allowing us to multiplex information from the missing ~78% of the total area into the sensing regions. Our large-scale reconstruction algorithm demixes the superimposed information, resulting in a more than 4× expanded FOV. Our approach can enable multi-gigapixel imaging in a single snapshot.
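A 1D toy analog of the demixing problem, under assumed sizes and a random distributed PSF: the masked, convolved measurement is inverted with a simple sparsity-regularized solver (ISTA). This is not the talk's large-scale algorithm, only an illustration of the forward model and the demixing principle.

```python
import numpy as np

# Forward model: y = M (h * x), with M a sensor mask (~1/3 fill factor
# here), h a distributed PSF that multiplexes un-sensed regions into the
# sensed pixels, and x a sparse scene. ISTA demixes with an L1 prior.

rng = np.random.default_rng(2)
N = 90
x = np.zeros(N)
x[rng.choice(N, 6, replace=False)] = rng.random(6) + 0.5   # sparse scene

h = rng.standard_normal(N) / np.sqrt(N)                    # distributed PSF
C = np.stack([np.roll(h, i) for i in range(N)], axis=1)    # circular convolution
mask = (np.arange(N) // 10) % 3 == 0                       # sensed pixels
A = C[mask]                                                # 30 x 90 forward operator
y = A @ x

step = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = np.zeros(N)
for _ in range(5000):
    x_hat -= step * (A.T @ (A @ x_hat - y))                # gradient step
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - 1e-4 * step, 0)

rel_res = np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y)
print(rel_res)   # the recovered scene explains the masked measurement
```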
Characterizing the wavefront of freeform optical elements is challenging. Existing interferometric techniques are accurate but not suitable for freeform elements. We propose a computational metrology technique that uses a coded sensor to capture diffraction measurements. Our calibration process achieves remarkable accuracy, with less than 10 nm error over a 40 mm² field of view. The technique allows water-immersion operation and can characterize steep surfaces with slopes up to 30 degrees. With a large field of view, nanometer precision, and a non-interferometric configuration, it has the potential to revolutionize optical metrology in manufacturing.
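The family of methods this builds on, iterative phase retrieval from intensity-only diffraction data, can be sketched with classic Gerchberg-Saxton projections. The actual coded-sensor calibration pipeline is substantially more elaborate; all parameters here are illustrative.

```python
import numpy as np

# Gerchberg-Saxton: alternate between enforcing the measured Fourier
# magnitude and the known object-plane amplitude to recover an unknown
# phase (wavefront) from intensity-only data.

rng = np.random.default_rng(3)
n = 64
amp_obj = np.ones((n, n))                         # known object-plane amplitude
phase_true = 0.5 * rng.random((n, n))             # unknown wavefront (radians)
amp_fourier = np.abs(np.fft.fft2(amp_obj * np.exp(1j * phase_true)))  # measured

est = amp_obj * np.exp(2j * np.pi * rng.random((n, n)))   # random initial guess
err0 = np.abs(np.abs(np.fft.fft2(est)) - amp_fourier).mean()
for _ in range(500):
    F = np.fft.fft2(est)
    F = amp_fourier * np.exp(1j * np.angle(F))    # enforce measured magnitude
    est = np.fft.ifft2(F)
    est = amp_obj * np.exp(1j * np.angle(est))    # enforce known amplitude
err = np.abs(np.abs(np.fft.fft2(est)) - amp_fourier).mean()

print(err0, err)   # the Fourier-magnitude error drops over the iterations
```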
The level of computational power we can currently access has significantly changed the way we think about, process, and interact with image information. In this talk, I will give a broad survey of the exciting work going on in the field of computational imaging, ranging from physics-based computational microscopy methods to machine-learning-driven image rendering. As a case study, I will discuss how computational imaging is impacting digital pathology in both predictable and surprising ways.
Interferometric diffuse optics (iDO) is an exciting new class of approaches to quantitatively assess diffuse light that propagates through turbid media. Compared to classical diffuse optics, iDO provides high information content, including access to light distributions and field fluctuations resolved according to time-of-flight. This rich information content enables multi-parametric reconstruction of complex tissue geometries from a single source-collector pair. In addition, iDO affords flexibility in data acquisition, including, for instance, the ability to realize different effective time-of-flight filters by electronic modulation of the source. Such flexible data acquisition provides new opportunities for computational imaging approaches in diffuse optics, where the acquisition and reconstruction are designed together. Here we summarize the field of interferometric diffuse optics to date, categorizing hardware advances and approaches in the field. We highlight the relative strengths of each approach for different applications. Finally, we identify areas where iDO provides tangible benefits over classical diffuse optics.
Achieving high-precision light manipulation is crucial for delivering information through complex media with high fidelity. Digital micromirror devices (DMDs) have emerged as promising high-speed wavefront shaping devices, but at the cost of compromised fidelity, largely due to the limited degrees of freedom and the challenge of optimizing a binary amplitude mask. Here we leverage the properties of sparse-to-dense transformation in complex media and introduce a sparsity-constrained optimization framework. The proposed framework can enhance existing holographic setups without changes to the hardware, and enables high-fidelity, high-speed wavefront shaping through different scattering media and platforms.
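For context, the classical baseline such frameworks improve upon can be sketched as follows: with a random complex transmission matrix modeling the medium, a binary (DMD-style) mask that keeps only the segments interfering constructively at the target already yields a strong focus. The sizes and matrix statistics below are assumptions.

```python
import numpy as np

# Binary-amplitude wavefront shaping through a scattering medium modeled
# by a random complex transmission matrix T. Keeping only segments whose
# field contributions at the target have positive real part maximizes
# constructive interference under the on/off constraint.

rng = np.random.default_rng(4)
N, M = 1024, 256                         # DMD segments, output modes
T = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * N)

target = 0                               # output mode to focus on
x = (T[target].real > 0).astype(float)   # switch on constructive segments only

I_focus = np.abs(T[target] @ x) ** 2
I_diffuse = np.mean(np.abs(T[1:] @ x) ** 2)
print(I_focus / I_diffuse)               # enhancement over the diffuse background
```

The expected enhancement of this baseline scales roughly as N/(2π); a sparsity-constrained optimization would search over masks rather than use this one-shot rule.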
I will present recent advances in the Computational Miniature Mesoscope, which combines novel miniature optics and advanced algorithms to enable single-shot, high-resolution 3D fluorescence imaging across a wide FOV in a miniaturized platform.
Imaging complex birefringent scattering samples is important for material characterization and biological tissue inspection. In this talk, I will present a computational optical tomography method based on a new multi-slice birefringence analytical model.
We developed differential structured illumination microscopy (dSIM) for efficiently imaging and reconstructing 3D biological samples at high resolution over a large field of view. Using plane-wave and grating-based structured illumination pairs in a differential illumination scheme, dSIM encodes scattering information from high-angle, traditionally nonlinear “darkfield” illuminations into linear intensity measurements, enabling efficient 3D object reconstruction with linear inverse scattering models. This illumination scheme exceeds the 2× resolution enhancement limit of traditional phase-based SIM techniques. We reconstruct 3D objects with 4.5× better resolution than the coherent imaging bandwidth while maintaining an almost 1 mm² field of view. We illustrate the technique in simulation and experimentally on cell cultures and other living biological specimens.
We propose high-throughput, phase-guided digital histological staining based on Fourier ptychographic microscopy (FPM) using a generalizable deep neural network. Since the phase information encodes the refractive index distribution of the specimen, we can digitally stain unstained tissue slides from quantitative phase images, reproducing the color features that would be observed under a conventional microscope after the staining process. Here, we utilize a Fourier ptychographic microscope, which enables wide-field, high-resolution quantitative phase imaging from multiple measurements acquired at varying illumination angles. Additionally, we design a neural network that generalizes well across samples by incorporating the learned forward model. With this architecture, we realize an efficient and effective digital staining process that does not require a labeled dataset of unstained tissue slides. We will report digitally stained results from raw FPM images, compare performance, and discuss future directions of our approach.
Fluorescent nanoparticle labeling methods produce high-contrast signals for rare-cell detection. Unfortunately, many imaging methods cannot yet reach adequate space-bandwidth products for these samples. We propose a lens-free, time-gated fluorescence system using pulsed excitation and long-lifetime fluorescent nanoparticles. We minimize the required sample-to-sensor distance by applying a temporal filter instead of spectral filters, and achieve a resolution of 8.77 μm. This approach simplifies the architecture, requires minimal image reconstruction, and reduces system size and cost. Our method surpasses the performance of other non-computational fluorescent lens-free imaging approaches and provides a foundation for future resolution enhancement.
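The temporal-filtering principle can be sketched numerically: a gate opened after the excitation pulse rejects short-lived background while retaining long-lifetime nanoparticle emission. The lifetimes, amplitudes, and gate timing below are illustrative, not the system's actual parameters.

```python
import numpy as np

# Time-gated detection: integrate the signal only after a delay long
# enough for the nanosecond-lifetime background to decay, leaving the
# microsecond-to-millisecond nanoparticle emission.

t = np.linspace(0, 100e-6, 10000)            # 100 us detection window
tau_bg, tau_np = 10e-9, 300e-6               # background vs nanoparticle lifetime
bg = np.exp(-t / tau_bg)                     # bright, short-lived background
sig = 0.01 * np.exp(-t / tau_np)             # dimmer long-lifetime emitter

gate = t > 1e-6                              # open gate 1 us after the pulse
contrast_gated = sig[gate].sum() / max(bg[gate].sum(), 1e-30)
contrast_ungated = sig.sum() / bg.sum()
print(contrast_gated > contrast_ungated)     # gating boosts signal contrast
```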
Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. This talk will describe new microscopes and space-time algorithms that enable 3D fluorescence and phase measurement at high resolution on dynamic samples. Traditional model-based image reconstruction algorithms work together with neural networks to optimize the inverse-problem solver and the data-capture strategy in order to account for model mismatch and aberrations.
In this talk, we will report two recent advances at the intersection of complex optics and machine learning. We will first discuss how a multiple-scattering cavity can be harnessed as a nonlinear optical data encoder for ultrafast, parallel information processing. We will then discuss how complex optics can be exploited to build reconfigurable, trainable optical neural networks. We demonstrate our work by performing machine learning tasks on both industrial and scientific datasets.
We present an approach for quantitative phase imaging (QPI) through random, unknown phase diffusers using a diffractive optical network consisting of successive layers optimized through deep learning. Unlike traditional digital reconstruction methods, our all-optical diffractive processor requires no external power beyond the illumination light and completes its QPI reconstruction as the light is transmitted through a thin diffractive processor. With its low power consumption, high frame rate, and compact size, our design offers a transformative alternative for QPI through random, unknown phase diffusers, and it can be readily scaled to work at different wavelengths for various applications in biomedical imaging.
With applications in quantitative metabolomics and label-free digital pathology, quantitative phase microscopy (QPM) measures refractive index maps of thin transparent specimens such as live cells or tissue sections. In QPM, refractive index maps are usually reconstructed from interference measurements of the object’s light field with respect to a reference field. To this end, many previous works focused on designing stable full-field interferometers from the bottom up. In this work, we present an alternative strategy that designs a QPM system top-down, starting from the desired measurement outcomes, with no explicit knowledge about interferometry. We call our inverse design strategy Differentiable Microscopy. Using this approach, we designed a range of Fourier-filter-based QPM systems that do not require computational post-reconstruction. Our designs are superior to existing similar designs in numerical benchmarks. We also experimentally validated one design using a spatial light modulator. Finally, to fabricate these custom designs in the future, we also propose a fabrication-aware Differentiable Microscopy pipeline.
I will discuss diffractive optical networks designed by deep learning to all-optically implement various complex functions as the input light diffracts through spatially-engineered surfaces. These diffractive processors designed by deep learning have various applications, e.g., all-optical image analysis, feature detection, object classification, computational imaging and seeing through diffusers, also enabling task-specific camera designs and new optical components for spatial, spectral and temporal beam shaping and spatially-controlled wavelength division multiplexing. These deep learning-designed diffractive systems can broadly impact (1) all-optical statistical inference engines, (2) computational camera and microscope designs and (3) inverse design of optical systems that are task-specific. In this talk, I will give examples of each group, enabling transformative capabilities for various applications of interest in e.g., autonomous systems, defense/security, telecommunications as well as biomedical imaging and sensing.
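The physical building block of such diffractive processors, free-space propagation between spatially engineered phase layers, can be sketched with the angular-spectrum method. The layers below are random rather than deep-learning-designed, and the wavelength, pitch, and spacing are illustrative.

```python
import numpy as np

# Angular-spectrum simulation of a plane wave passing through two phase
# layers: modulate by each layer's phase, then diffract to the next one
# via the angular-spectrum transfer function.

n, wl, pitch, z = 128, 532e-9, 4e-6, 2e-3   # grid, wavelength, pixel, spacing
fx = np.fft.fftfreq(n, pitch)
FX, FY = np.meshgrid(fx, fx, indexing="ij")
arg = np.maximum(1.0 / wl**2 - FX**2 - FY**2, 0.0)
H = np.exp(2j * np.pi * z * np.sqrt(arg))   # angular-spectrum transfer function

def propagate(u):
    return np.fft.ifft2(np.fft.fft2(u) * H)

rng = np.random.default_rng(5)
layers = [np.exp(2j * np.pi * rng.random((n, n))) for _ in range(2)]

u = np.ones((n, n), dtype=complex)          # plane-wave input
for phase in layers:
    u = propagate(u * phase)                # modulate, then diffract

# Pure phase masks and propagating-wave transfer conserve energy.
print(np.sum(np.abs(u) ** 2))
```

In a trained diffractive network, the layer phases would be optimized by gradient descent through exactly this differentiable forward model.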
Deep learning has revolutionized computational imaging, offering powerful solutions for performance enhancement and addressing diverse challenges. However, the traditional discrete pixel-based representations limit their ability to capture continuous, multiscale details of objects.
Here, we introduce a novel Local Conditional Neural Fields (LCNF) framework leveraging a continuous implicit neural representation. We demonstrate the capabilities of LCNF in solving the highly ill-posed inverse problem of Fourier ptychographic microscopy (FPM) with multiplexed measurements. LCNF achieves versatile, generalizable continuous-domain super-resolution image reconstruction by combining a CNN-based encoder and an MLP-based decoder conditioned on a learned local latent vector. We show that LCNF can accurately reconstruct wide-field-of-view, high-resolution phase images, robustly capture continuous object priors, and eliminate various phase artifacts, even when trained on imperfect datasets. We further demonstrate that LCNF generalizes strongly, reconstructing diverse biological samples from limited training data or from datasets simulated using natural images.
We present a scheme termed Hardware Domain Adaptation that transforms the visual appearance of biomedical images to match that of a given optical system. This allows us to exploit large publicly available datasets for the training of custom machine learning algorithms for inference on data captured by different imaging hardware for the same task. Moreover, this method allows us to train models for lower-quality image datasets that are difficult or impossible to annotate manually. We demonstrate the efficacy of this method by using publicly available data to train an algorithm to identify and count white blood cells in images obtained on our custom hardware.
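As a toy stand-in for the learned appearance transformation, classical histogram matching maps images from one dataset onto the intensity statistics of a target system. The talk's method is learned and far richer; this only illustrates the domain-adaptation intent, with synthetic images in place of real data.

```python
import numpy as np

# Histogram matching: replace each source pixel with the target value of
# equal rank, so the transformed image follows the target's empirical
# intensity distribution.

rng = np.random.default_rng(7)
src = rng.normal(0.5, 0.1, (64, 64)).clip(0, 1)    # "public dataset" image
tgt = rng.normal(0.3, 0.05, (64, 64)).clip(0, 1)   # "custom hardware" image

s_sorted = np.sort(src.ravel())
t_sorted = np.sort(tgt.ravel())
ranks = np.searchsorted(s_sorted, src.ravel())
matched = t_sorted[np.clip(ranks, 0, t_sorted.size - 1)].reshape(src.shape)

print(matched.mean(), tgt.mean())   # matched statistics follow the target
```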
Traditional fluorescence microscopy with conventional optics suffers from a trade-off among resolution, field of view (FOV), and miniaturization. Computational imaging techniques overcome these limitations by leveraging miniature optics and strong multiplexing. However, the shift-variant degradation caused by miniaturized lenses poses computational and memory challenges. In this work, we developed a Multi-channel FourierNet that learns global shift-variant filters in the frequency domain without any prior knowledge, providing consistent performance over a large-scale FOV. Additionally, we validate the effectiveness of our network by visualizing the correspondence between the saliency map and the truncated PSFs from different viewpoints. We demonstrate that the network, trained on simulation data, can perform real-time reconstruction on biological samples. We believe this approach holds great promise for advancing computational imaging techniques across diverse applications.
Quantitative phase imaging (QPI) is a powerful label-free imaging technique that enables high-resolution, three-dimensional imaging of unlabeled samples by exploiting refractive index (RI) distributions as intrinsic imaging contrast. In this talk, we present the latest developments in 2D and 3D QPI techniques at visible and X-ray wavelengths. We will elucidate the principles of various QPI techniques, detail the reconstruction algorithms involved, and explore the enhancement of image analysis through machine learning algorithms. Moreover, we will delve into potential applications spanning cell biology, biotechnology, and industrial inspection in fields such as semiconductors and display devices.
Ptychographic systems have required oversampling for phase retrieval. By combining novel optical design using array cameras and metalenses with neural feature abstraction, it is possible to code systems for snapshot ptychography. Resolution limits in 2D and 3D, coherence requirements, and experiments and simulations for this application are presented.
Fourier ptychographic microscopy (FPM) computationally overcomes the trade-off between resolution and field of view in imaging systems. Here, we present a time-efficient, physics-based algorithm for FPM image-stack reconstruction using implicit neural representation and tensor low-rank approximation. The method requires no pre-training and can be easily adapted to various computational microscopes. On the same graphics processing unit (GPU), it runs several times faster than conventional FPM image-stack reconstruction methods and significantly reduces the data volume required for storage. The proposed method has potential applications in digital pathology and its downstream data-driven tasks, and can benefit data collaboration in the biological sciences.
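The data-volume reduction from a low-rank approximation can be illustrated with a plain truncated SVD on a flattened, synthetic image stack. This is a minimal sketch of the low-rank idea only, not the authors' reconstruction algorithm (which also involves an implicit neural representation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image stack": n_img images of size h x w that share spatial
# structure, so the flattened stack (n_img x h*w) is approximately low-rank.
h, w, n_img, rank = 32, 32, 64, 4
basis = rng.standard_normal((rank, h * w))    # shared spatial modes
coeffs = rng.standard_normal((n_img, rank))   # per-image weights
stack = coeffs @ basis                        # exactly rank-4 by construction

# Truncated SVD keeps only the leading r singular triplets.
U, s, Vt = np.linalg.svd(stack, full_matrices=False)
r = 4
approx = U[:, :r] * s[:r] @ Vt[:r]

rel_err = np.linalg.norm(stack - approx) / np.linalg.norm(stack)
stored = U[:, :r].size + r + Vt[:r].size      # floats kept in factored form
full = stack.size
print(f"relative error {rel_err:.2e}, compression {full / stored:.1f}x")
```

Storing the factors instead of the full stack is what drives the data-volume reduction; real FPM stacks are only approximately low-rank, so `r` trades accuracy against compression.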
We present a novel polarization-sensitive Fourier ptychographic microscopy (FPM) method that leverages multiplexing in the Fourier plane, eliminating the need for costly polarization cameras or mechanical polarizer rotation. By simply introducing semicircular 0° and 90° linear polarizers in the Fourier plane of a conventional FPM setup, we split a single pupil into two half-circle pupils, enabling simultaneous multiplexing of two channels' signals within a single measurement. By imposing the two pupil functions on FP phase retrieval, we reconstruct the amplitude and phase of the two orthogonal polarization channels, ultimately obtaining the Jones matrix of the anisotropic specimen. To validate the proposed method, we accurately reconstruct the slow-axis orientation and phase retardation of monosodium urate (MSU) crystals, a well-known birefringent object.
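The final step, recovering axis orientation and retardation from a Jones matrix, can be sketched for an ideal linear retarder. The matrix model and the eigen-decomposition below are textbook relations, not the paper's reconstruction pipeline, and the orientation recovery as written assumes a fast-axis angle in the first quadrant:

```python
import numpy as np

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]])

def retarder(delta, theta):
    """Jones matrix of a linear retarder: retardance delta, fast-axis angle theta."""
    return rot(-theta) @ np.diag([1.0, np.exp(1j * delta)]) @ rot(theta)

# Forward model with assumed ground-truth values.
delta_true, theta_true = 0.8, np.radians(30)
J = retarder(delta_true, theta_true)

# Recovery: the eigenvectors of J are the retarder axes, and the phase
# difference of the eigenvalues is the retardance.
vals, vecs = np.linalg.eig(J)
phases = np.angle(vals)
delta_est = np.abs(phases[1] - phases[0])
fast = vecs[:, np.argmin(np.abs(phases))]   # eigenvector with phase ~0
theta_est = np.arctan2(np.abs(fast[1]), np.abs(fast[0]))
print(delta_est, np.degrees(theta_est))     # recovers 0.8 rad and 30 degrees
```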
For imaging, an ideal lens should give high-resolution images across a large field of view (FOV). However, designing and manufacturing such a lens is nearly impossible due to the intrinsic properties of real materials. Fourier ptychographic microscopy (FPM), a computational imaging method, has attracted broad interest because it compensates for the imperfections of a real objective. With the aid of computation, FPM can provide aberration-free, high-resolution images over a large FOV. However, its iterative reconstruction is non-convex and may not converge to the true solution. Moreover, its aberration-correction algorithm does not work well under large aberrations. In this talk, I will present a new imaging method, termed analytical multiangle illumination microscopy (AMIM), that performs complex-field reconstruction using entirely analytical methods. Using critical-angle and darkfield measurements, AMIM extracts the aberration and reconstructs the complex field in a purely analytical, non-iterative way. We show that AMIM works well even under extremely large aberrations.
In pushing physical limits, it is common to optimize one parameter at the price of sacrificing others. To address such tradeoffs, we integrate advanced instrumentation with computational approaches to enable high-speed, high-content, and high-sensitivity mapping of biomolecules in cells and tissues. Using this strategy, we have developed several computational chemical imaging platforms.
Fluorescence lifetime imaging (FLI) has become an increasingly popular method, as it provides unique insights into numerous biological processes. However, FLI is not a direct imaging modality: datasets must be post-processed to quantify fluorescence lifetimes or lifetime-based parameters. Such processing can be complex and computationally expensive, and it requires a high level of expertise as well as user input. Herein, we report on the development and validation of deep learning (DL) models as fast, user-friendly image-formation tools for FLI. To date, our contributions have focused on producing quantitative lifetime images from raw FLI measurements without iterative solvers or user input; enabling enhanced multiplexed studies by leveraging spectral and lifetime contrast simultaneously; performing FLI topography corrected for tissue optical properties; facilitating the implementation of new high-end instrumental concepts based on compressive sensing; and performing end-to-end 3D optical reconstructions.
Fluorescence lifetime imaging microscopy (FLIM) measures the fluorescence lifetimes of fluorescent probes to investigate molecular interactions. However, conventional FLIM systems often require extensive, time-consuming scanning. To address this challenge, we developed a novel computational imaging technique called light field tomographic FLIM (LIFT-FLIM). Our approach acquires volumetric fluorescence lifetime images in a highly data-efficient manner, significantly reducing the number of scanning steps. We demonstrated LIFT-FLIM using a single-photon avalanche diode array on various biological systems. Additionally, we extended the approach to spectral FLIM and demonstrated high-content multiplexed imaging of lung organoids. LIFT-FLIM can open new avenues in biomedical research.
Wavefront shaping can overcome light scattering to focus deep inside turbid media, but it typically requires feedback from guidestars and is restricted by the isoplanatic patch size and modulation speed. Here, we present scattering matrix tomography (SMT), which uses the measured scattering matrix to perform virtual spatiotemporal wavefront shaping, with a digitally scanned confocal spatiotemporal focus and guidestar-free wavefront optimization for every isoplanatic patch. We experimentally use SMT to achieve diffraction-limited resolution behind one-millimeter-thick ex vivo mouse brain tissue that reduces the target signal by over ten million-fold. We also realize 3D tomography with ideal transverse and axial resolutions inside a dense colloid, where conventional imaging methods fail due to multiple scattering, across a depth of field of over 70 times the Rayleigh range.
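The core idea of synthesizing a focusing wavefront from a measured matrix can be sketched with phase conjugation of one row of a random transmission matrix. This toy model (a Gaussian random matrix and a single target output mode) illustrates the principle only; it is not the SMT reconstruction itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n_out, n_in = 64, 64

# Random complex matrix standing in for a measured scattering matrix
# of a turbid medium (rows: output modes, columns: input modes).
T = (rng.standard_normal((n_out, n_in))
     + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_in)

target = 10
# Phase-conjugate the row that maps inputs to the target output mode:
# with this input wavefront, all contributions at `target` add in phase.
inp = np.conj(T[target]) / np.linalg.norm(T[target])
intensity = np.abs(T @ inp) ** 2

enhancement = intensity[target] / np.mean(np.delete(intensity, target))
print(f"focus enhancement: {enhancement:.0f}x")
```

The enhancement scales with the number of controlled input modes, which is why matrix-based approaches can form a sharp focus where direct imaging fails.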
Imaging through a diffusive medium has a wide range of biological and medical applications. However, multiple scattering in a turbid medium distorts the spatiotemporal information of the incident light. We propose a method based on light field tomography that images an object through a turbid medium without requiring prior knowledge of the medium or target, or inverting the ill-posed diffusion equation. The approach selects ballistic photons via time gating and maps projections at different angles onto a 1-D detector using a cylindrical lens array. This ballistic-photon measurement, together with its depth-retrieval capacity, enables 3D imaging through a turbid medium.
In this work, we propose an unsupervised module that combines the wavelet-packet transform and k-means++ clustering to extract frequency features and classify patches of medical images. This module produces a region label for each patch, bypassing heavy computation and manual labelling. Our model, powered by this module, outperforms CycleGAN in terms of SSIM and PSNR and partly outperforms the supervised pix2pix, though it does not yet match the state-of-the-art weakly supervised WeCREST. This extension of the original WeCREST provides new insights into wavelet-based feature extraction and unsupervised region-style classification for medical images.
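A minimal sketch of the module's two ingredients, wavelet-style frequency features and k-means clustering, is shown below on synthetic patches. The one-level Haar transform, log-energy features, and farthest-point seeding (a simplified stand-in for k-means++ initialization) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar2d(patch):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (patch[0::2] + patch[1::2]) / 2          # row averages
    d = (patch[0::2] - patch[1::2]) / 2          # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def features(patch):
    # Log-energy of each subband as the patch descriptor.
    return np.array([np.log(np.sum(b ** 2) + 1e-9) for b in haar2d(patch)])

def kmeans(X, k=2, iters=50):
    # Farthest-point seeding, then standard Lloyd iterations.
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(1))]
    centers = np.array([c0, c1])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Synthetic patches: smooth gradients vs. high-frequency noise textures.
ramp = np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
smooth = [ramp + 0.01 * rng.standard_normal((16, 16)) for _ in range(20)]
noisy = [rng.standard_normal((16, 16)) for _ in range(20)]
X = np.array([features(p) for p in smooth + noisy])
labels = kmeans(X)
print(labels)  # patches of the same texture land in the same cluster
```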
Fluorescence lifetime imaging microscopy (FLIM) provides valuable insights into molecular interactions and states in complex cellular environments. Conventional FLIM analysis methods struggle with accurate lifetime estimation at low photons-per-pixel (PPP). We propose DeepFLR, a self-supervised deep learning framework for robust FLIM signal restoration with limited photons. By exploiting the spatiotemporal dependencies of FLIM signals, DeepFLR reconstructs the fluorescence decay curves, leading to accurate lifetime estimation using existing lifetime estimation methods. The results demonstrate that DeepFLR enables reliable lifetime estimation with fewer than 10 PPP for a diverse set of biological samples. The proposed approach significantly reduces the photon budget of FLIM and opens up numerous low-light FLIM applications.
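One of the "existing lifetime estimation methods" that restored decay curves could feed is a simple log-linear least-squares fit of a mono-exponential decay. The sketch below shows that standard estimator on noise-free synthetic data (it is not the DeepFLR network; with real photon counts, Poisson noise calls for weighted or maximum-likelihood fitting):

```python
import numpy as np

# Synthetic mono-exponential fluorescence decay histogram:
# I(t) = A * exp(-t / tau), sampled over TCSPC-style time bins.
tau_true = 2.5                    # lifetime in ns (assumed)
t = np.linspace(0, 12.5, 256)     # time bins in ns (assumed)
counts = 1000 * np.exp(-t / tau_true)

# Log-linear least squares: ln I(t) = ln A - t / tau, so the fitted
# slope gives the lifetime as tau = -1 / slope.
slope, intercept = np.polyfit(t, np.log(counts), 1)
tau_est = -1.0 / slope
print(f"estimated lifetime: {tau_est:.2f} ns")  # recovers 2.50 ns
```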
Quantitative differential phase-contrast (DPC) microscopy is a practical imaging method that recovers phase images of transparent objects from multiple intensity images. Conventional DPC methods rely on a linearized image-formation model that applies only to weakly scattering objects, limiting the phase range that can be accurately imaged. Additionally, these methods require extra measurements and complex algorithms to correct system aberrations. In this presentation, we introduce self-calibrated DPC microscopy using an untrained neural network (UNN-DPC) that incorporates a nonlinear image-formation model and system aberrations. Our method overcomes the limitations of the linearized model and enables the simultaneous reconstruction of complex object information and aberrations without a training dataset.
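For reference, the basic DPC measurement that such methods start from is the normalized difference of two intensity images taken under complementary half-pupil illuminations. The toy intensity values below are illustrative assumptions:

```python
import numpy as np

# DPC signal from two complementary half-pupil illuminations:
# (I_top - I_bottom) / (I_top + I_bottom). A small eps avoids
# division by zero in dark regions.
def dpc_signal(I_top, I_bottom, eps=1e-9):
    return (I_top - I_bottom) / (I_top + I_bottom + eps)

# Toy 2x2 images: a phase gradient shows up as an asymmetry
# between the two half-pupil intensity images.
I_top = np.array([[1.2, 1.0], [1.1, 0.9]])
I_bottom = np.array([[0.8, 1.0], [0.9, 1.1]])
print(dpc_signal(I_top, I_bottom))
```

The linearized model then relates this signal to the phase through a transfer function; the nonlinear UNN-DPC formulation in the abstract removes the weak-scattering restriction of that step.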
Compressed sensing has emerged as a promising technique for high-speed 3-D imaging. Recent work proposed light field tomography (LIFT), which acquires en-face one-dimensional (1-D) projections instead of 2-D images and reformulates image formation as a computed tomography problem. The dimensionally reduced light field brings high temporal resolution and synthetic refocusing in post-processing. We hereby propose a scanning light-sheet system with LIFT detection, specifically tailored for kilohertz 3-D fluorescence microscopy. The selective illumination introduces signal sparsity, and thus better reconstruction quality, for the compressive detection system, while the high-speed light field imager replaces the active focusing unit of a scanning light-sheet system and increases the volume rate of 3-D detection.
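The computed-tomography reformulation can be sketched with a toy parallel-beam projector and unfiltered back-projection in NumPy. The nearest-bin geometry below is a simplification for illustration, not the LIFT forward model:

```python
import numpy as np

def project(img, theta, n_bins):
    """1-D parallel-beam projection of a 2-D image at angle theta."""
    n = img.shape[0]
    ys, xs = np.mgrid[:n, :n] - (n - 1) / 2
    s = xs * np.cos(theta) + ys * np.sin(theta)      # detector coordinate
    bins = np.clip(np.round(s + n_bins // 2).astype(int), 0, n_bins - 1)
    proj = np.zeros(n_bins)
    np.add.at(proj, bins.ravel(), img.ravel())       # accumulate along rays
    return proj

def backproject(projs, thetas, n):
    """Unfiltered back-projection: smear each projection along its rays."""
    ys, xs = np.mgrid[:n, :n] - (n - 1) / 2
    recon = np.zeros((n, n))
    for proj, th in zip(projs, thetas):
        s = xs * np.cos(th) + ys * np.sin(th)
        bins = np.clip(np.round(s + len(proj) // 2).astype(int),
                       0, len(proj) - 1)
        recon += proj[bins]
    return recon

# Point source at (row 20, col 40) in a 64x64 slice.
n = 64
img = np.zeros((n, n))
img[20, 40] = 1.0
thetas = np.linspace(0, np.pi, 16, endpoint=False)
projs = [project(img, th, n) for th in thetas]
recon = backproject(projs, thetas, n)
peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)  # the back-projected rays intersect at (20, 40)
```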
The application of compressed imaging methods in optics has led to advances in high-speed snapshot imaging such as light field tomography (LIFT). LIFT uses computed tomography and light field imaging to compress a snapshot 3-D field of view into a 1-D line of light, achieving excellent imaging speeds when paired with a high-speed linear sensor. We propose using LIFT to image multi-neuron dynamics with the ASAP voltage indicator, which has proven challenging to image due to its millisecond dynamics and low brightness. Here we report a prototype system whose performance we confirm by imaging high-speed dynamics of the calcium indicator GCaMP.
Typical LED array microscopes require multiple image acquisitions for phase retrieval. Here, we propose a polarized LED array microscope for single-shot quantitative phase imaging with aberration correction. We implemented polarization-encoded illumination multiplexing by placing a custom-made polarization filter on top of the LED array. A single image is captured by a polarization sensor under polarized LED illumination, and the quantitative phase is reconstructed by incorporating the polarization multiplexing model into a phase retrieval algorithm. We show that the proposed technique reconstructs aberration-corrected phase images from a single-shot intensity image.
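Reading out a polarization sensor amounts to strided slicing of its 2x2 analyzer mosaic. The sketch below assumes a common 90/45/135/0-degree layout, which may differ from the custom filter and sensor actually used:

```python
import numpy as np

# A polarization sensor interleaves linear analyzers in a repeating
# 2x2 mosaic. Extracting each analyzer channel (at quarter resolution)
# is a strided slicing of the raw frame. The layout is an assumption.
def split_polarization_mosaic(raw):
    return {
        90:  raw[0::2, 0::2],
        45:  raw[0::2, 1::2],
        135: raw[1::2, 0::2],
        0:   raw[1::2, 1::2],
    }

# Toy 4x4 raw frame with pixel values 0..15 to show which pixels
# land in which channel.
raw = np.arange(16).reshape(4, 4)
channels = split_polarization_mosaic(raw)
print(channels[0])  # [[ 5  7] [13 15]]
```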
We propose a learning-based dual-shot hyperspectral recovery technique with scene-aware optimal illumination design, in which two trichromatic images captured under illuminations with different power spectral densities are used to recover full-band hyperspectral signatures. The scene-aware illumination optimization is obtained algorithmically by a deep learning model and realized in hardware by a self-built DMD-based spectrally tunable light source.
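The linear-algebra core of dual-shot recovery can be sketched as a least-squares solve: two trichromatic measurements under two illuminants give six equations for the coefficients of a low-dimensional spectral basis. All matrices below are random stand-ins, not learned or measured quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bands, k = 31, 6                      # spectral bands; basis dimension
B = rng.standard_normal((n_bands, k))   # hypothetical spectral basis
C = rng.standard_normal((3, n_bands))   # toy camera RGB sensitivities
L1 = np.diag(rng.uniform(0.5, 1.5, n_bands))  # illuminant 1 power spectrum
L2 = np.diag(rng.uniform(0.5, 1.5, n_bands))  # illuminant 2 power spectrum

# Ground-truth spectrum lies in the span of the basis.
alpha_true = rng.standard_normal(k)
r = B @ alpha_true

# Two trichromatic shots -> 6 equations for the 6 basis coefficients.
y = np.concatenate([C @ L1 @ r, C @ L2 @ r])
A = np.vstack([C @ L1 @ B, C @ L2 @ B])
alpha_est, *_ = np.linalg.lstsq(A, y, rcond=None)
r_est = B @ alpha_est
print(np.max(np.abs(r_est - r)))        # exact recovery in this noiseless toy
```

In the actual technique, the deep learning model chooses the two illuminant spectra (`L1`, `L2` here) so that the resulting system is as well conditioned as possible for real scenes.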