Optical imaging has served as a primary method for collecting information about biosystems across scales—from the functionalities of tissues to the morphological structures of cells and even biomolecules. However, to adequately characterize a complex biosystem, an imaging system with a number of resolvable points, referred to as the space-bandwidth product (SBP), in excess of one billion is typically needed. Since a gigapixel-scale SBP far exceeds the capacity of current optical imagers, compromises must be made, yielding either a low spatial resolution or a narrow field-of-view (FOV). The problem originates from the constituent refractive optics—the larger the aperture, the more challenging the correction of lens aberrations. Therefore, it is impractical for a conventional optical imaging system to achieve an SBP over hundreds of millions. To address this unmet need, a variety of high-SBP imagers have emerged over the past decade, enabling unprecedented resolution and FOV beyond the limits of conventional optics. We provide a comprehensive survey of high-SBP imaging techniques, exploring their underlying principles and applications in bioimaging.
1. Introduction

Information requirements in bio-optical imaging are ever increasing. This demand stems from a landscape shift in contemporary biology, from morphological exploration and phenotypic probing of organisms to an ongoing search for quantitative insights into underlying mechanisms at the cellular and molecular levels. For example, observing large-scale neuronal activities of a brain1 requires an imaging system with subcellular resolution within a field-of-view (FOV) that encompasses the whole brain. Imaging a whole mouse brain at such a resolution requires 500 billion spatial samplings, an enormous quantity that is far beyond the acquisition bandwidth of most current imaging systems. For optical imaging, the information content is commonly described by the space-bandwidth product (SBP), a dimensionless quantity that equals the number of optically resolvable spots within an FOV.2,3 The higher the SBP, the more information we acquire, and the richer the measurement. In practice, the SBP of an imaging system is determined by two factors: the pixel count of the camera and the performance of the optics. With recent advances in large-format image sensors, imaging optics have become the bottleneck in achieving a large SBP—in conventional imaging systems, the fundamental limit is optical diffraction, while the practical limits are the geometrical aberrations and the mechanical/thermal constraints of the constituent components. For example, a high-performance objective lens (Olympus, UPlanSApo20X) with 20× magnification and a 0.75 numerical aperture (NA) has an FOV of 1.35 mm in diameter and captures spatial frequencies up to 2NA/λ at a 550-nm wavelength. Neglecting aberrations, the total SBP amounts to tens of millions of resolvable spots. In practice, for state-of-the-art microscope objective lenses with similar form factors, a typical SBP varies from a few million to tens of millions.
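The back-of-the-envelope SBP estimate above can be checked numerically. This is only a sketch: it uses the quantities quoted in the text (1.35-mm FOV, 0.75 NA, 550-nm wavelength) together with the incoherent 2NA/λ cutoff, and counts resolvable spots as the product of the FOV area and the 2-D frequency support.

```python
import math

def diffraction_limited_sbp(fov_diameter_um, na, wavelength_um):
    """Estimate the incoherent diffraction-limited SBP as the product of
    the FOV area and the 2-D spatial-frequency support (cutoff 2NA/lambda)."""
    fov_area = math.pi * (fov_diameter_um / 2) ** 2   # um^2
    f_cutoff = 2 * na / wavelength_um                 # cycles/um
    bandwidth_area = math.pi * f_cutoff ** 2          # cycles^2/um^2
    return fov_area * bandwidth_area

# Example from the text: 20x/0.75 NA objective, 1.35-mm FOV, 550-nm light
sbp = diffraction_limited_sbp(fov_diameter_um=1350, na=0.75, wavelength_um=0.55)
print(f"{sbp / 1e6:.0f} million resolvable spots")  # tens of millions
```

The result lands in the "tens of millions" range quoted for state-of-the-art objectives, consistent with the survey's claim that conventional lenses fall far short of a gigapixel SBP.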
By contrast, current high-resolution complementary metal-oxide-semiconductor (CMOS) sensors can have as many as 250 million pixels.4 Even commercial smartphones with 100-megapixel cameras are available,5 far exceeding the SBP of conventional lenses. To increase the SBP, the conventional approach relies on complicated lens assemblies, resulting in bulky setups and costly fabrication. Even with a long history of continued effort, very few modern optical systems with a large aperture achieve diffraction-limited performance across a large FOV—we are approaching the end of a Moore's-law-like limit in which the SBP of a system can hardly be improved solely by manipulating the lens parameters.6 To overcome this limitation, modern approaches follow three strategies. The first, referred to as the spatial-domain method, captures multiple images in the spatial domain to scale up the SBP. Representative techniques encompass array microscopy7–11 and multiscale optical imaging.12–14 The second, referred to as the frequency-domain method, augments the SBP by performing a series of measurements in the Fourier domain. Within this category, the most important techniques include Fourier ptychography15–19 and structured illumination microscopy (intensity20,21 and complex field imaging22–25). It is worth noting that both spatial- and frequency-domain methods leverage the advantage of small-aperture optics in managing lens aberrations.12 By contrast, the third strategy—wavefront-engineering-based methods—utilizes large-aperture lenses. The correction of lens aberrations is accomplished by altering the phase of the wavefront through either hardware-26–28 or computation-based approaches.29–32 In this review, we provide a comprehensive survey of these high-SBP imaging techniques in a unified framework, detailing their underlying principles, interconnections, and comparative advantages in bioimaging.
We first introduce the concept of the SBP and discuss its relationship with the information capacity of an optical imaging system. The subsequent sections focus on strategies to increase the SBP and their applications in bioimaging. Finally, we summarize the field and provide perspectives. The scope of this review is limited to methods that increase the SBP of the imaging system rather than correcting for sample-induced aberrations and scattering.33–38 In practice, lensless on-chip microscopy systems have been known for high-SBP imaging because of the absence of imaging lenses.39–43 However, they are applicable only to samples in proximity to the image sensor, restricting the breadth of biological applications.44,45 Therefore, we exclude them from the discussion herein.

2. Bioimaging and Space-Bandwidth Product

2.1. Limited Performance of Conventional Imaging Systems

Most bio-optical imaging systems are built upon refractive optics, where the light emanating from an object passes through a series of refractive lenses and forms an image on an image sensor. The paths of the light rays are mainly governed by the surface curvatures and refractive indices of the constituent lenses, which bend the light rays following Snell's law. Under the paraxial approximation, the light rays converge to a perfect focal spot. However, with an increased incident angle relative to the surface normal, the paraxial approximation fails, and the light rays are refracted in a direction that deviates from the nominal focus. The aberrations so induced are functions of both the field height from the optical axis (or field angle) and the aperture size. Therefore, the larger the FOV and aperture size (i.e., the larger the SBP), the worse the aberrations. Correcting for aberrations in a system with a large FOV and aperture is a nontrivial problem.
Conventional lens design techniques such as lens bending/splitting, stop shifting/symmetry, and the use of aspherical surfaces often lead to a complicated configuration with tens or even hundreds of lenses, incurring a prohibitive fabrication cost and a large form factor. For instance, a high-performance photolithography objective lens supports a high NA in air across a large FOV.46 However, the stacked lenses have a substantial total length and weigh several hundred kilograms. Such a bulky and complex imaging system is unsuitable for bioimaging in a laboratory or clinical setting. Recently, McConnell et al.47,48 developed a microscope lens called the mesolens, comprising 15 optical elements of up to 63 mm in diameter. The lens provides a 6-mm FOV and a 0.5 NA, enabling a resolution approximately four times higher than that of a state-of-the-art objective lens with a similar FOV. Despite being a significant advance, the FOV provided is still insufficient for large-scale bioimaging such as interrogating the functional connectivity of the brain in large animals.49 In this regard, this review excludes imaging systems that improve the SBP purely through the conventional lens design process.

2.2. Space-Bandwidth Product

The SBP of an imaging system is a dimensionless number proportional to the information throughput, and it is usually calculated as the product of the FOV (space) and the spatial frequency range (bandwidth). The SBP is also referred to as the Shannon number, the minimum number of samples required to completely determine the signal,50,51 or the maximum number of resolvable spots over the FOV. For example, for a given objective lens with a field number, FN, and a magnification, Mag, the FOV equals FN/Mag in diameter. In coherent imaging, the cut-off spatial frequency of the lens is NA/λ for a complex amplitude, where NA and λ are the numerical aperture of the objective lens and the wavelength, respectively.
The cut-off frequency increases by a factor of two in incoherent imaging, with a triangular optical transfer function.51 For incoherent imaging, the diffraction-limited SBP equals the product of the FOV area and the spatial-frequency support:

SBP = π[FN/(2Mag)]² × π(2NA/λ)².   (1)

The FN is limited by the field diaphragm, which is bounded by the form factor of the objective lens. In general, a high-NA objective lens tends to have a relatively small SBP (Fig. 1), because the magnification of an objective lens does not increase at the same pace as its NA. For instance, at a 550-nm wavelength, the SBP of a 10× objective lens (Olympus, UPlanSApo10X) is considerable, and this number drops dramatically for a 100× objective lens (Olympus, UPlanSApo100X). Because of this nonlinear dependence, it is challenging to build an imaging system with simultaneously a high resolution and a large FOV. Although the SBP well quantifies the spatial degrees of freedom of the optical system, the amount of information measured by the system must be discussed in conjunction with the signal-to-noise ratio (SNR). More specifically, the information capacity of each resolvable point under white Gaussian noise is given as log2(1 + SNR),54 which monotonically increases with the SNR. Therefore, the total information capacity of an imaging system with two independent polarization states is55–57

C = 2 × SBP × log2(1 + SNR).   (2)

It is worth noting that the information capacity does not directly state the Rayleigh resolution or FOV of the imaging system. Instead, it serves as a theoretical upper bound. For example, even with a high-resolution imaging system, severe noise will deteriorate the image quality and, therefore, the practical resolution. The relationship between the information capacity and the SNR was first presented by Fellgett et al.55 Cox and Sheppard56 further discussed the information capacity in conjunction with the resolution.
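The scaling behavior of the capacity bound above is worth making concrete: capacity grows linearly with the SBP but only logarithmically with the SNR. A minimal sketch (the SBP and SNR values below are illustrative assumptions, not measurements from the text):

```python
import math

def information_capacity_bits(sbp, snr, polarizations=2):
    """Upper bound on acquired information (bits): each resolvable point
    carries log2(1 + SNR) bits under additive white Gaussian noise,
    doubled for two independent polarization states."""
    return polarizations * sbp * math.log2(1 + snr)

# A tenfold SNR gain buys far less capacity than a tenfold SBP gain.
c_low = information_capacity_bits(sbp=1e7, snr=10)
c_high_snr = information_capacity_bits(sbp=1e7, snr=100)
c_high_sbp = information_capacity_bits(sbp=1e8, snr=10)
```

This is why the survey emphasizes scaling the SBP rather than merely improving detector noise: the logarithm saturates quickly, while the spatial degrees of freedom do not.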
Later, this framework was extended to super-resolution microscopy techniques, such as structured illumination microscopy and single-molecule localization microscopy.58,59 In a system with a nonuniform spatial resolution across the FOV, such as foveated lenses,60,61 the SBP is not equal to the product of the FOV and the spatial frequency bandwidth. Instead, it must be calculated as the total number of spatially resolved spots. For simplicity, we confine the discussion to systems with a uniform spatial resolution.57

3. High-SBP Imaging: Spatial-Domain Methods

3.1. Array Microscopy

A simple method to increase the SBP is to scan a sample using a high-NA objective lens and stitch the resulting high-resolution images. However, because high-NA objective lenses normally have a small FOV (Fig. 1), scanning with a single objective lens leads to a prolonged acquisition. For example, scanning a centimeter-scale FOV with a high-NA objective lens can take tens of minutes to hours, provided that the combined scanning and camera exposure time at each step is 1 s, which is typical for wide-field fluorescence imaging. This imposes a demanding requirement on the mechanical stability of the system. Also, an autofocusing system is required because the image can easily defocus due to misalignment between the scanning direction and the sample plane or due to environmental variations, such as temperature fluctuations. Moreover, this acquisition scheme applies only to static samples; the motion of the object could otherwise introduce severe artifacts. Despite the challenges described above, the step-and-repeat scanning method has achieved remarkable success in whole-slide pathological imaging.62–65 To address these issues, Weinstein et al.7 developed a parallelization scheme using an array of microscopes [Figs. 2(a) and 2(b)].
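The step-and-repeat bookkeeping above can be sketched with hypothetical numbers (the 10-mm FOV, 0.25-mm tile size, and 1-s dwell below are illustrative assumptions in the spirit of the text, not quoted specifications):

```python
import math

def scan_steps_and_time(fov_mm, tile_mm, dwell_s):
    """Number of step-and-repeat tiles needed to cover a square FOV, and the
    total acquisition time at a fixed per-tile dwell (scan + exposure)."""
    steps_per_axis = math.ceil(fov_mm / tile_mm)
    n_tiles = steps_per_axis ** 2
    return n_tiles, n_tiles * dwell_s

# Hypothetical case: a 10 mm x 10 mm region tiled by a 0.25-mm-FOV objective
n_tiles, total_s = scan_steps_and_time(fov_mm=10.0, tile_mm=0.25, dwell_s=1.0)
print(n_tiles, total_s / 60)  # 1600 tiles, about 27 minutes
```

The quadratic growth of tile count with the FOV-to-tile ratio is what makes single-lens scanning of large samples so slow and motion-sensitive.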
Rather than using a single objective lens, they arranged the objective lenses into an array, increasing the FOV by a factor equal to the number of objective lenses while maintaining the high resolution of an individual lens.7 Because the FOV of each lens is small, the geometrical aberrations can be well corrected using relatively simple optics. The team used microlenses with a 0.65 NA and a 0.25-mm FOV. The total SBP of the system in this acquisition scheme is scalable—it is proportional to the number of microlenses in the array. With a total of 80 microlenses, the system can capture a snapshot image whose total SBP surpasses the performance of conventional objective lenses (Fig. 1). Using this system, the team demonstrated digital scanning of a large pathology slide with a submicron spatial resolution [Fig. 2(c)]. The array of microscopes was initially developed for wide-field transmission imaging. Recently, this method has been extended to fluorescence imaging as well [Fig. 2(d)].9,67,68 Despite parallel image acquisition, array microscopy still requires scanning to capture a complete picture of the sample. In the array, each objective lens forms a magnified image on the camera. To avoid overlap between adjacent images, the dimension of the magnified image cannot exceed the lens pitch, p. Given a magnification Mag, the maximum FOV of an individual lens at the object side is therefore p/Mag. Consequently, there is a gap between adjacent imaged areas. To fill in this information, one must scan the sample across a distance of p − p/Mag along both in-plane axes. Therefore, the higher the magnification Mag, the longer the scanning range. Although this scanning range is much smaller than that required in the single-lens-scanning-based approach, mechanically translating the sample is slow and prone to motion artifacts.
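The pitch/magnification trade-off described above reduces to two lines of arithmetic. A sketch with a hypothetical 2-mm lens pitch (the pitch value and magnifications are illustrative assumptions):

```python
def per_lens_fov_and_scan(pitch_mm, mag):
    """Object-side FOV and required scan range for one lens in an array
    microscope: the magnified image must fit within the lens pitch p,
    so FOV = p/Mag and the uncovered gap to scan is p - p/Mag per axis."""
    fov = pitch_mm / mag
    return fov, pitch_mm - fov

# Higher magnification shrinks the per-lens FOV and lengthens the scan
# range, exactly the trade-off described in the text.
for mag in (2, 4, 10):
    fov, scan = per_lens_fov_and_scan(pitch_mm=2.0, mag=mag)
    print(f"Mag {mag:>2}: FOV {fov:.2f} mm, scan range {scan:.2f} mm")
```

This also motivates the sequential-illumination variant discussed next, which lets adjacent images overlap on the camera and thereby breaks the p/Mag constraint.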
To mitigate this problem, McCall et al.66 replaced mechanical scanning with temporally sequential imaging [Fig. 2(e)]. They built separate illumination for each microscope and lit only a subset at a time. Because adjacent FOVs are not imaged simultaneously, they can overlap on the camera, alleviating the trade-off between the magnification and the total acquisition time.

3.2. Multiscale Optical Imaging

In conventional lens design, large-FOV imaging systems are particularly vulnerable to off-axis aberrations such as coma, astigmatism, and field curvature, which are functions of the field height from the optical axis. Among these aberrations, the field curvature is the toughest to correct for—it depends solely on the refractive indices and optical powers of the lenses, and typical lens design techniques such as lens bending/splitting or stop symmetry/shifting are inapplicable.69 For a coaxial imaging system, a practical method to flatten the field curvature is to add a negative lens close to the image plane.70 However, the field-flattening lens introduces astigmatism that complicates the optical design. In addition, imperfections of the lens surface, such as scratches, dirt, and dust, would appear superimposed on the image. A multiscale optical architecture addresses the field curvature issue by utilizing a large-scale main lens and a small-scale lenslet array [Fig. 3(a)]. The object is first imaged by the primary main lens onto a curved Petzval surface. This intermediate image is then relayed by the secondary lenslet array to an array of cameras on a curved surface. The resultant images are computationally combined to reproduce a large FOV. Because the field-dependent focal shift can be physically compensated for by moving individual cameras to the corresponding focal positions, this method possesses a key advantage: the field curvature can be loosely tolerated when designing the main lens, thereby easing the correction of other aberrations.
In particular, when imaging a distant object, the primary main lens can be a simple ball lens with an aperture at the centre [Fig. 3(b)].71 Because of the rotational symmetry about the chief rays, no off-axis aberrations are introduced. The resultant system, referred to as a monocentric camera,72–74 exhibits only spherical and chromatic aberrations, which can be further corrected for using multiple concentric layers of different refractive indices.75 However, due to the use of a large number of lenslets and cameras at the focal surface, early monocentric cameras are generally bulky. For example, the first-generation monocentric gigapixel imaging system with 98 cameras13 weighs 93 kg. To improve the form factor, Karbasi et al.76 replaced the lenslet array with a curved fiber bundle, directly transmitting the focal-surface image to a single large-format camera. The resultant 25-megapixel monocentric imaging system has a much smaller footprint. Kim et al.77 recently developed a more compact system by placing a bio-inspired hemispherical silicon-nanorod photodiode array at the focal surface of the ball lens. Both multiscale optical imaging and array microscopy78 leverage the same fact: smaller lenses outperform larger lenses in image quality. Here, the performance of a lens is quantified as the ratio of the SBP achieved to the theoretical maximum for a given magnification and FOV. While a multiscale lens system generally includes a hierarchy of aperture sizes stepping the field down from the primary lens to small-scale lenslets, array microscopy consists of only a one-level structure. In microscopy, the benefit of using a primary lens is that it can premagnify the image for the secondary lenslets to process. Therefore, the magnification of the secondary lenslets can be less than one, allowing the individual FOVs at the focal surface to overlap. The complete picture of the sample can thus be captured in a snapshot without scanning.
Also, the use of the primary lens allows a large standoff distance, which is critical for imaging distant scenes. However, the downside is the introduction of additional aberrations by the primary lens. Therefore, the secondary lenslets must correct for these aberrations in addition to relaying the image. Using a multiscale microscopy system, Fan et al.14 demonstrated video-rate imaging of biological dynamics at centimetre scale and micrometre resolution [Fig. 3(c)]. A custom primary objective lens with a working distance of 20 mm images a large-FOV fluorescence scene. The intermediate focal image is segmented and relayed by the secondary lenses arranged on a curved surface. The final individual images are measured by an array of sCMOS sensors. Using this system, the team demonstrated calcium imaging of the nonuniform propagation of epileptiform neural activities.

4. High-SBP Imaging: Frequency-Domain Methods

In contrast to array microscopy and multiscale optical systems, where the individual images are stitched in the spatial domain, the frequency-domain methods combine images in the spatial-frequency domain (Fourier domain). Because a translational shift in the Fourier domain corresponds to an angular shift in real space,51 the images associated with various spatial-frequency components can be measured by illuminating the sample at varied angles or with varied patterns, eliminating the need for mechanical scanning. Within this category, representative techniques encompass Fourier ptychography and structured illumination microscopy.

4.1. Fourier Ptychography

Fourier ptychography15 is a computational imaging technique that can capture high-SBP images using low-cost, small-aperture imaging systems. By varying the illumination angle, Fourier ptychography shifts the frequencies of the object information in the Fourier domain and then passes the components that fall within the aperture of the imaging system.
The images so obtained are subaperture representations of the object, and they can be computationally combined in the Fourier domain to compose a large synthetic aperture (Fig. 4). Zheng et al. first demonstrated this method in optical microscopy and reported an SBP of 0.23 billion for a complex amplitude image.15,81 A state-of-the-art implementation achieved a 1.45 synthetic NA using a low-NA objective lens in air [Fig. 4(d)].79 An oil-immersion condenser lens can also be used with such an objective to achieve an NA of 1.6.19 The resultant SBP is two orders of magnitude higher than that of a benchmark objective lens (Olympus). We illustrate the operating principle of Fourier ptychography in Fig. 4(a). Under on-axis illumination, the addressable spatial frequency range of the system is a circular area in the Fourier domain with a diameter of 2NA/λ (the coherent transfer function). Here, λ is the wavelength, and NA ≈ D/(2f), where D and f are the entrance pupil diameter and focal length of the small-aperture lens, respectively. The center of the circle coincides with the origin of the Fourier space. Under angled illumination, the chief ray of the diffracted light cone changes with the illumination, leading to a linear shift of the frequency representation in the Fourier domain. The shifted distance equals sin θ/λ, where θ is the incident angle of the illumination. Therefore, the high-frequency components of the sample that are initially blocked by the aperture of the imaging system can be collected. By capturing a series of images under varied illumination angles and stitching their frequency representations in the Fourier domain, we can recover a large spatial frequency range of the object. The name Fourier ptychography comes from a related lensless imaging modality, ptychography.82 In ptychography, the object is typically illuminated by a spatially confined beam in the spatial domain.
The far-field diffraction patterns are then recorded in the spatial frequency domain as the object is mechanically scanned to different positions.83 Fourier ptychography swaps the spatial domain and the Fourier domain via a lens. With Fourier ptychography, the confined support constraint is imposed by the pupil aperture in the spatial frequency domain, while the images are recorded in the spatial domain. In contrast to the mechanical scanning process in ptychography, Fourier ptychography scans the object's Fourier spectrum in the spatial frequency domain via angle-varied illumination. Fourier ptychography also shares its roots with synthetic aperture imaging, which was first developed in radio astronomy to bypass the resolution limit of a single radio telescope.84 A similar concept has been demonstrated for light microscopy, where intensity and phase information are measured via interferometric setups.22,85–88 With Fourier ptychography, however, no direct phase measurement is needed in the acquisition process. Instead, the phase information is recovered from the intensity images using an iterative process referred to as phase retrieval.89–91 One widely adopted algorithm for phase retrieval is alternating projection,92 which iteratively imposes object constraints in the spatial and Fourier domains. For Fourier ptychography, the measured intensity is used as a modulus constraint in the spatial domain, and the confined pupil aperture is used as a support constraint in the Fourier domain [Fig. 4(b)].93 The nonreliance on direct phase measurement in Fourier ptychography eliminates the challenges of interferometry-based techniques, such as inherent speckle noise and sensitivity to phase errors.
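The alternating-projection idea described above can be illustrated with a minimal single-aperture toy (a sketch, not a full Fourier ptychographic reconstruction: the random phase object and pupil radius are hypothetical, and a real implementation additionally stitches many angled-illumination measurements and refines the pupil estimate).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
obj = np.exp(1j * 2 * np.pi * rng.random((n, n)))  # toy complex phase object

fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
pupil = (fx**2 + fy**2) < 0.25**2                  # circular pupil support

# Simulated camera measurement: only the modulus survives detection
measured = np.abs(np.fft.ifft2(np.fft.fft2(obj) * pupil))

est = measured.astype(complex)                     # initialize with zero phase
errors = []
for _ in range(200):
    spectrum = np.fft.fft2(est) * pupil            # Fourier support constraint
    est = np.fft.ifft2(spectrum)
    errors.append(np.linalg.norm(np.abs(est) - measured))
    est = measured * np.exp(1j * np.angle(est))    # spatial modulus constraint
```

The loop alternates between the two constraints named in the text; the modulus mismatch is non-increasing over iterations, a known property of error-reduction-style alternating projections.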
In addition, a Fourier ptychography microscope can be built with low-cost optics,94 facilitating its use in point-of-care applications.95–98 On the other hand, since the phase information cannot be directly measured as in interferometry, the recovery of a complex amplitude from intensity-only images is computationally expensive. This drawback can be alleviated using parallel processing on a graphics processing unit or by machine-learning-based approaches.99–101 To reconstruct a high-fidelity phase map, Fourier ptychography requires data redundancy in the Fourier domain.102 The Fourier spectrum of each measured image must overlap with those of the adjacent measurements—each data point in the Fourier domain needs to be included in at least two measurements to avoid ambiguity in the phase-retrieval process,103 a requirement that substantially increases the data acquisition time. In addition, the small collection aperture of the lens limits the range of the measurable Fourier spectrum, leading to a reduced signal level under dark-field illumination. A long exposure time is thus required to capture images with a high SNR. To alleviate this problem, Tian et al.17 developed a multiplexed illumination strategy that illuminates the sample with beams at multiple, randomly selected incident angles. They demonstrated that the total number of images can be significantly reduced without sacrificing the reconstructed image quality. Alternatively, nonuniform Fourier sampling104 and data-driven approaches can be employed to reduce the number of image acquisitions.99,100,105 The measurement of a complex-amplitude image in Fourier ptychography enables great flexibility for postacquisition processing. For example, both the aberrations of the objective lens [Fig.
4(c)]15,80,106–111 and the defocus of the sample112,113 can be numerically corrected for, even under severe conditions.94,114 Based on this principle, Chung et al.115 reported a Fourier ptychographic retinal imaging method that can correct for eye-lens aberrations and thereby enable full-resolution imaging of the retina. Similarly, postacquisition digital refocusing can be used to extend the depth of field for imaging microfilters containing captured tumor cells,97 96-well plates,112 blood smears,109 and pathological slides.113 One major limitation of Fourier ptychography is its reliance on angled illumination—for a three-dimensional (3D) object, tilting the illumination would change the object's spectrum rather than just shifting it in the Fourier domain. As such, Fourier ptychography has been used primarily for imaging optically thin samples in transmission mode. To handle 3D thick specimens, it is possible to employ fixed illumination and modulate the light waves in the detection path.16,116 In this case, the recovered image represents the exiting wavefront of the object, which can then be digitally propagated back to any plane along the optical axis; the object thickness becomes irrelevant in the modeling. Also, recent advances in light scattering models have enabled reflection-mode Fourier ptychography,117,118 which can be further integrated with the modulation concept for deep tissue imaging.111 It is worth noting that Fourier ptychography is inapplicable to fluorescent samples because fluorescence emission is generally isotropic and independent of the illumination angle.119,120

4.2. Structured Illumination Microscopy

Structured illumination microscopy20,121,122 is also a frequency-domain method, but it is based on incoherent imaging. Unlike Fourier ptychography, structured illumination microscopy shifts the frequency representation of an object through patterned illumination,20,122,123 making it suitable for fluorescence imaging.
In a typical setup, the sample is illuminated by a striped pattern of a specific frequency, f0. The resultant image is the product of the object function and the illumination pattern. The frequency-shifting property of the Fourier transform implies that the frequency representation of the image is shifted by ±f0 with respect to the original spectrum. Therefore, high frequencies beyond the aperture of the imaging system can be collected [Fig. 5(a)]. Structured illumination microscopy is primarily implemented in the epi-illumination mode. Because the illumination and imaging paths share the same objective lens, the illumination pattern frequency is bounded by the bandwidth of the objective lens for linear imaging, leading to a maximal twofold resolution improvement. With nonlinear excitation21 or plasmonic substrates,125 the resolution can be further enhanced. In this case, the image is the product of the object function and the illumination function raised to the power of n, where n describes the nonlinear dependence of the emitted light on the illumination. Again, based on the frequency-shifting property, the frequency representation of the object is shifted by harmonics up to ±nf0, thereby increasing the observable radial frequency range by a factor of n + 1. By rotating the illumination pattern and varying its frequency, one can record all the frequency components of the object within this enlarged area, expanding the SBP of the objective lens by a factor of (n + 1)². Using this strategy, Rego et al.124 demonstrated a 50-nm resolution across a wide FOV [Fig. 5(b)]. It is worth noting that the resolution of structured illumination microscopy cannot be increased arbitrarily. Instead, given the limited photon budget of a fluorescent sample, the achievable resolution is limited by the SNR.58 Based on the principle of structured illumination microscopy, various super-resolution imaging systems have been developed to increase the spatial resolution without sacrificing the FOV.
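The frequency-mixing mechanism at the heart of structured illumination can be verified in one dimension: multiplying an object by a striped pattern creates sum and difference frequencies. The specific frequencies below are hypothetical, chosen only so that every component falls on an exact FFT bin.

```python
import numpy as np

n = 256
x = np.arange(n)
f_obj, f_ill = 40, 25                 # cycles per window (hypothetical values)
obj = 1 + np.cos(2 * np.pi * f_obj * x / n)   # object with a single frequency
ill = 1 + np.cos(2 * np.pi * f_ill * x / n)   # striped illumination pattern

spectrum = np.abs(np.fft.rfft(obj * ill))
peaks = set(np.flatnonzero(spectrum > 1))     # bins carrying real energy

# The product spectrum contains the object frequency shifted by +/- f_ill:
# even if f_obj itself lay beyond the detection passband, the mixed
# component at f_obj - f_ill could still pass through the objective
# and later be computationally shifted back to its true position.
assert peaks == {0, f_ill, f_obj - f_ill, f_obj, f_obj + f_ill}
```

The difference frequency f_obj − f_ill is the "alias" that carries super-resolution information through the limited passband; the reconstruction step then demixes and repositions these components in the Fourier domain.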
For example, scanning structured illumination microscopy126 increases the spatial resolution of laser scanning microscopy with patterned illumination or detection.127–132 Image scanning microscopy133–135 increases the resolution of confocal microscopy136–138 simply by replacing the point detector of the confocal microscope with an array detector. Moreover, these super-resolution scanning microscopy methods provide optical sectioning, thereby enabling the imaging of relatively thick biological samples. It is worth noting that structured illumination microscopy has traditionally been used as a super-resolution imaging technique, and further increasing its resolution and optical sectioning ability remains an important direction. However, imaging across a large FOV has not been actively pursued, and only recently has its potential as a high-SBP imager with a millimeter-scale FOV been discussed.139

5. High-SBP Imaging: Wavefront-Engineering-Based Methods

Both spatial- and frequency-domain methods leverage the advantage of small-aperture optics in managing lens aberrations. By contrast, wavefront-engineering-based methods utilize large-aperture lenses. The correction of lens aberrations is accomplished by wavefront modulation through either hardware or computation.

5.1. Hardware Approaches

To modulate the wavefront, hardware approaches use devices such as a deformable mirror140,141 or a liquid-crystal spatial light modulator (SLM).142 Because aberrations depend on the field height, the corresponding distorted wavefronts must be corrected for at individual field points/areas. Therefore, hardware approaches are commonly implemented in scanning-based systems, sequentially acquiring image patches within which the aberrations can be considered homogeneous. Within this category, the most important method is adaptive scanning optical microscopy26,143,144 [Fig. 6(a)]. A custom large-aperture objective lens collects light emitted from an object.
A galvanometric scanning mirror is placed at the back aperture of the objective lens. Because the chief rays associated with different field heights are incident on the galvanometric scanning mirror at varied angles, scanning these rays in the angular domain passes the corresponding field areas to the subsequent imaging optics in a sequential manner. A deformable mirror is placed at a conjugate pupil plane, adding precalibrated phase delays to the wavefront and thereby compensating for the aberrations at the scanned field location [Fig. 6(b)]. The use of adaptive optics relaxes the design constraints on the large-aperture objective lens because the deformable mirror readily compensates for low-order wavefront distortions characterized by Zernike modes.145,146 With this approach, the team designed a system with a diffraction-limited resolution (0.21 NA at a 510-nm wavelength) across a large FOV, leading to an SBP of 2.7 billion. The system is highly stable because there is no mechanical translation of the sample or imaging optics. In addition, the system can image selected sub-FOVs at a high frame rate, instead of imaging the entire FOV. Using this strategy, Potsaid et al. demonstrated real-time imaging of multiple live C. elegans worms [Fig. 6(c)].143 Although not demonstrated by the authors, the system could scan the image in the axial direction by superimposing a quadratic phase map on the wavefront using a high-resolution SLM. Adaptive scanning optical microscopy requires precalibration of the system at each field location. Therefore, the target of interest must be directly accessible to the microscope. If there is a layer of material with unknown aberrations between the target and the microscope, such as a coverslip, an immersion medium, or a heterogeneous structure, this method will fail to acquire aberration-free images.
To correct for sample-induced aberrations, the system must also use a wavefront sensor, such as a Shack–Hartmann sensor, to measure the wavefront aberration, followed by correction using the wavefront modulator. The resultant systems are particularly useful for quasiballistic imaging of volumetric samples. For example, adaptive optics optical coherence tomography has been demonstrated in retinal imaging, providing single-cell resolution across different retinal layers.147–149 The correction of sample aberrations in such systems not only increases the resolution but also improves the SNR and thereby the penetration depth.150,151 A more detailed treatment of this topic can be found elsewhere.35 Current hardware approaches cannot compensate for high-order wavefront distortions beyond the pixel count of the SLM. Also, the compensation pattern is bandlimited by the finite pixel pitch of the SLM. Therefore, in a highly aberrated imaging system, the imaging resolution is often lower than the diffraction limit even after wavefront correction. To solve this problem, Jang et al.27 demonstrated a wavefront-engineering system using a disorder-engineered metasurface and an SLM [Figs. 6(d) and 6(e)]. The metasurface consists of a subwavelength array of nanopillars with various widths that scatter light at very large angles, up to 0.9 NA. By controlling the incident wavefront on the metasurface through the SLM, the team fully utilized the large scattering angle of the metasurface for tight focusing across a large FOV. This is equivalent to using a wavefront modulator with a reduced pixel size and an increased pixel count at the expense of decreased contrast. Using this system, the team demonstrated high-resolution, large-FOV (8 mm in diameter) scanning fluorescence microscopy. The corresponding SBP is 0.22 billion, much greater than those of conventional objective lenses.
However, the phase map on the SLM must be updated sequentially during scanning, leading to a slow acquisition speed.

5.2. Computational Approaches

With recent progress in computational optics, geometrical aberrations can also be corrected numerically in postprocessing, increasing the SBP of a system for a given geometry. Because this approach does not increase the hardware complexity, it can be readily implemented in off-the-shelf imaging systems. Based on Fourier optics principles, the complex generalized pupil function is proportional to the scaled transfer function of the system, which is related to the point-spread function (PSF) through the Fourier transform. Aberrations thus can be described as a phase term inside the generalized pupil function in a single-pass system. Given a unit magnification, the image i(x, y) is a convolution of the object function o(x, y) with the system's aberrated PSF h(x, y):

i(x, y) = o(x, y) ∗ h(x, y) + n(x, y), (3)

where n(x, y) is the noise term. Transforming Eq. (3) to the Fourier domain gives

I(f_x, f_y) = O(f_x, f_y) H(f_x, f_y) + N(f_x, f_y), (4)

where f_x and f_y are axes in the spatial frequency domain, and I, O, H, and N are the Fourier transforms of i, o, h, and n, respectively. Because the system's field-dependent H can be measured as a prior, in the frequency domain the image can be estimated as

Ô(f_x, f_y) = I(f_x, f_y) / H(f_x, f_y). (5)

The propagation of the noise is determined by the condition number of H, and the solution of Eq. (5) is well-posed only when the condition number is small.152

For coherent imaging, the coherent transfer function H_c is complex, and it is described as

H_c(f_x, f_y) = |H_c(f_x, f_y)| exp[iφ(f_x, f_y)]. (6)

Aberrations of the system change only the phase term φ(f_x, f_y). Because |H_c(f_x, f_y)| = 1 within the objective's bandwidth,152 its effect on the image can be readily reversed by multiplying the spectrum with the complex conjugate of exp[iφ(f_x, f_y)]. The complex coherent transfer function can be measured using an interferometric setup; an example is shown in Fig. 7(a).30 For incoherent imaging, the numerical correction of the aberrated pupil function is nontrivial.
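For the coherent case, the correction described above reduces to a pure phase multiplication in the frequency domain. A minimal numerical sketch in Python with NumPy (the function name and the unshifted-FFT sampling convention are our own assumptions, not from the original works):

```python
import numpy as np

def correct_coherent(field, phase):
    """Undo a known pupil-phase aberration in a coherent image field.

    field : 2-D complex array, the aberrated image field i(x, y)
    phase : 2-D real array, the calibrated aberration phase phi(fx, fy),
            sampled in the same (unshifted) ordering as np.fft.fft2
    """
    # With |Hc| = 1 inside the passband, Hc = exp(i*phi), so dividing the
    # spectrum by Hc is simply a multiplication by exp(-i*phi).
    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * np.exp(-1j * phase))
```

Because the correction is a unit-modulus phase multiplication, it is exactly invertible and does not amplify noise, which is why the coherent case is well-posed.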
In the incoherent case, the PSF equals

h_i(x, y) = |h_c(x, y)|². (7)

The corresponding incoherent optical transfer function is the normalized autocorrelation of its coherent counterpart:153

H_i(f_x, f_y) = [H_c ⋆ H_c](f_x, f_y) / [H_c ⋆ H_c](0, 0). (8)

Given a circular aperture, H_i has twice the bandwidth of H_c, and its modulus monotonically decreases within this range, leading to a large condition number. Therefore, the solution of Eq. (5) with H = H_i is ill-posed. To deconvolve the incoherent PSF, conventional methods use regularization or statistical algorithms.152 However, the results are sensitive to noise, and the improvement in resolution is often limited. To overcome these problems, Zheng et al.31,80 developed a multiplane method. Rather than capturing only one in-focus image, they captured multiple defocused images at varying depths, followed by retrieving the phase with an iterative algorithm.154 Using this method, the team demonstrated the computational correction of the spatially varying aberrations of an objective lens (Olympus) across a large FOV (13 mm in diameter). They recovered the aberrated pupil functions at 350 field locations and numerically remedied the associated wavefront distortions, leading to a diffraction-limited resolution within the entire FOV [Fig. 7(b)]. In general, the acquisition speed of computational wavefront engineering is faster than that of the hardware-based approach. In the computational approach, one can calculate the aberrated pupil function and perform corrections in postprocessing [Fig. 7(c)] by simply dividing the FOV into smaller segments in which the aberrations can be considered homogeneous. By contrast, the hardware approach requires scanning and updating the phase pattern on the wavefront modulator during the measurement, resulting in a slow acquisition.

6. Comparative Advantages

In this review, we categorize high-SBP bioimagers into spatial-domain methods, frequency-domain methods, and wavefront-engineering-based methods (Fig. 8). We reviewed representative works in each category and compared their achievable SBPs in Fig. 9.
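Because |H_i| rolls off toward its band edge, dividing the spectrum by it directly amplifies noise; a Tikhonov/Wiener-type regularizer is the standard remedy among the conventional regularization methods mentioned above. A hedged sketch (the function name, the centered-PSF convention, and the constant eps are our assumptions, not a specific published implementation):

```python
import numpy as np

def deconvolve_incoherent(image, psf, eps=1e-3):
    """Regularized inversion of Eq. (5) for an incoherent system.

    image : 2-D real array, the blurred intensity image
    psf   : 2-D real array, the incoherent PSF h_i, centered in the array
    eps   : Tikhonov-style constant that caps noise amplification at
            frequencies where the OTF modulus |H_i| is small (ill-posed)
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))   # incoherent OTF H_i
    I = np.fft.fft2(image)
    # Wiener-like estimate: exact inverse where |H| is large, damped where small
    O_hat = I * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(O_hat))
```

The choice of eps trades resolution against noise amplification, which is exactly the limitation noted above: the achievable resolution gain of such regularized deconvolution is modest.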
In Fig. 9, the SBP of state-of-the-art microscope objectives serves as the baseline (dashed curve), representing the limit that conventional optics can reach. All modalities marked on this graph surpass this baseline, pushing the SBP limit toward the giga scale (dot-dashed line). To compare these methods, we use the spatial resolution (i.e., the reciprocal of bandwidth), the FOV, and the temporal resolution as the metrics. So far, spatial-domain methods9,10,14 and wavefront-engineering-based methods26–28,31 have mainly been used to expand the FOV with a moderate spatial resolution. It is challenging for these two categories of techniques to reach a resolution comparable to that of the high-NA objective lenses in Fig. 9. For array microscopy, although we can replace each lens with a high-NA objective, the practical hindrance lies in the trade-off between the NA and the depth of focus of individual lenses: the higher the NA, the smaller the depth of focus, and the more sensitive the instrument is to misalignment. For multiscale optical systems, the NA of the primary lens at the object side is proportional to its NA at the image side, which must in turn match that of the secondary lenses. To increase the NA of the primary lens at the object side, we must increase the apertures of the secondary lenses as well, which diminishes the advantage of using small-aperture lenses to correct aberrations. Although we can use multilevel structures to further step down the aperture size, doing so increases the system complexity. For wavefront-engineering-based methods, with a given number of degrees of freedom to modulate the wavefront, the higher the NA, the lower the phase sampling density at the pupil plane. Therefore, high-order wavefront distortions beyond the degrees of freedom of the wavefront modulator cannot be corrected, resulting in a degraded resolution.
In frequency-domain methods, Fourier ptychography and structured illumination microscopy provide complementary capabilities for pushing the FOV and resolution beyond the limits of conventional optics. On the one hand, Fourier ptychography has primarily been used to image a large FOV where a small-aperture lens collects the light. The frequency bandwidth is limited by the maximum illumination angle, which is 90 deg in air. Therefore, the maximum collection NA is less than one without an oil/water-immersion condenser.19 On the other hand, structured illumination microscopy has predominantly been used to boost the resolution of high-NA lenses, such as oil/water-immersion microscope objectives, doubling their effective NA in linear imaging. However, when applied to large-FOV imaging with a small-aperture lens, it is not as effective as Fourier ptychography in expanding the frequency bandwidth because the maximum frequency of the illumination pattern in the linear scheme is limited by the lens's small NA. To compare the temporal resolution, we define a snapshot factor, S, as the ratio of the SBP that is seen by the instrument at a time to the complete measurable SBP. A larger S indicates a more time-efficient measurement and thereby a higher temporal resolution. For array microscopy, S equals the FOV of an individual lens divided by its geometrical size. Given the field number FN, lens pitch p, and magnification M, the snapshot factor is S = [FN/(Mp)]². The geometrical factor of π/4 given by the circular FOV of the lens is neglected for simplicity. Therefore, the larger the magnification M, the smaller the S, and the lower the temporal resolution. For multiscale optical systems, S = 1, and the entire SBP can be acquired in a snapshot. For Fourier ptychography, given the bandwidth B_o of a collecting objective, the number of required illumination angles is N = (2B/B_o)², where B is the target frequency bandwidth and the factor of two is due to the oversampling requirement for phase recovery.
Since the scanning is performed in the frequency domain, S equals the inverse of N, i.e., S = [B_o/(2B)]². Therefore, the larger the target bandwidth B, the smaller the S, and the lower the temporal resolution. It is worth noting that the trade-off between S and B can be alleviated by employing multiplexed illumination.17 For structured illumination microscopy, S has the same form as that in Fourier ptychography. For hardware wavefront-engineering-based approaches, S is inversely proportional to the number of image patches in which the aberrations can be corrected using a single phase pattern displayed on the wavefront modulator. We summarize the comparative advantages discussed in Table 1.

Table 1. Comparative advantages of high-SBP imaging techniques.

The FOV and temporal resolution of computational wavefront-engineering-based methods vary with the imager used. Therefore, we did not make conclusive comments on them.
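The snapshot-factor expressions above amount to simple arithmetic. A small sketch (the symbol names FN, p, M, B, and B_o follow the text; the numerical values are purely illustrative assumptions):

```python
def s_array(FN, p, M):
    """Array microscopy: S = [FN / (M * p)]**2, neglecting the pi/4 circular-FOV factor."""
    return (FN / (M * p)) ** 2

def s_fourier_ptychography(B, B_o):
    """Fourier ptychography: N = (2*B/B_o)**2 illumination angles, so S = 1/N."""
    return (B_o / (2.0 * B)) ** 2

# Illustrative numbers: an 18-mm field number, 20-mm lens pitch, and 10x
# magnification for the array; a Fourier ptychography target bandwidth of
# twice the objective bandwidth (a doubled synthetic NA).
S_arr = s_array(FN=18.0, p=20.0, M=10.0)       # (18/200)^2 = 0.0081
S_fp = s_fourier_ptychography(B=2.0, B_o=1.0)  # 1/16 = 0.0625
```

Both examples make the trade-off concrete: raising the magnification M or the target bandwidth B shrinks S and thereby the temporal resolution.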
The snapshot factor quantitatively describes how fast a system can image. The imaging speed of a snapshot system (S = 1) is limited only by the readout speed of the camera, making it suitable for dynamic imaging of live biosamples. By contrast, the time-sequential methods (S < 1) generally take larger-sized images at the expense of a reduced temporal resolution. Therefore, they are more suitable for imaging fixed specimens. The SNR of a system is also closely related to the snapshot factor S. Given the same number of exposures, the higher the snapshot factor, the higher the SNR. For example, when imaging a scene at a given frame rate, the SNR of a snapshot imager is higher than that of scanning-based systems with S < 1. Notably, although we divide the high-SBP imagers into three categories, the techniques are not mutually exclusive. There is an interesting trend toward building hybrid imagers that cross these barriers. For example, Chan et al.112 developed parallel Fourier ptychography microscopy by combining an array of microscopes with Fourier ptychography. The optical system consists of 96 microscopy units. They improved the NA of each unit from 0.23 to 0.3 through Fourier ptychography. The team demonstrated this system by imaging a 96-well plate with an extended depth of field at 0.7 frames per second. As another example, the wavefront-engineering-based methods can be combined with array microscopy or multiscale optical imaging to reduce the residual aberrations and thereby further improve the resolution. We envision that an ideal high-SBP imager will probably combine various techniques in a single device.

7. Outlook

7.1. Toward High Speed

The traditional definition of SBP does not account for the temporal dimension. For bioimaging, the ability to observe fast dynamics is as critical as having a large FOV and a high resolution, particularly for in vivo or live-cell imaging applications.155–157 In this review, we characterized the temporal resolution using a snapshot factor, S.
To incorporate the time dimension, it is rational to revise the SBP as a space-bandwidth-time product,158 which quantifies the information flux. For scanning-based high-SBP imagers, the acquisition of abundant space-bandwidth information usually comes at the expense of a reduced S, as shown in Table 1. In contrast, multiscale optical imaging offers the snapshot advantage, S = 1. A large snapshot factor is crucial for high-speed bioimaging because the SNR decreases with the frame rate at a given photon flux. For example, using a snapshot multiscale microscope, Fan et al. demonstrated, for the first time, cortex-wide structural and functional calcium imaging at a video rate (30 fps).14 The high space-bandwidth-time product image provides valuable information about the long-range connectivity of neurons across the whole brain. However, the temporal resolution is still insufficient to image the propagation of cellular action potentials, the fundamental phenomenon of transmitting information through neural networks, which rise and decay within milliseconds.159 In their system, further improvement of the frame rate is limited by the electronic data transfer rate from the cameras to the host computers. Using burst imaging and storing the images on the camera board can potentially boost the frame rate to several thousand frames per second, though synchronization among cameras will be challenging. For high space-bandwidth-time product imaging, optimizing the data processing pipeline is as important as acquisition.78 Given an enormous information flux, extracting the useful bio-information and relating it to cell/tissue physiology require new computational tools, such as multidimensional image analysis.160,161 The insights so obtained can potentially address fundamental questions such as how sensory inputs are dynamically mapped onto the functional activity of neural populations and how their processing leads to cognitive functions and behavior.
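To make the data-handling challenge above concrete, consider a back-of-the-envelope space-bandwidth-time budget (the FOV, resolution, and frame-rate values below are illustrative assumptions in the spirit of a centimeter-scale, micrometer-resolution, video-rate system):

```python
# Illustrative space-bandwidth-time budget (assumed numbers, not from any
# specific instrument).
fov_mm = 10.0          # assumed square FOV side, mm
resolution_um = 1.0    # assumed resolvable spot size, um
fps = 30               # video rate, frames per second

sbp = (fov_mm * 1000.0 / resolution_um) ** 2   # resolvable spots per frame
flux = sbp * fps                               # resolvable spots per second

print(f"SBP per frame: {sbp:.0e} spots, flux: {flux:.0e} spots/s")
```

Even at this modest resolution the instrument must move on the order of 10^8 samples per frame, and several gigasamples per second overall, which is why camera readout and data transfer, rather than optics, become the bottleneck.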
Despite a substantial number of studies, the exact mechanisms remain elusive.162,163

7.2. Toward Super-Resolution

So far, most high-SBP imaging has been performed at scales from microscopic to macroscopic, with a resolution fundamentally limited by diffraction. An imaging system that pushes the resolution beyond the diffraction limit while maintaining a large FOV would serve as a vital tool to explore the connection between molecular building blocks and overall tissue/cell functionalities. For example, large-FOV super-resolution imaging is instrumental to studies that relate the assembly and disassembly of intracellular actin filaments to the macroscopic behavior of complex biosystems or tissues.164 As another example, individual protein folding generally occurs on nanoscopic scales, but its energy landscape is modulated by myriad interactions at the whole-cell level.165 Large-FOV super-resolution imaging will be the enabling tool to reveal the spatial pattern of folding/unfolding in response to various cellular effectors, such as cellular water and transport machinery.166 As shown in Table 1, current high-SBP imagers face challenges in this realm. Only the structured illumination microscopy approach can provide such a high resolution, and only within a moderate FOV. A possible route to high-SBP super-resolution imaging is expansion microscopy,167–169 a technique that physically enlarges the sample in each dimension by chemical means, thereby unravelling nanoscale information. At this expanded scale, large-FOV imagers such as the microscope array, multiscale optical systems, and Fourier ptychography become applicable.

7.3. Toward 3D Imaging

Currently, high-SBP imagers have mainly been used for two-dimensional (2D) planar imaging. However, because most biosystems possess 3D structures, directly applying high-SBP techniques to optically thick samples will lead to reduced contrast and resolution.
Therefore, implementing high-SBP imaging in 3D microscopy represents a cutting-edge direction. For 3D imaging, we can still use the conventional definition of SBP in 2D [Eq. (1)] but must associate it with a specific depth plane. Among all modalities discussed, only the frequency-domain methods have been exploited for 3D imaging. For example, 3D Fourier ptychography has been demonstrated based on single-scattering models.170,171 However, these methods work only for optically thin samples, in which the first Born or the first Rytov approximation is valid.172 For optically thick samples, multiple light scattering makes it challenging to solve the associated 3D inverse problem, resulting in inaccurate reconstruction as well as a missing-cone issue, i.e., the inaccessibility of the central low-spatial-frequency information in the 3D Fourier spectrum along the optical axis of the imaging system.173 The multislice beam propagation model has emerged as a promising computational technique for imaging highly scattering biological samples.174–178 Alternatively, structured illumination microscopy has long been used for 3D imaging of biostructures.179 Nonetheless, it suffers from amplified noise attributed to the out-of-focus light,180 which reduces the achievable resolution.58 To expand the arsenal of high-SBP imaging tools applicable to 3D microscopy, one promising direction is to combine existing planar high-SBP techniques with light-sheet microscopy.181–183 The superior optical sectioning capability of light-sheet microscopy enables imaging of thick and inhomogeneous samples. For example, Liu et al.38 reported high-SBP, 3D recording of a live zebrafish embryo by integrating light-sheet microscopy with adaptive optics. Yet, the FOV is limited by the detection objective lens even with galvanometric scanning. In this regard, the integration of a multiscale microscopy system with light-sheet illumination would offer a significantly expanded FOV.
Also, the large light collection efficiency of multiscale microscopy will enable high-speed scanning of 3D samples, though a degraded resolution due to sample-induced aberrations is expected. Tailoring a large-sized light-sheet excitation beam is another issue; this may be accomplished with large-scale metasurfaces184 or wavefront shaping systems.27,185,186 Also, handling the resulting extremely large 3D or four-dimensional data will be challenging.187

Acknowledgments

This work was supported partially by the National Institutes of Health (R01EY029397, R35GM128761) and the National Science Foundation (1652150). J. P. acknowledges support from the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2019R1A6A3A03031505). L. T. acknowledges support from the National Science Foundation (1846784).

References

J. N. Stirman et al.,
“Wide field-of-view, multi-region, two-photon imaging of neuronal activity in the mammalian brain,” Nat. Biotechnol. 34(8), 857–862 (2016). https://doi.org/10.1038/nbt.3594
A. W. Lohmann et al., “Space–bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13(3), 470–473 (1996). https://doi.org/10.1364/JOSAA.13.000470
D. Mendlovic, A. W. Lohmann and Z. Zalevsky, “Space–bandwidth product adaptation and its application to superresolution: examples,” J. Opt. Soc. Am. A 14(3), 563–567 (1997). https://doi.org/10.1364/JOSAA.14.000563
“Canon 2U250MRXS 250MP CMOS sensor,” https://canon-cmos-sensors.com/canon-2u250mrxs-250mp-cmos-sensor/
“ISOCELL Bright HMX | mobile image sensor | Samsung official,” https://www.samsung.com/semiconductor/image-sensor/mobile-image-sensor/S5KHMX/
A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. 28(23), 4996–4998 (1989). https://doi.org/10.1364/AO.28.004996
R. S. Weinstein et al., “An array microscope for ultrarapid virtual slide processing and telepathology. Design, fabrication, and validation study,” Hum. Pathol. 35(11), 1303–1314 (2004). https://doi.org/10.1016/j.humpath.2004.09.002
B. Wilburn et al., “High performance imaging using large camera arrays,” in ACM SIGGRAPH Papers, 765–776 (2005).
A. Orth and K. Crozier, “Microscopy with microlens arrays: high throughput, high resolution and light-field imaging,” Opt. Express 20(12), 13522–13531 (2012). https://doi.org/10.1364/OE.20.013522
A. Orth and K. Crozier, “Gigapixel fluorescence microscopy with a water immersion microlens array,” Opt. Express 21(2), 2361–2368 (2013). https://doi.org/10.1364/OE.21.002361
G. Holzner et al., “An optofluidic system with integrated microlens arrays for parallel imaging flow cytometry,” Lab Chip 18(23), 3631–3637 (2018). https://doi.org/10.1039/C8LC00593A
D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17(13), 10659–10674 (2009). https://doi.org/10.1364/OE.17.010659
D. J. Brady et al., “Multiscale gigapixel photography,” Nature 486(7403), 386–389 (2012). https://doi.org/10.1038/nature11150
J. Fan et al., “Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution,” Nat. Photonics 13(11), 809–816 (2019). https://doi.org/10.1038/s41566-019-0474-7
G. Zheng, R. Horstmeyer and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). https://doi.org/10.1038/nphoton.2013.187
S. Dong et al., “Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging,” Opt. Express 22(11), 13586–13599 (2014). https://doi.org/10.1364/OE.22.013586
L. Tian et al., “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). https://doi.org/10.1364/BOE.5.002376
L. Tian et al., “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). https://doi.org/10.1364/OPTICA.2.000904
J. Sun et al., “Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations,” Sci. Rep. 7, 1187 (2017).
M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). https://doi.org/10.1046/j.1365-2818.2000.00710.x
M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. U. S. A. 102(37), 13081–13086 (2005). https://doi.org/10.1073/pnas.0406877102
T. R. Hillman et al., “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17(10), 7873–7892 (2009). https://doi.org/10.1364/OE.17.007873
S. Chowdhury, A.-H. Dhalla and J. Izatt, “Structured oblique illumination microscopy for enhanced resolution imaging of non-fluorescent, coherently scattering samples,” Biomed. Opt. Express 3(8), 1841–1854 (2012). https://doi.org/10.1364/BOE.3.001841
W. Luo et al., “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 4(3), e261 (2015). https://doi.org/10.1038/lsa.2015.34
S. Chowdhury et al., “Refractive index tomography with structured illumination,” Optica 4(5), 537–545 (2017). https://doi.org/10.1364/OPTICA.4.000537
B. Potsaid, Y. Bellouard and J. T. Wen, “Adaptive scanning optical microscope (ASOM): a multidisciplinary optical microscope design for large field of view and high resolution imaging,” Opt. Express 13(17), 6504–6518 (2005). https://doi.org/10.1364/OPEX.13.006504
M. Jang et al., “Wavefront shaping with disorder-engineered metasurfaces,” Nat. Photonics 12(2), 84–90 (2018). https://doi.org/10.1038/s41566-017-0078-z
N. J. Sofroniew et al., “A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging,” eLife 5, e14472 (2016). https://doi.org/10.7554/eLife.14472
Z. Kam et al., “Computational adaptive optics for live three-dimensional biological imaging,” Proc. Natl. Acad. Sci. U. S. A. 98(7), 3790–3795 (2001). https://doi.org/10.1073/pnas.071275698
S. G. Adie et al., “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. U. S. A. 109(19), 7175–7180 (2012). https://doi.org/10.1073/pnas.1121193109
G. Zheng et al., “Characterization of spatially varying aberrations for wide field-of-view microscopy,” Opt. Express 21(13), 15131–15143 (2013). https://doi.org/10.1364/OE.21.015131
N. D. Shemonski et al., “Computational high-resolution optical imaging of the living human retina,” Nat. Photonics 9(7), 440–443 (2015). https://doi.org/10.1038/nphoton.2015.102
Z. Kam et al., “Modelling the application of adaptive optics to wide-field microscope live imaging,” J. Microsc. 226(1), 33–42 (2007). https://doi.org/10.1111/j.1365-2818.2007.01751.x
X. Tao et al., “Adaptive optics confocal microscopy using direct wavefront sensing,” Opt. Lett. 36(7), 1062–1064 (2011). https://doi.org/10.1364/OL.36.001062
M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light Sci. Appl. 3(4), e165 (2014). https://doi.org/10.1038/lsa.2014.46
J.-H. Park et al., “Large-field-of-view imaging by multi-pupil adaptive optics,” Nat. Methods 14(6), 581–583 (2017). https://doi.org/10.1038/nmeth.4290
N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017). https://doi.org/10.1038/nmeth.4218
T.-L. Liu et al., “Observing the cell in its native state: imaging subcellular dynamics in multicellular organisms,” Science 360(6386), eaaq1392 (2018). https://doi.org/10.1126/science.aaq1392
W. Bishara et al., “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010). https://doi.org/10.1364/OE.18.011181
A. Greenbaum et al., “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9(9), 889–895 (2012). https://doi.org/10.1038/nmeth.2114
S. O. Isikman et al., “Giga-pixel lensfree holographic microscopy and tomography using color image sensors,” PLoS One 7(9), e45044 (2012). https://doi.org/10.1371/journal.pone.0045044
A. Greenbaum et al., “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3(1), 1717 (2013). https://doi.org/10.1038/srep01717
A. Greenbaum et al., “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6(267), 267ra175 (2014). https://doi.org/10.1126/scitranslmed.3009850
E. McLeod and A. Ozcan, “Microscopy without lenses,” Phys. Today 70(9), 50–56 (2017). https://doi.org/10.1063/PT.3.3693
M. Roy et al., “A review of recent progress in lens-free imaging and sensing,” Biosens. Bioelectron. 88, 130–143 (2017). https://doi.org/10.1016/j.bios.2016.07.115
T. Matsuyama, Y. Ohmura and D. M. Williamson, “The lithographic lens: its history and evolution,” Proc. SPIE 6154, 615403 (2006). https://doi.org/10.1117/12.656163
G. McConnell et al., “A novel optical microscope for imaging large embryos and tissue volumes with sub-cellular resolution throughout,” eLife 5, e18659 (2016). https://doi.org/10.7554/eLife.18659
G. McConnell and W. B. Amos, “Application of the mesolens for subcellular resolution imaging of intact larval and whole adult Drosophila,” J. Microsc. 270(2), 252–258 (2018). https://doi.org/10.1111/jmi.12693
E. Armstrong, “Relative brain size and metabolism in mammals,” Science 220(4603), 1302–1304 (1983). https://doi.org/10.1126/science.6407108
G. T. di Francia, “Degrees of freedom of an image,” J. Opt. Soc. Am. 59(7), 799–804 (1969). https://doi.org/10.1364/JOSA.59.000799
J. W. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers (2005).
J. Ellenberg et al., “A call for public archives for biological image data,” Nat. Methods 15(11), 849–854 (2018). https://doi.org/10.1038/s41592-018-0195-8
T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons (1999).
P. B. Fellgett, E. H. Linfoot and R. O. Redman, “On the assessment of optical images,” Philos. Trans. R. Soc. London Ser. A 247(931), 369–407 (1955). https://doi.org/10.1098/rsta.1955.0001
I. J. Cox and C. J. R. Sheppard, “Information capacity and resolution in an optical system,” J. Opt. Soc. Am. A 3(8), 1152–1158 (1986). https://doi.org/10.1364/JOSAA.3.001152
M. A. Neifeld, “Information, resolution, and space–bandwidth product,” Opt. Lett. 23(18), 1477–1479 (1998). https://doi.org/10.1364/OL.23.001477
M. G. Somekh, K. Hsu and M. C. Pitter,
“Resolution in structured illumination microscopy: a probabilistic approach,”
J. Opt. Soc. Am. A, 25
(6), 1319
–1329
(2008). https://doi.org/10.1364/JOSAA.25.001319 JOAOD6 0740-3232 Google Scholar
S. Ram, E. S. Ward and R. J. Ober,
“Beyond Rayleigh’s criterion: a resolution measure with application to single-molecule microscopy,”
Proc. Natl. Acad. Sci. U. S. A., 103
(12), 4457
–4462
(2006). https://doi.org/10.1073/pnas.0508047103 PNASA6 0027-8424 Google Scholar
K. Kuniyoshi et al., "A foveated wide angle lens for active vision," 2982–2988 (1995). https://doi.org/10.1109/ROBOT.1995.525707
N. Hagen and T. S. Tkaczyk, "Foveated endoscopic lens," J. Biomed. Opt., 17(2), 021104 (2012). https://doi.org/10.1117/1.JBO.17.2.021104
Z. Bian et al., "Autofocusing technologies for whole slide imaging and automated microscopy," J. Biophotonics, 13(12), e202000227 (2020). https://doi.org/10.1002/jbio.202000227
S. Al-Janabi, A. Huisman and P. J. V. Diest, "Digital pathology: current status and future perspectives," Histopathology, 61, 1–9 (2012). https://doi.org/10.1111/j.1365-2559.2011.03814.x
N. Farahani, A. Parwani and L. Pantanowitz, "Whole slide imaging in pathology: advantages, limitations, and emerging perspectives," Pathol. Lab. Med. Int., 7, 23–33 (2015). https://doi.org/10.2147/PLMI.S59826
L. Barisoni et al., "Digital pathology and computational image analysis in nephropathology," Nat. Rev. Nephrol., 16(11), 669–685 (2020). https://doi.org/10.1038/s41581-020-0321-6
B. McCall et al., "Toward a low-cost compact array microscopy platform for detection of tuberculosis," Tuberculosis, 91, S54–S60 (2011). https://doi.org/10.1016/j.tube.2011.10.011
B. McCall et al., "Evaluation of a miniature microscope objective designed for fluorescence array microscopy detection of Mycobacterium tuberculosis," Arch. Pathol. Lab. Med., 138(3), 379–389 (2014). https://doi.org/10.5858/arpa.2013-0146-OA
B. McCall et al., "Miniature objective lens for array digital pathology: design improvement based on clinical evaluation," Proc. SPIE, 9791, 97910K (2016). https://doi.org/10.1117/12.2217430
M. J. Kidger, Fundamental Optical Design, SPIE Press (2001).
B. H. Walker, Optical Engineering Fundamentals, SPIE Press (2008).
H. S. Son et al., "A multiscale, wide field, gigapixel camera," in Computational Optical Sensing and Imaging (2011).
D. L. Marks et al., "Microcamera aperture scale in monocentric gigapixel cameras," Appl. Opt., 50(30), 5824–5833 (2011). https://doi.org/10.1364/AO.50.005824
E. J. Tremblay et al., "Design and scaling of monocentric multiscale imagers," Appl. Opt., 51(20), 4691–4702 (2012). https://doi.org/10.1364/AO.51.004691
W. Pang and D. J. Brady, "Galilean monocentric multiscale optical systems," Opt. Express, 25(17), 20332–20339 (2017). https://doi.org/10.1364/OE.25.020332
I. Stamenov, I. P. Agurok and J. E. Ford, "Optimization of two-glass monocentric lenses for compact panoramic imagers: general aberration analysis and specific designs," Appl. Opt., 51(31), 7648–7661 (2012). https://doi.org/10.1364/AO.51.007648
S. Karbasi et al., "Curved fiber bundles for monocentric lens imaging," Proc. SPIE, 9579, 95790G (2015). https://doi.org/10.1117/12.2188901
M. S. Kim et al., "An aquatic-vision-inspired camera based on a monocentric lens and a silicon nanorod photodiode array," Nat. Electron., 3(9), 546–553 (2020). https://doi.org/10.1038/s41928-020-0429-5
D. J. Brady et al., "Parallel cameras," Optica, 5(2), 127–137 (2018). https://doi.org/10.1364/OPTICA.5.000127
X. Ou et al., "High numerical aperture Fourier ptychography: principle, implementation and characterization," Opt. Express, 23(3), 3472–3491 (2015). https://doi.org/10.1364/OE.23.003472
X. Ou, G. Zheng and C. Yang, "Embedded pupil function recovery for Fourier ptychographic microscopy," Opt. Express, 22(5), 4960–4972 (2014). https://doi.org/10.1364/OE.22.004960
G. Zheng, "Breakthroughs in photonics 2013: Fourier ptychographic imaging," IEEE Photonics J., 6, 0701207 (2014). https://doi.org/10.1109/JPHOT.2014.2308632
W. Hoppe and G. Strube, "Diffraction in inhomogeneous primary wave fields. 2. Optical experiments for phase determination of lattice interferences," Acta Crystallogr. A, 25(4), 502–507 (1969). https://doi.org/10.1107/S0567739469001057
H. M. L. Faulkner and J. M. Rodenburg, "Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm," Phys. Rev. Lett., 93(2), 023903 (2004). https://doi.org/10.1103/PhysRevLett.93.023903
M. Ryle and A. Hewish, "The synthesis of large radio telescopes," Mon. Not. R. Astron. Soc., 120(3), 220–230 (1960). https://doi.org/10.1093/mnras/120.3.220
T. M. Turpin et al., "Theory of the synthetic aperture microscope," Proc. SPIE, 2566, 230–240 (1995). https://doi.org/10.1117/12.217378
T. S. Ralston et al., "Interferometric synthetic aperture microscopy," Nat. Phys., 3(2), 129–134 (2007). https://doi.org/10.1038/nphys514
T. Gutzler et al., "Coherent aperture-synthesis, wide-field, high-resolution holographic microscopy of biological tissue," Opt. Lett., 35(8), 1136–1138 (2010). https://doi.org/10.1364/OL.35.001136
Y. Baek et al., "Kramers–Kronig holographic imaging for high-space-bandwidth product," Optica, 6(1), 45–51 (2019). https://doi.org/10.1364/OPTICA.6.000045
L. Taylor, "The phase retrieval problem," IEEE Trans. Antennas Propag., 29(2), 386–391 (1981). https://doi.org/10.1109/TAP.1981.1142559
J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt., 21(15), 2758–2769 (1982). https://doi.org/10.1364/AO.21.002758
J. R. Fienup, "Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint," J. Opt. Soc. Am. A, 4(1), 118–123 (1987). https://doi.org/10.1364/JOSAA.4.000118
R. W. Gerchberg, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik, 35, 237–246 (1972).
G. Zheng, Fourier Ptychographic Imaging: A MATLAB Tutorial, Morgan & Claypool Publishers (2016).
T. Aidukas et al., "Low-cost, sub-micron resolution, wide-field computational microscopy using opensource hardware," Sci. Rep., 9(1), 7457 (2019). https://doi.org/10.1038/s41598-019-43845-9
S. Dong et al., "FPscope: a field-portable high-resolution microscope using a cellphone lens," Biomed. Opt. Express, 5(10), 3305–3310 (2014). https://doi.org/10.1364/BOE.5.003305
R. Horstmeyer et al., "Digital pathology with Fourier ptychography," Comput. Med. Imaging Graph., 42, 38–43 (2015). https://doi.org/10.1016/j.compmedimag.2014.11.005
A. J. Williams et al., "Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis," J. Biomed. Opt., 19(6), 066007 (2014). https://doi.org/10.1117/1.JBO.19.6.066007
H. Zhang et al., "Field-portable quantitative lensless microscopy based on translated speckle illumination and sub-sampled ptychographic phase retrieval," Opt. Lett., 44(8), 1976–1979 (2019). https://doi.org/10.1364/OL.44.001976
Y. Xue et al., "Reliable deep-learning-based phase imaging with uncertainty quantification," Optica, 6(5), 618–629 (2019). https://doi.org/10.1364/OPTICA.6.000618
T. Nguyen et al., "Deep learning approach for Fourier ptychography microscopy," Opt. Express, 26(20), 26470–26484 (2018). https://doi.org/10.1364/OE.26.026470
S. Jiang et al., "Solving Fourier ptychographic imaging problems via neural network modeling and TensorFlow," Biomed. Opt. Express, 9(7), 3306–3319 (2018). https://doi.org/10.1364/BOE.9.003306
S. Dong et al., "Sparsely sampled Fourier ptychography," Opt. Express, 22(5), 5455–5464 (2014). https://doi.org/10.1364/OE.22.005455
O. Bunk et al., "Influence of the overlap parameter on the convergence of the ptychographical iterative engine," Ultramicroscopy, 108(5), 481–487 (2008). https://doi.org/10.1016/j.ultramic.2007.08.003
K. Guo et al., "Optimization of sampling pattern and the design of Fourier ptychographic illuminator," Opt. Express, 23(5), 6171–6180 (2015). https://doi.org/10.1364/OE.23.006171
F. Shamshad, F. Abbas and A. Ahmed, "Deep Ptych: subsampled Fourier ptychography using generative priors," 7720–7724 (2019). https://doi.org/10.1109/ICASSP.2019.8682179
Z. Bian, S. Dong and G. Zheng, "Adaptive system correction for robust Fourier ptychographic imaging," Opt. Express, 21(26), 32400–32410 (2013). https://doi.org/10.1364/OE.21.032400
J. Chung et al., "Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography," Biomed. Opt. Express, 7(2), 352–368 (2016). https://doi.org/10.1364/BOE.7.000352
J. Chung et al., "Computational aberration compensation by coded-aperture-based correction of aberration obtained from optical Fourier coding and blur estimation," Optica, 6(5), 647–661 (2019). https://doi.org/10.1364/OPTICA.6.000647
P. Song et al., "Full-field Fourier ptychography (FFP): spatially varying pupil modeling and its application for rapid field-dependent aberration metrology," APL Photonics, 4(5), 050802 (2019). https://doi.org/10.1063/1.5090552
R. Horstmeyer et al., "Overlapped Fourier coding for optical aberration removal," Opt. Express, 22(20), 24062–24080 (2014). https://doi.org/10.1364/OE.22.024062
C. Shen et al., "Computational aberration correction of VIS-NIR multispectral imaging microscopy based on Fourier ptychography," Opt. Express, 27(18), 24923–24937 (2019). https://doi.org/10.1364/OE.27.024923
A. C. S. Chan et al., "Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 Eyes)," Sci. Rep., 9(1), 11114 (2019). https://doi.org/10.1038/s41598-019-47146-z
R. Claveau et al., "Digital refocusing and extended depth of field reconstruction in Fourier ptychographic microscopy," Biomed. Opt. Express, 11(1), 215–226 (2020). https://doi.org/10.1364/BOE.11.000215
T. Kamal, L. Yang and W. M. Lee, "In situ retrieval and correction of aberrations in moldless lenses using Fourier ptychography," Opt. Express, 26(3), 2708–2719 (2018). https://doi.org/10.1364/OE.26.002708
J. Chung, R. W. Horstmeyer and C. Yang, "Fourier ptychographic retinal imaging methods and systems," (2017).
P. Song et al., "Super-resolution microscopy via ptychographic structured modulation of a diffuser," Opt. Lett., 44(15), 3645–3648 (2019). https://doi.org/10.1364/OL.44.003645
A. Matlock et al., "Inverse scattering for reflection intensity phase microscopy," Biomed. Opt. Express, 11(2), 911–926 (2020). https://doi.org/10.1364/BOE.380845
C. Yurdakul et al., "High-throughput, high-resolution interferometric light microscopy of biological nanoparticles," ACS Nano, 14(2), 2002–2013 (2020). https://doi.org/10.1021/acsnano.9b08512
Handbook of Biomedical Fluorescence, CRC Press, Taylor & Francis Group (2003).
S. Dong et al., "High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography," Opt. Express, 22(17), 20856–20870 (2014). https://doi.org/10.1364/OE.22.020856
M. A. A. Neil, R. Juškaitis and T. Wilson, "Method of obtaining optical sectioning by using structured light in a conventional microscope," Opt. Lett., 22(24), 1905–1907 (1997). https://doi.org/10.1364/OL.22.001905
R. Heintzmann and C. G. Cremer, "Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating," Proc. SPIE, 3568, 185–196 (1999). https://doi.org/10.1117/12.336833
L. Shao et al., "I5S: wide-field light microscopy with 100-nm-scale resolution in three dimensions," Biophys. J., 94(12), 4971–4983 (2008). https://doi.org/10.1529/biophysj.107.120352
E. H. Rego et al., "Nonlinear structured-illumination microscopy with a photoswitchable protein reveals cellular structures at 50-nm resolution," Proc. Natl. Acad. Sci. U. S. A., 109(3), E135–E143 (2012). https://doi.org/10.1073/pnas.1107547108
F. Wei and Z. Liu, "Plasmonic structured illumination microscopy," Nano Lett., 10(7), 2531–2536 (2010). https://doi.org/10.1021/nl1011068
J. Lu et al., "Super-resolution laser scanning microscopy through spatiotemporal modulation," Nano Lett., 9(11), 3883–3889 (2009). https://doi.org/10.1021/nl902087d
R.-W. Lu et al., "Super-resolution scanning laser microscopy through virtually structured detection," Biomed. Opt. Express, 4(9), 1673–1682 (2013). https://doi.org/10.1364/BOE.4.001673
Y. Zhi et al., "Rapid super-resolution line-scanning microscopy through virtually structured detection," Opt. Lett., 40(8), 1683–1686 (2015). https://doi.org/10.1364/OL.40.001683
B. E. Urban et al., "Super-resolution two-photon microscopy via scanning patterned illumination," Phys. Rev. E, 91(4), 042703 (2015). https://doi.org/10.1103/PhysRevE.91.042703
P. Gao and G. U. Nienhaus, "Confocal laser scanning microscopy with spatiotemporal structured illumination," Opt. Lett., 41(6), 1193–1196 (2016). https://doi.org/10.1364/OL.41.001193
C. Kuang et al., "Virtual k-space modulation optical microscopy," Phys. Rev. Lett., 117(2), 028102 (2016). https://doi.org/10.1103/PhysRevLett.117.028102
H. Ni et al., "Lateral resolution enhancement of confocal microscopy based on structured detection method with spatial light modulator," Opt. Express, 25(3), 2872–2882 (2017). https://doi.org/10.1364/OE.25.002872
C. R. Sheppard, "Super-resolution in confocal imaging," Optik, 80, 53–54 (1988).
C. B. Müller and J. Enderlein, "Image scanning microscopy," Phys. Rev. Lett., 104(19), 198101 (2010). https://doi.org/10.1103/PhysRevLett.104.198101
J. Huff, "The Airyscan detector from ZEISS: confocal imaging with improved signal-to-noise ratio and super-resolution," Nat. Methods, 12(12), i–ii (2015). https://doi.org/10.1038/nmeth.f.388
A. G. York et al., "Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy," Nat. Methods, 9(7), 749–754 (2012). https://doi.org/10.1038/nmeth.2025
A. G. York et al., "Instant super-resolution imaging in live cells and embryos via analog image processing," Nat. Methods, 10(11), 1122–1126 (2013). https://doi.org/10.1038/nmeth.2687
M. Ingaramo et al., "Two-photon excitation improves multifocal structured illumination microscopy in thick scattering tissue," Proc. Natl. Acad. Sci. U. S. A., 111(14), 5254–5259 (2014). https://doi.org/10.1073/pnas.1314447111
J. Joseph et al., "Improving the space-bandwidth product of structured illumination microscopy using a transillumination configuration," J. Phys. D, 53(4), 044006 (2019). https://doi.org/10.1088/1361-6463/ab4e68
L. J. Hornbeck, "Deformable-mirror spatial light modulators," Proc. SPIE, 1150, 86–103 (1990). https://doi.org/10.1117/12.962188
C. Paterson, I. Munro and J. C. Dainty, "A low cost adaptive optics system using a membrane mirror," Opt. Express, 6(9), 175–185 (2000). https://doi.org/10.1364/OE.6.000175
G. D. Love, "Wave-front correction and production of Zernike modes with a liquid-crystal spatial light modulator," Appl. Opt., 36(7), 1517–1524 (1997). https://doi.org/10.1364/AO.36.001517
B. Potsaid, F. P. Finger and J. T. Wen, "Automation of challenging spatial-temporal biomedical observations with the adaptive scanning optical microscope (ASOM)," IEEE Trans. Autom. Sci. Eng., 6(3), 525–535 (2009). https://doi.org/10.1109/TASE.2009.2021358
"Thorlabs catalog: adaptive scanning optical microscope," (2011). https://www.thorlabs.com/catalogPages/582.pdf
B. C. Platt and R. Shack, "History and principles of Shack–Hartmann wavefront sensing," J. Refract. Surg., 17(5), S573–S577 (2001). https://doi.org/10.3928/1081-597X-20010901-13
J.-W. Cha, J. Ballesta and P. T. So, "Shack–Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy," J. Biomed. Opt., 15(4), 046022 (2010). https://doi.org/10.1117/1.3475954
R. J. Zawadzki et al., "Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging," Opt. Express, 13(21), 8532–8546 (2005). https://doi.org/10.1364/OPEX.13.008532
Y. Zhang et al., "Adaptive optics parallel spectral domain optical coherence tomography for imaging the living retina," Opt. Express, 13(12), 4792–4811 (2005). https://doi.org/10.1364/OPEX.13.004792
D. R. Williams, "Imaging single cells in the living retina," Vision Res., 51(13), 1379–1396 (2011). https://doi.org/10.1016/j.visres.2011.05.002
B. Hermann et al., "Adaptive-optics ultrahigh-resolution optical coherence tomography," Opt. Lett., 29(18), 2142–2144 (2004). https://doi.org/10.1364/OL.29.002142
M. Pircher and R. J. Zawadzki, "Review of adaptive optics OCT (AO-OCT): principles and applications for retinal imaging," Biomed. Opt. Express, 8(5), 2536–2562 (2017). https://doi.org/10.1364/BOE.8.002536
M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, CRC Press, Taylor & Francis Group (1998).
D. G. Smith, Field Guide to Physical Optics, SPIE Press (2013).
L. J. Allen and M. P. Oxley, "Phase retrieval from series of images obtained by defocus variation," Opt. Commun., 199(1-4), 65–75 (2001). https://doi.org/10.1016/S0030-4018(01)01556-5
"Fluorescence lifetime measurements and biological imaging," Chem. Rev. https://pubs.acs.org/doi/pdf/10.1021/cr900343z
S. Ebbinghaus and M. Gruebele, "Protein folding landscapes in the living cell," J. Phys. Chem. Lett., 2(4), 314–319 (2011). https://doi.org/10.1021/jz101729z
E. M. Hillman et al., "High-speed 3D imaging of cellular activity in the brain using axially-extended beams and light sheets," Curr. Opin. Neurobiol., 50, 190–200 (2018). https://doi.org/10.1016/j.conb.2018.03.007
L.-H. Yeh, S. Chowdhury and L. Waller, "Computational structured illumination for high-content fluorescence and phase microscopy," Biomed. Opt. Express, 10(4), 1978–1998 (2019). https://doi.org/10.1364/BOE.10.001978
T. C. Südhof, "Neurotransmitter release: the last millisecond in the life of a synaptic vesicle," Neuron, 80(3), 675–690 (2013). https://doi.org/10.1016/j.neuron.2013.10.022
J. P. Cunningham and B. M. Yu, "Dimensionality reduction for large-scale neural recordings," Nat. Neurosci., 17(11), 1500–1509 (2014). https://doi.org/10.1038/nn.3776
R. C. Williamson et al., "Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction," Curr. Opin. Neurobiol., 55, 40–47 (2019). https://doi.org/10.1016/j.conb.2018.12.009
A. P. Alivisatos et al., "The brain activity map," Science, 339(6125), 1284–1285 (2013). https://doi.org/10.1126/science.1236939
W. Koroshetz et al., "The state of the NIH BRAIN Initiative," J. Neurosci., 38(29), 6427–6438 (2018). https://doi.org/10.1523/JNEUROSCI.3174-17.2018
C. Copos et al., "Connecting actin polymer dynamics across multiple scales," (2020).
H. Gelman and M. Gruebele, "Fast protein folding kinetics," Q. Rev. Biophys., 47(2), 95–142 (2014). https://doi.org/10.1017/S003358351400002X
S. Ebbinghaus et al., "Protein folding stability and dynamics imaged in a living cell," Nat. Methods, 7(4), 319–323 (2010). https://doi.org/10.1038/nmeth.1435
F. Chen, W. Tillberg and E. S. Boyden, "Expansion microscopy," Science, 347(6221), 543–548 (2015). https://doi.org/10.1126/science.1260088
T. Ku et al., "Multiplexed and scalable super-resolution imaging of three-dimensional protein localization in size-adjustable tissues," Nat. Biotechnol., 34(9), 973–981 (2016). https://doi.org/10.1038/nbt.3641
A. T. Wassie, Y. Zhao and E. S. Boyden, "Expansion microscopy: principles and uses in biological research," Nat. Methods, 16(1), 33–41 (2019). https://doi.org/10.1038/s41592-018-0219-4
R. Ling et al., "High-throughput intensity diffraction tomography with a computational microscope," Biomed. Opt. Express, 9(5), 2130–2141 (2018). https://doi.org/10.1364/BOE.9.002130
J. Li et al., "High-speed in vitro intensity diffraction tomography," Adv. Photonics, 1(6), 066004 (2019). https://doi.org/10.1117/1.AP.1.6.066004
B. Chen and J. J. Stamnes, "Validity of diffraction tomography based on the first Born and the first Rytov approximations," Appl. Opt., 37(14), 2996–3006 (1998). https://doi.org/10.1364/AO.37.002996
J. Lim et al., "Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography," Opt. Express, 23(13), 16933–16948 (2015). https://doi.org/10.1364/OE.23.016933
L. Tian and L. Waller, "3D intensity and phase imaging from light field measurements in an LED array microscope," Optica, 2(2), 104–111 (2015). https://doi.org/10.1364/OPTICA.2.000104
U. S. Kamilov et al., "Learning approach to optical tomography," Optica, 2(6), 517–522 (2015). https://doi.org/10.1364/OPTICA.2.000517
S. Chowdhury et al., "High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images," Optica, 6(9), 1211–1219 (2019). https://doi.org/10.1364/OPTICA.6.001211
J. Lim et al., "High-fidelity optical diffraction tomography of multiple scattering samples," Light Sci. Appl., 8(1), 1 (2019). https://doi.org/10.1038/s41377-018-0109-7
M. Chen et al., "Multi-layer Born multiple-scattering model for 3D phase microscopy," Optica, 7(5), 394–403 (2020). https://doi.org/10.1364/OPTICA.383030
F. Kraus et al., "Quantitative 3D structured illumination microscopy of nuclear structures," Nat. Protoc., 12(5), 1011–1028 (2017). https://doi.org/10.1038/nprot.2017.020
N. Hagen, L. Gao and T. S. Tkaczyk, "Quantitative sectioning and noise analysis for structured illumination microscopy," Opt. Express, 20(1), 403–413 (2012). https://doi.org/10.1364/OE.20.000403
P. J. Keller et al., "Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy," Science, 322(5904), 1065–1069 (2008). https://doi.org/10.1126/science.1162493
B.-C. Chen et al., "Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution," Science, 346(6208), 1257998 (2014). https://doi.org/10.1126/science.1257998
R. M. Power and J. Huisken, "A guide to light-sheet fluorescence microscopy for multiscale imaging," Nat. Methods, 14(4), 360–373 (2017). https://doi.org/10.1038/nmeth.4224
P. Genevet et al., "Recent advances in planar optics: from plasmonic to dielectric metasurfaces," Optica, 4(1), 139–152 (2017). https://doi.org/10.1364/OPTICA.4.000139
H. Yu et al., "Ultrahigh-definition dynamic 3D holographic display by active control of volume speckle fields," Nat. Photonics, 11(3), 186–192 (2017). https://doi.org/10.1038/nphoton.2016.272
J. Park, K. Lee and Y. Park, "Ultrathin wide-angle large-area digital 3D holographic display using a non-periodic photon sieve," Nat. Commun., 10, 1304 (2019). https://doi.org/10.1038/s41467-019-09126-9
F. Amat et al., "Efficient processing and analysis of large-scale light-sheet microscopy data," Nat. Protoc., 10(11), 1679–1696 (2015). https://doi.org/10.1038/nprot.2015.111