We present a cross-modality super-resolution microscopy method based on the generative adversarial network (GAN) framework. Using a trained convolutional neural network, our method takes a low-resolution image acquired with one microscopy modality and super-resolves it to match the resolution of an image of the same sample captured with another, higher-resolution microscopy modality. This cross-modality super-resolution method is purely data-driven, i.e., it does not rely on any knowledge of the image formation model or the point spread function. First, we demonstrate the success of our method by super-resolving wide-field fluorescence microscopy images captured with a low-numerical-aperture objective (NA = 0.4) to match the resolution of images captured with a higher-NA objective (NA = 0.75). Next, we apply our method to confocal microscopy to super-resolve closely spaced nanoparticles and Histone 3 sites within HeLa cell nuclei, matching the resolution of stimulated emission depletion (STED) microscopy images of the same samples. We also verify our method by super-resolving diffraction-limited total internal reflection fluorescence (TIRF) microscopy images to match the resolution of TIRF-SIM (structured illumination microscopy) images of the same samples, revealing endocytic protein dynamics in SUM159 cells and amnioserosa tissues of a Drosophila embryo. The super-resolved object features in the network output show strong agreement with the ground-truth SIM reconstructions, which were synthesized from 9 diffraction-limited TIRF images, each acquired under structured illumination. Beyond resolution enhancement, our method also offers an extended depth-of-field and improved signal-to-noise ratio (SNR) in the network-inferred images compared with the corresponding ground-truth images.
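The training implied above pairs each low-resolution input with a co-registered higher-resolution image of the same field of view and optimizes a generator network against a discriminator. The following is a minimal, illustrative PyTorch sketch of one such GAN training step; the network architectures, the L1 pixel loss, and the adversarial weight are assumptions made for illustration, not the authors' exact design.

```python
# Illustrative GAN training step for cross-modality super-resolution.
# Assumes paired data: a low-res input (e.g., confocal) and a matched
# high-res target (e.g., STED) of the same field of view.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy fully convolutional generator (not the authors' exact network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy discriminator scoring how 'high-resolution-like' an image is."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, g_opt, d_opt, low_res, high_res, adv_weight=0.01):
    bce = nn.BCEWithLogitsLoss()
    # Discriminator update: real high-res images toward label 1,
    # generator outputs toward label 0.
    d_opt.zero_grad()
    fake = gen(low_res).detach()
    real_pred, fake_pred = disc(high_res), disc(fake)
    d_loss = bce(real_pred, torch.ones_like(real_pred)) + \
             bce(fake_pred, torch.zeros_like(fake_pred))
    d_loss.backward()
    d_opt.step()
    # Generator update: pixel-wise fidelity to the high-res target plus an
    # adversarial term that rewards fooling the discriminator.
    g_opt.zero_grad()
    fake = gen(low_res)
    fake_pred = disc(fake)
    g_loss = nn.functional.l1_loss(fake, high_res) + \
             adv_weight * bce(fake_pred, torch.ones_like(fake_pred))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Usage with stand-in data (random tensors in place of registered image pairs):
gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
low, high = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(train_step(gen, disc, g_opt, d_opt, low, high))
```

The combination of a pixel-fidelity loss with a small adversarial weight is a common design choice for image-to-image GANs; it anchors the output to the measured structure while the discriminator sharpens fine features.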
KEYWORDS: Microscopy, Super-resolution, Luminescence, Image processing, Image resolution, Confocal microscopy, Neural networks, Generative adversarial networks, Diffraction, Signal-to-noise ratio
We present a deep learning-based framework for super-resolution image transformations across multiple fluorescence microscopy modalities. By training a neural network within a generative adversarial network (GAN) framework, a single low-resolution image is transformed into a high-resolution image that surpasses the diffraction limit. The deep network’s output also demonstrates improved signal-to-noise ratio and extended depth-of-field. This framework is solely data-driven, meaning it does not rely on any physical model of the image formation process; instead, it learns a statistical transformation from the training image datasets. The inference process is non-iterative and does not require sweeping over parameters to achieve optimal results, in contrast to state-of-the-art deconvolution methods. We demonstrate the success of this framework by super-resolving wide-field images captured with low-numerical-aperture objective lenses to match the resolution of images captured with high-numerical-aperture objectives. In another example, we transform confocal microscopy images into images that match the performance of stimulated emission depletion (STED) microscopy, super-resolving the distributions of Histone 3 sites within cell nuclei. We also apply this framework to total internal reflection fluorescence (TIRF) microscopy and super-resolve TIRF images to match the resolution of TIRF-based structured illumination microscopy (TIRF-SIM). Our super-resolved TIRF images/movies reveal endocytic protein dynamics in SUM159 cells and amnioserosa tissues of a Drosophila embryo, closely matching TIRF-SIM images/movies of the same samples. Our experimental results demonstrate that the presented data-driven super-resolution approach generalizes to new types of images and super-resolves objects that were not present during training.
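The non-iterative inference noted above reduces, at deployment time, to a single forward pass through the trained generator. A minimal sketch follows, assuming the generator was exported with TorchScript; the checkpoint file name is hypothetical.

```python
# Minimal inference sketch: one non-iterative forward pass through a trained
# generator, with no PSF model and no parameter sweep, in contrast to
# iterative deconvolution. The checkpoint name below is hypothetical.
import torch

gen = torch.jit.load("cross_modality_sr_generator.pt")  # hypothetical file
gen.eval()
with torch.no_grad():
    low_res = torch.rand(1, 1, 256, 256)  # stand-in for a diffraction-limited image
    super_resolved = gen(low_res)         # single forward pass
```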