Fourier ptychography is a promising computational imaging technique that has been successfully applied in various fields, such as quantitative phase imaging, three-dimensional imaging, and remote imaging. It acquires a series of low-resolution images and combines them in the Fourier domain to reconstruct a high-resolution image, achieving both a wide field of view and high resolution. In this work, we report a high-fidelity Fourier ptychographic reconstruction technique based on a complex-valued channel-wise attention network (CANet) in the plug-and-play (PnP) optimization framework. Following the PnP framework, the optimization objective is decomposed into two sub-problems: a physical-model constraint and a statistical prior regularization. We use the physical-model constraint to ensure fidelity, and apply CANet to attenuate noise and recover fine details in the prior regularization, achieving both high fidelity and high efficiency. In the network, we introduce a channel attention module that treats the amplitude and phase features as tokens and computes self-attention along the channel dimension, thereby exploiting the intrinsic relationship between amplitude and phase information. Meanwhile, we employ multi-stage channel attention modules to extract multi-resolution contextual information, improving the network's robustness. Experiments validate that the reported technique achieves an average improvement of 1.2 dB in PSNR over the compressive sensing method under different scenarios.
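The PnP alternation between the two sub-problems can be sketched roughly as follows. This is a minimal illustration, not the paper's exact algorithm: CANet is not reproduced, so a user-supplied `denoiser` callable stands in for the learned prior, and the function name `pnp_fpm`, the gradient-style spectrum update, and the step size are assumptions made for the sketch.

```python
import numpy as np

def pnp_fpm(measurements, masks, denoiser, n_iters=50, step=0.1):
    """PnP-style Fourier ptychographic reconstruction (sketch).

    measurements: list of low-res amplitude images (real arrays)
    masks: list of sub-spectrum (pupil) masks, same shape as the full spectrum
    denoiser: callable on a real amplitude image (stand-in for CANet)
    """
    x = np.ones_like(masks[0], dtype=complex)  # high-res spectrum estimate
    for _ in range(n_iters):
        # Sub-problem 1: physical-model constraint -- push each simulated
        # low-res field toward the measured amplitude
        for y, m in zip(measurements, masks):
            sim = np.fft.ifft2(x * m)                   # simulated low-res field
            corrected = y * np.exp(1j * np.angle(sim))  # keep phase, replace amplitude
            x = x + step * m * (np.fft.fft2(corrected) - x * m)
        # Sub-problem 2: statistical prior regularization (CANet in the paper)
        field = np.fft.ifft2(x)
        field = denoiser(np.abs(field)) * np.exp(1j * np.angle(field))
        x = np.fft.fft2(field)
    return np.fft.ifft2(x)  # reconstructed complex field
```

Any amplitude denoiser (even the identity) can be plugged in to exercise the alternation.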
Optical flow estimation is widely used in various fields, such as motion scene understanding, autonomous driving, and object tracking. Despite great advances over the last decade, handling illumination variations in optical flow estimation remains an open problem. Conventional optical flow techniques build on the brightness constancy assumption, so variations in scene lighting can degrade their accuracy. To tackle this challenge, here we report a plug-and-play illumination correction technique for robust optical flow estimation under varying illumination. The technique comprises a motion-illumination decoupling strategy for the two images used to compute the optical flow and an image brightness correction strategy. We trained a UNet-based neural network with Swin Transformer layers as basic blocks to decouple the motion and luminance information of the two images. We then applied the decoupled luminance information from the reference image to the source image to adjust its brightness, so that both images share the same luminance for robust optical flow estimation under varying illumination conditions. We applied the reported technique to both traditional and deep learning-based optical flow algorithms, and the experimental results validate that it enhances their robustness to illumination changes and achieves competitive results on the MPI-Sintel dataset and real-captured data.
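The brightness-correction step, transferring the reference image's luminance to the source image, can be illustrated with a simple global statistics transfer. The trained decoupling network is not reproduced here, so this mean/variance matching is only a hypothetical stand-in for the network-decoupled luminance; the function name `match_luminance` is an assumption.

```python
import numpy as np

def match_luminance(source, reference, eps=1e-8):
    """Transfer the reference image's global luminance statistics to the
    source image (images normalized to [0, 1]). A crude stand-in for the
    network-based luminance decoupling described in the abstract."""
    s_mean, s_std = source.mean(), source.std()
    r_mean, r_std = reference.mean(), reference.std()
    corrected = (source - s_mean) / (s_std + eps) * r_std + r_mean
    return np.clip(corrected, 0.0, 1.0)
```

After correction the two images share the same global brightness, which restores the brightness constancy assumption that flow estimators rely on.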
Poor lighting conditions in the real world can lead to ill-exposed captured images that suffer from compromised aesthetic quality and information loss for post-processing. Recent exposure correction works address this problem by learning the mapping from images of multiple exposure intensities to well-exposed images. However, this approach requires a large amount of paired training data, which is difficult to obtain in data-inaccessible scenarios. This paper presents a highly robust exposure correction method based on self-supervised learning. Specifically, two sub-networks are designed to handle the under- and over-exposed regions of ill-exposed images, respectively; this hybrid architecture enables adaptive ill-exposure correction. A fusion module then fuses the under-exposure-corrected and over-exposure-corrected images to obtain a well-exposed image with vivid color and clear textures. Notably, the training process is guided by histogram-equalized images through the histogram equalization prior (HEP), which means that the presented method requires only ill-exposed images as training data. Extensive experiments on real-world image datasets validate the robustness and superiority of this technique.
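The histogram equalization prior can be sketched as follows: the equalized version of an ill-exposed image serves as the self-supervision target, so no well-exposed ground truth is needed. The function below is a plain histogram equalization for images normalized to [0, 1]; it illustrates the prior only and is not the paper's training pipeline.

```python
import numpy as np

def hist_equalize(img, bins=256):
    """Histogram equalization of an image in [0, 1]. Under the HEP,
    the equalized image guides self-supervised training in place of
    paired well-exposed ground truth."""
    flat = img.ravel()
    hist, _ = np.histogram(flat, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)  # normalize to [0, 1]
    idx = np.clip((flat * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx].reshape(img.shape)  # map each pixel through the CDF
```

Because the mapping is the cumulative histogram, dark under-exposed inputs are stretched toward a more uniform intensity distribution, which is the training signal the sub-networks are guided by.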
Fourier single-pixel imaging (FSI) acquisition time is tied to the number of modulations, creating a tradeoff between efficiency and accuracy. This work reports a mathematical analytic tool for efficient sparse FSI sampling: an efficient, adjustable sampling strategy that captures more scene information with fewer modulations. Specifically, we first rank the Fourier coefficients of natural images by statistical importance. We then design a sparse sampling strategy for FSI in which a coefficient's sampling probability decays polynomially with its rank; the sparsity of the captured Fourier spectrum can be adjusted by altering the polynomial order. We utilize a compressive sensing (CS) algorithm for sparse FSI reconstruction. From the quantitative results, we derive empirical rules for the optimal FSI sparsity under different noise levels and sampling ratios.
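The polynomially decaying sampling strategy can be sketched as follows, assuming a precomputed importance ranking of the Fourier coefficients is supplied by the caller (here a radial low-frequency-first ranking is one plausible choice). The function name and the exact probability normalization are assumptions made for the sketch.

```python
import numpy as np

def sparse_fourier_mask(shape, ranking, sampling_ratio, order, rng=None):
    """Build an FSI sampling mask: the coefficient with importance rank r is
    selected with probability proportional to (r + 1)**(-order), without
    replacement, until the sampling ratio is met.

    ranking: flat indices of the spectrum sorted by decreasing importance
    order:   polynomial order controlling how fast sampling concentrates
             on the highest-ranked (most important) coefficients
    """
    rng = np.random.default_rng() if rng is None else rng
    n = shape[0] * shape[1]
    k = int(round(sampling_ratio * n))        # number of coefficients to acquire
    p = (np.arange(n, dtype=float) + 1.0) ** (-order)
    p /= p.sum()                              # probability over ranks
    picked_ranks = rng.choice(n, size=k, replace=False, p=p)
    mask = np.zeros(n, dtype=bool)
    mask[ranking[picked_ranks]] = True        # ranks back to spectrum positions
    return mask.reshape(shape)
```

A larger `order` concentrates the measurements on low-frequency (high-importance) coefficients, i.e., a sparser coverage of the high-frequency spectrum, which is the knob the abstract describes.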