Satellite image restoration from a degraded observation has a significant impact on the remote sensing industry, and many potential applications can directly benefit from this technique. Convolutional neural networks (CNNs) have recently been explored for image restoration and have achieved remarkable performance. However, most deep CNN architectures in the literature do not properly handle the inherent trade-off between localization accuracy and the use of global context, which is vital for satellite images covering an ultrabroad area. We present a stacked lossless deconvolutional network (SLDN) for remote sensing image restoration. We fully exploit global context information while guaranteeing the recovery of fine details. Specifically, we design a lossless pooling by reformulating the pixel shuffle operator and incorporate it with a shallow deconvolutional network. The resulting lossless deconvolution blocks are stacked one by one to enlarge the receptive fields without any information loss. We further propose an attentive skip connection and a progressive learning scheme to improve gradient flows throughout the SLDN. The SLDN can reconstruct high-quality satellite images without noticeable artifacts. An extensive ablation study is also provided to show that all the proposed components are useful for remote sensing image restoration. Experimental comparisons on various restoration tasks, including super-resolution, denoising, and compression artifact reduction, demonstrate the superiority of the proposed method over state-of-the-art methods both qualitatively and quantitatively.
Super-resolving satellite imagery from its low-resolution observation has a significant impact on the remote sensing industry, and many potential applications can directly benefit from this technique. Convolutional neural networks (CNNs) have recently achieved great success in image super-resolution (SR). However, most deep CNN architectures do not properly handle the inherent trade-off between localization accuracy and the use of global context. In this paper, we propose a stacked lossless deconvolutional network (SLDN) for remote sensing SR. We fully exploit global context information while guaranteeing the recovery of fine details. Specifically, we design a lossless pooling by reformulating the pixel shuffle operator and incorporate it with a shallow deconvolutional network. The resulting lossless deconvolution blocks (LDBs) are stacked one by one to enlarge the receptive fields without any information loss. We further design an attentive skip connection to improve gradient flows throughout the LDB. The SLDN can reconstruct high-quality satellite images without noticeable artifacts. We also provide an extensive ablation study showing that all the components proposed in this paper are useful for remote sensing SR. Experimental comparisons demonstrate the superiority of the proposed method over state-of-the-art methods both qualitatively and quantitatively.
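The lossless pooling described above reformulates the pixel shuffle operator so that downsampling rearranges spatial information into channels instead of discarding it. The sketch below is a minimal NumPy illustration of this idea (a space-to-depth rearrangement and its exact inverse); the function names and the plain-array setting are illustrative assumptions, not the authors' implementation, which operates inside a deconvolutional network.

```python
import numpy as np

def space_to_depth(x, r):
    # Lossless "pooling": (C, H, W) -> (C*r*r, H/r, W/r).
    # Each r-by-r spatial block is moved into the channel dimension,
    # so spatial resolution drops but no pixel value is discarded.
    c, h, w = x.shape
    x = x.reshape(c, h // r, r, w // r, r)
    x = x.transpose(0, 2, 4, 1, 3)          # (C, r, r, H/r, W/r)
    return x.reshape(c * r * r, h // r, w // r)

def depth_to_space(x, r):
    # Pixel shuffle: exact inverse, (C*r*r, H/r, W/r) -> (C, H, W).
    c, h, w = x.shape
    x = x.reshape(c // (r * r), r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # (C, H/r, r, W/r, r)
    return x.reshape(c // (r * r), h * r, w * r)

x = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
y = space_to_depth(x, 2)                    # shape (8, 2, 2)
x_rec = depth_to_space(y, 2)                # perfect reconstruction
assert np.array_equal(x, x_rec)
```

Because the round trip is exact, stacking such blocks enlarges the effective receptive field without the information loss incurred by max or average pooling, which is the property the SLDN exploits.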