For satellite imaging instruments, the tradeoff between spatial and temporal resolution leads to the spatial–temporal contradiction of image sequences. Spatiotemporal image fusion (STIF) provides a solution to generate images with both high spatial and high temporal resolution, thus expanding the applications of existing satellite images. Most deep learning-based STIF methods treat the task as a whole, constructing an end-to-end model without modeling the intermediate physical process. This leads to high complexity, poor interpretability, and low accuracy of the fusion model. To address this problem, we propose a two-stream difference transformation spatiotemporal fusion (TSDTSF) method, which includes a transformation stream and a fusion stream. In the transformation stream, an image difference transformation module reduces the pixel distribution difference between images from different sensors at the same spatial resolution, and a feature difference transformation module improves the feature quality of low-resolution images. The fusion stream focuses on feature fusion and image reconstruction. TSDTSF shows superior performance in accuracy, visual quality, and robustness. The experimental results show that TSDTSF achieves an average coefficient of determination of R2 = 0.7847 and a root mean square error of RMSE = 0.0266, outperforming the next-best method's averages of R2 = 0.7519 and RMSE = 0.0289. The quantitative and qualitative experimental results on various datasets demonstrate the superiority of TSDTSF over state-of-the-art methods.
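The two evaluation metrics quoted above, the coefficient of determination (R2) and the root mean square error (RMSE), are standard and can be computed per band between a fused image and its reference. A minimal sketch (the toy arrays below are illustrative, not data from the paper):

```python
import numpy as np

def r2_score(reference, predicted):
    """Coefficient of determination between flattened image arrays."""
    reference = np.asarray(reference, dtype=float).ravel()
    predicted = np.asarray(predicted, dtype=float).ravel()
    ss_res = np.sum((reference - predicted) ** 2)          # residual sum of squares
    ss_tot = np.sum((reference - reference.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

def rmse(reference, predicted):
    """Root mean square error between flattened image arrays."""
    reference = np.asarray(reference, dtype=float).ravel()
    predicted = np.asarray(predicted, dtype=float).ravel()
    return float(np.sqrt(np.mean((reference - predicted) ** 2)))

# Toy example: one fused band versus its reference (hypothetical values).
ref = np.array([[0.10, 0.20], [0.30, 0.40]])
fused = np.array([[0.12, 0.19], [0.29, 0.41]])
print(r2_score(ref, fused), rmse(ref, fused))  # 0.986 0.01322...
```

In practice these metrics are averaged over all bands and test dates, which is how the paper's averages (R2 = 0.7847, RMSE = 0.0266) are reported.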
Mobile source pollution has become an important source of air pollution in large and medium-sized cities, and a major cause of fine particulate matter and photochemical smog pollution. There is an urgent need for suitable and effective emissions prediction tools in both scientific research and industry. In recent years, deep learning has outperformed traditional models in many machine learning tasks as the size and dimensionality of datasets have increased. Many deep neural network models have been successfully applied to both microscopic and macroscopic emissions modeling. In this paper, we provide a comprehensive review of recent work on mobile source emissions prediction using deep learning methods. Finally, we offer a deeper discussion of future prospects and ongoing challenges.
Pan-sharpening is an important image preprocessing technique for remote sensing that aims to enhance the spatial resolution of multispectral (MS) images under the guidance of a panchromatic (PAN) image while preserving spectral properties. Existing pan-sharpening methods usually adopt globally consistent detail-injection models, neglecting the detail differences between spectral channels, which leads to imprecise spatial details and distorted spectral properties in the pan-sharpened results. We propose a sparse representation-based detail-injection model for pan-sharpening that exploits the structural similarity and detail differences between the PAN and low-resolution multispectral (LRM) images at each channel to improve pan-sharpening performance. Specifically, to better express the inherent detail properties of the MS image, an overcomplete dictionary for each channel is constructed from synthesized high-resolution multispectral (HRM) images. Moreover, most existing methods require that the spectral responses of the PAN and MS images cover the same wavelength range; however, most sensor pairs do not satisfy this condition. To address this problem, we propose constructing coupled low-resolution and high-resolution dictionaries from the LRM and synthesized HRM images so that their structural similarities can be used for detail injection. The qualitative and quantitative experimental results on various data sets demonstrate the superiority of our proposed method over state-of-the-art methods.
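The coupled-dictionary detail injection described above can be sketched as follows. The idea is that a patch sparse-coded over the low-resolution dictionary shares its coefficients with the coupled high-resolution dictionary, so the same sparse code reconstructs high-resolution detail. This sketch uses random placeholder dictionaries and scikit-learn's OMP sparse coder; in the actual method the dictionaries would be learned per spectral channel from LRM and synthesized HRM images:

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)

# Placeholder coupled dictionaries (random, for illustration only); the
# method learns these per channel from LRM and synthesized HRM images.
n_atoms, patch_dim = 64, 25          # 5x5 patches, flattened
D_low = rng.standard_normal((n_atoms, patch_dim))
D_low /= np.linalg.norm(D_low, axis=1, keepdims=True)    # unit-norm atoms
D_high = rng.standard_normal((n_atoms, patch_dim))
D_high /= np.linalg.norm(D_high, axis=1, keepdims=True)

# Sparse-code an LRM detail patch over the low-resolution dictionary
# with orthogonal matching pursuit (at most 5 nonzero coefficients)...
coder = SparseCoder(dictionary=D_low, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
lrm_patch = rng.standard_normal((1, patch_dim))          # toy detail patch
alpha = coder.transform(lrm_patch)                       # sparse coefficients

# ...then inject high-resolution detail by reusing the same sparse code
# with the coupled high-resolution dictionary.
hrm_detail = alpha @ D_high
print(hrm_detail.shape)              # one reconstructed 5x5 detail patch
```

The reconstructed detail patches would then be added to the upsampled LRM channel to form the pan-sharpened output; the coupling of the two dictionaries is what lets structural similarity learned at low resolution transfer to high resolution.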