Super-resolution remote sensing data fusion aims to compose an output image with higher spatial and spectral resolution from lower-resolution input images captured over the same territory. The images used for fusion are usually multi-temporal. However, existing multi-temporal image fusion methods exploit only cloud-free images, which may be difficult to obtain for territories where the weather is moderately cloudy during the observation period. In this paper, clouds and their shadows are considered scene distortions, i.e. significant local changes in brightness caused by opaque objects or their shadows partially overlapping the scene at the moment of image registration. We propose a multi-temporal remote sensing data fusion method adapted to datasets containing images partially occupied by scene distortions. The method is based on a gradient descent optimization procedure with scene distortion elimination at each iteration. Experiments with modeled data revealed that our method provides spatial and spectral super-resolution even for datasets that include images with scene distortions. In comparison with the scene-distortion-free case, the proposed method reduces the root mean square error of the resulting image by 2 to 4% on average for mixed datasets with few undistorted images (from two to six). Overall, the research has shown that when distortion-free data are scarce, additional partially distorted images can be used to obtain better fusion results.
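The core iteration described above (a data-fit gradient step in which distorted pixels contribute nothing) can be sketched as follows. This is a minimal illustration, assuming a block-average downsampling forward model and per-image binary validity masks; the paper's actual operators and step schedule are not specified here and may differ:

```python
import numpy as np

def masked_fusion(lr_images, masks, scale, n_iters=200, step=0.5):
    """Illustrative gradient-descent fusion: estimate a high-resolution image
    from low-resolution observations, skipping masked (distorted) pixels.
    lr_images: list of (h, w) arrays; masks: list of (h, w) bool arrays,
    True = valid pixel; scale: integer upsampling factor (assumed here)."""
    h, w = lr_images[0].shape
    num = np.zeros((h * scale, w * scale))
    den = np.zeros_like(num)
    for img, m in zip(lr_images, masks):
        num += np.kron(img * m, np.ones((scale, scale)))
        den += np.kron(m.astype(float), np.ones((scale, scale)))
    x = num / np.maximum(den, 1e-12)  # masked-mean initialization
    for _ in range(n_iters):
        grad = np.zeros_like(x)
        for img, m in zip(lr_images, masks):
            # Forward model: block-average downsampling of the estimate.
            sim = x.reshape(h, scale, w, scale).mean(axis=(1, 3))
            resid = (sim - img) * m  # distorted pixels are eliminated here
            # Adjoint of block averaging: spread the residual back uniformly.
            grad += np.kron(resid, np.ones((scale, scale))) / scale ** 2
        x -= step * grad
    return x
```

Because the masked residual is zeroed before the adjoint step, a cloudy region in one observation simply stops contributing to the estimate there, while valid observations of the same region still do.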
Multi-temporal Earth remote sensing images of the same territory may include random scene distortions arising from different natural phenomena, for example clouds or their shadows. These distortions are time-dependent and may appear in only some images of the analyzed set. Thus, they define irrelevant image parts that should be eliminated in the subsequent data fusion process. In this article, we suggest an algorithm for detecting such scene distortions using a series of multi-temporal remote sensing images. The algorithm is based on superpixel segmentation and anomaly detection methods. It produces a mask of random scene distortions for each image in the analyzed series; the resulting masks can be used to restrict data fusion methods to the scene-relevant parts. The proposed approach allows processing images with different spectral and spatial sampling simultaneously, which is very useful for multi-sensor data fusion. We tested the algorithm's quality by modeling series of multispectral images with different spectral and spatial sampling parameters under different conditions of cloudiness and cloud shadows as an example of random scene distortions. The results show that the algorithm provides a scene distortion detection accuracy of about 90% with a false detection rate of about 10%.
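The segment-then-flag idea can be illustrated with a much simplified stand-in: regular grid blocks play the role of superpixels, and a robust temporal z-score plays the role of the anomaly detector. Everything here (the block segmentation, the brightness feature, the threshold) is an assumption of the sketch, not the paper's algorithm:

```python
import numpy as np

def distortion_masks(series, block=4, z_thresh=2.5):
    """Sketch of per-image scene-distortion masking over a time series.
    series: (t, h, w) array, h and w divisible by `block`.
    Returns (t, h, w) bool masks, True = flagged as distorted."""
    t, h, w = series.shape
    # Mean brightness per block per image, shape (t, h//block, w//block).
    feats = series.reshape(t, h // block, block,
                           w // block, block).mean(axis=(2, 4))
    med = np.median(feats, axis=0)                       # temporal median per block
    mad = np.median(np.abs(feats - med), axis=0) + 1e-9  # robust spread
    z = np.abs(feats - med) / (1.4826 * mad)             # robust z-score
    flags = z > z_thresh                                 # anomalous blocks
    # Expand block-level flags back to pixel resolution.
    return np.kron(flags, np.ones((block, block))).astype(bool)
```

A block whose brightness deviates strongly from its own temporal median (e.g. a cloud present in only one acquisition) is flagged in that image only, which matches the per-image mask output described above.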
Multi-sensor remote sensing image super-resolution aims to provide better characteristics for different types of resolution and to compensate for the limitations of particular imaging systems. However, existing super-resolution techniques consider spectral and spatial resolution enhancement separately, i.e. only spatial or only spectral resolution can be enhanced. Among spatial super-resolution methods, the maximum a posteriori (MAP) estimation approach with B-TV regularization stands out as one of the best methods for spatial resolution enhancement, but existing implementations were designed only for RGB and grayscale photographic imagery. Unlike photographic RGB imagery, multispectral remote sensing images captured by optical sensors often contain more than three spectral channels (red, green and blue), and, moreover, different remote sensing systems produce different spectral responses for similar spectral components. Therefore, a more complex image acquisition model should be considered to take into account the variations in bandwidth and number of spectral channels in the case of remote sensing images. In this article, we propose an algorithm for spectral-spatial multi-sensor remote sensing image super-resolution. We apply a joint spectral-spatial image acquisition model typical for remote sensing systems and investigate the super-resolution algorithm stemming from this model and the MAP estimation approach with B-TV regularization. We also propose a simple way to adapt B-TV regularization to the case of multiple spectral channels. Our experimental results confirm that the proposed method achieves both spectral and spatial super-resolution, enhancing both resolutions of the output image in comparison with the input images.
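For reference, the standard bilateral total variation (B-TV) regularization gradient used in MAP super-resolution can be written as grad = Σ_{l,m} α^{|l|+|m|} (I − S_{−l,−m}) sign(x − S_{l,m} x), where S shifts the image by (l, m). A minimal sketch follows, applying the term channel-wise as one simple way to handle multiple spectral channels; the paper's specific multi-channel adaptation is not reproduced here:

```python
import numpy as np

def btv_gradient(x, p=2, alpha=0.6):
    """Bilateral TV regularization gradient, accumulated per spectral
    channel. x: (h, w, c) array; p: shift window radius; alpha: spatial
    decay weight (values here are common defaults, assumed for the sketch)."""
    grad = np.zeros_like(x)
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(x, l, axis=0), m, axis=1)
            s = np.sign(x - shifted)
            # Apply (I - S_{-l,-m}) to the sign image and weight by alpha.
            grad += alpha ** (abs(l) + abs(m)) * (
                s - np.roll(np.roll(s, -l, axis=0), -m, axis=1))
    return grad
```

In a MAP iteration this term would be added to the data-fit gradient, penalizing intensity differences between each pixel and its shifted neighbours while preserving sharp edges.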
The paper presents a modification of the method of parametric estimation of atmospheric distortion in the MODTRAN model, together with experimental research on the method. The experiments showed that the base method does not take into account the physical meaning of the atmospheric spherical albedo parameter or the presence of outliers in the source data, which leads to a decrease in overall atmospheric correction accuracy. The proposed modification improves the accuracy of atmospheric correction in comparison with the base method. It consists in adding a nonnegativity constraint on the estimated value of the atmospheric spherical albedo and a preprocessing stage aimed at adjusting the source data.
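The two modifications can be illustrated on a generic parameter fit; the linear model, parameter names, and MAD-based screening below are assumptions of this sketch, standing in for the MODTRAN parameter estimation actually used:

```python
import numpy as np

def estimate_with_constraints(x, y, mad_k=3.0):
    """Illustration of the two modifications: (1) a preprocessing stage
    that screens out outliers by robust residual thresholding, and
    (2) a nonnegativity constraint projecting the spherical-albedo-like
    parameter to >= 0. The linear fit is a stand-in, not the real model."""
    # Preprocessing: discard outliers via robust (MAD-based) screening.
    coef0 = np.polyfit(x, y, 1)
    resid = y - np.polyval(coef0, x)
    med = np.median(resid)
    mad = np.median(np.abs(resid - med)) + 1e-12
    keep = np.abs(resid - med) < mad_k * 1.4826 * mad
    # Refit on the cleaned data.
    slope, intercept = np.polyfit(x[keep], y[keep], 1)
    # Nonnegativity constraint on the albedo-like parameter.
    slope = max(slope, 0.0)
    return slope, intercept
```

Without the screening stage, a single outlier pulls the estimate; without the projection, a physically meaningless negative albedo can be returned, which is exactly the failure mode the modification targets.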
The paper presents a method of atmospheric correction of remote sensing hyperspectral images. The method is based on an approximate solution of the MODTRAN transmittance equation using simultaneous analysis of a remote sensing hyperspectral image and an "ideal" hyperspectral image that is free from atmospheric distortions. Experimental results show that the proposed method is applicable for performing atmospheric correction.
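As an illustration of the kind of inversion involved, a simplified MODTRAN-style transmittance equation L = L_path + A·ρ / (1 − S·ρ) can be solved per pixel for the surface reflectance ρ as ρ = (L − L_path) / (A + S·(L − L_path)). The parameter names (path radiance L_path, gain A, spherical albedo S) and the simplified equation form are assumptions of this sketch; in the paper these atmospheric parameters are estimated with the help of the distortion-free reference image:

```python
import numpy as np

def surface_reflectance(radiance, l_path, gain, s_albedo):
    """Invert the simplified transmittance equation
    L = l_path + gain * rho / (1 - s_albedo * rho) for rho, per pixel."""
    d = radiance - l_path
    rho = d / (gain + s_albedo * d)
    return np.clip(rho, 0.0, 1.0)  # reflectance is physically bounded
```

Applying this band by band with per-band atmospheric parameters yields an atmospherically corrected reflectance cube from the at-sensor radiance.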