Standard deep-learning (DL) architectures do not fully exploit the spatial and spectral information in multi-spectral images, but often consider only one of the two components. Two-stream DL architectures split and process them separately; however, fusing the outputs of the two streams is a challenging task. 3D-CNNs process spatial and spectral information together at the cost of a large number of parameters. To overcome these limitations, we propose a novel DL data structure that re-organizes the spectral and spatial information in remote-sensing (RS) images and processes them together. Representing an RS image I as a data cube, we handle the spatial and spectral information by reducing the spectral bands from N to M, where M can be reduced down to one. The spectral information is projected into the spatial dimensions and re-organized into B 2-dimensional blocks. The proposed approach analyzes the spectral information of each block by using 2-dimensional convolutional kernels of appropriate size and stride. The output represents the relationships between the spectral bands of the input image and preserves the spatial relationships between its neighboring pixels. The spatial relationships are then analyzed by processing the output of the previous layer with standard 2D-CNNs. Experiments on images acquired by Sentinel-2 and Landsat-8 and the labels of the LUCAS database released in 2018 provide promising results.
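The band re-organization described above can be sketched as follows. This is a minimal NumPy illustration, not the actual network layer: `spectral_to_spatial` and `block_conv` are hypothetical helper names, and it assumes the number of bands N is a perfect square b².

```python
import numpy as np

def spectral_to_spatial(cube):
    """Project the N spectral bands of a (N, H, W) cube into b x b
    spatial blocks, assuming N = b * b, yielding a (H*b, W*b) image."""
    N, H, W = cube.shape
    b = int(round(np.sqrt(N)))
    assert b * b == N, "number of bands must be a perfect square"
    blocks = cube.reshape(b, b, H, W)  # band n -> block cell (n // b, n % b)
    # interleave so pixel (h, w) occupies rows h*b..h*b+b-1, cols w*b..w*b+b-1
    return blocks.transpose(2, 0, 3, 1).reshape(H * b, W * b)

def block_conv(img, kernel):
    """2D convolution with kernel size and stride equal to the block size b,
    so each output pixel aggregates one input pixel's full spectrum."""
    b = kernel.shape[0]
    H, W = img.shape[0] // b, img.shape[1] // b
    out = np.empty((H, W))
    for h in range(H):
        for w in range(W):
            out[h, w] = np.sum(img[h*b:(h+1)*b, w*b:(w+1)*b] * kernel)
    return out
```

With a kernel of ones, each output pixel is simply the sum of that pixel's bands; a learned kernel would instead weight the bands, which is what the convolutional layer in the proposed architecture does.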
Change detection (CD) benefits from the capability of deep-learning (DL) methods to exploit complex temporal behaviors in large amounts of data. Unsupervised DL CD methods are preferred since they do not require labeled data. Unsupervised CD methods typically use autoencoders (AEs) or convolutional AEs (CAEs). However, the features provided by the CAE hidden layers tend to degrade the geometrical information during the encoding. To mitigate this effect, we propose an unsupervised CD method that exploits a multilayer CAE trained with a hierarchical loss function. This loss function guarantees a better trade-off between noise reduction and preservation of geometrical details at each hidden layer of the CAE. In contrast to the standard CAE loss, the proposed loss function considers input/output specular pairs of multiple hidden layers. These layers are analyzed by considering encoder/decoder pairs that work at corresponding geometrical resolutions and show similar spatial-context information. Single-layer loss functions are defined by comparing the specular encoder and decoder pairs and are then aggregated to design a multilayer loss function. The proposed hierarchical loss function allows for layer-by-layer control of the training and improves the reconstruction quality of the hidden layers, which better preserves the geometrical details while reducing noise. The CD is performed by processing bi-temporal remote-sensing images with the CAE. A detail-preserving multi-scale CD process exploits the most informative features of the bi-temporal images to compute the change map. Preliminary experiments conducted on a pair of multitemporal Landsat-8 images acquired before and after the fire that occurred near Granada, Spain, on July 8th, 2015, provided promising results.
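The aggregation of single-layer losses into a multilayer loss can be sketched as below. This is an illustrative assumption, not the paper's exact formulation: `layer_loss` and `hierarchical_loss` are hypothetical names, the feature lists are assumed to be pre-ordered so that index i gives the specular encoder/decoder pair at the same resolution, and uniform weights are assumed.

```python
import numpy as np

def layer_loss(enc_feat, dec_feat):
    # single-layer loss: MSE between one specular encoder/decoder feature pair
    return np.mean((enc_feat - dec_feat) ** 2)

def hierarchical_loss(enc_feats, dec_feats, weights=None):
    # enc_feats[i] and dec_feats[i]: specular pair at the same resolution
    if weights is None:
        weights = [1.0 / len(enc_feats)] * len(enc_feats)  # uniform (assumption)
    return sum(w * layer_loss(e, d)
               for w, e, d in zip(weights, enc_feats, dec_feats))
```

Because each level contributes its own term, training can be monitored and balanced layer by layer, which is the control the hierarchical loss provides.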
Rapid identification of areas affected by changes is a challenging task in many remote-sensing applications. Sentinel-1 (S1) images provided by the European Space Agency (ESA) can be used to monitor such situations due to their high temporal and spatial resolution and insensitivity to weather conditions. Although a number of deep-learning-based methods have been proposed in the literature for change detection (CD) in multi-temporal SAR images, most of them require labeled training data. Collecting sufficient labeled multi-temporal data is not trivial, whereas S1 provides abundant unlabeled data. To this end, we propose a solution for CD in multi-temporal S1 images based on unsupervised training of deep neural networks (DNNs). Unlabeled single-date image patches are used to train a multilayer convolutional autoencoder (CAE) in an unsupervised fashion by minimizing the reconstruction error between the reconstructed output and the input. The trained multilayer CAE is used to extract multi-scale features from both the pre- and post-change images, which are then analyzed for CD. The multi-scale features are fused according to a detail-preserving scale-driven approach that allows us to generate change maps that preserve details. Experiments conducted on an S1 dataset from Brumadinho, Brazil confirm the effectiveness of the proposed method.
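Unsupervised training by reconstruction-error minimization can be illustrated with a toy linear autoencoder. This is only a stand-in for the multilayer CAE; `train_linear_ae`, the patch dimensions, and the learning-rate/epoch settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_ae(patches, k=4, lr=0.01, epochs=500):
    """Train a toy linear autoencoder on unlabeled patches by gradient
    descent on the mean squared reconstruction error."""
    num, d = patches.shape
    W = rng.normal(scale=0.1, size=(d, k))   # encoder weights
    V = rng.normal(scale=0.1, size=(k, d))   # decoder weights
    for _ in range(epochs):
        Z = patches @ W                      # encode
        E = Z @ V - patches                  # reconstruction error
        gV = Z.T @ E / num                   # gradient w.r.t. decoder
        gW = patches.T @ (E @ V.T) / num     # gradient w.r.t. encoder
        W -= lr * gW
        V -= lr * gV
    return W, V
```

After training, the encoder weights play the role of the CAE feature extractor: the pre- and post-change patches would be encoded with the same weights and their features compared to detect changes.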