The regular monitoring of agricultural areas is extremely important for mitigating food insecurity risks and for planning government interventions. In the literature, several deep learning algorithms have recently been proposed to perform land cover/land use classification using multispectral optical images. However, most of the considered deep learning models, such as standard Convolutional Neural Networks (CNNs), rely on mono-temporal images, focusing on spectral and textural features while discarding the temporal component, which is crucial for accurate crop type mapping. In this work, we exploit a Long Short-Term Memory (LSTM) deep learning classification architecture to characterize agricultural area dynamics using the multitemporal multispectral information provided by the Sentinel-2 satellite sensor. Instead of fine-tuning a pre-trained network, the proposed architecture is trained from scratch so that it is tailored to the specific properties of long time series of Sentinel-2 multispectral images. To address the lack of a labeled training database, existing crop type maps available at the country level are used to generate a large set of weak reference data. First, the proposed method automatically extracts a large training dataset from existing crop type maps by detecting those samples having the highest probability of being correctly classified. Then, the extracted weakly labeled samples are used to train the deep LSTM architecture on a time series of Sentinel-2 images acquired over an entire year. The preliminary results demonstrate the effectiveness of the proposed approach, which is promising for large-scale application.
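The weak-label extraction step described above keeps only the map samples most likely to be correctly labeled. A minimal sketch of such confidence-based selection, assuming per-pixel class posteriors are available alongside the existing crop type map (the function name, array shapes, and the 0.9 threshold are illustrative, not from the paper):

```python
import numpy as np

def extract_weak_labels(posteriors, crop_map, threshold=0.95):
    """Select pixels whose map label is supported by a high class posterior.

    posteriors: (H, W, C) per-pixel class probabilities (illustrative input)
    crop_map:   (H, W) integer class labels from the existing crop type map
    Returns flat indices of the pixels kept as weak training samples.
    """
    h, w, _ = posteriors.shape
    # Probability that each pixel's own map label receives
    p_label = np.take_along_axis(
        posteriors.reshape(h * w, -1),
        crop_map.reshape(h * w, 1),
        axis=1,
    ).ravel()
    return np.flatnonzero(p_label >= threshold)

# Toy example: 2x2 map, 3 classes
post = np.array([[[0.98, 0.01, 0.01], [0.4, 0.3, 0.3]],
                 [[0.1, 0.85, 0.05], [0.0, 0.02, 0.98]]])
labels = np.array([[0, 0], [1, 2]])
idx = extract_weak_labels(post, labels, threshold=0.9)
print(idx)  # -> [0 3]: only the two high-confidence pixels survive
```

The selected pixels' Sentinel-2 time series would then form the weakly labeled training set for the LSTM.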
Sentinel-2 and Landsat satellites provide a huge amount of optical images with high spatial and temporal resolution. These dense Time Series (TS) of multispectral data are used for a wide range of applications, enabling multi-temporal monitoring of physical phenomena. Nevertheless, one of the main challenges in their usage is the missing information caused by cloud occlusions. In the literature, many cloud restoration approaches have been proposed. However, to properly recover the missing information, sophisticated and usually computationally intensive techniques must be used. In this work, we consider the deep Long Short-Term Memory (LSTM) classifier, which is very promising for the classification of dense time series of images, and investigate its robustness to cloud presence without any cloud restoration. Indeed, this classifier has proven able to handle the presence of clouds; however, no work in the literature extensively analyzes the robustness of the LSTM to clouds. In this study, we aim to quantitatively assess the capability of the network to handle different amounts of cloud coverage under different lengths of the TS. In greater detail, we analyze the effect of cloud coverage on the classification maps produced by the LSTM by considering: (i) simulated cloud values, (ii) detected clouds represented by zero values, and (iii) images restored by simple linear temporal gap filling (i.e., the average of the spectral values acquired in the previous and following cloud-free images of the TS). The obtained results demonstrate that the capability of the LSTM to handle cloud cover depends on: (i) the length of the TS, (ii) the position of the cloudy images in the TS, and (iii) the cloud representation values.
For example, when clouds are restored with very simple and fast linear temporal gap filling, the map agreement between the cloud-free and the cloudy map is 96%, even when 40% of the images in the TS are covered with clouds, regardless of their position.
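The linear temporal gap filling described above (averaging the nearest previous and following cloud-free acquisitions) can be sketched per pixel as follows; the function name and the handling of clouds at the edges of the series are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def linear_gap_fill(ts, cloudy):
    """Replace cloudy observations with the mean of the nearest
    cloud-free values before and after them in the time series.

    ts:     (T,) spectral values of one pixel over the time series
    cloudy: (T,) boolean mask, True where the acquisition is cloud-covered
    """
    filled = ts.astype(float).copy()
    clear = np.flatnonzero(~cloudy)
    for t in np.flatnonzero(cloudy):
        prev = clear[clear < t]
        nxt = clear[clear > t]
        if prev.size and nxt.size:
            filled[t] = 0.5 * (ts[prev[-1]] + ts[nxt[0]])
        elif prev.size:          # cloud at the end of the series
            filled[t] = ts[prev[-1]]
        elif nxt.size:           # cloud at the start of the series
            filled[t] = ts[nxt[0]]
    return filled

ts = np.array([0.2, 0.0, 0.4, 0.0, 0.0, 0.8])
cloudy = np.array([False, True, False, True, True, False])
print(linear_gap_fill(ts, cloudy))  # cloudy steps replaced by neighbor averages
```

In practice this would be applied band-by-band to every pixel of the TS before classification.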
The accurate monitoring and understanding of glacier dynamics are of high relevance for climate science and water-resources management. Glacier parameters are typically estimated by data assimilation methods, which inject field measurements into numerical simulations with the aim of improving the physical model estimates. However, these methods are often unable to capture and model the complexity of the estimation problem. To solve this problem, this paper proposes a method that integrates remote sensing (RS) data, in-situ observations and a physics-based model to accurately estimate the Glacier Mass Balance (GMB). The RS data are used to represent the physical properties of the glaciers by characterizing their topography and spectral properties. Instead of assimilating the observations into the model, the in-situ measurements are used to perform a data-driven correction of the GMB estimates derived from the physics-based simulations in the informative RS feature space. The method is applied to the Alpine MUltiscale Numerical Distributed Simulation ENgine (AMUNDSEN) hydro-climatological model. In the experimental analysis, the multispectral images used to define the feature space are high-resolution Sentinel-2 images. The method is validated on three glaciers in Tyrol (the Hintereis, Kasselwand and Varnagt glaciers) in 2015 and 2016. The obtained results show the effectiveness of the method in improving the GMB estimates.
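The idea of a data-driven correction in the RS feature space can be illustrated with the simplest possible instance: fit the residual between in-situ and simulated GMB as a function of the RS features, then apply that fitted correction to the model output. The linear form, function names, and toy data below are purely illustrative assumptions; the paper does not specify the regression model:

```python
import numpy as np

def fit_correction(rs_features, gmb_model, gmb_insitu):
    """Fit a linear map from RS features to the model residual.

    rs_features: (N, F) remote-sensing features at in-situ locations
    gmb_model:   (N,) GMB from the physics-based simulation
    gmb_insitu:  (N,) GMB measured in the field
    """
    residual = gmb_insitu - gmb_model
    X = np.hstack([rs_features, np.ones((rs_features.shape[0], 1))])  # bias term
    coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return coef

def correct(rs_features, gmb_model, coef):
    """Apply the fitted residual correction to new model estimates."""
    X = np.hstack([rs_features, np.ones((rs_features.shape[0], 1))])
    return gmb_model + X @ coef

# Toy data with a known linear model bias
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
model_gmb = rng.normal(size=30)
insitu_gmb = model_gmb + X @ np.array([0.5, -1.0]) + 0.2
coef = fit_correction(X, model_gmb, insitu_gmb)
corrected = correct(X, model_gmb, coef)
```

With a feature-dependent bias of this kind, the corrected estimates match the in-situ values, which is the behavior the data-driven correction targets.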
Tree species information is crucial for accurate forest parameter estimation. Small-footprint, high-density, multi-return Light Detection and Ranging (LiDAR) data contain a large amount of structural detail for modelling, and thus distinguishing, individual tree species. To fully exploit the potential of these data, we propose a data-driven tree species classification approach based on a volumetric analysis of single-tree point clouds that extracts features able to characterize both the internal and the external crown structure. The method captures the spatial distribution of the LiDAR points within the crown by generating a feature vector representing the three-dimensional (3D) crown information. Each element in the feature vector uniquely corresponds to an Elementary Quantization Volume (EQV) of the crown. Three strategies have been defined to generate unique EQVs that model different representations of the crown components. The classification is performed with a Support Vector Machine (C-SVM) classifier using the histogram intersection kernel, which gives maximum preference to the key features in the high-dimensional feature space. All the experiments were performed on a set of 200 trees belonging to Norway Spruce, European Larch, Swiss Pine, and Silver Fir (i.e., 50 trees per species). The classifier is trained using 120 trees and tested on an independent set of 80 trees. The proposed method outperforms the state-of-the-art method used for comparison.
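The two core ingredients above, a per-EQV point-count feature vector and the histogram intersection kernel K(a, b) = Σᵢ min(aᵢ, bᵢ), can be sketched as follows. A regular voxel grid is only one of several possible EQV strategies, and the grid size, normalization, and function names are illustrative assumptions:

```python
import numpy as np

def eqv_feature(points, bounds, grid=(4, 4, 4)):
    """Count LiDAR returns per Elementary Quantization Volume (voxel).

    points: (N, 3) crown points (x, y, z)
    bounds: (min_xyz, max_xyz) of the crown bounding box
    Returns a flat, L1-normalized feature vector (one bin per EQV).
    """
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    # Map each point to its voxel index along every axis
    idx = ((points - lo) / (hi - lo) * grid).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)
    flat = np.ravel_multi_index(idx.T, grid)
    hist = np.bincount(flat, minlength=int(np.prod(grid))).astype(float)
    return hist / hist.sum()

def hist_intersection(a, b):
    """Histogram intersection kernel: K(a, b) = sum_i min(a_i, b_i)."""
    return np.minimum(a, b).sum()

# Toy crown: random points in a 10 m cube
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(500, 3))
f = eqv_feature(pts, ((0, 0, 0), (10, 10, 10)))
```

For training, one could precompute the Gram matrix with this kernel and pass it to an SVM implementation that accepts precomputed kernels (e.g., scikit-learn's `SVC(kernel="precomputed")`).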
Conference Committee Involvement (3)
Image and Signal Processing for Remote Sensing XXVII
13 September 2021 | Madrid, Spain
Image and Signal Processing for Remote Sensing XXVI
21 September 2020 | Online Only, United Kingdom
Image and Signal Processing for Remote Sensing XXV