Sentinel-2 and Landsat satellites provide a huge amount of optical images with high spatial and temporal resolution. These dense Time Series (TS) of multispectral data are used for a wide range of applications enabling multi-temporal monitoring of physical phenomena. Nevertheless, one of the main challenges in their usage is the missing information caused by cloud occlusions. Many cloud restoration approaches have been proposed in the literature. However, properly recovering the missing information usually requires sophisticated and computationally intensive techniques. In this work, we consider the deep Long Short-Term Memory (LSTM) classifier, which is very promising for the classification of dense time series of images, and investigate its robustness to the presence of clouds without any cloud restoration. Indeed, this classifier has proven able to handle the presence of clouds. However, no work in the literature extensively analyzes the robustness of the LSTM to clouds. In this study, we aim to quantitatively assess the capability of the network to handle different amounts of cloud coverage under different lengths of the TS. In greater detail, we analyze the effect of the cloud coverage on the classification maps produced by the LSTM by considering: (i) simulated cloud values, (ii) detected clouds represented by zero values, and (iii) images restored by simple linear temporal gap filling (i.e., the average of the spectral values acquired in the previous and following cloud-free images in the TS). The obtained results demonstrate that the capability of the LSTM to handle the cloud cover depends on: (i) the length of the TS, (ii) the position of the cloudy images in the TS, and (iii) the cloud representation values.
For example, when clouds are restored with very simple and fast linear temporal gap filling, the agreement between the cloud-free and the cloudy maps is 96% even when 40% of the images in the TS are covered by clouds, regardless of their position.
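The linear temporal gap filling used as baseline (iii) can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation: the function name, the (dates, bands) array layout, and the boundary handling (copying the nearest cloud-free acquisition when a cloudy date has no neighbor on one side) are assumptions.

```python
import numpy as np

def linear_gap_fill(ts, cloud_mask):
    """Replace cloudy acquisitions with the average of the nearest
    cloud-free acquisitions before and after them in the time series.

    ts: (T, B) array of spectral values (T dates, B bands); assumed layout.
    cloud_mask: (T,) boolean array, True where the acquisition is cloudy.
    """
    ts = np.asarray(ts, dtype=float).copy()
    clear = np.flatnonzero(~np.asarray(cloud_mask))
    for t in np.flatnonzero(cloud_mask):
        prev = clear[clear < t]   # cloud-free dates before t
        nxt = clear[clear > t]    # cloud-free dates after t
        if prev.size and nxt.size:
            # average of previous and following cloud-free images
            ts[t] = 0.5 * (ts[prev[-1]] + ts[nxt[0]])
        elif prev.size:
            ts[t] = ts[prev[-1]]  # assumed: trailing clouds copy last clear date
        elif nxt.size:
            ts[t] = ts[nxt[0]]    # assumed: leading clouds copy first clear date
    return ts

# Example: the cloudy middle date becomes the mean of its neighbors.
series = np.array([[0.10, 0.20],
                   [0.00, 0.00],   # cloudy acquisition
                   [0.30, 0.40]])
mask = np.array([False, True, False])
filled = linear_gap_fill(series, mask)
# filled[1] -> [0.20, 0.30]
```

The per-date averaging keeps the method fast enough to run over full Sentinel-2 time series, which is why it is used as a simple restoration baseline in the comparison above.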
The accurate monitoring and understanding of glacier dynamics are of high relevance for climate science and water-resources management. Glacier parameters are typically estimated by data assimilation methods, which inject field measurements into numerical simulations with the aim of improving the physical model estimates. However, these methods are often unable to capture and model the complexity of the estimation problem. To address this issue, this paper proposes a method that integrates remote sensing (RS) data, in-situ observations and a physically-based model to accurately estimate the Glacier Mass Balance (GMB). The RS data are used to represent the physical properties of the glaciers by characterizing their topography and spectral properties. Instead of assimilating the observations into the model, the in-situ measurements are used to perform a data-driven correction of the GMB estimates derived from the physically-based simulations in the informative RS feature space. The method is applied to the Alpine MUltiscale Numerical Distributed Simulation ENgine (AMUNDSEN) hydro-climatological model. In the experimental analysis, the multispectral images used to define the feature space are high-resolution Sentinel-2 images. The method is validated on three glaciers in Tyrol (the Hintereis, Kasselwand and Varnagt glaciers) for 2015 and 2016. The obtained results show the effectiveness of the method in improving the GMB estimates.