Sentinel-2 and Landsat satellites provide huge amounts of optical images with high spatial and temporal resolution. These dense Time Series (TS) of multispectral data are used in a wide range of applications, enabling multi-temporal monitoring of physical phenomena. Nevertheless, one of the main challenges in their use is the missing information caused by cloud occlusions. Many cloud restoration approaches have been proposed in the literature; however, properly recovering the missing information usually requires sophisticated and computationally intensive techniques. In this work, we consider the deep Long Short-Term Memory (LSTM) classifier, which is very promising for the classification of dense time series of images, and investigate its robustness to the presence of clouds without any cloud restoration. Indeed, this classifier has proven able to handle the presence of clouds; however, no work in the literature extensively analyzes the robustness of the LSTM to clouds. In this study, we aim to quantitatively assess the capability of the network to handle different amounts of cloud coverage under different lengths of the TS. In greater detail, we analyze the effect of cloud coverage on the classification maps produced by the LSTM by considering: (i) simulated cloud values, (ii) detected clouds represented by zero values, and (iii) images restored by simple linear temporal gap filling (i.e., the average of the spectral values acquired in the previous and following cloud-free images of the TS). The obtained results demonstrate that the capability of the LSTM to handle cloud cover depends on: (i) the length of the TS, (ii) the position of the cloudy images in the TS, and (iii) the cloud representation values.
For example, when clouds are restored with very simple and fast linear temporal gap filling, the map agreement between the cloud-free and the cloudy maps is 96%, even when 40% of the images in the TS are covered by clouds, regardless of their position.
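The linear temporal gap filling mentioned above can be sketched as follows. This is a minimal per-pixel illustration, not the authors' implementation: the function name `linear_gap_fill` and the array layout (time steps by spectral bands, with a per-image cloud flag) are assumptions; the filling rule itself, i.e., averaging the nearest previous and following cloud-free acquisitions, is the one described in the text. How clouds at the very start or end of the TS are handled is not specified in the text, so the fallback to the single nearest cloud-free value below is also an assumption.

```python
import numpy as np

def linear_gap_fill(ts, cloud_mask):
    """Fill cloudy time steps with the average of the nearest previous
    and nearest following cloud-free acquisitions.

    ts:         float array of shape (T, B) -- one pixel's TS of B bands
    cloud_mask: bool array of shape (T,) -- True where the image is cloudy
    """
    filled = ts.astype(float).copy()
    clear = np.flatnonzero(~cloud_mask)          # indices of cloud-free images
    for t in np.flatnonzero(cloud_mask):
        prev = clear[clear < t]                  # cloud-free images before t
        nxt = clear[clear > t]                   # cloud-free images after t
        if prev.size and nxt.size:
            # average of previous and following cloud-free spectral values
            filled[t] = 0.5 * (ts[prev[-1]] + ts[nxt[0]])
        elif prev.size:
            # cloud at the end of the TS: copy the last cloud-free value (assumption)
            filled[t] = ts[prev[-1]]
        elif nxt.size:
            # cloud at the start of the TS: copy the first cloud-free value (assumption)
            filled[t] = ts[nxt[0]]
    return filled

# Toy example: a 3-step, 2-band TS with the middle image cloudy.
ts = np.array([[10.0, 20.0],
               [0.0, 0.0],     # cloudy observation (values to be replaced)
               [30.0, 40.0]])
mask = np.array([False, True, False])
print(linear_gap_fill(ts, mask)[1])  # → [20. 30.]
```

Because each gap is filled from at most two neighboring acquisitions, the method is very fast, which is the property the reported 96% map agreement trades on.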