In the last decade, the number of forest fire events has grown due to the rapid change of the Earth's climate. Hence, more automated firefighting actions have become necessary. Deep learning has produced interesting results for pixel-level smoke detection, but few systems have been proposed for fire flame detection. In this paper, a semantic segmentation approach using the Deeplabv3+ architecture for wildfire detection is proposed. The network uses the Deeplabv3 architecture as an encoder together with Atrous Spatial Pyramid Pooling (ASPP), which encodes multi-scale information and boosts the network's performance. In fact, the ASPP block concatenates a stack of parallel atrous convolutions with graduated rates, producing a multi-scale feature map that is further resized. The tests were performed on a public dataset, the Corsican fire dataset, which contains 1135 RGB images and 640 infrared pictures. The experiments were conducted on two customized datasets: one using the whole dataset with single-channel information (red and infrared), and another using only the RGB image set, whose information is coded in 3 channels. The dataset is unbalanced, which can induce high precision with very low sensitivity. Therefore, Dice similarity and Tversky loss functions combined with cross-entropy were adopted. The capability of Deeplabv3+ was tested with two different backbones, ResNet-18 and ResNet-50, and compared to a very simple Convolutional Neural Network (CNN) architecture with dilated convolutions. Four metrics were used to evaluate segmentation capability: accuracy, mean Intersection over Union (IoU), mean Boundary F1 (BF) score, and mean Dice coefficient. The experimental results demonstrate that Deeplabv3+ with a ResNet-50 backbone and a Dice- or Tversky-type loss function can accurately detect fire flames; these results are very encouraging for further study of deep learning approaches.
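
The Tversky loss mentioned above counters class imbalance by weighting false negatives and false positives separately; with equal weights it reduces to the Dice loss. A minimal NumPy sketch for a binary mask (an illustration of the general formula, not the paper's exact implementation; the function name and parameters are for exposition):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
    """Soft Tversky loss for binary segmentation.

    pred:   predicted foreground probabilities, any shape
    target: ground-truth binary mask, same shape
    alpha weights false negatives, beta weights false positives;
    alpha = beta = 0.5 reduces the index to the Dice coefficient.
    """
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    tp = np.sum(pred * target)         # soft true positives
    fn = np.sum((1 - pred) * target)   # soft false negatives
    fp = np.sum(pred * (1 - target))   # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return 1.0 - tversky
```

Raising `alpha` above `beta` penalizes missed fire pixels more heavily, which is the usual remedy when the foreground class (flame) is rare and sensitivity matters more than precision.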
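
The atrous (dilated) convolutions that ASPP stacks in parallel can be illustrated with a small NumPy sketch: with rate r, the kernel taps are spaced r pixels apart, enlarging the receptive field without adding weights. This is a didactic sketch under simplified assumptions (single channel, 3x3 kernel, zero padding), not the Deeplabv3+ implementation, and the function names are hypothetical:

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """3x3 atrous convolution with 'same' zero padding.

    With dilation rate r, the 3x3 taps cover a (2r+1) x (2r+1)
    window, so larger rates see wider context at no extra cost.
    """
    k = kernel.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x.astype(float), pad)
    h, w = x.shape
    out = np.zeros((h, w))
    for i in range(k):
        for j in range(k):
            # Each tap reads the input shifted by (i*rate, j*rate).
            out += kernel[i, j] * xp[i * rate:i * rate + h,
                                     j * rate:j * rate + w]
    return out

def aspp_sketch(x, kernel, rates=(6, 12, 18)):
    """Minimal ASPP idea: parallel atrous convolutions with
    graduated rates, stacked into a multi-scale feature map."""
    return np.stack([atrous_conv2d(x, kernel, r) for r in rates])
```

In the real ASPP block the parallel branches use learned kernels plus an image-level pooling branch, and the concatenated maps are fused by a 1x1 convolution before being resized.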