Fog dramatically reduces the visibility of a scene, critically affecting features such as object illumination, contrast, and contours. This loss of visibility degrades the performance of Computer Vision algorithms such as pattern recognition and segmentation, many of which are central to decision-making with the rise of autonomous vehicles. Many dehazing methods have been proposed. However, to the best of our knowledge, all currently used metrics either compare the defogged image to its ground truth, usually the same scene on a non-foggy day, or estimate physical parameters of the scene. This hinders progress in the field: obtaining proper ground-truth images is not always possible, and estimating physical parameters is costly and time-consuming because they depend heavily on scene conditions. This work tackles this issue by proposing a real-time defogging network that takes only an RGB image of the fogged scene as input and performs the defogging, together with a contour-based metric for Single Image Defogging evaluation that works even when ground truth is not available, which is the most common situation. The proposed metric requires only the original hazy image and the image after the defogging procedure. We trained our network with a novel two-stage pipeline on the DENSE dataset and evaluated our method and metric against currently used metrics and other defogging techniques on the NTIRE 2018 defogging challenge to prove their effectiveness.
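To make the idea of a ground-truth-free, contour-based evaluation concrete, the sketch below compares edge maps of the hazy input and the defogged output and reports the fraction of contours recovered by defogging. This is a hypothetical illustration only, not the metric proposed in this work: the function names, the simple Sobel-style edge extractor, and the threshold value are all assumptions.

```python
import numpy as np

def sobel_edges(img, thresh=0.2):
    """Binary edge map from a normalized Sobel-style gradient magnitude.

    `img` is a 2-D grayscale array; the central-difference gradients
    stand in for a full edge detector (illustrative choice).
    """
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()                     # normalize to [0, 1]
    return mag > thresh

def contour_gain(hazy, defogged):
    """Fraction of defogged-image edge pixels absent from the hazy image.

    Higher values suggest the defogging restored contours that fog had
    suppressed. Only the hazy input and the defogged output are needed;
    no ground-truth clear image is required.
    """
    e_hazy = sobel_edges(hazy)
    e_defog = sobel_edges(defogged)
    if e_defog.sum() == 0:
        return 0.0
    return float((e_defog & ~e_hazy).sum() / e_defog.sum())
```

For instance, a uniformly gray "fogged" image has no detectable contours, so any edge present in its defogged counterpart counts as recovered, while comparing an image against itself yields zero gain.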