Composite structures are susceptible to internal defects and damage, such as delamination and voids, which render visual inspection techniques ineffective. Owing to its non-contact, large-area inspection capability1, active infrared thermography (AIT) is gaining popularity for identifying, localizing, and evaluating sub-surface defects in composite structures. However, defect signatures in thermal images are not always obvious, and their interpretation varies among human inspectors, leading to inconsistent outcomes. It is therefore highly desirable to develop computerized methods that analyze the results automatically and consistently. In this work, convolutional neural networks (CNNs) and computer vision techniques were employed to implement two CNN-based models for detecting structural defects in samples made of composite materials. The aim is to integrate such deep learning (DL) models to enable automatic interpretation of thermal images, which requires object detection accurate enough to assist human inspectors. The recent success of DL in computer vision tasks such as face recognition motivates us to apply DL to boost the performance of thermal imaging inspections. DL methods were recently evaluated for defect detection in AIT of carbon fiber reinforced plastic (CFRP) composites with handmade defects2; the input to that framework was thermal images acquired during the cooling-down process. In our work, we apply similar concepts to detect and classify void and delamination defects in composites so as to reduce reporting errors and improve consistency.