Helmets are required on construction sites and in production workshops to prevent head injuries caused by falling objects or collisions. However, weak safety supervision on some sites and in some workshops, together with workers' tendency to take chances, makes it difficult to ensure that every employee wears a helmet, leaving a hidden safety hazard. Automatically detecting whether employees are wearing helmets is therefore of great practical significance. To address the low accuracy, high model complexity, and heavy computation of many object detection algorithms applied to helmet detection, this paper proposes DE-YOLOv8, a high-precision lightweight helmet detection algorithm based on YOLOv8. Replacing the residual structure of the C2f module in the original backbone with deformable convolution (DCNv3) alleviates the limitations of fixed-grid convolutional sampling, reduces the number of model parameters, strengthens the model's ability to represent targets, and improves detection of helmets and related objects. In addition, an efficient multi-scale attention (EMA) mechanism is introduced; it lowers the computational resources required without reducing the channel dimension and sharpens pixel-level attention over feature maps, so that the model focuses more on helmet targets. The model is validated on a publicly available dataset: its detection accuracy (mAP) improves by 2.1% over the original YOLOv8n, while model complexity and computation are also reduced. Compared with other state-of-the-art object detection algorithms under the same conditions, the proposed DE-YOLOv8 performs better.
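The key idea behind the DCNv3 substitution described above is that each kernel tap samples the input at a learned fractional offset rather than on the fixed convolution grid. The following is a minimal single-channel NumPy sketch of that sampling idea only; it is illustrative, not the paper's DCNv3 implementation (which is multi-group, modulated, and GPU-optimized), and the function names are ours.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly sample a 2-D array at real-valued (y, x); zero outside."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                val += (1 - abs(y - yy)) * (1 - abs(x - xx)) * img[yy, xx]
    return val

def deform_conv2d(img, weight, offsets):
    """Single-channel deformable convolution, valid padding.

    img:     (H, W) input feature map
    weight:  (k, k) kernel
    offsets: (H_out, W_out, k, k, 2) learned (dy, dx) per sampling point;
             all-zero offsets reduce this to an ordinary convolution.
    """
    H, W = img.shape
    k = weight.shape[0]
    H_out, W_out = H - k + 1, W - k + 1
    out = np.zeros((H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            acc = 0.0
            for a in range(k):
                for b in range(k):
                    dy, dx = offsets[i, j, a, b]
                    # Sample at the grid point shifted by the learned offset.
                    acc += weight[a, b] * bilinear(img, i + a + dy, j + b + dx)
            out[i, j] = acc
    return out
```

In the actual network the offsets are themselves predicted by a small convolution over the same feature map, which is what lets the receptive field adapt to object shape (e.g., the curved outline of a helmet).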