Enterprises are deeply concerned about disasters, particularly accidents involving personnel, so there is ongoing research dedicated to protecting the safety of employees in production environments. In recent years, with the continuous development of machine vision technology, industry and academia have worked to apply machine vision methods to safety hazards that workers face on site. However, such applications are still studied for specific sectors and lack generalization, and the algorithms commonly suffer from high computational complexity and demanding hardware requirements. This paper adopts the lightweight YOLOv5 as the baseline algorithm and enhances its accuracy using a receptive field attention mechanism. SENet is introduced to improve the generalization of object detection, and an IDetect head is employed to increase the efficiency of the detection head. Ultimately, the algorithm's precision is improved by 3.7% and mAP50 by 3.0%. The algorithm can be deployed on Internet of Things (IoT) machine vision terminals, reducing deployment costs and improving monitoring efficiency.
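As a rough illustration of the channel-attention idea the abstract refers to (SENet), the following PyTorch sketch shows a generic squeeze-and-excitation block. The reduction ratio, the class name, and the placement within the YOLOv5 backbone are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (a minimal sketch).

    Channel descriptors are pooled globally, passed through a small
    bottleneck MLP, and used to rescale each feature channel.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),  # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # rescale feature maps channel-wise
```

In a YOLOv5-style network such a block would typically be inserted after a convolutional stage so that informative channels are emphasized before detection.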
Protecting the personal safety of on-site workers is an important task in enterprise production. To enable widespread deployment on edge computing terminals, a lightweight object detection algorithm based on YOLOv5 is used for worker safety detection. To keep the model lightweight, PConv is used as the convolutional layer to reduce computational complexity, while Bi-Level Routing Attention is incorporated to improve model accuracy. Furthermore, four detection heads are employed to strengthen object recognition. In experiments, precision improves by 3.4% over the baseline model, the parameter count is reduced by 1.91 MB, and the model size is decreased by 3.2 MB.
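The partial convolution (PConv) mentioned above can be sketched as follows. This is a minimal PyTorch rendering of the general idea, convolving only a fraction of the channels and passing the rest through untouched; the split ratio and kernel size are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution (a minimal sketch of the FasterNet-style idea).

    A regular 3x3 convolution is applied to only the first 1/n_div of the
    channels; the remaining channels are forwarded unchanged, which cuts
    FLOPs and memory access compared with a full convolution.
    """
    def __init__(self, channels: int, n_div: int = 4, kernel_size: int = 3):
        super().__init__()
        self.dim_conv = channels // n_div              # channels that are convolved
        self.dim_untouched = channels - self.dim_conv  # channels passed through
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_untouched], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)   # convolved part + identity part
```

Because only a quarter of the channels (with n_div = 4) pass through the convolution, the layer's cost drops roughly in proportion, which is what makes it attractive for edge deployment.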
With the rapid development of artificial intelligence, demand for Graphics Processing Unit (GPU) resources is growing rapidly. To improve the efficiency and utilization of GPU resources, virtualization technology has been widely adopted for GPUs. This article reviews the evolution from GPU virtualization to resource pooling, covering device emulation, GPU pass-through, hardware-assisted virtualization, GPU full virtualization, remote GPU sharing, and GPU resource pools, and analyzes the advantages, disadvantages, and challenges of each stage. It also discusses the difficulties of building a GPU resource pool and their solutions, together with the construction steps and framework, providing a reference for the research and application of GPU resource pooling.
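To make the notion of a GPU resource pool concrete, the sketch below models the minimal bookkeeping such a pool performs: tracking free memory on each physical GPU and handing out fractional slices on request. All class and method names here are hypothetical illustrations and are not drawn from the article.

```python
from dataclasses import dataclass, field

@dataclass
class GpuDevice:
    """One physical GPU tracked by the pool."""
    name: str
    total_mem_gb: float
    free_mem_gb: float = field(init=False)

    def __post_init__(self):
        self.free_mem_gb = self.total_mem_gb

class GpuPool:
    """First-fit allocator handing out fractional GPU memory slices."""
    def __init__(self, devices):
        self.devices = devices

    def allocate(self, mem_gb: float):
        """Return the first device with enough free memory, or None."""
        for dev in self.devices:
            if dev.free_mem_gb >= mem_gb:
                dev.free_mem_gb -= mem_gb
                return dev
        return None

pool = GpuPool([GpuDevice("gpu-0", 24.0), GpuDevice("gpu-1", 24.0)])
slice_ = pool.allocate(8.0)  # e.g. request an 8 GB virtual GPU slice
print(slice_.name if slice_ else "no capacity")
```

A production pool would add scheduling policies, isolation, and remote attachment, but the core abstraction remains mapping tenant requests onto shares of physical devices.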