KEYWORDS: Field programmable gate arrays, Interfaces, Human-machine interfaces, Design, Data conversion, Control systems, Data transmission, Data communications
Photoelectric platforms have important applications in many fields, and spatial orientation is a key capability when they are used in industrial and military areas. Because the encoder is a key component in the control loop of a photoelectric platform, different encoder principles lead to different application methods, and a variety of interface forms must be considered when designing such a platform. The traditional approach is to design new hardware for each new platform, which duplicates design effort across projects, and the design of multi-axis platforms with multiple interfaces is even more complicated. This article addresses the problem of communication between the control board and multiple types of encoder with little or no hardware modification. The hardware and protocol characteristics of three common encoder interfaces (EnDat, SSI, and UART) are studied, and an encoder interface control scheme suitable for multi-axis platforms is proposed and verified. The scheme takes an FPGA as its core, supports asynchronous communication with multiple devices at the same time, and offers good scalability. The communication capability of the scheme over all three interfaces is verified: the EnDat and SSI clock rates exceed 1 MHz, and the maximum UART baud rate is 921600 bps.
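To make the SSI protocol mentioned above concrete: the master generates a clock burst, the encoder shifts its absolute position out MSB-first on successive clock edges, and many absolute encoders transmit the value Gray-coded. The following Python sketch decodes such a frame; it is an illustration of the general SSI scheme, not the paper's FPGA implementation, and the function names are assumptions.

```python
def gray_to_binary(gray: int, bits: int) -> int:
    """Convert a Gray-coded value to natural binary (common for SSI encoders)."""
    binary = gray
    shift = 1
    while shift < bits:
        binary ^= binary >> shift  # fold higher bits down to undo Gray coding
        shift <<= 1
    return binary

def decode_ssi_frame(sampled_bits, gray_coded=True):
    """Assemble bits sampled MSB-first on successive SSI clock edges
    into a position word, optionally undoing Gray coding."""
    value = 0
    for bit in sampled_bits:
        value = (value << 1) | (bit & 1)
    return gray_to_binary(value, len(sampled_bits)) if gray_coded else value
```

In an FPGA this logic would be a shift register clocked by the SSI clock; the sketch mirrors that bit-serial behavior in software.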
This paper proposes an enhanced target detection algorithm based on background subtraction, designed for rapid identification of weak flash targets against complex backgrounds. The algorithm establishes a background model free of foreground elements, subtracts the current frame from this model, and iteratively updates the model, so that the overall contour of the moving object is captured more completely. The algorithm achieves fast detection, a low false-alarm rate, a significant reduction in ghosting and holes, and a superior moving-target detection effect.
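The background-subtraction idea described above can be sketched minimally as a running-average background model plus a difference threshold. This is a generic illustration, not the paper's exact model; the learning rate `alpha` and the threshold value are assumptions.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Iteratively blend the current frame into the background model."""
    return (1.0 - alpha) * background + alpha * frame

def detect_foreground(background, frame, threshold=25.0):
    """Mark pixels whose absolute difference from the background model
    exceeds the threshold as moving-target foreground."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff > threshold
```

A small `alpha` keeps slow scene changes in the background while letting brief flashes stand out in the difference image, which is the behavior the abstract relies on for weak flash targets.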
In a distributed optoelectronic system, when the optoelectronic reconnaissance devices cooperate to locate a target, the azimuth and elevation measurements of those devices are required, and these measurements usually contain errors. This paper proposes a collaborative positioning model for a distributed optoelectronic system with measurement errors and analyzes how the azimuth and elevation measurement errors affect the target positioning error. The influences of the baseline between the reconnaissance devices and of the included angle between the positioning lines on the target positioning error are also analyzed. The modeling analysis shows that the smaller the azimuth and elevation measurement errors are, the smaller the target positioning error is; the longer the baseline is, the smaller the target positioning error is; and the closer the included angle between the positioning lines is to π/4, the smaller the target positioning error is. This provides a basis for selecting angle-measuring sensors and for laying out the reconnaissance devices in a distributed optoelectronic system.
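The 2D geometry underlying this positioning model can be sketched as a cross-bearing fix: each device's azimuth defines a ray from its position, and the target estimate is the intersection of the two rays. The following Python sketch is illustrative only; the function name and the north-referenced, clockwise azimuth convention are assumptions, and elevation is omitted for brevity.

```python
import math

def cross_bearing_fix(p1, az1, p2, az2):
    """Locate a target from two stations' azimuth bearings (2D sketch).
    Azimuths are in radians, measured clockwise from north (+y axis)."""
    # Direction vectors: azimuth 0 points north (0, 1), pi/2 points east (1, 0).
    d1 = (math.sin(az1), math.cos(az1))
    d2 = (math.sin(az2), math.cos(az2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via Cramer's rule.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

Because small azimuth errors rotate each ray slightly, the displacement of the intersection grows as the rays become nearly parallel and shrinks with a longer baseline, which matches the trends reported in the abstract.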
Combining artificial neural networks with deep learning techniques, convolutional neural networks are characterized by local perception, adaptive feature extraction, and end-to-end application, and they have been used increasingly in image recognition and object detection in recent years. Traditional safety-helmet detection algorithms widely suffer from severe background interference, complex computation, high time complexity, and strongly fluctuating accuracy. This paper proposes a safety-helmet detection method based on a deep convolutional network: the acquired video surveillance data are first decoded into a series of YUV images; the detection area in each image is then determined, and the YUV component image in that area is converted to RGB image data; training and test sets are then selected from these data; finally, the constructed convolutional neural network model is applied to compute the final detection results.
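The YUV-to-RGB conversion step in the pipeline above follows a standard color-matrix formula. The sketch below uses the full-range BT.601 coefficients as an assumption, since the abstract does not state which matrix the authors use.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV (YCbCr) pixel to RGB using BT.601
    coefficients (an assumption; the paper does not state its matrix)."""
    c, d, e = float(y), u - 128.0, v - 128.0
    r = c + 1.402 * e
    g = c - 0.344136 * d - 0.714136 * e
    b = c + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))  # keep values in [0, 255]
    return clamp(r), clamp(g), clamp(b)
```

Applied per pixel over the detection area, this yields the RGB data that the convolutional network consumes.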