Several measuring instruments, including CCD photography stations and velocity-coordinate measuring equipment, are arranged at various distances along a ballistic range. To ensure consistency and accuracy, the coordinates of each instrument must be unified into the ballistic-range coordinate system. This paper presents a new method that uses a high-precision total station for measurement and positioning, establishing a spatial benchmark system for the ballistic range. The three-dimensional coordinates of the datum points of all equipment were obtained with the total station and the measuring equipment. The transformation matrix, derived from the coordinates of three non-collinear points in the calibration system, is used to convert between the two coordinate systems. The 3D coordinate transformation is implemented with the Rodrigues matrix, yielding direct calculation formulas for the seven transformation parameters. A calibration bracket was used to calibrate all measuring equipment in the ballistic range, ultimately establishing a 1,000 m benchmark system for light weapons. Experimental results show that the total-station-based spatial benchmark system effectively eliminates errors introduced by manual operation and achieves a spatial coordinate accuracy better than 15 mm. The method's simplicity and efficiency make it suitable for 3D coordinate transformation at arbitrary rotation angles.
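The seven-parameter model referenced here is the standard 3D similarity transform, x' = μ·R·x + T (one scale, three rotation, three translation parameters). The sketch below is a minimal illustration, not the paper's exact derivation: it estimates the seven parameters from three or more non-collinear point pairs, parameterizing the rotation as a Rodrigues (Cayley) matrix R = (I − S)⁻¹(I + S), which makes the rotation estimate a linear least-squares problem. Function names and the least-squares strategy are illustrative assumptions.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S such that S @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def seven_parameter_transform(src, dst):
    """Estimate dst ~ mu * R @ src + t from >= 3 non-collinear point
    pairs, with R built as a Rodrigues (Cayley) matrix."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    xs, ys = src - cs, dst - cd
    # Scale from the ratio of RMS distances to the centroids.
    mu = np.sqrt((ys ** 2).sum() / (xs ** 2).sum())
    ys = ys / mu
    # With R = (I - S)^-1 (I + S), y = R x gives  y - x = S (x + y),
    # i.e. y - x = s x (x + y): a linear system in s = (a, b, c).
    A = np.vstack([-skew(x + y) for x, y in zip(xs, ys)])
    b = (ys - xs).ravel()
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    S = skew(s)
    R = np.linalg.solve(np.eye(3) - S, np.eye(3) + S)
    t = cd - mu * R @ cs
    return mu, R, t
```

Because the Cayley form yields an exactly orthogonal R for any skew-symmetric S, no post-hoc re-orthogonalization is needed, and the same formulas hold at arbitrary rotation angles short of 180°.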
With new depth-sensing technology such as Kinect providing high-quality synchronized RGB and depth images (RGB-D data), learning rich representations efficiently plays an important role in multi-modal recognition tasks and is crucial for achieving high generalization performance. To address this problem, this paper proposes an effective multi-modal convolutional extreme learning machine with kernel (MMC-KELM) structure, which combines the representational power of CNNs with the fast training of ELMs. In this model, a CNN with alternating convolution layers and stochastic pooling layers abstracts high-level features from each modality (RGB and depth) separately, without parameter tuning. A shared layer then combines the features from the two modalities. Finally, the fused features are fed to a kernel extreme learning machine (KELM), which yields better generalization performance at a faster learning speed. Experimental results on the Washington RGB-D Object Dataset show that the proposed multi-modality fusion method achieves state-of-the-art performance with much lower complexity.
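As a minimal sketch of the final classification stage, the snippet below implements a standard kernel ELM (closed-form output weights β = (I/C + K)⁻¹T) and assumes the shared layer simply concatenates the per-modality CNN feature vectors; the RBF kernel choice, the concatenation-based fusion, and the hyperparameters C and gamma are illustrative assumptions, not details confirmed by the abstract.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-wise sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: output weights solved in closed
    form, beta = (I/C + K)^-1 T, with one-hot targets T."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]          # one-hot class targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X):
        return (rbf_kernel(X, self.X, self.gamma) @ self.beta).argmax(axis=1)

# Assumed fusion: concatenate per-modality CNN features (the "shared layer").
# rgb_feats, depth_feats: (n_samples, d) arrays from the two CNN branches.
# fused = np.concatenate([rgb_feats, depth_feats], axis=1)
# clf = KELM(C=100.0, gamma=0.1).fit(fused, labels)
```

Training is a single linear solve rather than iterative gradient descent, which is the source of the fast learning speed the abstract highlights.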