In 3D information extraction from sequence images collected by a vehicle-based mobile measurement system, epipolar images eliminate the vertical disparity of image pairs and reduce the search space for correspondences in dense matching from a two-dimensional plane to a one-dimensional line, thereby improving matching accuracy and efficiency. However, unlike the nearly horizontal epipolar lines of aerial images, the epipolar lines of vehicle-based sequence image pairs are distributed radially across the images, which makes it difficult to generate epipolar images from such sequences. To solve this problem, the fundamental matrix is used to determine the epipolar geometry of the image pair, and the epipolar image is then generated with a fan-shaped circular epipolar model. First, high-precision correspondences are obtained by sparse matching for fundamental matrix estimation. Then the fundamental matrix is used to map epipolar lines quickly and to delimit the epipolar-line region. Finally, the image within the epipolar-line region is resampled by bilinear interpolation along the direction of the epipolar line. Experiments were conducted on multiple sets of vehicle-based sequence images; the average vertical disparity of the correspondences in the generated epipolar images is 0.63 pixels. The results show that the epipolar images generated by the proposed method have smaller vertical disparity, which verifies the validity of the approach.
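As a rough illustration of the first two steps described above (sparse matches to fundamental matrix, fundamental matrix to epipolar line, then bilinear resampling along the line), the following is a minimal sketch using OpenCV. The arrays pts_left, pts_right, query_pt, and the sampling density are assumed inputs, and the fan-shaped circular epipolar model itself is not reproduced here; this is not the paper's implementation.

```python
# Minimal sketch: estimate the fundamental matrix from sparse correspondences,
# map a left-image point to its epipolar line in the right image, and resample
# intensities along that line with bilinear interpolation.
import numpy as np
import cv2

def epipolar_line(pts_left, pts_right, query_pt):
    # RANSAC suppresses mismatches so that F reflects the true epipolar geometry.
    F, mask = cv2.findFundamentalMat(pts_left, pts_right, cv2.FM_RANSAC, 1.0, 0.999)
    # l' = F x maps a homogeneous left-image point x to a line ax + by + c = 0
    # in the right image.
    a, b, c = F @ np.array([query_pt[0], query_pt[1], 1.0])
    return F, (a, b, c)

def resample_along_line(img, line, num_samples=500):
    # Sample evenly spaced positions along the epipolar line ax + by + c = 0
    # using bilinear interpolation (cv2.remap). Assumes |b| > 0 so the line can
    # be parameterized by the x coordinate.
    a, b, c = line
    h, w = img.shape[:2]
    xs = np.linspace(0, w - 1, num_samples, dtype=np.float32)
    ys = (-(a * xs + c) / b).astype(np.float32)
    return cv2.remap(img, xs.reshape(1, -1), ys.reshape(1, -1), cv2.INTER_LINEAR)
```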
The fusion of information from a monopulse laser rangefinder (MLRF) and an optical camera requires their relative position and attitude parameters. This paper focuses on the relative pose calibration between an MLRF and a visible-light array camera, and two calibration methods are proposed. To overcome the difficulty that the laser spot is invisible to the visible-light array camera, the image plane of an infrared camera is locally registered to a physical plane whose pose relative to the visible-light array camera is known. The stability of the pose solution beyond the calibration distance is then evaluated. The experimental results show that the reprojection error of the relative pose obtained from a finite number of measurements is less than 1 pixel. The position error within the calibration distance is less than 4 mm; beyond the calibration distance, the position error increases by less than 3 mm per 10 m.
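To make the reprojection-error criterion concrete, the following is a minimal sketch of how such an error can be computed for one measurement. The pose (R, t), camera matrix K, measured range, and observed spot pixel are illustrative placeholders, not the paper's data or method.

```python
# Sketch of a reprojection-error check for a relative pose estimate between a
# rangefinder and a camera. All inputs are hypothetical example values.
import numpy as np

def reprojection_error(range_m, R, t, K, observed_px):
    # The rangefinder measures distance along its boresight (taken here as the
    # +Z axis of the rangefinder frame), giving a 3D point in that frame.
    p_lrf = np.array([0.0, 0.0, range_m])
    # Transform into the camera frame with the calibrated pose and project
    # with the pinhole model.
    p_cam = R @ p_lrf + t
    uvw = K @ p_cam
    projected_px = uvw[:2] / uvw[2]
    # Pixel distance between the projected and observed laser spot.
    return np.linalg.norm(projected_px - np.asarray(observed_px, dtype=float))
```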
The ultrahigh resolution of unmanned aerial vehicle (UAV) remote sensing images and oblique photography from multiple perspectives provide complete and detailed ground observation data for various engineering applications. However, noise and interference make it difficult for current deep learning semantic segmentation networks to learn the typical features of ground objects. The hierarchical cognitive structure of human vision and the information transmission modes of retinal cone and rod cells were used to design a two-pathway anti-interference network for retinal perception mechanism simulation (RPMS). In the first pathway, the hierarchical cognition of cone cells was simulated by a one-to-one connected multiscale dilated convolution structure. In the second pathway, the hierarchical cognition of rod cells was simulated by a multiscale pyramid structure with many-to-one connections. The one-to-one connection strengthened the ability of RPMS to recognize detailed edges, while the many-to-one connection helped RPMS resist disturbance from noise and interference. By combining the feature maps of the two pathways, RPMS exhibited stronger noise resistance and better texture and detail recognition than other semantic segmentation networks in the classification experiments. Thus, this technique is suitable for UAV remote sensing image classification and has broad application potential.
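The two-pathway idea can be sketched schematically as below. Channel counts, the choice of dilation rates and pyramid scales, and the fusion step are assumptions made only for illustration; this is not the published RPMS architecture.

```python
# Schematic PyTorch sketch of a two-pathway block: parallel dilated
# convolutions kept separate (one-to-one, "cone" pathway) and a downsampling
# pyramid merged into a single map (many-to-one, "rod" pathway).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPathwayBlock(nn.Module):
    def __init__(self, in_ch=3, ch=32):
        super().__init__()
        # Cone pathway: each dilation rate keeps its own output to preserve
        # detailed edges at several receptive-field sizes.
        self.cone = nn.ModuleList(
            [nn.Conv2d(in_ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )
        # Rod pathway: one shared convolution applied to pooled inputs, whose
        # levels are averaged into a single, noise-suppressed map.
        self.rod_conv = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.fuse = nn.Conv2d(ch * 3 + ch, ch, 1)

    def forward(self, x):
        cone_feats = [conv(x) for conv in self.cone]            # one-to-one
        size = x.shape[-2:]
        pyramid = []
        for s in (2, 4, 8):
            pooled = F.avg_pool2d(x, kernel_size=s)
            pyramid.append(F.interpolate(self.rod_conv(pooled), size=size,
                                         mode="bilinear", align_corners=False))
        rod_feat = torch.stack(pyramid, dim=0).mean(dim=0)      # many-to-one
        # Combine the feature maps of both pathways.
        return self.fuse(torch.cat(cone_feats + [rod_feat], dim=1))
```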