As 3D printing technology has matured, it has been widely applied in aerospace, game asset production, medicine, and cultural heritage conservation. 3D reconstruction, the reverse-engineering counterpart of 3D printing, can recover a 3D mesh model of a target object from several images, and monocular 3D reconstruction in particular is easy to deploy and low in cost. This paper therefore presents an improved monocular multi-view stereo reconstruction network, DC-MVSNet. First, the collected multi-view images are sparsely reconstructed with COLMAP to obtain a sparse point cloud and camera poses. These are then fed into the DC-MVSNet network, which outputs a depth map for each reference image. Finally, a 3D point cloud model is obtained by depth fusion. A DenseNet block and a coordinate attention mechanism are added to the feature extraction module and the depth map refinement module, respectively, to strengthen feature extraction. Compared with previous work, the proposed method improves reconstruction completeness on weakly textured objects in both objective quantitative metrics and subjective perception. The method runs reliably on Windows 10, Linux, and embedded systems, and offers significant reference value for 3D printing reverse engineering.
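The final step of the pipeline, fusing per-view depth maps into a point cloud, amounts to back-projecting each depth pixel through the camera model into world space. The sketch below is an illustration of that idea only, not the paper's implementation; the function name and the world-to-camera pose convention (`x_cam = R @ x_world + t`) are assumptions.

```python
import numpy as np

def backproject_depth(depth, K, R, t):
    """Back-project one depth map into world-space 3D points.

    depth : (H, W) depth values for the reference image
    K     : (3, 3) camera intrinsics
    R, t  : world-to-camera rotation and translation,
            assuming x_cam = R @ x_world + t  (assumed convention)
    Returns an (H*W, 3) array of world-space points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))          # pixel grid
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])  # 3 x N homogeneous
    cam = np.linalg.inv(K) @ pix * depth.ravel()            # rays scaled by depth
    world = R.T @ (cam - t.reshape(3, 1))                   # invert the pose
    return world.T
```

In a full fusion stage, each view's points would additionally be filtered by photometric and geometric consistency checks across neighboring depth maps before being merged into the final cloud.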