Panoramic 3D measurement is becoming increasingly important in fringe projection profilometry (FPP). The traditional physical markers-assisted (PMA) method suffers from inefficiency and incomplete measurement. An optical markers-assisted (OMA) panoramic 3D method has recently been proposed, enabling accurate, efficient, and non-destructive panoramic 3D measurement. In this paper, we present a comprehensive comparison between OMA and PMA, which provides practical guidance for different panoramic 3D measurement applications.
During the recording process of phase-shifting profilometry, intensity fluctuation caused by instability of the fluorescent light source may occur and introduce a non-negligible phase error. Moreover, the choice of sampling speed also affects the magnitude of this phase error, which can reach up to 0.12 rad. To suppress this error, a deep learning-based fluorescent light error suppression (DLFLES) method is proposed to achieve high-precision measurement under fluorescent light. Experiments demonstrate that the shapes of the reconstructed 3-D images are more accurate with the proposed method. Our research promotes the development of accurate 3-D measurement under the interference of external light sources by using deep learning.
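As context for the phase error discussed in this abstract, the following is a minimal sketch of conventional N-step phase-shifting phase retrieval, which is the baseline that a light-source gain drift perturbs; it is not the proposed DLFLES network, and the simulated 10% gain fluctuation is purely illustrative.

```python
import numpy as np

def wrapped_phase(frames):
    """Standard N-step phase-shifting phase retrieval.

    frames: stack of N fringe images I_n (n = 0..N-1) captured with equal
    phase shifts, i.e. I_n = A + B*cos(phi + 2*pi*n/N). A frame-to-frame
    gain change caused by light-source instability breaks this model and
    appears as a phase error in the result.
    """
    frames = np.asarray(frames, dtype=np.float64)
    n_steps = frames.shape[0]
    deltas = 2.0 * np.pi * np.arange(n_steps) / n_steps
    num = np.tensordot(np.sin(deltas), frames, axes=1)  # ~ -B*(N/2)*sin(phi)
    den = np.tensordot(np.cos(deltas), frames, axes=1)  # ~  B*(N/2)*cos(phi)
    return -np.arctan2(num, den)                        # wrapped phase in (-pi, pi]

# Illustration: a 10% gain drift on one frame of a 4-step sequence
# biases the recovered phase (hypothetical numbers, not from the paper).
h, w = 64, 64
phi = np.linspace(0, 4 * np.pi, w)[None, :] * np.ones((h, 1))
frames = [128 + 50 * np.cos(phi + 2 * np.pi * n / 4) for n in range(4)]
frames[2] *= 1.1                       # simulated fluorescent-light fluctuation
err = np.angle(np.exp(1j * (wrapped_phase(frames) - phi)))
print(float(np.max(np.abs(err))))      # maximum phase error, rad
```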
Fringe projection profilometry (FPP) has become one of the most popular techniques for three-dimensional (3-D) measurement. In FPP, it is necessary to obtain an accurate phase from a small number of fringes for dynamic measurement. Recently, a fringe pattern transformation method (FPTM) based on deep learning was proposed, which achieves accurate 3-D measurement from a single fringe, but its phase error remains higher than that of the phase-shifting algorithm. In this paper, the phase error of FPTM is first analyzed and its relationship with the local depth change rate is illustrated. Then, the accuracy of FPTM is improved by using more fringes. Compared with traditional methods, FPTM achieves higher-precision 3-D measurement when fewer fringes are used.
This paper presents a new stereo matching method based on effective cost aggregation. First, a color-based image segmentation algorithm segments the reference image into blocks of different colors, and each block is taken as the matching window used to search for the corresponding region in the target image. Second, to eliminate mismatches caused by brightness differences and noise, the Census transform is applied to both the reference image and the target image to obtain bit strings. At the same time, for the sake of matching efficiency, the dynamic disparity range of each pixel to be matched is rectified using the disparities of neighboring pixels. Finally, the bit strings are used to compute the matching cost with a new cost aggregation function. By moving the window in the horizontal direction, the percentage of mismatched pixels in every segmented block is computed, and the horizontal shift corresponding to the minimum value is taken as the best disparity of that block. Experimental results demonstrate that our method not only improves accuracy at depth discontinuities but also has high computational efficiency. In addition, it is robust in different brightness environments.
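The following is a minimal sketch of the Census-transform/Hamming-cost idea this abstract describes, assuming the color-segmentation labels come from an external segmenter and the census is computed on grayscale versions of the images; the dynamic disparity-range refinement and the authors' exact mismatch-percentage aggregation are not reproduced here.

```python
import numpy as np

def census_transform(img, radius=2):
    """Encode each pixel as a bit string recording whether each neighbour in a
    (2r+1)x(2r+1) window is darker than the centre pixel; this encoding is
    robust to brightness differences between the two views."""
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[radius + dy:radius + dy + h,
                               radius + dx:radius + dx + w]
            bits.append(neighbour < img)
    return np.stack(bits, axis=-1)                # (H, W, K) boolean bit strings

def hamming_cost_volume(census_ref, census_tgt, max_disp):
    """Per-pixel Hamming distance between reference bit strings and target bit
    strings shifted horizontally by each candidate disparity."""
    costs = []
    for d in range(max_disp + 1):
        shifted = np.roll(census_tgt, d, axis=1)  # border wrap-around ignored here
        costs.append(np.count_nonzero(census_ref != shifted, axis=-1))
    return np.stack(costs, axis=0)                # (D, H, W) cost volume

def segment_disparities(cost_volume, labels):
    """Aggregate per-pixel costs inside each colour segment and pick, for each
    segment, the disparity with the lowest aggregated cost (a stand-in for the
    mismatch-percentage criterion described in the abstract)."""
    best = {}
    for seg in np.unique(labels):
        mask = labels == seg
        agg = cost_volume[:, mask].sum(axis=1)    # one aggregated cost per disparity
        best[seg] = int(np.argmin(agg))
    return best
```

Winner-take-all over segment-aggregated costs is used here only as a simple proxy for the per-block mismatch percentage; the choice of aggregation function is exactly where the paper's contribution lies.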