Calibration is essential for the three-dimensional light field endoscope, and we previously proposed a two-step calibration method based on line features to accomplish it. In the second step of that method, the relationship between the projections of virtual feature points on the microlens image and on the central sub-aperture image is used to calibrate the parameters of the microlens array; however, we find that the feasibility of the method imposes a depth constraint on the border of the checkerboard. In this paper, we derive and demonstrate this constraint, and we design an optimization algorithm to improve the accuracy of the second calibration step, which suffers from inaccurate black-and-white boundary detection. Experimental results show that our method is effective and accurate for calibrating the three-dimensional light field endoscope.
Three-dimensional light field imaging is an emerging technology with the potential to enable three-dimensional imaging in laparoscopic surgery. Calibration of the three-dimensional light field endoscope (3D LFE) is essential but challenging, as its disparity is much smaller than that of a conventional light field camera. We establish a geometrical model for the 3D LFE and propose a calibration method based on a virtual objective lens and virtual feature points. First, the virtual objective lens is introduced and its parameters are calibrated using corner features in the central sub-aperture images. Second, two types of virtual feature points are proposed to calibrate the parameters of the microlens array: one type lies on the black-and-white boundary lines of the board, and the other is selectively determined and can be anywhere on the checkerboard. Moreover, the relationship between the projections of the virtual feature points in the microlens image and in the central sub-aperture image is derived to overcome the tiny light field disparity. Experimental results verify the performance of our calibration method.
In this paper, a three-dimensional (3D) shape measurement method based on structured light field imaging is proposed. In general, light field imaging struggles to accomplish 3D shape measurement accurately, because slope estimation based on radiance consistency is inaccurate. Taking advantage of the special modulation of the structured light field, the phase information is first derived with Fourier transform profilometry and used to substitute phase consistency for radiance consistency in the epipolar image (EPI). The 3D coordinates are then derived after light field calibration, but the results are coarse due to slope estimation error and need to be corrected. Furthermore, the 3D coordinates are refined based on the relationship between the structured light field image and the DMD image of the projector, which improves the performance of the 3D shape measurement. The necessary light field camera calibration is described to generalize the method's application. Finally, the effectiveness of the proposed method is demonstrated on a sculpture and compared with the results of a conventional PMP system.
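The Fourier-transform-profilometry step described above can be sketched in a minimal 1-D form: band-pass the fringe spectrum around the carrier frequency and take the angle of the resulting analytic signal. The filter width, carrier frequency, and simulated fringe below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def ftp_phase(fringe, f0):
    """1-D Fourier transform profilometry: band-pass each row's spectrum
    around the positive carrier lobe and return the wrapped phase of the
    resulting analytic signal."""
    n = fringe.shape[1]
    F = np.fft.fft(fringe, axis=1)
    k = int(round(f0 * n))               # carrier frequency bin
    H = np.zeros(n)
    H[k - k // 2:k + k // 2 + 1] = 1.0   # keep only the positive carrier lobe
    return np.angle(np.fft.ifft(F * H, axis=1))

# Simulated fringe image: a cosine carrier plus a smooth, height-induced
# phase modulation phi (both chosen for illustration only).
n = 256
x = np.arange(n)
f0 = 16 / n                              # 16 fringe periods per row
phi = 0.8 * np.sin(2 * np.pi * x / n)    # ground-truth phase modulation
img = np.tile(0.5 + 0.4 * np.cos(2 * np.pi * f0 * x + phi), (4, 1))

wrapped = ftp_phase(img, f0)
# Unwrap one row and remove the carrier to recover the modulation phi.
recovered = np.unwrap(wrapped[0]) - 2 * np.pi * f0 * x
```

In a structured light field, this per-row wrapped phase is what replaces raw radiance when matching points along EPI lines, since the phase varies monotonically across the fringe direction while radiance may not.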
The lenselet-based plenoptic camera has recently drawn considerable attention in the field of computational photography. The additional information inherent in the light field enables a wide range of applications, but some preliminary processing of the raw image is necessary before further operations. In this paper, an effective method is presented for rectifying the rotation of the raw image. The rotation is caused by the imperfect positioning of the micro-lens array relative to the sensor plane in commercially available Lytro plenoptic cameras. The key to our method is locating the center of each micro-lens image, i.e., the image projected by a single micro-lens. Because of vignetting, the pixel values at the center of a micro-lens image are higher than those at its periphery. A mask is applied to probe the micro-lens image and locate the center area by finding the local maximum response. The error of the center coordinate estimate is corrected, and the angle of rotation is computed via a subsequent line fitting. The algorithm is performed on two images captured by different Lytro cameras. The angles of rotation are -0.3600° and -0.0621°, respectively, and the rectified raw image is reliable for further operations, such as the extraction of sub-aperture images. The experimental results demonstrate that our method is efficient and accurate.
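The center-probing and line-fitting idea can be sketched on synthetic data as follows. This is a minimal illustration, not the paper's pipeline: the Gaussian spot model of vignetting, the probe radius, the centroid-based sub-pixel correction, and the lenslet pitch are all assumptions made for the example.

```python
import numpy as np

def detect_centers(raw, r=5, thresh=0.5):
    """Probe for micro-lens image centers. Vignetting makes the center of
    each micro-lens image brighter than its periphery, so a local maximum
    inside an (2r+1)x(2r+1) window marks a center; an intensity centroid
    then refines the coordinate to sub-pixel precision."""
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    h, w = raw.shape
    centers = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = raw[y - r:y + r + 1, x - r:x + r + 1]
            if raw[y, x] == patch.max() and raw[y, x] > thresh:
                m = patch.sum()
                centers.append((x + (patch * dx).sum() / m,
                                y + (patch * dy).sum() / m))
    return np.array(centers)

def rotation_angle(centers, row_tol=3.0):
    """Fit a line through one row of detected centers; its slope gives the
    rotation of the micro-lens array relative to the sensor's pixel grid."""
    row = centers[np.abs(centers[:, 1] - centers[0, 1]) < row_tol]
    slope = np.polyfit(row[:, 0], row[:, 1], 1)[0]
    return np.degrees(np.arctan(slope))

# Synthetic raw image: Gaussian "micro-lens images" on a grid rotated by
# -0.3 degrees (an arbitrary test angle, not one from the paper).
theta = np.deg2rad(-0.3)
H, W, pitch = 80, 220, 14
yy, xx = np.mgrid[0:H, 0:W]
raw = np.zeros((H, W))
for j in range(4):
    for i in range(13):
        cx = 16 + i * pitch * np.cos(theta) - j * pitch * np.sin(theta)
        cy = 16 + i * pitch * np.sin(theta) + j * pitch * np.cos(theta)
        raw += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 2.0 ** 2))

centers = detect_centers(raw)
angle = rotation_angle(centers)
```

Once the angle is known, rectification reduces to rotating the raw image by its negative before decoding sub-aperture views.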