For traditional camera calibration methods, the calibration accuracy of the camera parameters depends strongly on the feature extraction of the control points on the calibration target. However, owing to perspective and lens distortion, the commonly used checkerboard, circular-dot, and other calibration patterns inevitably undergo large deformations, seriously degrading the accuracy of control-point extraction. To solve this problem, a camera calibration method based on an active phase calibration target is proposed. The method uses a MI Pad that automatically displays structured-light patterns as the calibration target. Compared with methods using a checkerboard target, this method requires no pattern detection and avoids manual marking. Moreover, based on the phase information of the structured-light gratings, a large number of dense, high-precision control-point coordinates can be obtained even in areas where image-edge distortion is severe. The method is not only highly automated but also suitable for defocused cameras. Experimental results show that the reprojection error of our method is only one-tenth that of the comparison method, with better robustness and accuracy.
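The abstract does not specify how the phase of the displayed gratings is decoded; a common choice (an assumption here, not necessarily the authors' exact scheme) is N-step phase shifting, where each pixel's wrapped phase is recovered from N equally shifted fringe images:

```python
import numpy as np

def decode_phase(images):
    """Recover the wrapped phase from N equally phase-shifted fringe images.

    images: array of shape (N, H, W), where the k-th image is
    I_k = A + B*cos(phi + 2*pi*k/N). The standard N-step formula is
    phi = atan2(-sum_k I_k*sin(2*pi*k/N), sum_k I_k*cos(2*pi*k/N)).
    """
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    num = -(images * np.sin(deltas)[:, None, None]).sum(axis=0)
    den = (images * np.cos(deltas)[:, None, None]).sum(axis=0)
    return np.arctan2(num, den)  # wrapped phase in (-pi, pi]
```

A phase-unwrapping step (e.g., with Gray-code or multi-frequency patterns) would follow in practice to obtain absolute phase, and hence dense correspondences, per pixel.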
Monitoring the static and dynamic displacements of large engineering structures, such as buildings and bridges, can provide quantitative information for evaluating structural safety and for maintenance purposes. Camera calibration is a key process in vision-based sensor systems for remote displacement measurement. Because of the large field of view required for engineering structures, conventional camera calibration methods using precise calibration boards are difficult to apply. A modified calibration method for a binocular stereo vision system based on the epipolar constraint is proposed to simplify the calibration process. Because reference points are absent in outdoor applications, an unmanned aerial vehicle carrying a reference marker is adopted: during its flight in the field, sequential images are captured simultaneously by the left and right imaging stations. An alternative way to determine the scale factor is also proposed, which provides adequate precision for camera calibration. Two important issues are discussed, namely the number of reference points and their selection with regard to the depth of view. The experimental results show that the proposed method is convenient to apply in outdoor situations and achieves high accuracy in displacement measurement.
A good similarity measure is the key to robust template matching. In this paper, we present a Similarity-Transform-invariant Best-Buddies Similarity (SiTi-BBS) to handle template matching with obvious geometric distortion. Like the classic BBS, SiTi-BBS adopts Best-Buddies Pairs (BBPs) to vote. However, whereas the classic BBS acquires point pairs via bidirectional matching in xyRGB space, SiTi-BBS uses only the color information (RGB components) to acquire BBPs, while the position information (xy components) of each BBP is employed to calculate the geometric distortion between the template and the matching window. To further improve robustness, we introduce interval voting to accommodate cases where the two images do not strictly satisfy a similarity transformation; SiTi-BBS can therefore, to a certain extent, also be applied to affine and perspective transformations. The highest number of votes is taken as the similarity measure between the two images. Mathematical analysis indicates that the proposed method can handle obvious geometric distortion between images, and test results on simulated and real challenging images show the outstanding performance of the proposed similarity measure for template matching.
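The core building block, a Best-Buddies Pair, is a mutual-nearest-neighbour match between two feature sets. A minimal sketch of BBP extraction under Euclidean distance (here over generic feature rows, which in SiTi-BBS would be RGB values) might look like this:

```python
import numpy as np

def best_buddies_pairs(P, Q):
    """Return index pairs (i, j) where P[i] and Q[j] are mutual nearest
    neighbours (Best-Buddies Pairs) under Euclidean distance.

    P: (n, d) feature rows of the template patch.
    Q: (m, d) feature rows of the candidate matching window.
    """
    # Full pairwise distance matrix, shape (n, m).
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    nn_pq = d.argmin(axis=1)  # nearest Q index for each P row
    nn_qp = d.argmin(axis=0)  # nearest P index for each Q row
    # Keep only mutual matches: P[i] -> Q[j] and Q[j] -> P[i].
    return [(i, j) for i, j in enumerate(nn_pq) if nn_qp[j] == i]
```

In SiTi-BBS, the xy coordinates of each returned pair would then feed the geometric-distortion estimate and the interval voting; that stage is not shown here.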
To some extent, the mapping relationship between the camera and projector images determines the precision of surface reconstruction in digital close-range photogrammetry. In this paper, a new method is presented to achieve sub-pixel-level mapping between the camera and projector images. Instead of mapping the stripe from the camera to the projector, which is pixel-precision-based, a set of pixels that share the same decoded number is picked out on the camera image, and their barycenter is calculated and mapped onto the corresponding pixel on the projector image. In most cases, the calculation of the barycenter achieves sub-pixel precision. Compared with existing approaches based on direct mapping of the stripe on the camera image to the projector image, the proposed method achieves higher accuracy in mapping the points and thus better surface reconstruction. Experimental results are presented to show the effectiveness of the proposed method in improving the accuracy of shape reconstruction.
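The barycenter step described above can be sketched as follows; this is a minimal illustration assuming a per-pixel map of decoded stripe numbers, not the authors' exact implementation:

```python
import numpy as np

def stripe_barycenters(decoded):
    """For each decoded stripe number, compute the (row, col) barycenter of
    all camera pixels sharing that number; this sub-pixel point is what gets
    mapped to the corresponding pixel in the projector image.

    decoded: (H, W) integer array of decoded stripe numbers, with negative
    values marking invalid (undecoded) pixels.
    """
    centers = {}
    for code in np.unique(decoded):
        if code < 0:
            continue  # skip invalid pixels
        rows, cols = np.nonzero(decoded == code)
        centers[int(code)] = (rows.mean(), cols.mean())
    return centers
```

Because the barycenter averages over many integer pixel positions, it generally lands between pixels, which is the source of the sub-pixel precision the abstract claims.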
To perform 3D measurement and camera attitude estimation simultaneously, an efficient and robust method based on the trifocal tensor is proposed in this paper, which employs only the intrinsic parameters and positions of three cameras. The initial trifocal tensor is obtained using a heteroscedastic errors-in-variables (HEIV) estimator, and the initial relative poses of the three cameras are acquired by decomposing the tensor. The initial attitudes of the cameras are then obtained from the known positions of the three cameras. Next, the camera attitudes and the image positions of the points of interest are optimized under the trifocal-tensor constraint with the HEIV method. Finally, the spatial positions of the points are obtained by the intersection measurement method. Both simulation and real-image experiments suggest that the proposed method achieves the same precision as bundle adjustment (BA) while being more efficient.
KEYWORDS: 3D modeling, 3D acquisition, Visual process modeling, Model-based design, 3D vision, Target detection, Image segmentation, Sensors, Edge detection, 3D image processing
To enable full navigation using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. First, we propose a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of such an approach is to robustly match each image with the model views of the target. We therefore designed a line-segment detection and matching method based on multi-scale-space technology. Experiments on real images showed that our method is highly robust under various image changes. Second, we propose a method based on a 3D particle filter (PF) coupled with M-estimation to track the target and estimate its pose efficiently. In the proposed approach, a similarity observation model is designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose is optimized using M-estimation. Experiments indicate that the proposed method can effectively track, and accurately estimate the pose of, a freely moving target in an unconstrained environment.
This paper designs a multiple-reflector-based autocollimator and proposes a direct linear solution for three-dimensional (3D) angle measurement using the observation vectors of the lights reflected from the reflectors. In the measuring apparatus, the multiple reflectors are fixed to the object to be measured, the reflected lights are received by a CCD camera, and the light spots in the image are extracted to obtain the vectors of the reflected lights in space. Any rotation of the object induces a change in the observation vectors of the reflected lights, which is used to solve for the rotation matrix of the object by finding a linear solution to the Wahba problem with the quaternion method; the 3D angle is then obtained by decomposing the rotation matrix. This measuring apparatus is easy to implement because the light path is simple, and the computation of the 3D angle from the observation vectors is efficient because no iteration is needed. The proposed 3D angle measurement method is verified by a set of simulation experiments.
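The standard quaternion solution to the Wahba problem is Davenport's q-method, which reduces the rotation search to a 4x4 symmetric eigenvalue problem and therefore needs no iteration, consistent with the efficiency claim above. A sketch (assuming this is the quaternion method meant, which the abstract does not spell out):

```python
import numpy as np

def wahba_quaternion(v_body, v_ref, weights=None):
    """Davenport's q-method for Wahba's problem: find the rotation R
    minimizing sum_i w_i * ||v_ref_i - R @ v_body_i||^2.

    v_body, v_ref: (n, 3) unit vectors before/after rotation.
    Returns the 3x3 rotation matrix R with v_ref_i ~= R @ v_body_i.
    """
    v_body, v_ref = np.asarray(v_body, float), np.asarray(v_ref, float)
    w = np.ones(len(v_body)) if weights is None else np.asarray(weights, float)
    # Attitude profile matrix B = sum_i w_i * v_ref_i @ v_body_i.T
    B = (w[:, None, None] * v_ref[:, :, None] * v_body[:, None, :]).sum(axis=0)
    S = B + B.T
    z = np.array([B[2, 1] - B[1, 2], B[0, 2] - B[2, 0], B[1, 0] - B[0, 1]])
    # Davenport's K matrix; its dominant eigenvector is the optimal quaternion.
    K = np.zeros((4, 4))
    K[0, 0] = np.trace(B)
    K[0, 1:] = K[1:, 0] = z
    K[1:, 1:] = S - np.trace(B) * np.eye(3)
    eigval, eigvec = np.linalg.eigh(K)   # ascending eigenvalues
    s, x, y, zq = eigvec[:, -1]          # quaternion, scalar-first
    # Convert quaternion to rotation matrix.
    return np.array([
        [1 - 2*(y*y + zq*zq), 2*(x*y - s*zq),      2*(x*zq + s*y)],
        [2*(x*y + s*zq),      1 - 2*(x*x + zq*zq), 2*(y*zq - s*x)],
        [2*(x*zq - s*y),      2*(y*zq + s*x),      1 - 2*(x*x + y*y)]])
```

Decomposing the returned matrix into Euler angles then yields the 3D angle described in the abstract.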
The grid algorithm is a classical star identification algorithm based on star patterns. A three-dimensional grid algorithm for all-sky autonomous star identification is proposed, which incorporates the star view magnitude information. In contrast to the traditional grid algorithm, which constructs the grid cells on a two-dimensional plane (e.g., the x-y coordinate plane), the proposed approach uses the view magnitudes of the neighboring stars as a third dimension (e.g., a z-axis). A pattern is generated in which three-dimensional grid cells containing a neighboring star are set to 1, and those without are set to 0. The process of star identification is to determine which pattern in the database matches the particular sensor pattern. Simulations show that this method achieves an identification rate of 98.0% when the standard deviations of the star position error and the star view magnitude are 1 pixel and 0.3 Mv, respectively. Compared with the traditional grid algorithm, the identification rate is higher, and the average runtime is 50 percent shorter.
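The 3D pattern construction can be sketched as follows. The grid size, magnitude range, and bin counts here are illustrative assumptions; the abstract does not state the authors' values:

```python
import numpy as np

def grid_pattern_3d(neighbors, g=8, m=4, radius=6.0, mag_range=(2.0, 6.0)):
    """Build the binary 3-D grid pattern for one reference star.

    neighbors: (n, 3) rows of (x, y, magnitude) for neighboring stars, with
    (x, y) already centered/aligned on the reference star.
    The x-y field of half-width `radius` is split into a g x g grid, and the
    magnitude range into m bins (the third dimension); a cell is 1 if it
    contains at least one neighboring star, else 0.
    """
    pattern = np.zeros((g, g, m), dtype=np.uint8)
    lo, hi = mag_range
    for x, y, mag in neighbors:
        i = int((x + radius) / (2 * radius) * g)
        j = int((y + radius) / (2 * radius) * g)
        k = int((mag - lo) / (hi - lo) * m)
        if 0 <= i < g and 0 <= j < g and 0 <= k < m:
            pattern[i, j, k] = 1
    return pattern.ravel()

def match_score(p_sensor, p_catalog):
    """Votes = number of occupied cells shared by the two binary patterns."""
    return int(np.sum(p_sensor & p_catalog))
```

Identification then amounts to scoring the sensor pattern against every catalog pattern and taking the best match, exactly as in the classic 2D grid algorithm but with the extra magnitude dimension.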
We have developed a calibration approach for a star tracker camera. A modified version of the least-squares iteration algorithm combined with a Kalman filter is put forward, which allows autonomous on-orbit calibration of the star tracker camera even with nonlinear camera distortions. In this approach, the optimal principal point and focal length are first estimated via the modified algorithm, and the high-order focal-plane distortions are then estimated using the solution of the first step. To validate the proposed approach, a real star catalog and synthetic attitude data are adopted to test its performance. The test results demonstrate that the proposed approach performs well in terms of accuracy and robustness and can satisfy the requirements of autonomous on-orbit calibration of the star tracker camera.
To provide a useful expression of lens distortion for star trackers, the numerical grid distortions of four typical star tracker lens systems have been investigated, and the data were then fitted with polynomials combining different numbers of terms and powers of the radial radius. The results indicate that an expression for the relative distortion including the first- through fourth-power terms of the radial radius is beneficial in terms of accuracy, while higher-order fitting functions do not provide a noticeably more accurate expansion. To further validate this distortion model, a star tracker camera calibration approach has been simulated with data obtained by ray tracing via the Non-Sequential Components mode of ZEMAX, with imperfect alignment and assembly taken into account. The simulation results also indicate that the fourth-order radial distortion model is beneficial.
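The polynomial fit described above is an ordinary linear least-squares problem, since the distortion model is linear in its coefficients. A minimal sketch, assuming a relative-distortion model with first- through fourth-power terms of the radial radius and no constant term:

```python
import numpy as np

def fit_radial_distortion(r, rel_distortion, order=4):
    """Least-squares fit of relative distortion as a polynomial in the
    radial radius r: d(r) ~= a1*r + a2*r^2 + ... + a_order*r^order.

    r, rel_distortion: 1-D arrays of sample radii and measured relative
    distortions (e.g., from ray-traced grid data).
    Returns the coefficients a1..a_order.
    """
    r = np.asarray(r, float)
    # Design matrix: columns r^1 .. r^order (no constant term, since the
    # distortion vanishes at the optical axis).
    A = np.stack([r**p for p in range(1, order + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(rel_distortion, float),
                                 rcond=None)
    return coeffs
```

Raising `order` beyond 4 simply adds columns to the design matrix, which is how one would reproduce the paper's comparison of fitting functions of different orders.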
An autonomous star tracker is an opto-electronic instrument used to provide the absolute three-axis attitude of a spacecraft from star observations. Precise calibration of the measurement model is crucial, as the performance of the star tracker is highly dependent on the star camera parameters. We focus on proposing a simple and practical calibration approach for a star tracker with a wide field of view. The star tracker measurement model is described, and a novel approach for laboratory calibration is put forward. This approach is based on a collimator, a two-dimensional adjustable plane mirror, and other ordinary instruments. The calibration procedure consists of two steps: (1) the principal point is estimated using autocollimation adjustment; and (2) the other camera parameters, mainly the principal distance and distortions, are estimated via least-squares iteration, taking the extrinsic parameters into account. To validate the proposed calibration method, simulations with synthetic data are used to quantify its performance considering the errors of the distortion model and calibration data. The theoretical analysis and simulation results indicate that the uncertainties of the measured star direction vectors are less than 4.0×10⁻⁵ rad after calibration, and this can be further improved.
Owing to the residual chromatic aberration of the lens in a star tracker, the position accuracy of the star image decreases with increasing field of view (FOV). The spectral distribution characteristics of a guide star catalog containing about 4600 stars are analyzed statistically, and a functional model of the stellar spectra is established in this paper. The centroid position of each guide star image is a function of its color type and its radial distance from the center of the FOV. The centroid error is calibrated with a weighted polynomial, using a least-squares fitting approach to obtain the best values of the position-error compensation parameters for star images across a wide FOV and with different color temperatures. As an example, at a 2.5° FOV the star position errors for spectral types F, G, and K are 10.80 μm, 6.5174 μm, and 4.3479 μm, respectively. After implementing the spectral compensation scheme for the lens system of a star tracker, the star position RMS error is reduced from 1.06 pixels to 0.13 pixels.