High accuracy linear method for axis measurement

Lijun Zhong, Zhang Li, Xiaohu Zhang, Yang Shang, Qifeng Yu

Open Access | Published: 8 February 2020
Abstract

The attitude parameter is an important state parameter for a long axisymmetric target. The plane intersection (PI) method is commonly used for attitude estimation. However, this method uses only the plane information in the object space under a multiple camera system (more than two cameras simultaneously observing a target). We propose two methods to address this issue. One method minimizes the square of the object-space angle residual (OAR) and the other minimizes the square of the image-space angle residual (IAR). Linear solutions are derived for both minimization problems. The simulation results demonstrate that the IAR method has higher accuracy than the PI and OAR methods under multiple and dual camera systems because it incorporates the information of a pair of corresponding image points. Furthermore, our experiments show that the linear method is generally faster and achieves accuracy equivalent to that of the iterative method.

1. Introduction

The axis attitude is an important parameter for the motion state analysis, performance measurement, and aerodynamic parameter identification of rockets or missiles. Optical measurement is the main measurement method at a shooting range due to its noncontact, passive, high-frequency, and high-accuracy characteristics. Generally, a shooting range uses theodolites to record the target launch and flight process, followed by image processing to obtain the target yaw and pitch angles.

The pose measurement of the target mainly has the following types of methods. The first method is to use plane intersection (PI) to obtain the target’s central axis attitude.1 This method assumes that the target is a symmetrical object, and two cameras are used to simultaneously record the target. The central axis of the target can then be extracted from the image. Based on the principle of light propagation along a straight line, the target axis must be located on the plane consisting of the optical center and the target central axis in the image. The intersection line of the planes of the two cameras can be calculated to obtain the target axis representation. This method gives more accurate measurements than the method based on endpoints, and the form is intuitive. This method also shows that the longer the target axis in an image, the higher the pose measurement accuracy. Many researchers have extended this method to other applications.2–4 The accuracy of target axis extraction in an image plays an important role in improving the accuracy of the PI method.5 This kind of method can obtain only the target’s central axis attitude, not the target’s position or the roll angle. The second method is to use the information of the target’s structure and then match its projection on the image with the contour extracted from the image.6–16 Zhao et al.7 proposed a pose estimation algorithm based on the inner angle and the triangle constraint. Liu and Hu8 gave a pose estimation method based on a monocular image for a rotating solid-shape spacecraft that utilized the inherent characteristics of a spacecraft; however, this method is not applicable to other types of targets. Peng et al.9,10 proposed methods for noncooperative space targets based on stereo cameras alone or on fusion with laser radar. Becke and Schlegl11 proposed a method for estimating a target axis based on the contours under monocular or multi-image conditions. Hanek et al.15 used points, lines, and the extremal contours of cylinders to estimate the position and orientation of a camera in the world coordinate system. The third method is to use cooperation signs (points or lines) on the target. First, the coordinates of the cooperation signs in the target’s coordinate system are obtained, and then the pose from the target to the camera’s coordinate system is solved.17–26 Horn et al.20 utilized the point cloud of the side surface, which was acquired by a laser profile sensor, to measure the pose of cylindrical components, but this method cannot work at high frequencies. Some researchers estimated the camera’s pose from the correspondence of lines or the epipolar constraint.21–25 Zhou et al.26 proposed a method to track and estimate the pose of known rigid objects efficiently in a complex environment, based on a 3D particle filter with M-estimation optimization. The fourth method is to directly acquire a point cloud with a laser profile sensor and then use the point cloud to estimate the target’s attitude.23

For long cylindrical targets, the second method cannot achieve high precision when the cameras are at a long distance of more than 100 m. For the third method, it is difficult to obtain the coordinates of the cooperation signs in the target coordinate system with high precision. There are two main ways to obtain these coordinates. One is based on model labeling, and the other involves measurement with a total station after the marks are attached. With the first way, it is difficult to guarantee the accuracy of labeling when the target is large. In the total station measurement process, since it is necessary to move the instrument to multiple positions to measure all of the feature points, it is easy to introduce conversion errors that affect the measurement accuracy. The fourth method cannot work at high frequencies. Later in this paper, we describe the accuracy advantage of the axis-based method relative to the feature-point-based method in the measurement of the axis attitude of long symmetric targets. Therefore, in this paper, we propose methods to improve the accuracy of the PI method.

For a multiple camera system (more than two cameras simultaneously observing a target), the planes given by the cameras, each formed by the optical center and the target axis in the image, generally do not intersect in the same straight line. This is caused by the inevitable errors of the camera parameters and the extracted central axis. The solution of the PI method is to derive the direction vector of the target using all of the plane equations. Then, the target attitude angles can be derived according to the definition of the decomposition. However, the PI method lacks a geometric explanation when there are more than two cameras. In this paper, we propose two optimization methods to address this issue. The first method is based on minimizing the object-space angle residual (OAR), and the other method is based on minimizing the image-space angle residual (IAR). We also provide linear solutions and iterative solutions for the OAR and the IAR, called the OARL, IARL, OARI, and IARI, respectively. The simulation results show that the IAR obtained higher accuracy than the PI and the OAR. Furthermore, the IARL was generally faster than the IARI.

2. Plane Intersection Method

2.1. Target Attitude Representation

For an axisymmetric long strip target, the pose of the target is usually determined by its symmetry axis direction, which is generally represented by a direction vector or by angles. Alternatively, the yaw and pitch angles can be used to represent the direction vector, as shown in Fig. 1. For the shooting range, X faces north, Y faces upward, and Z faces east. OA is the target axis, OA′ is the projection of OA on the XOZ plane, the angle ω between the X axis and OA′ is the yaw angle, and the angle φ between OA and OA′ is the pitch angle.

Fig. 1

Axis direction and attitude angle definition. For the shooting range, X faces north, Y faces upward, and Z faces east. OA is the target axis, OA′ is the projection of OA on the XOZ plane, the angle ω between the X axis and OA′ is the yaw angle, and the angle φ between OA and OA′ is the pitch angle.

OE_59_2_024101_f001.png

2.2. Coordinate System Definition

The world coordinate system W is a right-handed coordinate system, as shown in Fig. 2. The origin of the camera coordinate system is the optical center. The Z axis points in the positive direction of the optical axis, the X axis is perpendicular to the Z axis and points horizontally to the right, and the Y axis is perpendicular to both the X axis and the Z axis, with its direction determined by the right-hand rule. The origin of the image coordinate system is the upper left corner.

Fig. 2

Schematic diagram of PI. The three cameras C1, C2, and C3 observe the same target AB from different directions, and the projections on the image are A1B1, A2B2, and A3B3, respectively. Theoretically, the three planes C1A1B1, C2A2B2, and C3A3B3 intersect in a straight line.

OE_59_2_024101_f002.png

2.3. Plane Representation by a Single Camera

As shown in Fig. 2, AB is the target axis, C1 and C2 are the optical centers of the two cameras, A1B1 is the projection of the target axis on camera C1, and A2B2 is the projection of the target axis on camera C2. Assume that the equation of the target central axis in the i’th image coordinate system is

Eq. (1)

$$A_i^T X = 0,$$
where $A_i = [a_i\ b_i\ c_i]^T$ and $X = [x\ y\ 1]^T$. The image plane can be seen as the $z = f$ plane in the camera coordinate system, where $f$ is the physical focal length of the camera. The $Z$ axis of the camera coordinate system passes through the principal point of the image plane $(C_x, C_y)$, so the transformation from the image coordinate system to the camera coordinate system is determined as follows:

Eq. (2)

$$\begin{cases} x_c = (x - C_x)\,dx \\ y_c = (y - C_y)\,dy \end{cases}$$
where $(x_c, y_c)$ is in the camera coordinate system, $(x, y)$ is in the image coordinate system, and $(dx, dy)$ is the actual physical size of the camera pixel.

The central axis of the target in the image can be expressed in the camera coordinate system as follows:

Eq. (3)

$$\begin{cases} a_i (x_c/dx + C_x) + b_i (y_c/dy + C_y) + c_i = 0 \\ z_c - f = 0 \end{cases}$$

The plane equation (composed of the axis and the optical center of the camera) can be defined as follows:

Eq. (4)

$$a_i x_c/dx + b_i y_c/dy + (a_i C_x + b_i C_y + c_i)\,z_c/f = 0.$$

The equivalent focal length of the camera is abbreviated as follows:

Eq. (5)

$$\begin{cases} F_x = f/dx \\ F_y = f/dy \end{cases}$$

The plane equation can be summarized as follows:

Eq. (6)

$$a_i F_x x_c + b_i F_y y_c + (a_i C_x + b_i C_y + c_i)\,z_c = 0.$$

Defining the parameter vector $n_i = [n_{i1}\ n_{i2}\ n_{i3}]^T = [a_i F_x\ \ b_i F_y\ \ a_i C_x + b_i C_y + c_i]^T$, the plane equation can then be rewritten as follows:

Eq. (7)

$$n_i^T P_c = 0,$$
where $P_c = [x_c\ y_c\ z_c]^T$ is a point in the camera coordinate system and $n_i$ is the normal vector of the plane. It is assumed that the external parameters of the $i$’th camera in the world coordinate system are $R_i$ and $t_i$, where $R_i$ and $t_i$ are the rotation matrix and the translation vector from the world coordinate system to the camera coordinate system. Therefore, a point in the world coordinate system can be transformed to the camera coordinate system using Eq. (8):

Eq. (8)

$$P_c = R_i P_w + t_i,$$
where $P_w = [x_w\ y_w\ z_w]^T$ is a point in the world coordinate system. The plane equation in the world coordinate system can then be derived as follows:

Eq. (9)

$$n_i^T R_i P_w + n_i^T t_i = 0.$$

So far, the expression of the plane (composed of the optical center of the camera and the central axis of the target in the image) has been obtained in the world coordinate system.
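To make the plane construction concrete, the following minimal numpy sketch assembles the plane of Eqs. (6)–(9) from the image-line coefficients and the camera parameters. The function name and argument layout are our own illustrative choices; the inputs correspond to the quantities defined above.

```python
import numpy as np

def plane_from_camera(line_img, Fx, Fy, Cx, Cy, R, t):
    """Plane through the camera optical center and the imaged target axis.

    line_img : (a_i, b_i, c_i) of the axis a*x + b*y + c = 0 in pixel coordinates, Eq. (1).
    Fx, Fy   : equivalent focal lengths f/dx and f/dy, Eq. (5).
    Cx, Cy   : principal point.
    R, t     : rotation matrix and translation vector from the world to the camera frame.
    Returns (n_w, d) such that the plane is n_w . P_w + d = 0 in world coordinates, Eq. (9),
    with the normal scaled to unit length as required later by Eq. (15).
    """
    a, b, c = line_img
    # Normal vector of the plane in the camera frame, Eq. (6).
    n_c = np.array([a * Fx, b * Fy, a * Cx + b * Cy + c])
    # Substitute P_c = R P_w + t (Eq. 8) into n_c . P_c = 0 to obtain Eq. (9).
    n_w = R.T @ n_c          # coefficients of P_w, i.e., (n_c^T R)^T
    d = n_c @ t              # constant term n_c^T t
    s = np.linalg.norm(n_w)
    return n_w / s, d / s
```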

2.4. Principle of Dual Camera Plane Intersection

Supposing that the plane equations of the two cameras are

Eq. (10)

$$\begin{cases} n_1^T P_w + d_1 = 0 \\ n_2^T P_w + d_2 = 0 \end{cases}$$
where $n_1$ and $n_2$ are the normal vectors of the two planes. Atmospheric refraction is not considered in this paper, so the target axis lies in both planes, and the direction vector of the intersection line $I = (l, m, n)^T$ must be perpendicular to both normal vectors. It can be solved as

Eq. (11)

$$I = n_1 \times n_2 = \begin{vmatrix} i & j & k \\ n_{11} & n_{12} & n_{13} \\ n_{21} & n_{22} & n_{23} \end{vmatrix}.$$

The yaw angle ω and pitch angle φ of the target can be determined as follows:

Eq. (12)

$$\begin{cases} \omega = \tan^{-1}(n/l) \\ \varphi = \tan^{-1}\!\left(m/\sqrt{l^2 + n^2}\right) \end{cases}$$

It can be seen that the direction vector obtained by the intersection is independent of $d$, and $d$ is the only plane parameter related to the translation vector of the camera. This means that the final direction vector is independent of the camera translation; the measured target attitude is related only to the attitude of the camera.
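For two cameras, the PI solution of Eqs. (10)–(12) therefore reduces to a cross product followed by an angle decomposition. A minimal sketch (function names are ours; arctan2 is used in place of tan⁻¹ to keep the correct quadrant):

```python
import numpy as np

def pi_direction(n1, n2):
    """Direction of the intersection line of two planes, Eq. (11)."""
    I = np.cross(n1, n2)
    return I / np.linalg.norm(I)

def yaw_pitch_deg(I):
    """Yaw and pitch angles of a direction vector, Eq. (12), in degrees.

    World frame as in Fig. 1: X faces north, Y faces upward, Z faces east.
    """
    l, m, n = I
    omega = np.degrees(np.arctan2(n, l))              # yaw: angle of the XOZ projection to X
    phi = np.degrees(np.arctan2(m, np.hypot(l, n)))   # pitch: elevation above the XOZ plane
    return omega, phi
```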

3. Our Method

3.1. Multicamera Intersection Scenario

As shown in Fig. 2, the three cameras C1, C2, and C3 observe the same target AB from different directions, and the projections on the image are A1B1, A2B2, and A3B3, respectively. Theoretically, the three planes C1A1B1, C2A2B2, and C3A3B3 should intersect in a straight line. However, due to the errors of the camera parameters and the axis, the direction vectors given by each set of two cameras are not completely consistent. Therefore, it is important to derive the optimal direction vector.

3.2. Optimization Conditions

This paper proposes two optimization methods for the axis measurement problem. These two methods are based on minimizing residuals:

  • 1. In a similar manner to the object-space residual method for camera calibration, the plane determined by the target axis extracted from the image is treated as the reference. First, we calculate the angle between the estimated target central axis and each plane consisting of the optical center and the central axis in the image. Second, we minimize the sum of the squares of all of these angles. In this paper, this angle is called the OAR. As shown in Fig. 3, the C2A2B2 plane is the plane composed of the optical center and the target axis in the image, A′B′ is the estimated target axis, B′D is the projection of A′B′ on the plane C2A2B2, and α is the OAR.

  • 2. In a similar manner to the image-space residual method for camera calibration, the target axis is projected onto the image plane according to the camera model, and the angle between this projection and the axis extracted from the image is taken as the angle residual. We then minimize the sum of the squares of all of these angles. In this paper, this term is called the IAR. As shown in Fig. 4, the C2A2B2 plane is the plane composed of the optical center and the axis of the camera, A′B′ is the target axis, B2A3 is the projection of A′B′ on the image plane, and α is the image angle residual.

Fig. 3

Schematic diagram of the OAR. The C2A2B2 plane is the plane composed of the optical center and the target axis in the image; AB is the target axis; A′B′ is the measured result of the target axis, and B′D is the projection of A′B′ on the plane C2A2B2. α is the OAR.

OE_59_2_024101_f003.png

Fig. 4

Schematic diagram of the IAR. The C2A2B2 plane is the plane composed of the optical center and the axis of the camera; AB is the target axis; A′B′ is the measured result of the target axis, B2A3 is the projection of A′B′ on the image plane, and α is the IAR.

OE_59_2_024101_f004.png

3.3. Object-Space Angle Residual

3.3.1. Definition of the optimization problem

Supposing that multiple cameras simultaneously observe a target, one can extract the target axis in the image. The plane equations of the camera’s optical center and the target axis in the image are expressed in the world coordinate system as follows:

Eq. (13)

$$n_i^T P_w + d_i = 0.$$

Supposing that the target axis direction is $I = (l, m, n)^T$, the angle between the central axis and the $i$’th plane is

Eq. (14)

$$\alpha_i = \sin^{-1}\!\left(\frac{n_i^T I}{\|n_i\|\,\|I\|}\right).$$

For the convenience of calculation, the linear direction and the plane parameters are normalized, i.e., 

Eq. (15)

$$\begin{cases} \|n_i\| = 1 \\ \|I\| = 1 \end{cases}$$

Then, αi can be expressed as follows:

Eq. (16)

$$\alpha_i = \sin^{-1}(n_i^T I).$$

From the geometric understanding of the PI, one can use this angle as the residual of the obtained central axis relative to each camera plane. In a similar manner to the spatial residual for the camera calibration, minimizing this residual can provide the optimal solution:

Eq. (17)

$$\min \sum_{i=1}^{N} \alpha_i^2,$$
where $N$ is the number of cameras. For a small angle, $\alpha_i$ can be approximated by $n_i^T I$; therefore, Eq. (17) can be rewritten as follows:

Eq. (18)

$$\min \sum_{i=1}^{N} (n_i^T I)^2,$$
where $n_i^T I$ is within the interval $[-1, 1]$.

3.3.2. Linear method

The Levenberg–Marquardt algorithm22 can be used to solve Eq. (17), but its computational efficiency is low. The following derivation provides a linear solution for the minimum of

Eq. (19)

$$y = \sum_{i=1}^{N} (n_i^T I)^2.$$

Since $y$ is a function of $(l, m, n)^T$, setting the partial derivatives to 0 gives:

Eq. (20)

$$\begin{cases} \partial y/\partial l = \sum_{i=1}^{N} n_{i1}\,(n_i^T I) = 0 \\ \partial y/\partial m = \sum_{i=1}^{N} n_{i2}\,(n_i^T I) = 0 \\ \partial y/\partial n = \sum_{i=1}^{N} n_{i3}\,(n_i^T I) = 0 \end{cases}$$

We can solve the equations by the singular value decomposition (SVD)27 of the coefficient matrix.

Theorem 1:

It is assumed that $n_i$ are the coefficient vectors of the spatial planes $n_i^T P_w + d_i = 0$. The necessary and sufficient condition for the matrix in Eq. (21) to have rank 2 is that at least two of the planes are not parallel and all of the planes are parallel to the same straight line.

The coefficient matrix of Eq. (20) can be expressed as the form of Eq. (21):

Eq. (21)

$$\begin{bmatrix} n_1 & n_2 & \cdots & n_N \end{bmatrix}\begin{bmatrix} n_1 & n_2 & \cdots & n_N \end{bmatrix}^T.$$

Proof of the theorem’s sufficiency:

Suppose two planes $p_1$ and $p_2$ are not parallel, so that the ratios $n_{11}/n_{21}$, $n_{12}/n_{22}$, and $n_{13}/n_{23}$ are not all equal. Then the matrix $[n_1\ n_2]$ has at least one nonzero second-order minor, i.e., its rank is equal to 2. One can then conclude that the rank of $[n_1\ n_2\ \cdots\ n_N]$ is larger than or equal to 2.

Since all of the planes are parallel to the same straight line, they can be made to pass through that line by changing $d$. The planes then form a pencil of planes. It is known from the properties of a pencil of planes that any other plane can be linearly represented by $p_1$ and $p_2$:

Eq. (22)

$$\omega\,(n_1^T P_w + d_1) + \mu\,(n_2^T P_w + d_2) = 0.$$

It can be seen that the rank of $[n_1\ n_2\ \cdots\ n_N]$ is 2. From $R(A) = R(A^T A)$,28 the rank of the matrix in Eq. (21) is 2. This completes the proof of sufficiency.

Next, the necessity of the theorem is proven.

It is known from $R(A) = R(A^T A)$ that the rank of $[n_1\ n_2\ \cdots\ n_N]$ is 2. This means that at least two planes are not parallel. Without loss of generality, $p_1$ and $p_2$ are chosen for the following analysis. All other planes can be represented linearly by $p_1$ and $p_2$, i.e.,

Eq. (23)

$$\omega\, n_1^T P_w + \mu\, n_2^T P_w = 0.$$

Since $d$ affects only the plane position and not the plane direction, all of the planes are parallel to the straight line represented by Eq. (24):

Eq. (24)

$$\begin{cases} n_1^T P_w = 0 \\ n_2^T P_w = 0 \end{cases}$$

This completes the proof of necessity.

According to Theorem 1, if there are only two cameras (with nonparallel planes), then the system of equations has a unique nonzero solution (up to scale), and the geometric meaning of the solution is consistent with the PI method. If there are more than two cameras, then unless the condition of Theorem 1 is satisfied, the system of equations does not have a strict nonzero solution, and only the least-squares solution $(l, m, n)^T$ can be obtained as the optimal solution.
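In practice, the stationary condition of Eq. (20) says that $I$ lies in the (numerical) null space of the matrix in Eq. (21), so the OARL reduces to a single SVD. A minimal sketch, assuming the unit plane normals have already been computed (e.g., with the plane_from_camera sketch above):

```python
import numpy as np

def solve_oarl(normals):
    """Linear OAR solution: minimize sum_i (n_i^T I)^2 subject to ||I|| = 1.

    normals : (N, 3) array of unit plane normals n_i, one row per camera.
    The stationary condition of Eq. (20) reads (sum_i n_i n_i^T) I = 0, so the
    optimal I is the right singular vector of [n_1 ... n_N]^T associated with the
    smallest singular value; for two nonparallel planes this is exactly the PI
    cross-product direction.
    """
    normals = np.asarray(normals, dtype=float)
    _, _, Vt = np.linalg.svd(normals)
    return Vt[-1]        # unit direction vector I = (l, m, n), defined up to sign
```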

3.3.3. Efficiency analysis

The linear method for minimizing the object angle residual is as follows.

  • 1. Solve each plane parameter with time complexity of O(n).

  • 2. Solve the coefficient matrix with time complexity of O(n).

  • 3. Solve the final result in a fixed time.

It can be seen from the above analysis that the method requires only constructing the coefficient matrix and solving the equations by least squares. Therefore, its time complexity is O(n).

3.4. Image-Space Angle Residual

3.4.1. Definition of the optimization problem

It is assumed that the final central axis direction is $I = (l, m, n)^T$ and that the axis passes through a point $P_0 = (x_0, y_0, z_0)^T$ in the world coordinate system. It is easy to see that the straight line also passes through the point $P_1 = (x_0 + l, y_0 + m, z_0 + n)^T$. In the world coordinate system, $P_0$ can be obtained from the intersection of a pair of corresponding image points on the central axis. The plane consisting of the straight line and the optical center of the $i$’th camera can be expressed as follows:

Eq. (25)

$$\begin{vmatrix} x_w & y_w & z_w & 1 \\ O_{xi} & O_{yi} & O_{zi} & 1 \\ x_0 & y_0 & z_0 & 1 \\ x_0 + l & y_0 + m & z_0 + n & 1 \end{vmatrix} = 0,$$
where $(O_{xi}, O_{yi}, O_{zi})$ is the optical center of the $i$’th camera. The plane equation after simplification is

Eq. (26)

$$n_{wi}^T P_w + d_i = 0,$$
where $n_{wi}$ is the normal vector of the plane and the subscript $w$ means that it is expressed in the world coordinate system. It can be seen from Eq. (25) that $n_{wi}$ can be represented linearly by $I$. Letting the relationship matrix be $M_i$, i.e.,

Eq. (27)

$$n_{wi}^T = I^T M_i.$$

It is then easy to prove that $M_i$ is a full-rank matrix. The transformation matrix from the world coordinate system to the camera coordinate system is $\begin{bmatrix} R_i & t_i \\ 0 & 1 \end{bmatrix}$, which is also a full-rank matrix. The plane is expressed in the camera coordinate system as

Eq. (28)

$$n_{ci}^T = n_{wi}^T \begin{bmatrix} R_i & t_i \\ 0 & 1 \end{bmatrix},$$
where $n_{ci}$ is the normal vector of the plane and the subscript $c$ means that it is expressed in the camera coordinate system. From the image plane equation $z = f$, the direction of the intersection line is $I_i = (n_{ci1}, n_{ci2}, 0)$. Then, the angle between the axis of the target in the image plane and the intersection line is

Eq. (29)

$$\alpha_i = \cos^{-1}\!\left(\frac{I_i\,[a_i\ \ b_i]^T}{\|I_i\|\,\|[a_i\ \ b_i]\|}\right),$$
where $a_i$ and $b_i$ are coefficients of the linear equation $[a_i\ b_i\ c_i][x\ y\ 1]^T = 0$ in the image plane.

Therefore, the optimized objective function is

Eq. (30)

$$\min \sum_{i=1}^{N} \alpha_i^2,$$
where N is the number of cameras.

3.4.2. Linear method

A method of linearly solving for the minimum value is given by derivation below.

The original expression (30) is complicated, especially its partial derivatives. The numerator and the denominator both contain $(l, m, n)^T$, so it cannot be solved linearly. Therefore, the expression is transformed.

If two 2D lines $I_1 = (a_1, b_1, c_1)^T$ and $I_2 = (a_2, b_2, c_2)^T$ are nearly parallel, the angle $\alpha_i = \tan^{-1}(a_1/b_1) - \tan^{-1}(a_2/b_2)$ is small, and we have $|\alpha_i| \approx |\tan(\alpha_i)| = \left|\frac{a_1/b_1 - a_2/b_2}{1 + a_1 a_2/(b_1 b_2)}\right| = \left|\frac{a_1 b_2 - a_2 b_1}{a_1 a_2 + b_1 b_2}\right|$. For the sake of discussion, we divide $a_1, b_1$ and $a_2, b_2$ by the larger element of each pair, respectively, so that the maximum value is 1. Then $1 \le |a_1 a_2 + b_1 b_2| \approx a_1^2 + b_1^2 \le 2$, so $|\alpha_i|$ is positively correlated with $|a_1 b_2 - a_2 b_1|$ ($|\alpha|$ denotes the absolute value of $\alpha$). Then, Eq. (31) can be used instead of Eq. (30) as the objective function of the optimization:

Eq. (31)

$$\min\!\left[\sum_{i=1}^{N} (a_i n_{ci2} - b_i n_{ci1})^2\right].$$

From the previous derivation, $n_{ci1}$ and $n_{ci2}$ are linear functions of $I = (l, m, n)^T$. The objective can therefore be written as $\sum_{i=1}^{N} (m_i^T I)^2$, where $m_i$ collects the coefficients of $I$ in Eq. (31); $m_i$ can also be regarded as the normal vector of a plane.

This objective can be regarded as a function $y$ of $(l, m, n)^T$. Clearly, it is a basic elementary function, so it is continuous and differentiable over its domain, and the minimum value must exist. At the point where the minimum is attained, the partial derivatives with respect to $(l, m, n)^T$ exist and are equal to 0. Taking the partial derivatives separately:

Eq. (32)

$$\begin{cases} \partial y/\partial l = \sum_{i=1}^{N} m_{i1}\,(m_i^T I) = 0 \\ \partial y/\partial m = \sum_{i=1}^{N} m_{i2}\,(m_i^T I) = 0 \\ \partial y/\partial n = \sum_{i=1}^{N} m_{i3}\,(m_i^T I) = 0 \end{cases}$$

We can solve the equations by the SVD27 of the coefficient matrix.

It follows from Theorem 1 that as long as two of the planes represented by $m_i$ are nonparallel and all of the planes are parallel to the same line, the rank of the coefficient matrix is 2, which indicates that the system of equations has a unique nonzero solution (up to scale).

Since all of the planes are parallel to the straight line through $(x_0, y_0, z_0)$ and $(x_0 + l, y_0 + m, z_0 + n)$, as long as the optical centers of at least two cameras are not coplanar with the target line, the rank of the matrix in Eq. (32) is 2, and the direction solved for from the image angle residual attains the global minimum.
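The sketch below illustrates one way to implement the IARL. Instead of forming the matrices $M_i$ of Eq. (27) symbolically, it exploits the fact that the plane through the optical center $C_i$, $P_0$, and $P_0 + I$ has the world-frame normal $(P_0 - C_i) \times I$, which is linear in $I$; because that plane passes through the optical center, its camera-frame normal is simply $R_i$ applied to it. The residual $a_i n_{ci2} - b_i n_{ci1}$ of Eq. (31) is therefore $m_i^T I$, and $m_i$ is recovered by evaluating the linear map on the basis vectors. This reformulation and the function interface are our own, and the sketch assumes the axis-line coefficients $(a_i, b_i)$ are expressed in the same (metric) image coordinates as $n_{ci}$.

```python
import numpy as np

def solve_iarl(lines_img, optical_centers, rotations, P0):
    """Linear IAR solution, Eqs. (25)-(32), as a sketch under the stated assumptions.

    lines_img       : per-camera (a_i, b_i, c_i) of the extracted axis line.
    optical_centers : per-camera optical center C_i in the world frame.
    rotations       : per-camera world-to-camera rotation matrix R_i.
    P0              : a world point on the axis, e.g., triangulated from a pair of
                      corresponding image points on the central axis.
    Returns the unit axis direction I = (l, m, n), defined up to sign.
    """
    rows = []
    for (a, b, _), C, R in zip(lines_img, optical_centers, rotations):
        m_i = []
        for e in np.eye(3):
            # Camera-frame normal of the plane through C, P0, and P0 + I,
            # evaluated at I = e (the map is linear in I).
            n_c = R @ np.cross(P0 - C, e)
            # Image residual of Eq. (31) evaluated on the basis vector e.
            m_i.append(a * n_c[1] - b * n_c[0])
        rows.append(m_i)
    # Eq. (32): (sum_i m_i m_i^T) I = 0  ->  smallest right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1]
```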

3.4.3. Efficiency analysis

The linear method for minimizing the image angle residual is as follows.

  • 1. Solve the 3D coordinate of the corresponding image points with time complexity of O(n).

  • 2. Solve the straight-line parameters projected onto the image plane with time complexity of O(n).

  • 3. Solve the coefficient matrix with time complexity of O(n).

  • 4. Find the final result in a fixed time.

It can be seen from the above analysis that the image angle residual method can be solved for only by calculating the 3D point coordinate, the projection line parameter, the solution coefficient matrix, and the least squares solution, so the time complexity is O(n), which is the same as that of the OARL method.
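To illustrate the efficiency argument, the following sketch times the closed-form SVD solve of the OARL objective against a generic iterative minimizer applied to the same cost. It assumes SciPy is available and uses randomly generated unit normals rather than a real camera configuration, so the absolute numbers are only indicative.

```python
import time
import numpy as np
from scipy.optimize import minimize

def time_linear_vs_iterative(num_cameras=50, trials=100, seed=0):
    """Average run time (seconds) of the linear SVD solve vs. an iterative minimizer."""
    rng = np.random.default_rng(seed)
    t_lin = t_iter = 0.0
    for _ in range(trials):
        # Random unit plane normals standing in for the n_i of Eq. (15).
        normals = rng.standard_normal((num_cameras, 3))
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)

        # Linear solution: one SVD (Eqs. 20-21).
        t0 = time.perf_counter()
        _, _, Vt = np.linalg.svd(normals)
        t_lin += time.perf_counter() - t0

        # Iterative solution: minimize sum (n_i^T I)^2 with I renormalized in the cost.
        def cost(v):
            return float(np.sum((normals @ (v / np.linalg.norm(v))) ** 2))
        t0 = time.perf_counter()
        minimize(cost, np.array([1.0, 0.0, 0.0]), method="BFGS")
        t_iter += time.perf_counter() - t0
    return t_lin / trials, t_iter / trials
```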

4. Experiments

Because the ground truth was not easy to obtain in the real experiment, we used the real data only to verify the validity of the algorithm, and we used simulations to verify its accuracy.

4.1. Simulation

4.1.1. Simulation environment

The simulation platform was Windows 7, and the processor was an Intel(R) Core(TM) i7-6820HQ at 2.7 GHz.

Simulation conditions: The equivalent focal length was (2181.8, 2181.8) and the principal point was (1023.5, 1023.5). The cameras were arranged in a circle around the target, and the radius of the circle was 4.5 m. The origin of the world coordinate system was the center of the circle. The two endpoints of the target axis were (0.5, 0.5, 0.5) and (1.5, 1.5, 1.5). The top view of the simulation scenario is shown in Fig. 5.

Fig. 5

The top view of the simulation scenario.

OE_59_2_024101_f005.png
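As a concrete illustration of this setup, the sketch below places the cameras evenly on the 4.5-m circle and builds a world-to-camera pose for each one with its optical axis aimed at the target. The look-at construction, the camera height (here placed in the XOZ plane), and all names are our own illustrative choices, not the original simulation code.

```python
import numpy as np

# Intrinsics used in the simulation (Sec. 4.1.1); listed for completeness.
FX = FY = 2181.8
CX = CY = 1023.5

def ring_camera(index, num_cameras, radius=4.5, target=np.array([1.0, 1.0, 1.0])):
    """World-to-camera pose (R, t) of one camera on the simulated ring.

    The cameras sit on a circle of the given radius centered at the world origin,
    with the optical axis pointing at the target (by default the axis midpoint
    (1, 1, 1) of the endpoints (0.5, 0.5, 0.5) and (1.5, 1.5, 1.5)).
    """
    theta = 2.0 * np.pi * index / num_cameras
    C = np.array([radius * np.cos(theta), 0.0, radius * np.sin(theta)])  # optical center
    z = target - C
    z /= np.linalg.norm(z)                     # camera Z axis: toward the target
    x = np.cross(np.array([0.0, 1.0, 0.0]), z)
    x /= np.linalg.norm(x)                     # camera X axis: horizontal
    y = np.cross(z, x)                         # camera Y axis: completes the right-handed frame
    R = np.vstack([x, y, z])                   # world-to-camera rotation
    t = -R @ C                                 # so that P_c = R P_w + t
    return R, t
```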

4.1.2. Simulation description

  • 1. The corresponding image points used by the image-space residual method were obtained by projecting the endpoint (0.5, 0.5, 0.5) into each image and adding the same image error. The corresponding image points were therefore also affected by the error, ensuring that the image-space residual method was solved under the same error conditions.

  • 2. When there were only two cameras, the angle between the two cameras and the target center was 120 deg.

  • 3. The PI, OARI, OARL, IARI, and IARL were used for each simulation. We used the angle between the result and the true value to evaluate the accuracy. Each simulation was run 1000 times and the root mean square (RMS) of the angular error was obtained. The units were degrees except for simulation condition 7 (see below).

Eq. (33)

$$E = \cos^{-1}\!\left(\frac{I \cdot I_t}{\|I\|\,\|I_t\|}\right).$$
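A direct transcription of this error metric (the clipping guard and the note on sign handling are our additions):

```python
import numpy as np

def angular_error_deg(I_est, I_true):
    """Angular error E of Eq. (33) between estimated and true axis directions, in degrees."""
    c = np.dot(I_est, I_true) / (np.linalg.norm(I_est) * np.linalg.norm(I_true))
    # If the sign of the estimated axis is arbitrary (SVD solutions are defined up
    # to sign), one may compare up to sign by using abs(c) instead.
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```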

Simulations were conducted for the following conditions, separately:

  • 1. Number of cameras.

  • 2. Axis extraction error.

  • 3. Error of the camera angle.

  • 4. Error of the camera optical center.

  • 5. Errors of all camera external parameters.

  • 6. All camera external parameters and extraction errors.

  • 7. Solving for all external parameters and extraction errors and comparing the running time.

4.1.3. Simulation results

The different simulations and the related results are given below.

Simulation 1: A Gaussian error with a mean of 0 was added to the optical center of the camera, with a 10-mm RMS value. A Gaussian error with a mean of 0 was added to the camera angle, with a 0.5-deg RMS value. A Gaussian error with a mean of 0 was added to the x and y directions of the two endpoints of the axis, with a 1-pixel RMS value. To verify the solution of this method in the case of only two cameras, the number of cameras ranged from two to nine. The cameras were arranged on a circle with a radius of 4.5 m, and the center of the circle was the origin of the world coordinate system. The cameras were spaced at equal angles according to their number. The corresponding image points of the IAR were obtained through the intersection of the head points with the error added. The simulation results for the different numbers of cameras are as follows.

  • 1. If there were only two cameras, the geometric meaning of the OAR method was the same as that of the PI method, so the results were the same. The IAR used the corresponding image points to minimize the image residual, so its geometric meaning was different and its accuracy was higher.

  • 2. As the number of cameras increased, the accuracy of all of the methods increased gradually, indicating that these methods effectively utilized the constraints of the multicamera system.

  • 3. It is observable from Table 1 that the accuracy of the OAR and PI was the same, and the IAR achieved the highest accuracy.

  • 4. The accuracy of the linear solution and iterative solution of the OAR and the IAR was the same, which shows that the linear method accuracy was equivalent to the iterative method.

Table 1

Relationship between angle error and number of cameras.

Number | PI | OARI | OARL | IARI | IARL
2 | 1.16 | 1.16 |  | 1.047 | 
3 | 0.61 | 0.61 | 0.61 | 0.563 | 0.563
4 | 0.594 | 0.594 | 0.594 | 0.49 | 0.49
5 | 0.473 | 0.473 | 0.473 | 0.381 | 0.38
6 | 0.435 | 0.435 | 0.435 | 0.355 | 0.355
7 | 0.402 | 0.402 | 0.402 | 0.318 | 0.318
8 | 0.371 | 0.371 | 0.371 | 0.293 | 0.293
9 | 0.351 | 0.351 | 0.351 | 0.273 | 0.273
Note: Bold indicates the best results.

In the subsequent simulation results, the linear and iterative results were essentially the same. For the convenience of comparison, only the OARL and IARL are discussed for the following experiment (except the running time comparison in simulation 7).

Simulation 2: A Gaussian error with a mean of 0 was added to the X and Y directions at the head and the tail of the axis with 0 to 4 pixels of RMS. We fixed the number of cameras at N=5. The simulation results are shown in Fig. 6.

Fig. 6

Relationship between the angular error and the axis extraction error. It is clear that the accuracy of the three methods was similar. As the error increased, the errors of the three methods increased.

OE_59_2_024101_f006.png

Simulation 3: A Gaussian error with a mean of 0 was added to the three angles of the external parameter with 0.1 to 1 deg of RMS. The step was 0.05 deg, and the number of cameras was 5. The simulation results are shown in Fig. 7.

Fig. 7

Relationship between the angular error and the camera angle error (in degrees). It is apparent that the accuracy of the OAR and the PI was the same, and the IAR achieved higher accuracy.

OE_59_2_024101_f007.png

Simulation 4: A Gaussian error with a mean of 0 was added to the three values of the camera’s optical center, with 1 to 50 mm of RMS. The step size was 5 mm and the number of cameras was 5. The simulation results are shown in Fig. 8.

Fig. 8

Relationship between the angular error and the optical center error (in millimeters). It is evident that, for the case of only optical center error, the error of the OAR and the PI was 0 since these methods do not use the optical center. However, the IAR method produced a certain error.

OE_59_2_024101_f008.png

Simulation 5: A Gaussian error with a mean of 0 was added to the optical center of the camera, with 1 to 50 mm of RMS, and the step size was 1 mm. A Gaussian error with a mean of 0 was added to the camera angle, with 0.05 to 2.5 deg of RMS, and the number of cameras was 5. The simulation results are shown in Fig. 9.

Fig. 9

Relationship between the angular error and the camera external parameter error. For the case of only an external parameter error, the accuracy of the OAR and the PI was the same, and the IAR had a higher accuracy.

OE_59_2_024101_f009.png

Simulation 6: A Gaussian error with a mean of 0 was added to the optical center of the camera, with 1 to 50 mm of RMS, and the step size was 1 mm. A Gaussian error with a mean of 0 was added to the camera angle, with 0.05 to 2.5 deg of RMS. A Gaussian error with a mean of 0 was added to the x and y directions of the two endpoints of the axis, with 0.1 to 5 pixels of RMS, and the number of cameras was 5. The simulation results are shown in Fig. 10.

Fig. 10

Relationship between the angular error and the camera external parameters and the axis extraction error. The accuracy of the OAR and the PI was the same, and the IAR achieved a higher accuracy.

OE_59_2_024101_f010.png

Simulation 7: The same error was added as in case 1. The number of cameras was increased from 3 to 100. The simulation was run 10,000 times. The results are shown in Fig. 11.

Fig. 11

(a) and (b) Relationship between the running time (in seconds) and the number of cameras. (1) The overall running time of the iterative methods was higher than that of the other methods, especially the IARI, and its unstable run time was mainly caused by the inconsistent number of iterations for different random errors. (2) With the increase in the number of cameras, the run times of the PI, OARL, and IARL were similar, because all of these methods are linear methods.

OE_59_2_024101_f011.png

4.1.4. Simulation conclusion

We can conclude from the above simulations:

  • 1. The accuracy of the IAR was higher than that of the OAR and PI.

  • 2. The linear methods presented in this paper achieved accuracy consistent with that of the iterative methods, and their computational burden was much lower.

4.2. Experiment Verification

Next, we will demonstrate the PI’s advantage for the measurement of a long axisymmetric target with physical experiment 1 and demonstrate the correctness of the method proposed in this paper with physical experiment 2.

4.2.1. Physical experiment 1

Physical environment

To verify the accuracy of the central axis intersection algorithm in the axis attitude measurement of a symmetric target, an experiment was designed and four commonly used methods were tested, namely

  • OPNP + OI: single-camera pose calculation method (OPNP17 + OI18).

  • GOIAMCS: generalized orthogonal iterative algorithm of multiple camera systems.19

  • DMPI + AO: dual-camera multiple points intersection21 + absolute orientation.20

  • PI: plane intersection.1

The test scenario is shown in Fig. 12. The target was a cylindrical barrel. The points on the barrel were used as marker points on the target for pose calculation. The points on the wall and on the pillar were calibration control points. The optical axes of the two cameras both faced the barrel, with the center of the barrel as the intersection point. The angle between the optical axes was 90 deg, and the optical centers were about 1.5 m from the barrel. The target filled the width of the field of view as much as possible. A column of points was arranged in the direction of each camera and at 45 deg on both sides. The main purpose of this was to simultaneously improve the coverage area of the marker points and ensure the extraction accuracy of the points as much as possible. At the same time, some points were arranged in the common area of the two cameras for the third method (DMPI + AO). Some cooperative signs were also arranged on the walls around the cameras to convert the control points and marker points to the same coordinate system when the total station was set up at two different positions.

Fig. 12

(a), (b) Physical scene graphs are two images captured by two cameras from different positions, and the barrel is the target to be measured. The points on the barrel served as marker points on the target for pose calculation. The points on the wall and on the pillar are calibration control points.

OE_59_2_024101_f012.png

The auxiliary measuring equipment consisted of a Leica TS30 and a Leica TM5100A. The Leica TS30 was used to obtain the coordinates of the control points for calibration and the coordinates of the target’s points for posture calculation. Its accuracy was 0.6 mm. The Leica TM5100A was used to measure the change in the angle of the barrel. Its accuracy was 0.5 arc sec, and the accuracy of the vertical lifting platform to which the Leica TM5100A was attached was 2 arc sec.

Experimental procedure

The barrel was attached to a platform that could precisely control the direction of rotation, and a plane mirror was fixed on the barrel as the measuring reference of the Leica TM5100A.

The measurement steps were as follows:

  • 1. The Leica TS30 was used to measure the markers on the target and the fixed point, and the fixed point was used to calibrate the camera.

  • 2. The common points on the wall were used to unify the mark points on the target in the same coordinate system.

  • 3. Four methods were used to measure the initial pose of the target at the initial position.

  • 4. The platform was used to control the barrel to move in the same direction five times, with 1 deg of movement for each motion. The image was captured by two cameras and measured by the Leica TM5100A at the same time. The measured result of the Leica TM5100A was used as the true value of the actual motion. The results are presented in Table 2.

  • 5. The four methods were used to solve for the target pose. The rotation matrix obtained by each of the first three methods was decomposed to obtain its Y-axis direction, and the angle relative to the Y-axis direction obtained by the same method in step 3 was calculated. For the PI method, the angle relative to the axis direction obtained by the PI method in step 3 was calculated directly. This angular variation was used as the measure of the accuracy of the results, as presented in Table 2; the unit is degrees.

Table 2

True values and algorithm results (unit: degrees).

True value | OPNP + OI | GOIAMCS | DMPI + AO | PI
1.0135 | 1.0573 | 1.0164 | 1.0179 | 1.0146
2.0432 | 2.0517 | 2.0361 | 2.0317 | 2.0298
3.0672 | 3.0687 | 3.05351 | 3.0466 | 3.0555
4.0954 | 4.0718 | 4.07406 | 4.0657 | 4.0846
5.1154 | 5.0860 | 5.09529 | 5.0820 | 5.0986
RMS of error | 0.026 | 0.015 | 0.023 | 0.012
Note: Bold indicates the best results.

The RMS of error is $\sqrt{\frac{1}{5}\sum_{i=1}^{5}(I_c - I_t)^2}$, where $I_c$ is the value obtained by the algorithm and $I_t$ is the true value obtained from the Leica TM5100A measurement. It is evident from Table 2 that the RMS of the attitude error measured in this experimental scenario, from low to high, was PI, GOIAMCS, DMPI + AO, and OPNP + OI. Therefore, it can be concluded that the PI method has advantages in the axis attitude measurement of a long symmetric target.
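The RMS values in the last row of Table 2 can be reproduced directly from the tabulated angles; the short check below (our own verification script, not part of the original experiment code) returns 0.026, 0.015, 0.023, and 0.012 for the four methods.

```python
import numpy as np

# Columns of Table 2: true value and the four methods (degrees).
true_val = np.array([1.0135, 2.0432, 3.0672, 4.0954, 5.1154])
results = {
    "OPNP + OI": [1.0573, 2.0517, 3.0687, 4.0718, 5.0860],
    "GOIAMCS":   [1.0164, 2.0361, 3.05351, 4.07406, 5.09529],
    "DMPI + AO": [1.0179, 2.0317, 3.0466, 4.0657, 5.0820],
    "PI":        [1.0146, 2.0298, 3.0555, 4.0846, 5.0986],
}

for name, vals in results.items():
    rms = np.sqrt(np.mean((np.array(vals) - true_val) ** 2))
    print(f"{name}: RMS of error = {rms:.3f} deg")
```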

4.2.2. Physical experiment 2

To further verify the correctness of the proposed algorithms, a second physical experiment was conducted.

Physical environment

The physical environment for physical experiment 2 is shown in Fig. 13. The target was placed on the board in the center, and the diagonal markers on the side were the calibration control points. The camera's image size was 4288 × 2848 pixels, and the equivalent focal length was 5200.

Fig. 13

Physical scene graph 2. The target was placed on the board in the center and the diagonal markers on the side were the calibration control points.

OE_59_2_024101_f013.png

Experimental procedure

The experiment was conducted as follows:

  • 1. The total station was used to obtain the coordinates of the calibration control points in the total station coordinate system. The total station coordinate system was used as the world coordinate system, with the Y axis pointing vertically upward and the X and Z axes horizontal, forming a right-handed coordinate system.

  • 2. The same camera was used to take photos from nine different locations. The calibration control points were extracted from the images, and their 3D coordinates were used to calibrate all of the camera stations in a unified world coordinate system.

  • 3. The target central axis was extracted from the image.

  • 4. The PI, OARI, OARL, IARI, and IARL were used separately to obtain the results, which are shown in Table 3. It can be seen from the table that the results of the five methods were essentially the same, which verified the correctness of the methods.

Table 3

Experimental results (unit: degrees).

Parameter | PI | OARI | OARL | IARI | IARL
Yaw | −0.97 | −0.95 | −0.97 | −1.02 | −1.02
Pitch | −89.31 | −89.31 | −89.31 | −89.09 | −89.09

5. Conclusion

This paper proposed two methods based on minimizing the residuals of the object-space angle and the image-space angle. These two methods provide the geometric explanation that is lacking in the PI method. The simulations demonstrated that, in the case of only two cameras, the results of the OAR method were consistent with those of the PI method, and the IAR method achieved higher accuracy than the PI and OAR methods by using corresponding image point information. It is not difficult to extract a pair of corresponding image points on the target axis in an actual task, so the method has important practical significance for the case of only two cameras. For a multiple camera system, the accuracies of the OAR and the PI were the same, and the accuracy of the IAR was higher than the accuracies of the OAR and the PI. Furthermore, the linear method was compared with the iterative method, and we showed that the IARL was generally faster than the IARI. The IARL has great significance for actual measurement tasks.

However, in the same manner as the PI, all of the proposed methods need the target's central axis for measurement, so they are suitable only for the measurement of an axisymmetric target.

Acknowledgments

The research was supported by the National Natural Science Foundation of China (Grant Nos. 11727804, 11872070, and 11802321).

References

1. 

Q. F. Yu, X. Y. Sun and G. J. Chen, “A new method of measure the pitching and yaw of the axes symmetry object through the optical image,” J. Natl. Univ. Def. Technol., 22 (2), 15 –19 (2000). Google Scholar

2. 

Y. Zhang et al., “Attitude measurement method research for missile launch,” China Opt., 8 (6), 997 –1003 (2015). https://doi.org/10.3788/CO.20150806.0997 Google Scholar

3. 

K. Luo et al., “Measuring technology on elevation angle and yawing angle of space target based on optical measurement method,” J. Changchun Univ. Sci. Technol., (2007). Google Scholar

4. 

C. Wang, Study on Methods to Improve the Attitude Measurement Precision of the Rotary Object, (2013). Google Scholar

5. 

S. W. Ding, Research on Axis Extraction Methods for Target’s Attitude Measurement, (2013). Google Scholar

6. 

L. Liu and Z. Zhao, “A new approach for measurement of pitch, roll and yaw angles based on a circular feature,” Trans. Inst. Meas. Control, 35 (3), 384 –397 (2013). https://doi.org/10.1177/0142331212451991 TICODG 0142-3312 Google Scholar

7. 

R. Zhao et al., “Pose estimation based on the constraints of inner angles and areas of triangles,” Proc. SPIE, 7384 738411 (2009). https://doi.org/10.1117/12.835128 PSISDG 0277-786X Google Scholar

8. 

C. Liu and W. Hu, “Relative pose estimation for cylinder-shaped space-crafts using single image,” IEEE Trans. Aerosp. Electron. Syst., 50 (4), 3036 –3056 (2014). https://doi.org/10.1109/TAES.2014.120757 IEARAX 0018-9251 Google Scholar

9. 

J. Peng, W. Xu and H. Yuan, “An efficient pose measurement method of a space non-cooperative target based on stereo vision,” IEEE Access, 5 22344 –22362 (2017). https://doi.org/10.1109/ACCESS.2017.2759798 Google Scholar

10. 

J. Peng et al., “Pose measurement and motion estimation of space non-cooperative targets based on laser radar and stereo-vision fusion,” IEEE Sens. J., 19 (8), 3008 –3019 (2018). https://doi.org/10.1109/JSEN.2018.2889469 ISJEAZ 1530-437X Google Scholar

11. 

M. Becke and T. Schlegl, “Least squares pose estimation of cylinder axes from multiple views using contour line features,” in IECON 2015-Conf. IEEE Ind. Electron. Soc., 001855 –001861 (2015). https://doi.org/10.1109/IECON.2015.7392371 Google Scholar

12. 

J. B. Huang, Z. Chen and T. L. Chia, Pose Determination of a Cylinder Using Reprojection Transformation, Elsevier Science Inc. (1996). Google Scholar

13. 

C. Doignon and M. De Mathelin, “A degenerate conic-based method for a direct fitting and 3-D pose of cylinders with a single perspective view,” in IEEE Int. Conf. Rob. Autom., 4220 –4225 (2007). https://doi.org/10.1109/ROBOT.2007.364128 Google Scholar

14. 

A. Y. C. Shiu and H. Zhuang, “Pose determination of cylinders, cones, and spheres from perspective projections,” Proc. SPIE, 1708 771 –782 (1992). https://doi.org/10.1117/12.58624 PSISDG 0277-786X Google Scholar

15. 

R. Hanek et al., “Yet another method for pose estimation: A probabilistic approach using points, lines, and cylinders,” in IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit., 550 (1999). https://doi.org/10.1109/CVPR.1999.784961 Google Scholar

16. 

Z. Zhao and G. Wen, “Model-based estimation of axisymmetric target’s pose and speed using line-scan camera,” in Int. Conf. Syst. Sci. Eng. Des. and Manuf. Inf., 31 –34 (2011). Google Scholar

17. 

Y. Zheng et al., “Revisiting the PnP problem: a fast, general and optimal solution,” in ICCV 2013, 2344 –2351 (2013). Google Scholar

18. 

C.-P. Lu, G. D. Hager and E. Mjolsness, “Fast and globally convergent pose estimation from video images,” IEEE Trans. Pattern Anal. Mach. Intell., 22 (6), 610 –622 (2000). https://doi.org/10.1109/34.862199 ITPIDJ 0162-8828 Google Scholar

19. 

Y. X. Xu, Y. L. Jiang and C. Fang, “Generalized orthogonal iterative algorithm for pose estimation of multiple camera systems,” Acta Opt. Sin., 29 (1), 72 –77 (2009). https://doi.org/10.3788/AOS GUXUDC 0253-2239 Google Scholar

20. 

B. K. P. Horn, H. M. Hildren and S. Negahdaripour, “Closed-form solution of absolute orientation using orthonormal matrices,” Int. J. Opt. Soc. Am. A, 5 1127 –1135 (1988). https://doi.org/10.1364/JOSAA.5.001127 Google Scholar

21. 

Y. Q. Zhang et al., “Pose optimization based on integral of the distance between line segments,” Sci. China Technol. Sci., 59 (1), 135 –148 (2016). https://doi.org/10.1007/s11431-015-5958-1 Google Scholar

22. 

J. Heller, M. Havlena and T. Pajdla, “Globally optimal hand-eye calibration using branch-and-bound,” IEEE Trans. Pattern Anal. Mach. Intell., 38 (5), 1027 –1033 (2015). https://doi.org/10.1109/TPAMI.2015.2469299 ITPIDJ 0162-8828 Google Scholar

23. 

Z. Q. Zhang, L. Zhang and G. Z. Yang, “A computationally efficient method for hand-eye calibration,” Int. J. Comput. Assist. Radiol. Surg., 12 (10), 1775 –1787 (2017). https://doi.org/10.1007/s11548-017-1646-x Google Scholar

24. 

J. J. Lee and M. H. Jeong, “Stereo camera head-eye calibration based on minimum variance approach using surface normal vectors,” Sensors, 18 (11), 3706 (2018). https://doi.org/10.3390/s18113706 SNSRES 0746-9462 Google Scholar

25. 

Y. Zhang, Z. C. Qiu and X. M. Zhang, “Calibration method for hand-eye system with rotation and translation couplings,” Appl. Opt., 58 (20), 5375 –5387 (2019). https://doi.org/10.1364/AO.58.005375 APOPAI 0003-6935 Google Scholar

26. 

L. M. Zhou et al., “3D rigid pose tracking based on new distance function of line segments,” Proc. SPIE, 11179 111794L (2019). https://doi.org/10.1117/12.2540195 PSISDG 0277-786X Google Scholar

27. 

G. H. Golub, “Singular value decomposition and least squares solutions,” Numer. Math., 14 (5), 403 –420 (1970). Google Scholar

28. 

Q. Wang et al., The Sixth Edition of Linear Algebra, Department of Mathematics of Tongji University (2014). Google Scholar

Biography

Lijun Zhong received his BE degree in computer science and technology from Central South University, Changsha, China, in 2008 and his ME degree in software engineering from Central South University, Changsha, China, in 2008. He is currently pursuing his PhD with the College of Aerospace Science and Engineering, National University of Defense Technology, Changsha, China. His research interests include image processing, computer vision, deep learning, and photogrammetry.

Zhang Li is an assistant professor at the College of Aerospace Science and Engineering, National University of Defense Technology, China. He received his PhD in biomedical engineering from the Delft University of Technology in 2015. He authored more than 20 papers in high-ranking journals and conferences (e.g., TMI and TBME). His research interests include computer vision, particularly image registration. He is also interested in the applications of deep learning.

Xiaohu Zhang received his PhD in aerospace science and technology from the National University of Defense Technology, Changsha, China, in 2006. He is a professor with the School of Aeronautics and Astronautics, Sun Yat-Sen University. His research interests include image processing, computer vision, space situational awareness, and photogrammetry.

Yang Shang received his PhD in aerospace science and technology from the National University of Defense Technology, Changsha, China, in 2006. He is a professor with the College of Aerospace Science and Engineering, National University of Defense Technology, Changsha, China. His research interests include image processing, computer vision, vision navigation, and photogrammetry.

Qifeng Yu received his PhD in precision optical measurement from the University of Bremen, Germany, in 1996. He is a professor with the College of Aerospace Science and Engineering, National University of Defense Technology, Changsha, China. He is also an academician of the Chinese Academy of Sciences. His research interests include image processing, computer vision, vision navigation, and photogrammetry.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Lijun Zhong, Zhang Li, Xiaohu Zhang, Yang Shang, and Qifeng Yu "High accuracy linear method for axis measurement," Optical Engineering 59(2), 024101 (8 February 2020). https://doi.org/10.1117/1.OE.59.2.024101
Received: 4 August 2019; Accepted: 13 January 2020; Published: 8 February 2020
KEYWORDS: Cameras, Imaging systems, Optical engineering, Calibration, Device simulation, Computer simulations, Feature extraction
