Self-calibration is a fundamental technology for estimating the relative posture of cameras used for environment recognition in unmanned systems. We focused on the decrease in recognition accuracy caused by platform vibration and conducted this research to achieve on-line self-calibration using feature-point registration and robust estimation of the fundamental matrix. Three key factors need to be improved. First, feature mismatching degrades the estimation accuracy of the relative posture. Second, conventional estimation methods cannot satisfy both estimation speed and calibration accuracy at the same time. Third, some intrinsic system noises also contribute greatly to the deviation of the estimation results. To improve calibration accuracy, estimation speed, and system robustness for practical implementation, we discuss and analyze algorithms for on-line self-calibration of a stereo camera system. Based on epipolar geometry and 3D image parallax, two geometric constraints are proposed that confine the search for corresponding feature points to a small range, improving both matching accuracy and search speed. Then, two conventional estimation algorithms are analyzed and evaluated for estimation accuracy and robustness. Third, a rigorous posture calculation method is proposed that accounts for the relative posture deviation of each separated part of the stereo camera system. Validation experiments were performed with the stereo camera mounted on a pan-tilt unit for accurate rotation control, and the evaluation shows that our proposed method is a fast, highly accurate, and robust on-line self-calibration algorithm.
Thus, as the main contribution, we propose methods that solve on-line self-calibration quickly and accurately, and we envision the possibility of practical implementation on unmanned systems as well as other environment recognition systems.
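The epipolar search-range idea above can be illustrated with a small sketch: given a fundamental matrix F, the match for a point x in one image must lie on the epipolar line l' = Fx in the other image, so candidate matches can be restricted to a narrow band around that line. All names below are our own illustration, not the paper's implementation.

```python
import numpy as np

def epipolar_search_band(F, x, width):
    """Epipolar line l' = F x (homogeneous [a, b, c], a*u + b*v + c = 0)
    plus a predicate accepting candidate matches within `width` pixels
    of that line, shrinking the correspondence search range."""
    l = F @ np.array([x[0], x[1], 1.0])
    norm = np.hypot(l[0], l[1])          # for the point-to-line distance
    def near(p):
        return abs(l[0] * p[0] + l[1] * p[1] + l[2]) / norm <= width
    return l, near
```

With a pure sideways translation, F is the skew matrix of the baseline and the band reduces to a horizontal strip, which is the classical rectified-stereo search range.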
Performing efficient view frustum culling is a fundamental problem in computer graphics. In general, an octree is used
for view frustum culling. The culling checks the intersection of each octree node (cube) against the planes of the view
frustum. However, this involves many calculations. We propose a method for quickly detecting the intersection of a plane
and a cube in an octree structure. When we check which children of an octree node intersect a plane, we compare the
coordinates of the node's corners with the plane. Using the octree, we calculate the vertices of a child node from
the vertices of its parent node. To find points within a convex region, a visibility test is performed by an AND operation
over the results for three or more planes. In experiments, we tested the problem of searching for the points visible to a
camera. Our method was twice as fast as the conventional method, which detects visible octree nodes by using the
inner product of the plane and each corner of the node.
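One common way to classify a cube against a plane by comparing corner coordinates, in the spirit of the test described above though not necessarily the authors' exact method, is the p-vertex/n-vertex trick: only the two corners extremal along the plane normal need to be tested instead of all eight.

```python
def plane_cube_relation(normal, d, cube_min, cube_max):
    """Classify an axis-aligned cube against the plane n.x + d = 0.

    Returns +1 (cube fully on the positive side), -1 (fully on the
    negative side), or 0 (the plane intersects the cube).
    """
    # p-vertex: corner farthest along the normal; n-vertex: opposite corner.
    p = [cube_max[i] if normal[i] >= 0 else cube_min[i] for i in range(3)]
    n = [cube_min[i] if normal[i] >= 0 else cube_max[i] for i in range(3)]
    if sum(normal[i] * n[i] for i in range(3)) + d > 0:
        return 1    # even the nearest corner is in front
    if sum(normal[i] * p[i] for i in range(3)) + d < 0:
        return -1   # even the farthest corner is behind
    return 0        # the two corners straddle the plane
```

ANDing the results over the planes bounding a convex region, as the abstract describes, accepts a node only when it lies inside every half-space.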
Focusing on 3D object recognition for handling-robot tasks, we developed a registration method for point data measured
from a real object and model surfaces. On the basis of the iterative-closest-point (ICP) algorithm, we proposed a
registration technique that deforms model shapes instead of correcting measured range data that include distance errors.
We call our technique a "viewpoint-dependent remodeling ICP" algorithm. Even when only a laser range finder is used, this
technique can reduce the effects of errors that depend on surface characteristics such as color and reflectance properties.
In the preliminary stages, the relationships between distance errors and surface characteristics of points on object
surfaces are determined and added to the models. In the object recognition stage, we measure point data and perform
registration while changing the model position and attitude and deforming the model shape. The deformation depends on
these relationships and on the relative positions of the model surfaces and the sensor. In preliminary experimental tests,
we measured distances to black and white papers and evaluated the distance errors. Moreover, we simulated recognition of
a bottle covered with these papers. This simulation verified that our technique converges and improves the
accuracy of correspondence estimation between measured data and models.
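The inner step shared by ICP variants, including the remodeling variant above, is solving for the rigid motion that best aligns matched point pairs. A standard SVD-based sketch of that generic building block follows (not the authors' code; their deformation step would modify the model points before this is called):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t with R @ P[i] + t ~= Q[i] for matched pairs.

    P, Q: (N, 3) arrays of corresponding points (model and measured)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    t = cq - R @ cp
    return R, t
```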
We present a novel approach for geometric alignment of 3D sensor data. The Iterative
Closest Point (ICP) algorithm is widely used for geometric alignment of 3D models as a
point-to-point matching method when an initial estimate of the relative pose is known.
However, accurate point-to-point correspondence is difficult to obtain when the
points are sparsely distributed. In addition, the search cost is high because the ICP
algorithm requires a nearest-neighbor search at every iteration of the minimization. In this paper,
we describe a plane-to-plane registration method. We define the distance between two planes
and estimate the translation parameter by minimizing the distance between the planes. The
plane-to-plane method is able to register sparsely distributed, low-density point sets at low cost. We tested this
method on a large scattered point set of a manufacturing plant and show the effectiveness of our proposed method.
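The distance-minimization idea above can be illustrated with a small least-squares sketch. We assume planes are written n . x = d, that corresponding planes between the two scans have already been matched with equal normals, and that at least three normals are linearly independent; this is our own simplified formulation, not the paper's exact distance definition.

```python
import numpy as np

def translation_from_plane_pairs(normals, d_src, d_dst):
    """Least-squares translation t with n_i . t ~= d_dst[i] - d_src[i].

    Translating a plane n . x = d by t changes its offset to d + n . t,
    so matching source offsets to destination offsets gives a linear
    system in the three components of t; no nearest-neighbor search
    over the raw points is needed."""
    N = np.asarray(normals, dtype=float)     # (M, 3) matched unit normals
    b = np.asarray(d_dst, dtype=float) - np.asarray(d_src, dtype=float)
    t, *_ = np.linalg.lstsq(N, b, rcond=None)
    return t
```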
We propose a new concept, called "real world crawling", in which intelligent mobile sensors fully recognize environments by actively gathering information in those environments and integrating it on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and then we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed a "barcode reading robot" that moves autonomously in a warehouse, locating and reading barcode ID tags with a camera and a barcode reader while moving. However, motion blur caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of software image deblurring, we instead used pan rotation of the camera to reduce the blur. We derived the appropriate pan rotation velocity from the robot's translational velocity and the distance to the surfaces of the barcoded boxes, and verified the effectiveness of our method in an experimental test.
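The abstract does not state the derived formula. Under the assumption that the camera moves at speed v parallel to a wall of boxes at perpendicular distance d, the bearing θ to a fixed point on the wall satisfies tan θ = x/d, which differentiates to dθ/dt = (v/d) cos²θ; rotating the pan axis at this rate keeps the point approximately fixed in the image. This is a plausible reconstruction, not the paper's stated result.

```python
import math

def pan_rate(v, d, theta=0.0):
    """Pan angular velocity (rad/s) that cancels the apparent motion of
    a point on a wall at perpendicular distance d (m) while the robot
    translates at v (m/s); theta is the current pan angle (rad) away
    from the perpendicular viewing direction."""
    return v * math.cos(theta) ** 2 / d
```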
This research was focused on a system in which a manipulator with robot vision transfers objects to a mobile robot that
moves on a flat floor. In this system, an end effector of a manipulator operates on a plane surface, so only single vision
is required. In a robot vision system, many processes are usually needed for vision calibration, and one of them is
measurement of camera parameters. We developed a calibration technique that does not explicitly require camera
parameters, reducing the number of calibration processes required for our system.
With this technique, we measured relations between coordinate systems of images and a mobile robot in the moving
plane by using a projective transformation framework. We also measured relations between images and the manipulator
in an arbitrary plane in the same way. By combining these results, we obtained the relation between the mobile robot
and the manipulator without explicitly calculating the camera parameters. This means that capturing images of a
calibration board can be skipped.
We tested the calibration technique using an object-transfer task. The results showed the technique has sufficient
accuracy to achieve the task.
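The projective-transformation framework above can be sketched with a standard DLT homography fit: estimate an image-to-robot-plane homography and an image-to-manipulator-plane homography from point correspondences, then chain them. The code and the composition line are our own illustration, not the authors' implementation.

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of H (3x3, up to scale) with dst ~ H @ src in
    homogeneous coordinates, from four or more 2D correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)        # null vector = flattened H

# Chaining the two measured plane-to-plane maps gives the robot-to-
# manipulator relation with no explicit camera parameters:
#   H_robot_to_manip = H_img_to_manip @ np.linalg.inv(H_img_to_robot)
```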
We propose a spherical layout for a camera array system for shooting images used in Integral Videography (IV). IV is an autostereoscopic video technique based on Integral Photography (IP) and is one of the preferred autostereoscopic techniques for displaying images. Many studies on autostereoscopic displays based on this technique indicate its potential advantages. Other camera arrays have been studied, but for other purposes, such as acquiring high-resolution images, capturing a light field, and creating content for non-IV-based autostereoscopic displays. Moreover, IV displays images with high stereoscopic resolution when objects are displayed close to the display, so we have to capture high-resolution images in the close vicinity of the display. We constructed a spherical camera array system using 30 cameras arranged in a 6 by 5 array. Adjacent cameras had an angular difference of 6 degrees, and all cameras pointed toward the sphere center. The cameras capture movies synchronously, and the resolution of each camera is 640 by 480 pixels. With this system, we confirmed the effectiveness of the proposed camera layout, captured actual IP images, and displayed real autostereoscopic images.