Paper · 1 November 1992
Visual-tracking-based robot vision system
Proceedings Volume 1826, Intelligent Robots and Computer Vision XI: Biological, Neural Net, and 3D Methods; (1992) https://doi.org/10.1117/12.131621
Event: Applications in Optical Science and Engineering, 1992, Boston, MA, United States
Abstract
Robot vision systems can perceive depth in two ways: quantitatively or qualitatively. Quantitative perception reconstructs the visible surfaces numerically, while qualitative perception describes them qualitatively. In this paper, we present a qualitative vision system suitable for intelligent robots. The goal of such a system is to perceive depth information qualitatively from monocular 2-D images. We first establish a set of propositions relating depth information, such as 3-D orientation and distance, to the changes an image region undergoes as the camera moves. We then introduce an approximation-based visual tracking system. Given an object, the tracking system tracks its image while the camera moves in a manner dependent upon the particular depth property to be perceived. Checking the data generated by the tracking system against our propositions yields the depth information about the object. The visual tracking system tracks image regions in real time even when implemented on a PC AT clone, and mobile robots naturally provide the camera motions it requires; we are therefore able to construct a real-time, cost-effective, monocular, qualitative, 3-dimensional robot vision system. To verify our idea, we present examples of perceiving planar surface orientation, distance, size, dimensionality, and convexity/concavity.
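The paper's actual propositions are not reproduced in this abstract, but the core idea can be sketched under a simple pinhole-camera assumption: when the camera translates forward along its optical axis, the image region of a nearer object expands faster than that of a farther one (image scale varies as f/z, so region area varies as 1/z²). A minimal illustration, with all function names and the fronto-parallel-patch model being assumptions of this sketch rather than the authors' formulation:

```python
def image_area(true_area, depth, f=1.0):
    """Pinhole model: a fronto-parallel patch at depth z images with
    area proportional to (f / z)^2."""
    return true_area * (f / depth) ** 2

def expansion_ratio(area_before, area_after):
    """How much a tracked image region grew after the camera advanced."""
    return area_after / area_before

def nearer_region(ratio_a, ratio_b):
    """Qualitative depth cue: under forward camera translation, the region
    that expands faster belongs to the nearer object."""
    return "a" if ratio_a > ratio_b else "b"

# Two same-sized objects at depths 2 m and 4 m; the camera advances 0.5 m.
a1, b1 = image_area(1.0, 2.0), image_area(1.0, 4.0)
a2, b2 = image_area(1.0, 1.5), image_area(1.0, 3.5)
print(nearer_region(expansion_ratio(a1, a2), expansion_ratio(b1, b2)))  # -> a
```

A real-time tracker supplies the region areas frame by frame, so only coarse camera odometry is needed: the comparison is ordinal, not metric, which is what makes the approach qualitative.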
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Keqiang Deng, Joseph N. Wilson, and Gerhard X. Ritter "Visual-tracking-based robot vision system", Proc. SPIE 1826, Intelligent Robots and Computer Vision XI: Biological, Neural Net, and 3D Methods, (1 November 1992); https://doi.org/10.1117/12.131621
Proceedings, 12 pages