Paper
Vision-guided heterogeneous mobile robot docking
26 August 1999
Abstract
Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as those envisioned by DARPA's Tactical Mobile Robotics program. One example of an interaction among such team members is the docking of a small robot of limited sensory and processing capability with a larger, more capable robot. Applications for such docking include the transfer of power, data, and material, as well as physically combined maneuver or manipulation. A two-robot system is considered in this paper. The smaller 'throwable' robot contains a video camera capable of imaging the larger 'packable' robot and transmitting the imagery. The packable robot can both sense the throwable robot through an onboard camera and sense itself through the throwable robot's transmitted video, and is capable of processing imagery from either source. This paper describes recent results in the development of control and sensing strategies for automatic mid-range docking of these two robots. Decisions addressed include the selection of which robot's image sensor to use and which robot to maneuver. Initial experimental results are presented for docking using sensor data from each robot.
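The abstract does not state the authors' actual selection algorithm or control law. As a hedged illustration only, the sketch below shows one plausible form the two decisions mentioned above could take: a heuristic that picks whichever robot's camera currently holds the more reliable track, followed by a simple proportional steer-toward-target command. Every name, threshold, and the control law here is an assumption for illustration, not the paper's method.

# Hypothetical sketch of the sensor-selection and maneuver decisions
# described in the abstract. Nothing below is taken from the paper itself:
# the data structure, thresholds, and proportional control law are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Track:
    """A target detection in one robot's camera image."""
    visible: bool        # was the other robot found in this frame?
    confidence: float    # detector confidence, 0..1
    bearing_rad: float   # horizontal angle to target, positive is left
    range_m: float       # estimated distance to target

def choose_sensor(packable_view: Track, throwable_view: Track) -> str:
    """Pick which robot's camera to servo from.

    Assumed heuristic: prefer the packable robot's onboard camera (it has
    more processing), and fall back to the throwable robot's transmitted
    video when the onboard track is lost or weak.
    """
    if packable_view.visible and packable_view.confidence >= 0.5:
        return "packable_camera"
    if throwable_view.visible:
        return "throwable_camera"
    return "none"  # neither camera sees the other robot; search instead

def docking_command(track: Track, k_turn: float = 0.8,
                    cruise_speed: float = 0.3, stop_range: float = 0.2):
    """Proportional steer-toward-target command (an assumed control law).

    Returns (forward_speed_m_s, turn_rate_rad_s) for the maneuvering robot.
    """
    if not track.visible or track.range_m <= stop_range:
        return 0.0, 0.0                     # docked or blind: stop
    return cruise_speed, k_turn * track.bearing_rad

if __name__ == "__main__":
    onboard = Track(visible=False, confidence=0.0, bearing_rad=0.0, range_m=0.0)
    remote = Track(visible=True, confidence=0.9, bearing_rad=0.15, range_m=2.0)
    source = choose_sensor(onboard, remote)
    # Note: when servoing from the throwable robot's transmitted video, the
    # packable robot observes *itself*, so the sign and frame of the bearing
    # correction would differ from the onboard-camera case.
    cmd = docking_command(remote)
    print(source, cmd)

This sketch deliberately separates the sensing decision (choose_sensor) from the motion decision (docking_command), mirroring the abstract's framing of "which robot's image sensor to use" and "which robot to maneuver" as distinct choices.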
© 1999 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
John R. Spofford, John Blitch, William N. Klarquist, and Robin R. Murphy "Vision-guided heterogeneous mobile robot docking", Proc. SPIE 3839, Sensor Fusion and Decentralized Control in Robotic Systems II, (26 August 1999); https://doi.org/10.1117/12.360331
CITATIONS
Cited by 10 scholarly publications.
KEYWORDS
Cameras
Sensors
Video
Mobile robots
Image processing
Robotics
Video processing