KEYWORDS: Sensors, Navigation systems, Cameras, Global Positioning System, Detection and tracking algorithms, Bone, Robotic systems, Machine vision, 3D modeling, RGB color model
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment)
Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and
Object Retrieval). LABRADOR was based on the rugged, man-portable iRobot PackBot unmanned ground vehicle
(UGV) equipped with an explosive ordnance disposal (EOD) manipulator arm and a custom gripper. For
LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect)
filter based on object shape features with a color-histogram-based object detector. Our vision system was able to
learn in real time to recognize objects presented to the robot. We also implemented a waypoint navigation system based
on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement
autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies, including
outward spiral, random bounce, random waypoint, and perimeter-following behaviors. While the full system was not
integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and
behavior capabilities that may be applied to future autonomous robot systems.
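
As a concrete illustration of the color-histogram half of the detector described above, the following is a minimal sketch of histogram learning and back-projection detection, assuming OpenCV is available. The bin counts, threshold, and morphology step are illustrative assumptions, not LABRADOR's actual parameters, and the TLD side of the system is omitted.

```python
# Minimal color-histogram object detector (sketch, not LABRADOR's code).
import cv2
import numpy as np

def learn_color_model(object_bgr):
    """Build a hue-saturation histogram from a cropped image of the object."""
    hsv = cv2.cvtColor(object_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def detect(frame_bgr, hist, threshold=50):
    """Back-project the learned histogram and return the largest blob's box."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, mask = cv2.threshold(backproj, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4 return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```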
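The navigation stack above fuses GPS, IMU, and odometry for waypoint following. The abstract does not name the filter used, so the sketch below substitutes a simple dead-reckoning predictor with a proportional GPS correction; the 2D state, gain, and bearing helper are assumptions for illustration.

```python
# Simple GPS/IMU/odometry fusion sketch (the actual filter is unspecified).
import math

class PoseEstimator:
    def __init__(self, x=0.0, y=0.0, heading=0.0, gps_gain=0.1):
        self.x, self.y, self.heading = x, y, heading
        self.gps_gain = gps_gain  # how strongly a GPS fix corrects position

    def predict(self, odom_distance, gyro_rate, dt):
        """Dead-reckon: integrate the IMU yaw rate, advance along heading."""
        self.heading += gyro_rate * dt
        self.x += odom_distance * math.cos(self.heading)
        self.y += odom_distance * math.sin(self.heading)

    def correct_gps(self, gps_x, gps_y):
        """Nudge the dead-reckoned position toward a GPS fix."""
        self.x += self.gps_gain * (gps_x - self.x)
        self.y += self.gps_gain * (gps_y - self.y)

    def bearing_to(self, wx, wy):
        """Heading error to a waypoint, wrapped to [-pi, pi], for steering."""
        err = math.atan2(wy - self.y, wx - self.x) - self.heading
        return math.atan2(math.sin(err), math.cos(err))
```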
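Of the coverage strategies listed above, the outward spiral is the easiest to show as a waypoint generator. The sketch below samples an Archimedean spiral whose loop spacing is matched to an assumed sensor footprint; the spiral form, spacing, and sampling step are illustrative choices, not the program's documented parameters.

```python
# Outward-spiral coverage waypoints (illustrative sketch).
import math

def spiral_waypoints(cx, cy, spacing, max_radius, step_angle=0.5):
    """Sample r = spacing * theta / (2*pi) every step_angle radians,
    so successive loops of the spiral are `spacing` meters apart."""
    waypoints, theta = [], 0.0
    while True:
        r = spacing * theta / (2 * math.pi)
        if r > max_radius:
            break
        waypoints.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
        theta += step_angle
    return waypoints

# Example: cover a 20 m radius from the start point with 2 m track spacing.
path = spiral_waypoints(0.0, 0.0, spacing=2.0, max_radius=20.0)
```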
Coordinated operations between unmanned air and ground assets leverage multi-domain
sensing and increase opportunities for improving line-of-sight communications. While numerous
military missions would benefit from coordinated UAV-UGV operations, the foundational capabilities
that integrate stove-piped tactical systems and share available sensor data are required but not yet
available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially
SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative
capabilities for surveillance, targeting, and improved communications based on PackBot UGV and
Raven UAV platforms. We integrate newly available technologies into computational, vision, and
communications payloads and develop sensing algorithms to support vision-based target tracking.
We implemented Decentralized Data Fusion, a novel technique for fusing track estimates of a
moving target in an open environment from the PackBot and Raven platforms, first in simulation
and then on real tactical platforms. In addition, integrating AeroVironment's Digital Data Link
onto both the air and ground platforms has extended the communications range at which the PackBot
can be operated and increased video and data throughput. The system is brought together through
a unified Operator Control Unit (OCU) for the PackBot and Raven that provides
simultaneous waypoint navigation and traditional teleoperation. We also present several recent
capability accomplishments toward PackBot-Raven coordinated operations, including single OCU
display design and operation, early target track results, and Digital Data Link integration efforts, as
well as our near-term capability goals.
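
When the cross-correlation between platform estimates is unknown, as is typical in Decentralized Data Fusion, a standard fusion rule is covariance intersection. The sketch below applies it to two 2D position tracks of the same target; the state layout, grid search over the weight, and the example numbers are assumptions, since the abstract does not give the exact formulation fielded on the PackBot and Raven.

```python
# Covariance intersection: a common DDF fusion rule (illustrative sketch).
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_steps=100):
    """Fuse estimates (xa, Pa) and (xb, Pb), choosing the weight that
    minimizes the trace of the fused covariance."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best = None
    for w in np.linspace(0.0, 1.0, n_steps + 1):
        P = np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * Pa_inv @ xa + (1 - w) * Pb_inv @ xb)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Example: fuse a UGV track with a UAV track of the same moving target.
x_ugv, P_ugv = np.array([10.0, 5.0]), np.diag([4.0, 1.0])
x_uav, P_uav = np.array([11.0, 4.5]), np.diag([1.0, 4.0])
x_fused, P_fused = covariance_intersection(x_ugv, P_ugv, x_uav, P_uav)
```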