Off-road robotics efforts such as DARPA's PerceptOR program have motivated the development of testbed vehicles capable of sustained operation in a variety of terrain and environments. This paper describes the retrofitting of a minimally modified ATV chassis into such a testbed, which has been used by multiple programs for autonomous mobility development and sensor characterization. Modular mechanical interfaces for sensors and equipment enclosures enabled integration of multiple payload configurations. The electric power subsystem was capable of short-term operation on batteries, with refueled generation for continuous operation. Processing subsystems were mounted in sealed, shock-dampened enclosures with heat exchangers for internal cooling to protect against external dust and moisture. The computational architecture was divided into a real-time vehicle control layer and an expandable high-level processing and perception layer. The navigation subsystem integrated real-time kinematic GPS with a three-axis IMU for accurate vehicle localization and sensor registration. The vehicle software system was based on the MarsScape architecture developed under DARPA's MARS program. Vehicle mobility software capabilities included route planning, waypoint navigation, teleoperation, and obstacle detection and avoidance. The paper describes the vehicle design in detail and summarizes its performance during field testing.
Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as those envisioned by DARPA's Tactical Mobile Robotics program. Interactions among such team members enable a variety of mission roles beyond those achievable with single robots or homogeneous teams. Key technologies include docking for power and data transfer, marsupial transport and deployment, collaborative team user interfaces, cooperative obstacle negotiation, distributed sensing, and peer inspection. This paper describes recent results in the integration and evaluation of component technologies within a collaborative system design. Integration considerations include requirement definition, flexible design management, interface control, and incremental technology integration. Collaborative system requirements are derived from mission objectives and robotic roles, and impact system and individual robot design at several levels. Design management is a challenge in a dynamic environment, with rapid evolution of mission objectives and available technologies. The object-oriented system model approach employed includes both software and hardware object representations to enable on-the-fly system and robot reconfiguration. Controlled interfaces among robots include mechanical, behavioral, communications, and electrical parameters. Technologies are under development by several organizations within the TMR program community. The incremental integration and validation of these within the collaborative system architecture reduces development risk through frequent experimental evaluations. The TMR system configuration includes Packbot-Perceivers, Packbot-Effectors, and Throwbots. Surrogates for these robots are used to validate and refine designs for multi-robot interaction components. Collaborative capability results from recent experimental evaluations are presented.
With the increased use of specialized robots within heterogeneous robotic teams, as envisioned by DARPA’s Tactical Mobile Robotics program, the task of dynamically assigning work to individual robots becomes more complex and critical to mission success. The team must be able to perform all essential aspects of the mission, deal with dynamic and complex environments, and detect, identify, and compensate for failures and losses within the robotic team. Our mission analysis of targeted military missions has identified single-robot roles and collaborative (heterobotic) roles for the TMR robots. We define a role as a set of activities or behaviors that accomplish a single, militarily relevant goal. We will present the use of these roles to: 1) identify mobility and other requirements for the individual robotic platforms; 2) rate various robots’ efficiency and efficacy for each role; and 3) identify which roles can be performed simultaneously and which cannot. We present a role-based algorithm for tasking heterogeneous robotic teams, and a mechanism for retasking the team when assets are gained or lost.
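The role-based tasking and retasking ideas above can be sketched in code. This is an illustrative greedy assignment over per-robot efficacy ratings, not the paper's actual algorithm; the robot and role names, the one-role-per-robot constraint, and the scoring are all assumptions made for the example.

```python
def assign_roles(robots, roles, efficacy):
    """Greedily assign each role to the available robot with the
    highest efficacy rating; here each robot holds at most one role.
    `efficacy` maps (robot, role) pairs to a rating in [0, 1]."""
    assignment = {}
    available = set(robots)
    for role in roles:  # roles listed in priority order
        candidates = [(efficacy.get((r, role), 0.0), r) for r in available]
        candidates = [c for c in candidates if c[0] > 0.0]
        if not candidates:
            continue  # no capable robot remains; role goes unfilled
        score, best = max(candidates)
        assignment[role] = best
        available.discard(best)
    return assignment


def retask(robots, roles, efficacy, lost):
    """Recompute the assignment over surviving assets after losses."""
    survivors = [r for r in robots if r not in lost]
    return assign_roles(survivors, roles, efficacy)
```

With distinct ratings the greedy result is deterministic; losing the best-rated robot for a role causes `retask` to fall back to the next-most-capable survivor, which is the compensation behavior the abstract describes.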
Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as those envisioned by DARPA's Tactical Mobile Robotics program. One example of an interaction among such team members is the docking of a small robot of limited sensory and processing capability with a larger, more capable robot. Applications for such docking include the transfer of power, data, and materiel, as well as physically combined maneuver or manipulation. A two-robot system is considered in this paper. The smaller 'throwable' robot contains a video camera capable of imaging the larger 'packable' robot and transmitting the imagery. The packable robot can both sense the throwable robot through an onboard camera and sense itself through the throwable robot's transmitted video, and is capable of processing imagery from either source. This paper describes recent results in the development of control and sensing strategies for automatic mid-range docking of these two robots. Decisions addressed include the selection of which robot's image sensor to use and which robot to maneuver. Initial experimental results are presented for docking using sensor data from each robot.
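The sensor-selection decision described above can be sketched as a simple policy function. The criteria used here (target visibility and apparent target size in pixels) and the preference for maneuvering the more capable packable robot are assumptions for illustration, not the paper's actual decision logic.

```python
def select_docking_strategy(packbot_sees_throwbot, throwbot_sees_packbot,
                            packbot_target_px=0, throwbot_target_px=0):
    """Return (sensor_robot, maneuvering_robot) for mid-range docking.
    When both cameras see their target, prefer whichever gives the
    larger (more trackable) image; the packable robot maneuvers,
    since it carries the processing capability (assumed policy)."""
    if packbot_sees_throwbot and throwbot_sees_packbot:
        sensor = ("packbot" if packbot_target_px >= throwbot_target_px
                  else "throwbot")
        return sensor, "packbot"
    if packbot_sees_throwbot:
        return "packbot", "packbot"
    if throwbot_sees_packbot:
        return "throwbot", "packbot"
    return None, None  # neither robot has the other in view: search
```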
The advantages of an unmanned ground vehicle (UGV) team include application of UGVs to a wider range of missions, reduced operator workload, and more efficient use of communications resources. Several mission applications for multiple-UGV systems are described. Many single-UGV missions can be performed more quickly with a larger group of vehicles partitioning the workload. Certain missions, however, are possible only with a team of UGVs, or are greatly enhanced by cooperation among the vehicles. The four-vehicle surrogate semiautonomous vehicle system developed under the UGV/Demo II program is reviewed, and its capabilities for multi-vehicle operations are described. This system implements unmanned mobility, reconnaissance and surveillance, tactical communications, and mission planning and monitoring. The four semiautonomous vehicles may work independently or as a team, controlled and monitored by a single operator. Ongoing development efforts for Demo II are described, and longer-term directions for multiple-UGV systems are presented.
The unmanned ground vehicle (UGV) Demonstration C was completed in July 1995. This was the third of four planned UGV/Demo II demonstrations. Demonstration C highlighted multivehicle premission planning, mission execution monitoring, multivehicle mobility cooperation, target detection from moving and stationary platforms, obstacle avoidance, obstacle map sharing, stealthy movement, autonomous turnaround, formation control/zone security, cooperative reconnaissance, surveillance, and target acquisition, and hill cresting. This demonstration was the first to have two autonomous vehicles working cooperatively while performing a militarily relevant mission. This paper begins with a background of the UGV program and then focuses on Demo C. The paper finishes with an overview of the Demo II missions.
This paper presents an analysis of stopping distances for an unmanned ground vehicle achievable with selected ladar and stereo video sensors. Based on a stop-to-avoid response to detected obstacles, current passive stereo technology and existing ladars provide equivalent safe driving speeds. Only a proposed high-resolution ladar can detect small (8-inch) obstacles far enough ahead to allow driving speeds in excess of 10 miles per hour. The stopping distance analysis relates safe vehicle velocity to obstacle and sensor pixel sizes.
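The stop-to-avoid relationship described above can be sketched with standard kinematics: an obstacle must subtend enough sensor pixels to be detected, and the vehicle must be able to stop within that detection range. The reaction time, deceleration, angular resolution, and pixels-required values below are illustrative assumptions, not the paper's parameters.

```python
import math

def detection_range(obstacle_size_m, pixel_angular_res_rad, pixels_required=2):
    # Range at which the obstacle spans the required number of pixels
    # (small-angle approximation: size / range = subtended angle)
    return obstacle_size_m / (pixels_required * pixel_angular_res_rad)

def stopping_distance(v_mps, reaction_s=0.5, decel_mps2=3.0):
    # Reaction-time travel plus braking distance at constant deceleration
    return v_mps * reaction_s + v_mps ** 2 / (2 * decel_mps2)

def max_safe_speed(range_m, reaction_s=0.5, decel_mps2=3.0):
    # Solve v*t + v^2/(2a) = range for v (positive quadratic root)
    A = 1.0 / (2.0 * decel_mps2)
    B = reaction_s
    C = -range_m
    return (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)
```

For a fixed obstacle size, halving the sensor's angular pixel size doubles the detection range, and the safe speed grows roughly with the square root of that range; this is the lever by which the proposed high-resolution ladar enables speeds above 10 mph.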
A multiarm robotic testbed for space servicing applications is presented. The system provides the flexibility for autonomous control with operator interaction at different levels of abstraction. Key technologies from the areas of artificial intelligence, robotic control, computer vision, and human factors have been integrated in an architecture which has proven useful for resolving issues related to space-based servicing tasks. A system-level breakdown of testbed components is presented, outlining the function and role of each technology area. A key feature of the architecture is that it facilitates efficient transfer of teleoperation control to all levels in the system hierarchy, enabling the study of the relationship between the human operator and the remote system. This includes the ability to perform autonomous situation assessment so that operator control activities at lower levels can be interpreted in terms of system model updates at higher levels.