The Holy Grail of autonomous ground robotics has been to make ground vehicles that behave like humans. Over the
years, as a community, we have realized the difficulty of this task, and we have backpedaled from the initial Holy Grail
and constrained and narrowed the domains of operation in order to get robotic systems fielded. This has led to
phrases such as "operation in structured environments" and "open-and-rolling terrain" in the context of autonomous
robot navigation. Unfortunately, constraining the problem in this way has only put off the inevitable, i.e., solving the
myriad difficult robotics problems that we identified as long ago as the 1980s on the Autonomous Land Vehicle
Project and, in most cases, are still facing today. These "Tall Poles" have included, but are not limited to, navigation
through complex terrain geometry, navigation through thick vegetation, the detection of geometry-less obstacles such as
negative obstacles and thin obstacles, the ability to deal with diverse and dynamic environmental conditions, the ability
to function in dynamic and cluttered environments alongside other humans, and any combination of the above. This
paper is an overview of the progress we have made at Autonomous Systems over the last three years in trying to knock
down some of the tall poles remaining in the field of autonomous ground robotics.
This work addresses the issue of terrain classification as applied to path planning for an Unmanned Ground Vehicle (UGV) platform. We are interested in classifying features such as rocks, bushes, trees, and dirt roads. Currently, the data are acquired from a color camera mounted on the UGV; range data from a second sensor may be added in the future. Classification is accomplished by first coarsely segmenting a frame and then refining the initial segmentation through a convenient user interface. After the first frame, temporal information is exploited to improve the quality of the image segmentation and to help the classification adapt to changes due to ambient lighting, shadows, and scene changes as the platform moves. The Mean Shift classifier algorithm provides segmentation of the current frame data. We have tested the above algorithms on four sequences of frames acquired in an environment with terrain representative of the type we expect to see in the field. For each frame in the sequences, the results of this algorithm were compared against accurate, manually segmented (ground-truth) data.
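The abstract does not give implementation details, so the following is only a minimal sketch of the mean-shift segmentation step, assuming per-pixel color-plus-position features and scikit-learn's MeanShift; the function name segment_frame and the spatial_weight parameter are illustrative assumptions, and the coarse-to-fine refinement, user interface, and temporal propagation described above are not shown.

```python
# Minimal sketch of mean-shift segmentation of a single color frame.
# For speed, the frame would normally be downsampled before clustering.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def segment_frame(frame_rgb: np.ndarray, spatial_weight: float = 0.5) -> np.ndarray:
    """Return an integer label image produced by mean-shift clustering."""
    h, w, _ = frame_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Stack color and (weighted) pixel coordinates into one feature vector per pixel.
    feats = np.column_stack([
        frame_rgb.reshape(-1, 3).astype(float),
        spatial_weight * xs.reshape(-1, 1),
        spatial_weight * ys.reshape(-1, 1),
    ])
    bandwidth = estimate_bandwidth(feats, quantile=0.1, n_samples=500)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(feats)
    return labels.reshape(h, w)
```

The resulting label image plays the role of the coarse segmentation, which an operator could then refine and assign terrain classes to.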
The typical algorithm for autonomous robot navigation in off-road, complex environments involves building a 3D map of the robot's surroundings using a 3D sensing modality, such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, such systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing has no ability to distinguish between compressible and non-compressible materials; as a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely inspired by the biological immune system. The system combines various forms of imaging sensor inputs to produce a "feature-labeled" image of the scene, categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computational model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature-labeled results of the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and operate more robustly as a result of the system's ability to improve its performance through interaction with the environment.
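The abstract does not specify which AIS mechanism is used, so the toy sketch below only illustrates the general idea of detector-based labeling with a crude adaptation rule; the ImmuneLabeller class, its affinity radius, and its feedback rule are hypothetical and are not the paper's model.

```python
# Toy AIS-style labeling: a set of "detector" feature vectors represents
# detrimental terrain; image patches whose features fall within a detector's
# affinity radius are labeled detrimental, otherwise benign.
import numpy as np

class ImmuneLabeller:
    def __init__(self, detectors: np.ndarray, radius: float):
        self.detectors = detectors          # (n_detectors, n_features)
        self.radius = radius                # affinity threshold

    def label(self, patch_features: np.ndarray) -> np.ndarray:
        """Return 1 (detrimental) or 0 (benign) per patch feature vector."""
        d = np.linalg.norm(
            patch_features[:, None, :] - self.detectors[None, :, :], axis=-1)
        return (d.min(axis=1) < self.radius).astype(int)

    def adapt(self, patch_features: np.ndarray, was_hazard: np.ndarray):
        """Crude learning rule: absorb features of patches the robot found hazardous."""
        new = patch_features[was_hazard.astype(bool)]
        if len(new):
            self.detectors = np.vstack([self.detectors, new])
```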
Under the DARPA MARS 2020 program, PercepTek has developed a technical foundation for performing roadway operations in both structured and unstructured environments. Fully autonomous roadway operations require a large set of atomic functionalities that must perform seamlessly in concert in complex and dynamic environments. PercepTek has developed multiple atomic functionalities and implemented a robot control architecture called ARTEA (Autonomous Robotic Test and Evaluation Architecture) that blends these atomic functionalities into one cohesive system that can perform complex missions and reason about its environment. Some of the atomic functionalities that have been implemented and integrated onto our robotic test platform are vision-based road following for both structured and unstructured roads, vision/radar-based vehicle following, safety-gap maintenance, road feature detection and response, road sign detection/recognition, and pedestrian detection. In this paper, technical details of each of the individual robotic functionalities are presented along with their performance and limitations. We then discuss some of the critical components of the ARTEA architecture that are used to blend inputs from disparate functionalities and perform reasoning about the environment. Field testing was a critical aspect of our development process, and we discuss the test platform that was used to develop and test our robotic system. At the culmination of our MARS 2020 effort, we performed a robotic test drive from Denver, Colorado, to New Orleans, Louisiana, in which we tested and evaluated various aspects of our system. Finally, we discuss our performance and the limitations of our system for this drive.
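ARTEA's internal interfaces and arbitration logic are not described in the abstract; the sketch below is a hypothetical illustration of the general idea of blending commands from several atomic functionalities (road following, vehicle following, safety-gap maintenance), and the Command type, weights, and blending rule are assumptions, not PercepTek's design.

```python
# Hypothetical command-blending arbiter for multiple atomic functionalities.
from dataclasses import dataclass

@dataclass
class Command:
    steering: float   # radians, positive = left
    speed: float      # m/s
    weight: float     # confidence/priority assigned by the behavior

def blend(commands: list[Command]) -> Command:
    """Weighted-average steering; speed capped by the most cautious behavior."""
    total = sum(c.weight for c in commands) or 1.0
    steering = sum(c.steering * c.weight for c in commands) / total
    speed = min(c.speed for c in commands)  # e.g., safety-gap keeper can slow the vehicle
    return Command(steering, speed, total)

# Example: road follower requests a gentle left turn; safety-gap keeper caps speed.
cmd = blend([Command(0.05, 8.0, 0.7), Command(0.0, 4.0, 1.0)])
```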
One of the major problems with any robotic vehicle is inefficient use of available power. This research explores in detail the locomotion, power dynamics, and performance of a skid-steered robotic vehicle and develops techniques to derive efficient design parameters for the vehicle in order to achieve optimal performance by minimizing power losses and consumption. Three categories of design variables describe the vehicle and its dynamics: variables that describe the vehicle, variables that describe the surface on which it runs, and variables that describe the vehicle's motion. The two major components of the vehicle's power losses and consumption are losses in skid-steer turning and losses in rolling. Our focus is on skid steering; we present a detailed analysis of skid steering for different turning modes: elastic-mode steering, half-slip steering, skid turns, low-radius turns, and zero-radius turns. Each of the power loss components is modeled from physics in terms of the design variables. The effect of the design variables on the total power losses and consumption is then studied using simulated data for different types of surfaces, i.e., hard surfaces and muddy surfaces. Finally, we make suggestions about efficient vehicle design choices in terms of the design variables.
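The abstract does not give the paper's actual loss models or coefficients, so the sketch below only illustrates the kind of physics-based terms involved, assuming a uniform weight distribution and a commonly cited skid-steer approximation (turning-resistance moment of roughly mu_lat * m * g * L / 4); the function names and example values are purely illustrative.

```python
# Illustrative power-loss terms for a skid-steered vehicle.
G = 9.81  # gravitational acceleration, m/s^2

def rolling_power(m: float, v: float, f_r: float) -> float:
    """Power lost to rolling resistance at speed v (W)."""
    return f_r * m * G * abs(v)

def skid_turn_power(m: float, contact_length: float, omega: float, mu_lat: float) -> float:
    """Approximate power lost to lateral skidding while turning at rate omega (W)."""
    turning_moment = mu_lat * m * G * contact_length / 4.0
    return turning_moment * abs(omega)

# Example: 50 kg robot, 0.6 m contact length, 1 m/s forward speed,
# turning at 1 rad/s on hard ground.
total = rolling_power(50, 1.0, f_r=0.04) + skid_turn_power(50, 0.6, 1.0, mu_lat=0.6)
```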
In order for an autonomous robot to “appropriately” navigate through a complex environment, it must have an in-depth understanding of its immediate surroundings. Appropriate navigation implies that the robot will avoid collision or contact with hazards, will not be falsely rerouted around traversable terrain due to false hazard detections, and will exploit the terrain to maximize its concealment. Appropriate autonomous navigation requires the ability to detect and localize critical features in the environment; examples of critical environmental features include rocks, trees, ditches, holes, bushes, and water. Environmental features have a wide range of characteristics, and multiple sensing phenomenologies are required to detect them all. Once data are acquired from these multiple phenomenologies, a mechanism is required to combine and analyze all of these disparate sources of information into one composite interpretation. In this paper we discuss the Demo III multi-sensor system for autonomous mobility and the “operator-trained” fusion system called O-NAV (Object NAVigation) that is used to build a labeled three-dimensional model of the environment immediately surrounding the robot vehicle so that it can appropriately interact with its surroundings.
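The operator-trained O-NAV fusion mechanism itself is not described in the abstract; the sketch below is only a toy illustration of combining per-cell class votes from several sensing phenomenologies into one labeled grid, with an assumed label set and assumed per-sensor weights.

```python
# Toy per-cell weighted voting over labels from multiple sensing phenomenologies.
import numpy as np

LABELS = ["ground", "rock", "bush", "tree", "water", "hole"]

def fuse(label_maps: dict[str, np.ndarray], weights: dict[str, float]) -> np.ndarray:
    """label_maps: sensor name -> (H, W) integer label grid. Returns fused (H, W) grid."""
    h, w = next(iter(label_maps.values())).shape
    votes = np.zeros((h, w, len(LABELS)))
    for sensor, grid in label_maps.items():
        for k in range(len(LABELS)):
            # Each sensor casts a weighted vote for its label in every cell.
            votes[:, :, k] += weights.get(sensor, 1.0) * (grid == k)
    return votes.argmax(axis=-1)
```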
This paper summarizes the Autonomous Mobility system for the Demo III program. The autonomous mobility system involves issues in algorithms, sensors, and processing architectures. We describe some history and the general philosophies that guided us toward the design described in this paper.
We have developed a radial basis function network (RBFN) for visual autonomous road following at the University of Maryland Computer Vision Laboratory. Preliminary testing of the RBFN was done using a driving simulator, and the RBFN was then installed on an actual vehicle at Carnegie-Mellon University for testing in a real road-following application. The RBFN had some success, but it also exhibited significant problems such as jittery control and driving failures. Several improvements have been made to the original RBFN architecture to overcome these problems, and they are described in this paper.
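The network architecture, input features, and training procedure are not detailed in the abstract; the sketch below only shows the basic form of a radial basis function network, assuming fixed Gaussian centers and output weights fit by linear least squares.

```python
# Minimal RBFN: Gaussian hidden units with fixed centers, linear output layer.
import numpy as np

class RBFN:
    def __init__(self, centers: np.ndarray, width: float):
        self.centers = centers          # (n_centers, n_inputs)
        self.width = width              # shared Gaussian width
        self.weights = None             # set by fit()

    def _phi(self, X: np.ndarray) -> np.ndarray:
        """Gaussian activations of each hidden unit for each input row."""
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X: np.ndarray, y: np.ndarray):
        """Solve for the output weights with linear least squares."""
        self.weights, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)

    def predict(self, X: np.ndarray) -> np.ndarray:
        return self._phi(X) @ self.weights
```

In a road-following setting, the inputs would be image-derived features and the output a steering command, trained from example driving data.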