This paper presents a new obstacle avoidance method that provides unobtrusive assistance for the tele-operation of unmanned ground vehicles. Unlike existing obstacle avoidance methods, the present method can determine whether the driving commands issued by an operator are safe in the presence of obstacles and can automatically adjust unsafe commands to help the operator avoid proximate obstacles. The command adjustment is performed unobtrusively and conforms to the dynamic and kinematic constraints of the vehicle in order to minimize interference with the operator. Because of its assistive and unobtrusive nature, the method quietly shares some control authority with the operator during tele-operation, and hence has the potential to make tele-operation of ground vehicles in challenging environments significantly easier and safer. The effectiveness of this method is demonstrated in extensive experiments in cluttered environments using military-grade tracked robots.
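As a rough illustration of the command-checking idea (not the published algorithm), the sketch below forward-simulates a unicycle model of the vehicle against obstacle points in the robot frame and, when the operator's command is predicted to collide, substitutes the nearest safe command within assumed velocity limits; all names, limits, and the cost weighting are hypothetical.

```python
import numpy as np

def adjust_command(v_cmd, w_cmd, obstacles, dt=0.1, horizon=1.5,
                   robot_radius=0.4, v_max=1.0, w_max=1.5):
    """Return a safe (v, w) close to the operator's command.

    obstacles: Nx2 array of obstacle points in the robot frame (metres).
    Illustrative only; the unicycle model and all limits are assumptions.
    """
    obstacles = np.asarray(obstacles, dtype=float)

    def is_safe(v, w):
        if obstacles.size == 0:
            return True
        x = y = th = 0.0
        for _ in range(int(horizon / dt)):      # forward-simulate the command
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            th += w * dt
            d = np.hypot(obstacles[:, 0] - x, obstacles[:, 1] - y)
            if d.min() < robot_radius:          # predicted collision
                return False
        return True

    if is_safe(v_cmd, w_cmd):
        return v_cmd, w_cmd                     # leave the operator's command untouched

    # Otherwise pick the safe command closest to what the operator asked for.
    best, best_cost = (0.0, 0.0), float("inf")  # stopping is the fallback
    for v in np.linspace(0.0, v_max, 11):
        for w in np.linspace(-w_max, w_max, 21):
            if is_safe(v, w):
                cost = (v - v_cmd) ** 2 + 0.5 * (w - w_cmd) ** 2
                if cost < best_cost:
                    best, best_cost = (v, w), cost
    return best
```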
This paper overviews the development and operator testing of a shared autonomy system for small unmanned ground vehicles operating in indoor environments. The project focused on creating driving assistance technologies that reduce the burden of performing low-level tasks in cluttered or difficult areas by sharing control between the operator and the autonomous software. The system also provides a safety layer to prevent the robot from becoming disabled due to operator error or environmental hazards. Examples of the developed behaviours include obstacle proximity warning, centering the vehicle through narrow doorways, wall following during long traversals, a tip-over indicator, a stair-climbing aid, and retreat upon loss of communications. The hardware and software were integrated on a QinetiQ Talon IV robot and tested by military operators in a relevant environment.
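For example, a doorway-centering behaviour of the kind listed above could, in its simplest form, steer toward the side with more lateral clearance; the proportional rule below is a hypothetical sketch, not the behaviour fielded on the Talon IV.

```python
def centering_steer(d_left, d_right, gain=0.8, w_max=1.0):
    """Steering rate to keep the vehicle centered between two walls.

    d_left, d_right: lateral clearances in metres from range sensors.
    The gain and limit values are illustrative assumptions.
    """
    w = gain * (d_left - d_right)        # turn toward the side with more room
    return max(-w_max, min(w_max, w))
```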
A key component in the emerging localization and mapping paradigm is an appearance-based place recognition
algorithm that detects when a place has been revisited. This algorithm can run in the background at a low
frame rate and be used to signal a global geometric mapping algorithm when a loop is detected. An optimization
technique can then be used to correct the map by 'closing the loop'. This allows an autonomous unmanned ground
vehicle to improve localization and map accuracy and successfully navigate large environments. Image-based
place recognition techniques lack robustness to sensor orientation and varying lighting conditions. Additionally,
the quality of range estimates from monocular or stereo imagery can decrease the loop closure accuracy. Here,
we present a lidar-based place recognition system that is robust to these challenges. This probabilistic framework
learns a generative model of place appearance and determines whether a new observation comes from a new or
previously seen place. Highly descriptive features called the Variable Dimensional Local Shape Descriptors are
extracted from lidar range data to encode environment features. The range data processing has been implemented
on a graphics processing unit to optimize performance. The system runs in real-time on a military research
vehicle equipped with a highly accurate, 360 degree field of view lidar and can detect loops regardless of the
sensor orientation. Promising experimental results are presented for both rural and urban scenes in large outdoor
environments.
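As a simplified stand-in for the generative appearance model described above, the sketch below scores a new lidar observation against previously seen places using bag-of-words histograms of quantized local shape descriptors; the VD-LSD extraction and the probabilistic model itself are not reproduced here, and the similarity measure and threshold are assumptions.

```python
import numpy as np

def loop_candidates(new_hist, place_hists, threshold=0.8):
    """Rank previously seen places against a new observation.

    new_hist: histogram of quantized shape descriptors for the new scan.
    place_hists: list of histograms, one per previously visited place.
    Cosine similarity is used here purely for illustration.
    """
    matches = []
    n = np.linalg.norm(new_hist) + 1e-9
    for i, h in enumerate(place_hists):
        sim = float(np.dot(new_hist, h) / (n * (np.linalg.norm(h) + 1e-9)))
        if sim > threshold:
            matches.append((i, sim))        # candidate loop closure
    return sorted(matches, key=lambda m: -m[1])
```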
KEYWORDS: 3D modeling, Sensors, Weapons of mass destruction, 3D metrology, Situational awareness sensors, Data modeling, Cameras, 3D vision, 3D image processing, Databases
Situational awareness of CBRN robot operators is quite limited, as they rely on images and measurements from on-board
detectors. This paper describes a novel framework that enables uniform and intuitive access to live and recent data via
2D and 3D representations of visited sites. These representations are created automatically and augmented with images,
models and CBRNE measurements. This framework has been developed for CBRNE Crime Scene Modeler (C2SM), a
mobile CBRNE mapping system. The system creates representations (2D floor plans and 3D photorealistic models) of
the visited sites, which are then automatically augmented with CBRNE detector measurements. The data, stored in a database, are accessed through a variety of user interfaces that provide different perspectives and increase operators' situational awareness.
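A minimal sketch of how geo-located detector readings might be stored and later queried to augment a 2D floor plan or 3D model is given below; the record fields and query interface are hypothetical and do not reflect the actual C2SM database schema.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Reading:
    x: float          # position in the site frame (metres)
    y: float
    sensor: str       # e.g. "gamma", "chemical"
    value: float
    timestamp: float

def readings_near(readings, x, y, radius=1.0, sensor=None):
    """Return readings within `radius` of a point, optionally of one sensor type."""
    return [r for r in readings
            if hypot(r.x - x, r.y - y) <= radius
            and (sensor is None or r.sensor == sensor)]
```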
CBRN Crime Scene Modeler (C2SM) is a prototype mobile CBRN mapping system for First Responders in events
where Chemical, Biological, Radiological and Nuclear agents were used. The prototype operates on board a small robotic platform and increases the situational awareness of the robot operator by providing geo-located images and data as well as the current robot location. The sensor suite includes stereo and high resolution cameras, a long-wave infrared (thermal) camera, and gamma and chemical detectors. The system collects and sends geo-located data to a remote command post in near real-time and automatically creates a 3D photorealistic model augmented with CBRN measurements. Two prototypes
have been successfully tested in field trials and a fully ruggedised commercial version is expected in 2010.
CBRN Crime Scene Modeler (C2SM) is a prototype 3D modeling system for first responders investigating environments
contaminated with Chemical, Biological, Radiological and Nuclear agents. The prototype operates on board a small
robotic platform or a hand-held device. The sensor suite includes stereo and high resolution cameras, a long-wave infrared camera, a chemical detector, and two gamma detectors (directional and non-directional). C2SM has recently been
tested in field trials where it was teleoperated within an indoor environment with gamma radiation sources present. The
system has successfully created multi-modal 3D models (geometry, colour, IR and gamma radiation), correctly identified the locations of radiation sources, and provided high resolution images of these sources.
KEYWORDS: 3D modeling, Cameras, Visual process modeling, Projection systems, Motion estimation, 3D metrology, 3D image processing, Stereoscopic cameras, Data modeling, Motion models
Servicing satellites in space requires accurate and reliable 3D information. Such information can be used to create virtual models of space structures for inspection (geometry, surface flaws, and deployment of appendages), to estimate the relative position and orientation of a target spacecraft during autonomous docking or satellite capture, to replace serviceable modules, and to detect unexpected objects and collisions. Existing space vision systems rely on restrictive assumptions to achieve the necessary performance and reliability. Future missions will require vision systems that can operate without visual targets and under less restricted operational conditions, moving towards full autonomy.
Our vision system uses stereo cameras with a pattern projector and software to obtain reliable and accurate 3D information. It can process images from cameras mounted on a robotic arm end-effector, on a space structure, or on a spacecraft. Image sequences can be acquired during relative camera motion, for example during a fly-around of a spacecraft or motion of the arm. The system recovers the relative camera motion from the image sequence automatically, without using spacecraft or arm telemetry. The computed 3D data can then be integrated to generate a calibrated photo-realistic 3D model of the space structure.
Feature-based and shape-based approaches for camera motion estimation have been developed and compared. Space materials and illumination introduce imaging effects on specular surfaces. With a pattern projector and redundant stereo cameras, the robustness and accuracy of stereo matching are improved, as inconsistent 3D points are discarded. Experiments in our space vision facility show promising results, and photo-realistic 3D models of scaled satellite replicas have been created.
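The consistency check mentioned above can be pictured as follows: points triangulated independently by two stereo pairs are kept only where they agree within a tolerance. The sketch assumes both point sets are already expressed in a common frame and put into correspondence; it is an illustration, not the system's implementation, and the tolerance value is an assumption.

```python
import numpy as np

def consistent_points(points_a, points_b, tol=0.005):
    """Keep 3D points where two redundant stereo pairs agree.

    points_a, points_b: Nx3 arrays of corresponding points (same frame, metres).
    tol: maximum allowed disagreement between the two reconstructions.
    """
    points_a = np.asarray(points_a, dtype=float)
    points_b = np.asarray(points_b, dtype=float)
    err = np.linalg.norm(points_a - points_b, axis=1)
    keep = err < tol                         # discard inconsistent reconstructions
    return 0.5 * (points_a[keep] + points_b[keep])
```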
KEYWORDS: 3D modeling, Cameras, 3D metrology, 3D image processing, Motion models, Visualization, Visual process modeling, Stereoscopic cameras, Image processing, Sensors
Instant Scene Modeler (iSM) is a vision system for generating calibrated photo-realistic 3D models of unknown
environments quickly using stereo image sequences. Equipped with iSM, Unmanned Ground Vehicles (UGVs) can
capture stereo images and create 3D models to be sent back to the base station, while they explore unknown
environments. Rapid access to 3D models will increase the operator situational awareness and allow better mission
planning and execution, as the models can be visualized from different views and used for relative measurements.
In current military UGV operations under urban warfare threats, an operator hand-sketches the environment from the live video feed. iSM eliminates the need for this additional operator, as the 3D model is generated automatically. The
photo-realism of the models enhances the situational awareness of the mission and the models can also be used for
change detection. iSM has been tested on our autonomous vehicle to create photo-realistic 3D models while the rover traverses unknown environments.
Moreover, a proof-of-concept iSM payload has been mounted on an iRobot PackBot with Wayfarer technology, which is
equipped with autonomous urban reconnaissance capabilities. The Wayfarer PackBot UGV uses wheel odometry for
localization and builds 2D occupancy grid maps from a laser sensor. While the UGV is following walls and avoiding
obstacles, iSM captures and processes images to create photo-realistic 3D models. Experimental results show that iSM
can complement Wayfarer PackBot's autonomous navigation in two ways. The photo-realistic 3D models provide better
situational awareness than 2D grid maps. In addition, iSM recovers the camera motion, also known as visual odometry. As wheel odometry error grows over time, visual odometry can be used to correct it for better localization.
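One simple way to picture the correction is to accumulate the robot pose from wheel-odometry increments but, whenever a visual-odometry increment is available for the same interval, compose that increment instead. The SE(2) sketch below is an assumption about how such a fusion could look, not the Wayfarer or iSM implementation.

```python
import numpy as np

def compose(pose, delta):
    """Compose an SE(2) pose (x, y, theta) with an incremental motion."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

def integrate(increments):
    """increments: list of (wheel_delta, visual_delta or None) per time step."""
    pose = (0.0, 0.0, 0.0)
    for wheel_delta, visual_delta in increments:
        # Prefer the visual-odometry increment whenever it is available.
        pose = compose(pose, visual_delta if visual_delta is not None else wheel_delta)
    return pose
```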
KEYWORDS: Satellites, LIDAR, Space operations, 3D modeling, Data modeling, 3D acquisition, Target detection, Visual process modeling, Detection and tracking algorithms, Sensors
Servicing satellites on-orbit requires the ability of an unmanned spacecraft to rendezvous and dock with no or minimal human input. Novel imaging sensors and computer vision technologies are required to detect a target spacecraft at a distance of several kilometers and to guide the approaching spacecraft to contact. Current optical systems operate at much shorter distances, provide only bearing and range towards the target, or rely on visual targets.
The emergence of novel LIDAR technologies and computer vision algorithms will lead to a new generation of rendezvous and docking systems in the near future. Such systems will be capable of autonomously detecting a target satellite at a distance of a few kilometers and of estimating its bearing, range and relative orientation under virtually any illumination and in any satellite pose.
At MDA Space Missions we have developed a proof-of-concept vision system that uses a scanning LIDAR to estimate the pose of a known satellite. First, the vision system detects a target satellite and estimates its bearing and range. Next, the system estimates the full pose of the satellite using a 3D model. Finally, the system tracks the satellite pose with high accuracy and a high update rate. The estimated pose indicates where the docking port is located, even when the port is not visible, and enables selection of a more efficient flight trajectory.
The proof-of-concept vision system has been integrated with a commercial time-of-flight LIDAR and tested using a moving scaled satellite replica in the MDA Vision Testbed.
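The tracking stage can be illustrated with a basic point-to-point ICP that refines the pose mapping LIDAR returns onto the known satellite model, starting from the previous frame's estimate. This is a generic sketch, not the proof-of-concept system's acquisition or tracking algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_track(scan, model, R0, t0, iters=20):
    """Refine the rigid transform (R, t) aligning scan points to the model.

    scan: Nx3 LIDAR points, model: Mx3 points sampled from the satellite model,
    (R0, t0): pose prior, e.g. from the previous frame.
    """
    scan = np.asarray(scan, dtype=float)
    model = np.asarray(model, dtype=float)
    R, t = np.array(R0, dtype=float), np.array(t0, dtype=float)
    tree = cKDTree(model)
    for _ in range(iters):
        p = scan @ R.T + t                   # scan expressed in the model frame
        _, idx = tree.query(p)               # nearest model point for each scan point
        q = model[idx]
        pc, qc = p.mean(axis=0), q.mean(axis=0)
        H = (p - pc).T @ (q - qc)            # cross-covariance of centred point sets
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:            # avoid a reflection solution
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = qc - dR @ pc
        R, t = dR @ R, dR @ t + dt           # compose the incremental alignment
    return R, t
```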
Reconstruction of the vascular tree in retinal (ocular fundus) images is important because it yields the shape and size of individual vessels, their branching pattern, and arterio-venous crossings, thereby providing information on the condition of the retina. The vascular tree is also helpful in the registration of retinal images. In this paper we describe an automated technique for detecting and reconstructing vascular trees, based on robust detection of vessel candidates (ribbon-like features), their labelling using a neural network (NN), and a final reconstruction of the vessel tree from these labels. The NN uses vessel models automatically built during a training phase and does not rely on any explicit user-specified models or sets of features.
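The candidate-detection stage can be pictured with a simple Hessian-based ridge measure that responds to ribbon-like structures; the paper's actual detector and the NN labelling trained on automatically built vessel models are not reproduced in this sketch, and the scale and threshold values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vessel_candidates(img, sigma=2.0, thresh=0.5):
    """Flag ribbon-like (vessel-candidate) pixels in a grayscale fundus image.

    Assumes dark vessels on a brighter background; sigma roughly matches
    the expected vessel half-width in pixels.
    """
    img = np.asarray(img, dtype=float)
    Ixx = gaussian_filter(img, sigma, order=(0, 2))   # second derivative along x
    Iyy = gaussian_filter(img, sigma, order=(2, 0))   # second derivative along y
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    # Larger eigenvalue of the 2x2 Hessian at each pixel.
    lam = (Ixx + Iyy) / 2.0 + np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    ridge = np.maximum(lam, 0.0)                      # dark ridges have positive curvature
    ridge /= ridge.max() + 1e-9
    return ridge > thresh
```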
This paper describes a new sensor that combines visual information from a CCD camera with sparse distance measurements from an infra-red laser range-finder. The camera and the range-finder are coupled together in such a way that their optical axes are parallel. A mirror with different reflectivities for visible and infra-red light is used to ensure collinearity of the effective optical axes of the camera lens and the range-finder. The range is measured for an object in the center of the camera field of view. The Laser Eye is mounted on a robotic head and is used in an active vision system for an autonomous mobile robot (called ARK).
A new model of an adaptive adjacency graph (AAG) for representing a 2-D image or a 2-D view of a 3-D scene is introduced. The model uses an image representation similar in form to a region adjacency graph. The adaptive adjacency graph, as opposed to a region adjacency graph, is an active representation of the image: the AAG can adapt to the image or track features while maintaining the topology of the graph. Adaptability of the AAG is achieved by incorporating active contours ('snakes') into the graph. Various methods for creating AAGs are discussed. Results obtained for dynamic tracking of features in a sequence of images and for registration of retinal images are presented.
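A bare-bones view of the data structure is a graph whose nodes are regions and whose edges carry the active contours ('snakes') separating adjacent regions, so the contours can move with the image while the graph topology is maintained. The classes below are only a schematic of that idea, not the paper's implementation; all field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    label: int
    mean_intensity: float = 0.0

@dataclass
class Boundary:
    # Control points of the snake separating two adjacent regions.
    snake: list = field(default_factory=list)   # [(x, y), ...]

@dataclass
class AdaptiveAdjacencyGraph:
    regions: dict = field(default_factory=dict)      # label -> Region
    boundaries: dict = field(default_factory=dict)   # (label_a, label_b) -> Boundary

    def add_adjacency(self, a, b, snake):
        key = (min(a, b), max(a, b))
        self.boundaries[key] = Boundary(snake=list(snake))
```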
This paper describes an image segmentation algorithm and the results obtained using a specially designed robotic head. The head consists of a camera and a laser range-finder mounted on a pan & tilt unit. The additional distance-measuring capability offered by the head has been integrated into the segmentation process. The described method will be used for detecting visual landmarks by an autonomous mobile robot.