A growing need for more advanced training capabilities and the proliferation of government standards into the commercial market have inspired Cybernet to create an advanced, distributed 3D Simulation Toolkit. This system, called OpenSkies, is a truly open, realistic distributed system for 3D visualization and simulation. One of the main strengths of OpenSkies is its capability for data collection and analysis. Cybernet's Data Collection and Analysis Environment is closely integrated with OpenSkies to produce a unique, quantitative, performance-based measurement system. This system provides the capability for training students and operators on any complex equipment or system that can be created in a simulated world. OpenSkies is based on the military-standard HLA networking architecture, which allows thousands of users to interact in the same world across the Internet. Cybernet's OpenSkies simulation system brings the power and versatility of the OpenGL programming API to the simulation and gaming worlds. On top of this, Cybernet has developed an open architecture that allows developers to implement almost any new technique in their simulations. Overall, these capabilities deliver a versatile and comprehensive toolkit for simulation and distributed visualization.
We have developed and demonstrated a vision-based pose determination and reality registration system for identifying objects in an unstructured visual environment. A wire-frame template of the object to be identified is compared to the input images from one or more cameras. If the object is found, its position and orientation are computed and output. The placement of the template can be performed by a human in the loop or through an automated real-time front-end system. Classification and pose determination proceed in three steps: two estimation modules followed by a module that refines the estimates to determine an answer. The first module in the sequence uses the input images and models to generate a coarse pose estimate for the object. The second module uses the coarse estimate, the input images, and the model to further refine the pose. The last module uses the fine pose estimate, the images, and the model to determine an exact match between the model and the image.
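The staged refinement described above can be summarized as a simple pipeline. The following is a minimal Python sketch under stated assumptions: the module names (`coarse_pose_estimate`, `refine_pose`, `verify_match`), the 6-DOF pose representation, and the placeholder bodies are illustrative, not the actual Cybernet implementation.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Pose:
    """6-DOF pose: translation (x, y, z) plus orientation (roll, pitch, yaw)."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

def coarse_pose_estimate(images, model) -> Optional[Pose]:
    """Module 1 (placeholder): generate a coarse pose estimate by comparing
    the wire-frame template to the input images; return None if not found."""
    ...

def refine_pose(coarse: Pose, images, model) -> Pose:
    """Module 2 (placeholder): refine the coarse estimate using the same
    images and model."""
    ...

def verify_match(fine: Pose, images, model) -> bool:
    """Module 3 (placeholder): accept the refined pose only if the projected
    model matches the image within tolerance."""
    ...

def determine_pose(images: Sequence, model) -> Optional[Pose]:
    """Run the three-stage classification and pose determination pipeline."""
    coarse = coarse_pose_estimate(images, model)
    if coarse is None:
        return None                      # object not found in the scene
    fine = refine_pose(coarse, images, model)
    return fine if verify_match(fine, images, model) else None
```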
This paper is the result of ongoing Cybernet efforts, begun in May 1995, to develop a system for capturing high-resolution 3D terrain and cultural features from multiple position-referenced video sequences. The notion is to test and implement a perception-based image-rendering technique for terrain capture of real sites applicable to robotics, virtual reality, and simulation applications. These techniques will enable the Army to better understand and evaluate the operational capabilities of vision-based task performance. The development effort has focused on developing and evaluating image-based object construction and rendering algorithms, which are summarized below.
This paper presents a system for performing real-time vehicular self-location through a combination of triangulation of target sightings and low-cost auxiliary sensor information (e.g. accelerometer, compass, etc.). The system primarily relies on the use of three video cameras to monitor a dynamic 180 degree field of view. Machine vision algorithms process the imagery from this field of view searching for targets placed at known locations. Triangulation results are then combined with the past video processing results and auxiliary sensor information to arrive at real-time vehicle location update rates in excess of 10 Hz on a single low-cost conventional CPU. To supply both extended operating range and nighttime operational capabilities, the system also possesses an active illumination mode that utilizes multiple, inexpensive infrared LEDs to act as the illuminating source for reflective targets. This paper presents the design methodology used to arrive at the system, explains the overall system concept and process flow, and will briefly discuss actual results of implementing the system on a standard commercial vehicle.
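To illustrate the triangulation step, the sketch below estimates a 2D vehicle position from absolute bearings to targets at known locations by a least-squares intersection of the bearing lines. The function name, the planar simplification, and the omission of the auxiliary-sensor fusion and temporal filtering are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def triangulate_position(targets, bearings):
    """Least-squares 2D position fix from absolute bearings to known targets.

    targets  : (N, 2) known target positions (x, y) in the world frame.
    bearings : (N,) measured bearings (radians, world frame) from the vehicle
               to each target, e.g. camera azimuth plus compass heading.

    Each bearing b constrains the vehicle to the line through its target with
    direction (cos b, sin b). Writing the perpendicular-distance equations
    n_i . p = n_i . t_i and stacking them gives an overdetermined linear
    system, solved in the least-squares sense (needs >= 2 non-parallel bearings).
    """
    targets = np.asarray(targets, dtype=float)
    bearings = np.asarray(bearings, dtype=float)
    # Unit normals perpendicular to each bearing direction.
    normals = np.stack([-np.sin(bearings), np.cos(bearings)], axis=1)
    rhs = np.einsum("ij,ij->i", normals, targets)
    position, *_ = np.linalg.lstsq(normals, rhs, rcond=None)
    return position  # estimated (x, y) in world coordinates

# Example: three targets at surveyed locations, bearings taken from a
# hypothetical true position (4, 3); the solver recovers ~[4.0, 3.0].
targets = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([4.0, 3.0])
bearings = [np.arctan2(t[1] - true_pos[1], t[0] - true_pos[0]) for t in targets]
print(triangulate_position(targets, bearings))
```

In the full system, a fix like this would be blended with accelerometer and compass data and with prior video-processing results to sustain update rates above 10 Hz.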
Keywords: Machine Vision, Self-Location, Autonomous Vehicles, Infrared Sensing, Position Determination
This paper presents a system for performing real-time vehicular self-location through a combination of triangulation of target sightings and low-cost auxiliary sensor information (e.g. accelerometer, compass, etc.). The system primarily relies on the use of three video cameras to monitor a dynamic 180 degree field of view. Machine vision algorithms process the imagery from this field of view searching for targets placed at known locations. Triangulation results are then combined with the past video processing results and auxiliary sensor information to arrive at real-time vehicle location update rates in excess of 10 Hz on a single low-cost conventional CPU. To supply both extended operating range and nighttime operational capabilities, the system also possesses an active illumination mode that utilizes multiple, inexpensive, infrared LEDs as the illuminating source and retroreflectors as the system targets. This paper will present the design methodology used to arrive at the system, discuss the overall system concept and process flow, and briefly discuss actual results of implementing the system on a standard commercial vehicle.
New generations of military unmanned systems on the ground, at sea, and in the air will be driven by man-portable command units. In past efforts we implemented several prototypes of such units, which captured and displayed up to four video input channels, provided four color LCD screens and a larger status-display LCD screen, accepted drive input through two joysticks, and, through software, supported a flexible 'virtual' driver's interface. We have also performed additional trade analysis of prototype systems incorporating force feedback and extensive image-oriented processing facilities applied to man-controlled robotic control systems. This prior work has resulted in a database of practical design guidelines and a new generation of hardened, compact robotic command center, which is being designed and built to provide more advanced video capture, display, and interfacing features, supercomputer-level computational performance, and ergonomic features for hard field use. In this paper we summarize some past work and project current performance to features likely to be common across most unmanned systems command, control, and communications subsystems of the near future.
At least three of the five senses must be fully addressed in a successful virtual reality (VR) system. Sight, sound, and touch are the most critical elements for creating the illusion of presence. Since humans depend so much on sight to collect information about their environment, this area has been the focus of much of the prior art in virtual reality; however, it is also crucial to provide facilities for force, torque, and touch reflection, as well as sound replay and 3-D localization. In this paper we present a sampling of hardware and software in the virtual environment maker's 'toolbox' that supports the rapid construction of customized VR systems. We provide demonstrative examples of how some of the tools work and speculate about VR applications and future technology needs.