Adaptive synthetic vision (19 May 2006)
Through their ability to safely collect video and imagery from remote and potentially dangerous locations, UAVs have already transformed the battlespace. The effectiveness of this information can be greatly enhanced through synthetic vision. Given knowledge of the extrinsic and intrinsic parameters of the camera, synthetic vision superimposes spatially registered computer graphics over the video feed from the UAV. This technique can be used to show many types of data, such as landmarks, air corridors, and the locations of friendly and enemy forces. However, the effectiveness of a synthetic vision system depends strongly on the accuracy of the registration: if the graphics are poorly aligned with the real world, they can be confusing, annoying, and even misleading. In this paper, we describe an adaptive approach to synthetic vision that modifies the way in which information is displayed depending upon the registration error. We describe an integrated software architecture with two main components. The first component automatically calculates the registration error from information about the uncertainty in the camera parameters. The second component uses this error estimate to modify, aggregate, and label annotations to make their interpretation as clear as possible. We demonstrate the use of this approach on several sample datasets.
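The two-component architecture described above can be illustrated with a minimal sketch. This is not the authors' implementation: the first-order (Jacobian-based) propagation of camera-orientation uncertainty to a screen-space error radius is one standard way to estimate registration error, and the thresholds and style names in `annotation_style` are entirely hypothetical, chosen only to show how a display might degrade gracefully as error grows.

```python
import numpy as np

def project(point_cam, f):
    # Pinhole projection of a camera-frame point to pixel coordinates.
    x, y, z = point_cam
    return np.array([f * x / z, f * y / z])

def registration_error(point_cam, f, cov_params, eps=1e-5):
    """First-order propagation of camera-parameter uncertainty
    (here: small rotations about the x/y/z axes, with covariance
    cov_params in rad^2) to a 2-sigma screen-space error radius (px)."""
    def with_rotation(w):
        # Small-angle rotation applied to the point before projection.
        wx, wy, wz = w
        R = np.array([[1.0, -wz,  wy],
                      [ wz, 1.0, -wx],
                      [-wy,  wx, 1.0]])
        return project(R @ point_cam, f)
    # Numerical Jacobian of the projection w.r.t. the rotation parameters.
    J = np.zeros((2, 3))
    base = with_rotation(np.zeros(3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        J[:, i] = (with_rotation(d) - base) / eps
    cov_px = J @ cov_params @ J.T  # 2x2 screen-space covariance
    return 2.0 * np.sqrt(np.max(np.linalg.eigvalsh(cov_px)))

def annotation_style(err_px):
    # Hypothetical policy: degrade the annotation as error grows.
    if err_px < 5.0:
        return "registered outline"       # tight: draw aligned graphics
    if err_px < 50.0:
        return "label with leader line"   # moderate: point, don't outline
    return "screen-aligned text"          # poor: drop spatial registration

err = registration_error(np.array([2.0, 1.0, 20.0]), f=800.0,
                         cov_params=np.diag([1e-4, 1e-4, 1e-4]))
print(f"{err:.1f} px -> {annotation_style(err)}")
```

In this sketch only orientation uncertainty is modeled; a full system would also propagate position and intrinsic-parameter uncertainty through the same Jacobian machinery.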
© 2006 Society of Photo-Optical Instrumentation Engineers (SPIE).
Simon J. Julier, Dennis Brown, Mark A. Livingston, and Justin Thomas "Adaptive synthetic vision", Proc. SPIE 6226, Enhanced and Synthetic Vision 2006, 62260H (19 May 2006);
