Paper
Motion compression for telepresent walking in large-scale remote environments
8 September 2003
Norbert Nitzsche, Uwe D. Hanebeck, Guenther Schmidt
Abstract
Telepresent walking creates the sensation of walking through a target environment that is not directly accessible to a human, e.g. because it is remote, hazardous, or of inappropriate scale. A mobile teleoperator replicates the user's motion and collects visual and auditory information from the target environment, which is then transmitted and displayed to the user. While walking freely around the user environment, the user perceives the target environment through the teleoperator's sensors and feels as if walking through the target environment. Without additional processing of the user's motion data, the size of the target environment that can be explored is limited to the size of the user environment. Motion compression extends telepresent walking to arbitrarily large target environments without resorting to scaling or walking-in-place metaphors: both travel distances and turning angles are mapped at a 1:1 ratio.
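The abstract does not detail the underlying algorithm, but the stated property (travel distances and turning angles preserved 1:1 while the walkable area shrinks) can be illustrated geometrically: a long straight stretch of the target path can be bent into an arc of identical arc length that fits inside a much smaller user environment. The Python sketch below shows only this geometric core; the function name bend_segment, the chosen radius, and the room size are illustrative assumptions and are not taken from the paper.

import numpy as np

def bend_segment(length, radius, n=200):
    """Bend a straight target-path segment of the given length into a
    circular arc of the same arc length (curvature = 1/radius).
    The walked distance is preserved exactly; only the curvature changes."""
    theta = length / radius               # arc angle so that radius * theta == length
    t = np.linspace(0.0, theta, n)
    x = radius * np.sin(t)                # arc starting at the origin,
    y = radius * (1.0 - np.cos(t))        # initially heading along +x
    return np.column_stack([x, y])

# Illustrative numbers (assumptions, not from the paper): a 20 m straight
# walk in the target environment, bent to fit a user environment of
# roughly 5 m x 5 m by choosing a 2.4 m turning radius.
target_length = 20.0
user_radius = 2.4
path = bend_segment(target_length, user_radius)

# Sanity checks: the arc length matches the target distance (1:1 mapping),
# and the bent path stays within a small bounding box.
steps = np.diff(path, axis=0)
walked = float(np.sum(np.linalg.norm(steps, axis=1)))
extent_x = path[:, 0].max() - path[:, 0].min()
extent_y = path[:, 1].max() - path[:, 1].min()
print(f"walked distance on bent path: {walked:.2f} m (target: {target_length:.2f} m)")
print(f"bounding box of bent path: {extent_x:.2f} m x {extent_y:.2f} m")

Running this prints a walked distance of 20.00 m confined to a bounding box of about 4.80 m x 4.80 m, i.e. the distance actually walked is unchanged even though the path fits the room.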
© (2003) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Norbert Nitzsche, Uwe D. Hanebeck, and Guenther Schmidt "Motion compression for telepresent walking in large-scale remote environments", Proc. SPIE 5079, Helmet- and Head-Mounted Displays VIII: Technologies and Applications, (8 September 2003); https://doi.org/10.1117/12.488379
CITATIONS
Cited by 7 scholarly publications and 2 patents.
KEYWORDS
Environmental sensing
Visualization
Detection and tracking algorithms
Virtual reality
Head
Information visualization
Magnetic tracking