Sculpting 3D worlds with music: advanced texturing techniques
10 April 1996
Proceedings Volume 2653, Stereoscopic Displays and Virtual Reality Systems III; (1996)
Event: Electronic Imaging: Science and Technology, 1996, San Jose, CA, United States
Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer-generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user-definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D 'music videos' and interactive landscapes for live performance.
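The Soundsculpt Toolkit's actual API is not reproduced here; the sketch below is a hypothetical illustration of the analyze-extract-map pipeline the abstract describes. The function names (`extract_features`, `map_to_behavior`), the toy zero-crossing pitch estimate, and the schedule format are all assumptions, not the authors' implementation.

```python
import math

def extract_features(samples, sample_rate):
    """Estimate loudness (RMS energy) and dominant frequency
    (via zero-crossing rate) from one frame of audio samples."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    # one full cycle of a periodic waveform produces two zero crossings
    freq = crossings * sample_rate / (2 * len(samples))
    return {"rms": rms, "freq": freq}

def map_to_behavior(features, schedule):
    """Apply a user-definable schedule mapping each audio feature
    to an object parameter via a transform function."""
    updates = {}
    for feature_name, (target, transform) in schedule.items():
        updates[target] = transform(features[feature_name])
    return updates

# demo: a synthesized 440 Hz tone drives an object's scale and hue
# (half-sample phase offset avoids samples landing exactly on zero)
sr = 8000
tone = [math.sin(2 * math.pi * 440 * (n + 0.5) / sr) for n in range(sr)]
features = extract_features(tone, sr)
schedule = {
    "rms": ("scale", lambda v: 1.0 + v),         # louder -> bigger
    "freq": ("hue", lambda v: (v % 360) / 360),  # pitch -> color
}
updates = map_to_behavior(features, schedule)
```

In this reading, the "user-definable schedule" is simply a table handed to the mapper by the director or designer, so the same feature stream can drive different object behaviors without touching the analysis code.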
© (1996) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Christian Greuel, Mark T. Bolas, Niko Bolas, and Ian E. McDowall, "Sculpting 3D worlds with music: advanced texturing techniques", Proc. SPIE 2653, Stereoscopic Displays and Virtual Reality Systems III (10 April 1996).