7 May 1997 Visual matching between real and virtual images in image-guided neurosurgery
Nowadays, neurosurgeons have access to 3D multimodal imaging when planning and performing surgical procedures. 3D multimodal registration algorithms are available to establish geometrical relationships between different modalities. For a given 3D point, most multimodal applications merely display a cursor on the corresponding point in the other modality. The surgeon needs tools allowing the visual fusion of these heterogeneous data not only in the same coordinate system but also in the same visual space, in order to facilitate comprehension of the data. This problem is particularly crucial when using these images in the operating room. The goal of this paper is to analyze different methods for obtaining this visual fusion between real and virtual images. We discuss the relevance of different solutions depending on (1) the type of information shared between the modalities and (2) the hardware location of this visual fusion. Two new approaches are presented to illustrate our purpose: a neuronavigational microscope that provides an augmented reality feature through the microscope, and a new technique for matching 2D real images with 3D virtual data sets. We illustrate this second technique by mapping a 2D intra-operative photograph of the patient's anatomy onto 3D MRI images. Unlike other solutions, which display virtual images in the real world, our method uses ray-traced texture mapping to display real images in a computed world.
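The core of the 2D-to-3D matching idea is establishing a projective correspondence between pixels of the intra-operative photograph and points on the 3D MRI surface. The following is a minimal sketch of that step under a standard pinhole camera model; the function names, calibration values, and nearest-neighbour texture lookup are illustrative assumptions, not the authors' implementation (which uses ray tracing to resolve surface visibility).

```python
# Sketch: project 3D surface vertices into a 2D intra-operative photograph
# (pinhole camera model) so photo colors can be mapped onto the MRI surface.
# K, R, t, and all values below are illustrative, not the paper's calibration.
import numpy as np

def project_points(vertices, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates (pinhole model)."""
    cam = vertices @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                   # camera -> homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

def sample_texture(photo, pixels):
    """Nearest-neighbour lookup of photo colors at the projected pixels."""
    h, w = photo.shape[:2]
    u = np.clip(np.round(pixels[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(pixels[:, 1]).astype(int), 0, h - 1)
    return photo[v, u]

# Toy example: identity rotation, camera 10 units in front of the origin.
K = np.array([[500.0,   0.0, 320.0],    # focal lengths and principal point
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
vertices = np.array([[0.0, 0.0, 0.0],   # projects to the principal point
                     [1.0, 0.0, 0.0]])
pixels = project_points(vertices, K, R, t)
```

A full pipeline would additionally trace a ray per vertex to discard surface points occluded from the camera before sampling the photograph.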
© (1997) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Pierre Jannin, Alain Bouliou, Jean-Marie Scarabin, Christian Barillot, and J. Luber "Visual matching between real and virtual images in image-guided neurosurgery", Proc. SPIE 3031, Medical Imaging 1997: Image Display, (7 May 1997);
