Understanding the motion of the masticatory system is essential to maxillofacial surgeons and dentists in procedures involving jaw and tooth corrections. The temporomandibular joint (TMJ), despite its complexity, is one of the most frequently used joints of the human body. The high incidence of injuries in this joint is due not only to its regular use during mastication, but also to the strong forces applied by the muscles and the wide range of movements it can perform. In this work, we propose the development of a jaw simulator capable of reproducing the complete mastication movement. Our jaw simulator is composed of three triangle meshes representing the 3D models of the cranium, mandible and teeth, and an anatomically based joint model conceived to represent TMJ motion. The polygonal meshes describing the bones and teeth are obtained from CT images, and the jaw motion is simulated using the joint model guided by a 3D motion curve obtained from the composition of the standard 2D curves available in the medical literature. The scale, height and width of these original curves are modified to simulate different kinds and sizes of food and to represent the variability of the movements depending on patient morphology (teeth, bones, joints and muscles). The evaluation of preliminary results involved comparing a dynamic MRI of a healthy subject with the corresponding simulation.
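As a rough illustration of the curve-composition step described in this abstract, the following Python sketch shows one way two standard 2D chewing curves (frontal and sagittal projections) might be combined into a single 3D trajectory and rescaled. The function name, parameterization, and the averaging of the shared vertical component are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def compose_motion_curve(frontal_xy, sagittal_yz, scale=1.0, height=1.0, width=1.0):
    """Hypothetical composition of two standard 2D mastication curves,
    sampled at the same parameter values, into a 3D trajectory of a
    mandibular point.

    frontal_xy  : (N, 2) samples of lateral (x) and vertical (y) motion
    sagittal_yz : (N, 2) samples of vertical (y) and antero-posterior (z) motion
    scale, height, width : factors emulating food size and patient morphology
    """
    frontal_xy = np.asarray(frontal_xy, dtype=float)
    sagittal_yz = np.asarray(sagittal_yz, dtype=float)

    x = width * frontal_xy[:, 0]                               # lateral excursion
    y = height * 0.5 * (frontal_xy[:, 1] + sagittal_yz[:, 0])  # vertical opening (averaged)
    z = sagittal_yz[:, 1]                                      # antero-posterior motion
    return scale * np.column_stack([x, y, z])                  # (N, 3) 3D motion curve
```

In such a scheme, the returned curve could then be interpolated over time to drive the anatomically based joint model; the exact coupling used in the paper is not specified here.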
This work presents a set of tools developed to provide 3D visualization of, and interaction with, large volumetric data, relying on the recent programmable capabilities of consumer-level graphics cards. We exploit the programmable control over the calculations the graphics hardware performs to generate the appearance of each pixel on the screen in order to develop real-time, interactive volume manipulation tools. These tools allow real-time modification of visualization parameters, such as color and opacity classification or the selection of a volume of interest, extending the benefit of hardware acceleration beyond display, namely to the computation of voxel visibility. Three interactive tools are proposed: a cutting tool that allows the selection of a convex volume of interest, an eraser-like tool to eliminate non-relevant parts of the image, and a digger-like tool that allows the user to eliminate layers of a 3D image. To apply the proposed tools interactively to a volume, we rely on well-known user interaction techniques, such as those used in 2D painting systems. Our strategy is to minimize the training effort required to learn the tools. Finally, we illustrate the potential application of the conceived tools for preoperative planning of liver surgery and for the study of liver vascular anatomy. Preliminary results concerning system performance and image quality and resolution are presented and discussed.
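To illustrate the kind of per-voxel work that opacity classification and the volume-of-interest cutting tool delegate to the graphics hardware, the following Python/NumPy sketch performs an analogous computation on the CPU. The function name, lookup-table layout, and axis-aligned box VOI are assumptions made for illustration, not the system's actual shader code.

```python
import numpy as np

def classify_and_clip(volume, color_lut, opacity_lut, voi_min, voi_max):
    """Hypothetical CPU analogue of the per-sample work a fragment program
    might do: map scalar values to color and opacity through lookup tables
    and zero the opacity of samples outside a convex (axis-aligned) volume
    of interest, as a cutting tool would.

    volume      : (D, H, W) uint8 scalar field (e.g., from CT)
    color_lut   : (256, 3) RGB lookup table
    opacity_lut : (256,)   opacity lookup table
    voi_min/max : 3-tuples of voxel indices bounding the volume of interest
    """
    rgb = color_lut[volume]        # (D, H, W, 3) classified colors
    alpha = opacity_lut[volume]    # (D, H, W)    classified opacity

    mask = np.zeros(volume.shape, dtype=bool)
    mask[voi_min[0]:voi_max[0],
         voi_min[1]:voi_max[1],
         voi_min[2]:voi_max[2]] = True
    alpha = np.where(mask, alpha, 0.0)  # voxels outside the VOI become invisible
    return rgb, alpha
```

An eraser- or digger-like tool could be approximated in the same spirit by zeroing opacity inside a user-painted region rather than outside a box; on the GPU, these masks would instead be evaluated per sample during rendering, which is what makes the tools interactive.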
Conference Committee Involvement (1)
Tenth International Symposium on Medical Information Processing and Analysis