KEYWORDS: 3D modeling, Magnetic resonance imaging, Augmented reality, Clouds, Brain, Process modeling, Autoregressive models, Visualization, Visual process modeling, Image segmentation
This study aims to refine an automated workflow for neuroimages that generates three-dimensional (3D) point clouds in the Polygon File Format (.ply) for deployment to augmented reality (AR) head-mounted displays (HMDs). Our current work enhances and refines the core features and improves the point-cloud application to optimize brain MRI intake. Our brain image segmentation algorithm and web-based point-cloud generator show promise for clinical workflows in which high-quality point-cloud AR models can be generated from patient MRIs.
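The abstract does not give implementation details for the mask-to-point-cloud step; the sketch below shows one way it could work, assuming the segmentation produces a binary NumPy mask and using Open3D for the .ply export. The library choice, file names, and voxel spacing are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: convert a binary brain-segmentation mask into a .ply point cloud.
# Assumes the mask is a NumPy array ordered (z, y, x); Open3D is one possible
# choice for the PLY export, not necessarily the tool used in this workflow.
import numpy as np
import open3d as o3d

def mask_to_point_cloud(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> o3d.geometry.PointCloud:
    """Turn every foreground voxel into a 3D point; spacing is (x, y, z) in mm."""
    zyx = np.argwhere(mask > 0)                      # voxel indices of segmented tissue
    points = zyx[:, ::-1] * np.asarray(spacing)      # reorder to (x, y, z) and scale to mm
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
    return pcd

if __name__ == "__main__":
    mask = np.load("brain_mask.npy")                 # hypothetical segmentation output
    pcd = mask_to_point_cloud(mask, spacing=(0.5, 0.5, 1.0))
    pcd = pcd.voxel_down_sample(voxel_size=1.0)      # thin the cloud for AR rendering
    o3d.io.write_point_cloud("brain.ply", pcd)       # Polygon File Format for the HMD
```

Downsampling before export keeps the point count manageable for a standalone HMD; the 1 mm voxel size here is only a placeholder.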
KEYWORDS: 3D modeling, Autoregressive models, Augmented reality, Process modeling, 3D image processing, Clouds, Medical imaging, 3D displays, Surgery, Image segmentation
This study aims to demonstrate the efficient generation of three-dimensional (3D) models from medical scans and the ability for physicians to use these models via augmented reality (AR) head-mounted displays (HMDs). Viewing and interacting with 3D models of patients' medical scans on an HMD such as the Microsoft HoloLens 2 opens a wide range of new possibilities for more accurate and intuitive preoperative and intraoperative planning. Traditionally, the manual workflow for generating AR models of medical scans requires multiple software packages to carry out steps such as image segmentation, mesh refinement, and file conversion.1 Our web-based application automates the steps involved in generating AR models, with end-to-end integration from image upload to viewing and annotating the 3D model collaboratively on multiple AR headsets simultaneously.
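The paper does not name the web stack behind this end-to-end integration; the following is a hypothetical sketch of the upload entry point only, with Flask, the route, and the form-field names all assumed for illustration.

```python
# Hypothetical sketch of the web app's DICOM upload entry point (framework,
# route, and field names are assumptions; run_pipeline is a placeholder for the
# automated segmentation / interpolation / resizing / cropping / PLY export steps).
from pathlib import Path
from uuid import uuid4
from flask import Flask, request, jsonify

app = Flask(__name__)
UPLOAD_DIR = Path("uploads")
UPLOAD_DIR.mkdir(exist_ok=True)

@app.route("/upload", methods=["POST"])
def upload_dicom():
    """Accept a DICOM series and queue it for the automated AR-model pipeline."""
    job_id = uuid4().hex
    job_dir = UPLOAD_DIR / job_id
    job_dir.mkdir()
    for f in request.files.getlist("dicom"):   # one file per DICOM slice
        f.save(str(job_dir / f.filename))
    # run_pipeline(job_dir)  # hypothetical hand-off to the processing pipeline
    return jsonify({"job": job_id, "status": "queued"})

if __name__ == "__main__":
    app.run(debug=True)
```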
In addition to the main functions of automated segmentation, interpolation, resizing, and cropping of uploaded DICOM images, users can now automatically convert the files into a point cloud (PLY), which can be viewed and interacted with through a preview screen implemented in the web app. These 3D models can also be uploaded directly to an AR headset and viewed and annotated on multiple AR headsets simultaneously using the AR app developed for this workflow.
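The preprocessing steps named above (loading the series, interpolating to isotropic spacing, and cropping to the segmented region) could be sketched as follows; pydicom and SciPy are assumed tools, and the function names are illustrative rather than taken from the authors' application.

```python
# Sketch of DICOM intake preprocessing: interpolation (resizing) and cropping.
# pydicom/SciPy are assumptions, not necessarily the libraries used by the web app.
from pathlib import Path
import numpy as np
import pydicom
from scipy.ndimage import zoom

def load_series(dicom_dir: str):
    """Read a DICOM series into a (z, y, x) volume plus its voxel spacing in mm."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))   # order by z position
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    dy, dx = map(float, slices[0].PixelSpacing)
    dz = float(slices[0].SliceThickness)
    return volume, (dz, dy, dx)

def resample_isotropic(volume: np.ndarray, spacing, target: float = 1.0) -> np.ndarray:
    """Interpolate the volume to isotropic voxels (the resizing/interpolation step)."""
    factors = [s / target for s in spacing]
    return zoom(volume, factors, order=1)

def crop_to_mask(volume: np.ndarray, mask: np.ndarray, pad: int = 5) -> np.ndarray:
    """Crop the volume to the bounding box of a segmentation mask (the cropping step)."""
    zyx = np.argwhere(mask > 0)
    lo = np.maximum(zyx.min(axis=0) - pad, 0)
    hi = np.minimum(zyx.max(axis=0) + pad + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```

The cropped, isotropic volume would then feed the point-cloud conversion and the web preview described above.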