Interventional Radiology (IR) is a rapidly advancing field, with complex procedures and techniques being developed at an increasingly rapid pace. As these procedures and the underlying imaging technology continue to evolve, one challenge for physicians lies in maintaining optimal visualization of the various displays used to guide the procedure. Many Augmented Reality Surgical Navigation Systems (AR-SNS) have been proposed in the literature to improve the way physicians visualize their patient's anatomy, but few address the problem of space within the IR suite. Our solution is an Augmented Reality "cockpit", which streams and renders image data inside virtual displays visualized within the HoloLens 2, eliminating the need for physical displays. The benefit of our approach is that sterility-preserving interaction and customization can be performed using hand gestures and voice commands, and the physician can optimize the positioning of each display without worrying about physical interference from other equipment. As a proof of concept, we performed a user study to validate the suitability of our approach in the context of liver tumor ablation procedures. We found no significant differences in insertion accuracy or time between the proposed approach and the traditional method. This indicates that visualization of US imaging using our approach is an adequate replacement for the traditional physical display and paves the way for the next iteration of the system, which will quantify the benefits of our approach in multi-modality procedures.
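To make the streaming architecture concrete, the following is a minimal, purely illustrative sketch of the workstation side of such a "cockpit": pushing ultrasound frames over a plain TCP socket so a headset client can render them on a virtual display. It is an assumption about one possible design, not the system's actual code; `get_next_us_frame()` is a hypothetical placeholder for the ultrasound capture API, and the HoloLens-side renderer is out of scope.

```python
import socket
import struct
import numpy as np

def get_next_us_frame():
    # Hypothetical stand-in for the ultrasound capture API; returns an 8-bit grayscale frame.
    return (np.random.rand(480, 640) * 255).astype(np.uint8)

def stream_frames(host="0.0.0.0", port=18944):
    # Serve raw frames to a single headset client; a real system would add
    # compression, framing for dropped packets, and multi-client support.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()
    try:
        while True:
            frame = get_next_us_frame()
            header = struct.pack("!HH", *frame.shape)   # height, width prefix
            conn.sendall(header + frame.tobytes())      # headset reassembles the texture
    finally:
        conn.close()
        server.close()
```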
Augmented reality is becoming prevalent in modern video-based surgical navigation systems. Augmented reality, in the form of image fusion between virtual objects (i.e., virtual representations of the anatomy derived from pre-operative imaging modalities) and real objects (i.e., anatomy imaged by a spatially tracked surgical camera), facilitates the visualization and perception of the surgical scene. However, this requires spatial calibration between the external tracking system and the optical axis of the surgical camera, known as hand-eye calibration. Because the standard implementations of the most common hand-eye calibration techniques rely on static photos, the time required for data collection may limit the thoroughness and robustness needed to achieve an accurate calibration. To address these translational issues, we introduce a video-based hand-eye calibration technique, with an open-source implementation, that is accurate and robust. Based on point-to-line Procrustean registration, the technique records a short video of a tracked, pivot-calibrated ball-tip stylus; in each frame of the tracked video, the 3D position of the ball tip (point) and its projection onto the image (line) serve as a calibration data point. We further devise a data sampling mechanism designed to optimize the spatial configuration of the calibration fiducials, leading to consistently high-quality hand-eye calibrations. To demonstrate the efficacy of our work, a Monte Carlo simulation was performed to obtain the mean target projection error as a function of the number of calibration data points. The results, exemplified using a Logitech C920 Pro HD Webcam at an image resolution of 640 × 480, show that the mean projection error decreased as more data points were used per calibration, and the majority of mean projection errors fell below four pixels. An open-source implementation, in the form of a 3D Slicer module, is available on GitHub.
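To illustrate the core of the point-to-line Procrustean formulation, the sketch below alternates between projecting the tracked ball-tip points onto their back-projected image rays and solving a standard point-to-point Procrustes step. It is a simplified assumption of the approach, not the released 3D Slicer module; the variable names, the intrinsic matrix `K`, and the iterative scheme are illustrative.

```python
import numpy as np

def point_to_line_registration(tip_tracker, pixels, K, iters=200, tol=1e-9):
    """Estimate the camera-from-tracker (hand-eye) transform.

    tip_tracker : (N, 3) ball-tip positions in the tracker frame (points)
    pixels      : (N, 2) ball-tip detections in the image (define the lines)
    K           : 3x3 camera intrinsic matrix from a prior intrinsic calibration
    """
    # Unit direction of each back-projected ray through the camera origin
    rays = (np.linalg.inv(K) @ np.c_[pixels, np.ones(len(pixels))].T).T
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)

    R, t = np.eye(3), np.zeros(3)          # initial camera-from-tracker estimate
    prev_err = np.inf
    for _ in range(iters):
        p_cam = tip_tracker @ R.T + t      # points expressed in the camera frame
        # Closest point on each ray (line through the origin) to the transformed point
        depth = np.sum(p_cam * rays, axis=1, keepdims=True)
        q = depth * rays
        # Point-to-point Procrustes (Arun/Kabsch) step between tracker points and ray points
        mu_p, mu_q = tip_tracker.mean(0), q.mean(0)
        H = (tip_tracker - mu_p).T @ (q - mu_q)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
        R = Vt.T @ D @ U.T
        t = mu_q - R @ mu_p
        err = np.mean(np.linalg.norm(q - (tip_tracker @ R.T + t), axis=1))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```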
Advancements in Head-Mounted Display (HMD) technology have led to an increasing focus on the development of Augmented Reality (AR) applications in the image-guided surgery field. These applications are often enabled by third-party vision-based tracking techniques, allowing virtual models to be registered to their corresponding real-world objects. The accuracy of the underlying vision-based tracking technique is critical to the efficacy of these systems and must be thoroughly evaluated before integration into clinical practice. In this paper, we propose a framework for evaluating the technical accuracy of an HMD's intrinsic vision-based tracking techniques using an extrinsic tracking system as ground truth. Specifically, we assess the tracking accuracy of the Vuforia Augmented Reality Software Development Kit, a vision-based tracking technique commonly used in conjunction with the Microsoft HoloLens 2, against a commercial optical tracking system using a co-calibration apparatus. The framework follows a two-stage pipeline: first calibrating the cameras with respect to the optical tracker (hand-eye calibration), and then calibrating a Vuforia target to the optical tracker using a custom calibration apparatus. We then evaluate the absolute tracking accuracy of three Vuforia target types (image, cylinder, and cube) using a stand-alone Logitech webcam and the front-facing camera of the HoloLens 2. The hand-eye calibration projection errors were 1.4 ± 0.6 pixels for the Logitech camera and 2.3 ± 1.2 pixels for the HoloLens 2 camera. The cylinder target provided the most stable and accurate tracking, with mean errors of 12.5 ± 0.6 mm and 10.7 ± 0.0 mm for the Logitech and HoloLens 2 cameras, respectively. These results show that Vuforia has promising potential for integration into surgical navigation systems, but the type and size of target must be optimized for the particular surgical scenario to minimize tracking error. Future work will use our framework to perform a more robust analysis of optimal target shapes and sizes for vision-based navigation systems, both independently and when fused with the Simultaneous Localization and Mapping (SLAM) based tracking embedded in the Microsoft HoloLens 2.
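The two-stage pipeline reduces, at evaluation time, to chaining the calibrated transforms into an optically tracked ground-truth pose and comparing it with Vuforia's estimate. The sketch below shows one way that comparison could be expressed; the transform names, conventions, and error metric are assumptions for illustration, not the paper's code.

```python
import numpy as np

def tracking_error(T_cam_tracker, T_tracker_marker, T_marker_target,
                   T_cam_target_vuforia, test_points):
    """Mean absolute tracking error of Vuforia against optical-tracker ground truth.

    All transforms are 4x4 homogeneous matrices, T_a_b meaning "frame b expressed in frame a".
    T_cam_tracker        : hand-eye calibration (stage 1)
    T_tracker_marker     : optical-tracker measurement of the marker rigidly fixed to the target
    T_marker_target      : co-calibration of the Vuforia target to that marker (stage 2)
    T_cam_target_vuforia : pose of the target reported by Vuforia
    test_points          : (N, 3) points on the target used to evaluate the error (mm)
    """
    # Ground-truth pose of the Vuforia target in the camera frame via the optical tracker
    T_cam_target_gt = T_cam_tracker @ T_tracker_marker @ T_marker_target

    pts_h = np.c_[test_points, np.ones(len(test_points))]
    gt = (T_cam_target_gt @ pts_h.T).T[:, :3]
    est = (T_cam_target_vuforia @ pts_h.T).T[:, :3]
    return np.linalg.norm(gt - est, axis=1).mean()
```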
C-Arm positioning for interventional spine procedures is often associated with a steep learning curve. Current training standards involve using real X-rays on cadavers or apprenticeship-based programs. To help limit excess radiation exposure, several radiation-free training systems have been proposed in the literature, but there is no hands-on, cost-effective simulator that does not require access to a physical C-Arm. To expand the accessibility of radiation-free C-Arm training, we developed a 10:1 scaled-down C-Arm simulator using 3D-printed parts and wireless accelerometers for tracking. We generated Digitally Reconstructed Radiographs (DRRs) in real time using a one-dimensional transfer function operating on a ray-traced projection of a patient CT scan. To evaluate the efficacy of the system as a training tool, we conducted a user study in which anesthesiology and orthopedic residents were evaluated on the accuracy of their C-Arm placement for three standard views used in spinal injection procedures. Both the experimental and control groups were given the same evaluation task, with the experimental group receiving 5 minutes of training on the system using real-time DRRs and a standardized two-page curriculum on proper image acquisition. The experimental group achieved an angular error of 4.76 ± 1.66°, lower than the control group's 6.88 ± 3.67°, and overall feedback on the system was positive based on a Likert-scale questionnaire completed by each participant. These results indicate that our system has high potential for improving C-Arm placement in interventional spine procedures, and we plan to conduct a follow-up study to evaluate the long-term training capabilities of the simulator.
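As a rough illustration of DRR generation with a one-dimensional transfer function, the sketch below sums attenuation-like values along rays through a CT volume and applies Beer-Lambert attenuation. It is a simplified assumption, not the simulator's code: rays are cast along a single volume axis for brevity (the simulator would cast them along the accelerometer-derived source-detector axis), and the transfer-function coefficients are illustrative.

```python
import numpy as np

def transfer_function(hu):
    # 1D transfer function: map Hounsfield units to attenuation-like values,
    # suppressing air and soft tissue relative to bone (coefficients are illustrative).
    mu = np.clip((hu + 1000.0) / 1000.0, 0.0, None)   # ~0 at air, ~1 at water
    return np.where(hu > 300.0, mu * 2.5, mu)          # boost bony voxels

def generate_drr(ct, axis=0, step_mm=1.0):
    """Generate a simple DRR from a CT volume (3D array of Hounsfield units)."""
    mu = transfer_function(ct.astype(np.float32))
    line_integral = mu.sum(axis=axis) * step_mm        # ray-sum along the chosen axis
    drr = np.exp(-0.02 * line_integral)                # Beer-Lambert attenuation (scale assumed)
    return 1.0 - drr                                   # invert so bone appears bright
```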