This PDF file contains the front matter associated with SPIE Proceedings Volume 12466, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Image-guided bronchoscopy systems and new robotics-assisted bronchoscopy systems are transforming the practice of bronchoscopy. To use such a system, the physician must first create an airway route plan to preselected Regions of Interest (ROIs) using a patient’s chest CT scan, prior to the live procedure. Many unexpected situations arise during the live procedure, however, where the physician must examine a new, previously unplanned ROI site — this requires an airway guidance route leading to the new site, derived live in real time. We propose a method for deriving an airway route during a live bronchoscopic procedure to any selected site in any imaging view or bronchoscopic video view observed on an assisted-bronchoscopy system’s display. The method includes an interactive graphical tool for managing and selecting new ROI sites and fits within the framework of an assisted-bronchoscopy system. When a site is selected, the methodology draws on the patient’s chest CT scan and a previously derived airway-tree centerline structure to compute the desired airway route in real time. The physician can then preview the new airway route on the assisted-bronchoscopy system’s display and undertake guided bronchoscopy to the new site. Example results demonstrate the methodology.
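As a rough illustration of the route-derivation idea (not the authors’ algorithm), the sketch below snaps a selected ROI to the nearest node of a precomputed airway-tree centerline and walks parent links back to the trachea; the node layout and field names are assumptions.

import numpy as np

def route_to_roi(roi_xyz, centerline_nodes):
    # centerline_nodes: list of dicts {"xyz": (x, y, z), "parent": index or None}
    pts = np.array([n["xyz"] for n in centerline_nodes], dtype=float)
    nearest = int(np.argmin(np.linalg.norm(pts - np.asarray(roi_xyz, float), axis=1)))
    route = []
    idx = nearest
    while idx is not None:                 # follow parent links up to the trachea (root)
        route.append(centerline_nodes[idx]["xyz"])
        idx = centerline_nodes[idx]["parent"]
    route.reverse()                        # order the route from the trachea toward the ROI
    return route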
This article describes a method for bronchial nomenclature using real bronchoscopic (RB) images and a pre-built knowledge base of branches. The bronchus has a complex tree-like structure, which increases the difficulty of bronchoscopy; a bronchoscopic navigation system is therefore used to help physicians during examination. Conventional navigation systems use preoperative CT images and real bronchoscopic images to obtain the camera pose for navigation, and their accuracy is influenced by organ deformation. We propose a bronchial nomenclature method to estimate branch names for bronchoscopic navigation. This method consists of a bronchus knowledge base construction module, a camera motion estimation module, an anatomical structure tracking module, and a branch name estimation module. The knowledge base construction module is used to find the relationships among branches. The anatomical structure tracking module is used to track the bronchial orifice (BO) extracted in each RB frame. The camera motion estimation module is used to estimate the camera motion between two frames. The branch name estimation module uses the pre-built bronchus knowledge base and BO tracking results to find the name of each branch. Experimental results showed that it is possible to estimate branch names using only RB images and the pre-built knowledge base of branches.
Neurosurgical techniques often require accurate targeting of deep-brain structures even in the presence of deformation due to intervention and egress of Cerebrospinal Fluid (CSF) during surgical access. Prior work reported Simultaneous Localization and Mapping (SLAM) methods for endoscopic guidance using 3D reconstruction. In this work, methods for correcting the geometric distortion of a neuroendoscope are reported in a form that has been translated to intraoperative use in first clinical studies. Furthermore, SLAM methods are evaluated in first clinical studies for real-time 3D endoscopic navigation with near real-time registration in the presence of deep-brain tissue deformation. A custom calibration jig with swivel mounts was designed and manufactured for neuroendoscope calibration in the operating room. The process is potentially suitable for intraoperative use while maintaining sterility of the endoscope, although the current calibration system was used in the Operating Room (OR) immediately following the case for offline analysis. A six-by-seven checkerboard pattern was used to obtain corner locations for calibration, and the method was evaluated in terms of Reprojection Error (RPE). Neuroendoscopic video was acquired under an IRB-approved clinical study, demonstrating rich vascular features and other structures on the interior walls of the lateral ventricles for 3D point-cloud reconstruction. Geometric accuracy was evaluated in terms of Projected Error (PE) on a ground truth surface defined from MR or cone-beam CT (CBCT) images. Intraoperative neuroendoscope calibration was achieved with sub-pixel error [0.61 ± 0.20 px]. The calibration yielded focal lengths of 816.42 px and 822.71 px in the X and Y directions, respectively, along with radial distortion coefficients of -0.432 (first-order term k1) and 0.158 (second-order term k2). The 3D reconstruction was performed successfully with a PE of 0.23 ± 0.15 mm compared to the ground truth surface. The system for neuroendoscopic guidance based on SLAM 3D point-cloud reconstruction provided a promising platform for the development of 3D neuroendoscopy. The studies reported in this work presented an important means of neuroendoscope calibration in the OR and provided preliminary evidence for accurate 3D video reconstruction in first clinical studies. Future work aims to further extend the clinical evaluation and improve reconstruction accuracy using ventricular shape priors.
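For readers unfamiliar with checkerboard calibration and reprojection-error evaluation, a minimal OpenCV sketch follows. The 6 x 7 inner-corner pattern, unit square size, and the image list are assumptions; the clinical jig and sterile workflow in the paper are more involved.

import cv2
import numpy as np

pattern = (6, 7)                                    # inner corners per row/column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # square size = 1 unit

obj_pts, img_pts = [], []
for path in image_paths:                            # image_paths: list of calibration frames (assumed)
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print("focal lengths (px):", K[0, 0], K[1, 1], "radial k1, k2:", dist[0][0], dist[0][1])
print("RMS reprojection error (px):", rms)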
In this paper, we present a segmentation method for laparoscopic images that uses semantically similar groups for multi-class semantic segmentation. Accurate semantic segmentation is a key problem for computer-assisted surgery. Common segmentation models do not explicitly learn similarities between classes. We propose a model that, in addition to learning to segment an image into classes, also learns to segment it into human-defined, semantically similar groups. We modify the LinkNet34 architecture by adding a second decoder with the auxiliary task of segmenting the image into these groups. The feature maps of the second decoder are merged into the final decoder. We validate our method against our base model LinkNet34 and a larger LinkNet50. We find that our proposed modification increased performance on two laparoscopic datasets in both mean Dice (average +1.5%) and mean Intersection over Union (average +2.8%).
Augmented reality is becoming prevalent in modern video-based surgical navigation systems. Augmented reality in the form of image fusion between virtual objects (i.e., virtual representations of the anatomy derived from pre-operative imaging modalities) and real objects (i.e., anatomy imaged by a spatially tracked surgical camera) facilitates the visualization and perception of the surgical scene. However, this requires spatial calibration between the external tracking system and the optical axis of the surgical camera, known as hand-eye calibration. Because the standard implementations of the most common hand-eye calibration techniques are static-photo-based, the time required for data collection may inhibit the thoroughness and robustness needed to achieve an accurate calibration. To address these translational issues, we introduce a video-based hand-eye calibration technique, with an open-source implementation, that is accurate and robust. Based on point-to-line Procrustean registration, a short video of a tracked and pivot-calibrated ball-tip stylus was recorded where, in each frame of the tracked video, the 3D position of the ball-tip (point) and its projection onto the video (line) serves as a calibration data point. We further devise a data sampling mechanism designed to optimize the spatial configuration of the calibration fiducials, leading to consistently high-quality hand-eye calibrations. To demonstrate the efficacy of our work, a Monte Carlo simulation was performed to obtain the mean target projection error as a function of the number of calibration data points. The results, exemplified using a Logitech C920 Pro HD Webcam with an image resolution of 640 × 480, show that the mean projection error decreased as more data points were used per calibration, and the majority of mean projection errors fell below four pixels. An open-source implementation, in the form of a 3D Slicer module, is available on GitHub.
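The target projection error used to evaluate such a calibration can be illustrated with a short sketch: a tracked 3D ball-tip is mapped through the hand-eye transform, projected with the camera intrinsics, and compared with the detected image point. Variable names and conventions below are assumptions, not the authors’ implementation.

import numpy as np

def projection_error_px(p_tracker, T_cam_from_tracker, K, detected_uv):
    p = np.append(np.asarray(p_tracker, float), 1.0)       # homogeneous tracker-space point
    p_cam = (T_cam_from_tracker @ p)[:3]                    # 4x4 hand-eye transform to camera space
    uvw = K @ p_cam                                         # 3x3 intrinsic matrix
    uv = uvw[:2] / uvw[2]                                   # perspective division to pixel coordinates
    return float(np.linalg.norm(uv - np.asarray(detected_uv, float)))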
Stereo matching methods that enable depth estimation are crucial for visualization enhancement applications in Computer-Assisted Surgery (CAS). Learning-based stereo matching methods are promising to predict accurate results for applications involving video images. However, they require a large amount of training data, and their performance may be degraded due to domain shifts. Maintaining robustness and improving performance of learning-based methods are still open problems. To overcome the limitations of learning-based methods, we propose a disparity refinement framework consisting of a local disparity refinement method and a global disparity refinement method to improve the results of learning-based stereo matching methods in a cross-domain setting. Those learning-based stereo matching methods are pre-trained on a large public dataset of natural images and are tested on a dataset of laparoscopic images. Results from the SERV-CT dataset showed that our proposed framework can effectively refine disparity maps on an unseen dataset even when they are corrupted by noise, and without compromising correct prediction, provided the network can generalize well on unseen datasets. As such, our proposed disparity refinement framework has the potential to work with learning-based methods to achieve robust and accurate disparity prediction. Yet, as a large laparoscopic dataset for training learning-based methods does not exist and the generalization ability of networks remains to be improved, it will be beneficial to incorporate the proposed disparity refinement framework into existing networks for more accurate and robust depth estimation.
Minimally Invasive Surgery (MIS) has expanded broadly in the field of abdominal and pelvic surgery. Laparoscopic and robotic surgery has improved surgeon ergonomics, instrument precision, operative time, and postoperative recovery across various abdominal procedures. The goal of this study is to establish the feasibility of implementing high-speed hyperspectral imaging into a standard laparoscopic setup and to explore its benefit to common intracorporeal procedures. A hyperspectral laparoscopic imaging system was constructed using a customized hyperspectral camera alongside a standard rigid laparoscope and was validated for both spectral and spatial accuracy. Demosaicing methods were investigated for improved full-resolution visualization. Hyperspectral cameras with different spectral ranges were considered and compared with one another alongside two different light sources to determine the most effective configuration. Finally, different porcine tissues were imaged ex vivo to test the capabilities of the system, and spectral footprints of the various tissues were extracted. The tissue was also imaged in a phantom to simulate the system’s use in MIS. The results demonstrated a hyperspectral laparoscopic imaging system that could provide quantitative, diagnostic information without disrupting normal workflow or adding excessive weight to the laparoscopic setup. The high-speed hyperspectral laparoscopic imaging system can have immediate applications in image-guided surgery.
This paper presents the design, fabrication, and experimental validation of a photoacoustic (PA) imaging probe for robotic surgery. PA is an emerging imaging modality that combines the high penetration of ultrasound (US) imaging with high optical contrast. When equipped with a PA probe, a surgical robot can provide intraoperative guidance to the operating physician, alerting them to the presence of vital substrate anatomy (e.g., nerves or blood vessels) invisible to the naked eye. Our probe is designed to work with the da Vinci surgical system to produce three-dimensional PA images: we propose an approach wherein the robot provides Remote Center-of-Motion (RCM) scanning across a region of interest, and successive PA tomographic images are acquired and interpolated to produce a three-dimensional PA image. To demonstrate the accuracy of robot-actuated 3D tomographic PA scanning, we conducted an experimental study that involved imaging a multi-layer wire phantom. The computed Target Registration Error (TRE) between the acquired PA image and the phantom was 1.5567 ± 1.3605 mm. An ex vivo study demonstrated the function of the proposed laparoscopic device in 3D vascular detection. These results indicate the potential of our PA system to be incorporated into clinical robotic surgery for functional anatomical guidance.
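The Target Registration Error reported here is, in essence, the distance between corresponding points after registration. A minimal illustration, assuming matched point arrays from the PA image and the phantom ground truth are already available:

import numpy as np

def target_registration_error(points_image, points_phantom):
    # both inputs: N x 3 arrays of corresponding point coordinates (mm)
    d = np.linalg.norm(np.asarray(points_image, float) - np.asarray(points_phantom, float), axis=1)
    return d.mean(), d.std()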
Stereo reconstruction is an important tool for generating 3D surface observations of deformable tissues that can be used to non-rigidly update intraoperative image guidance. Compared to traditional image processing-based stereo matching techniques, emerging machine learning approaches aim to deliver shorter processing times, more accurate surface reconstructions, and greater robustness to the suboptimal qualities of intraoperative tissue imaging (e.g., occlusion, reflection, and minimally textured surfaces). This work evaluates the popular PSMNet convolutional neural network as a tool for generating disparity maps from the video feed of the da Vinci Xi Surgical System. Reconstruction accuracy and speed were assessed for a series of 44 stereoendoscopic frame pairs showing key structures in a silicone renal phantom. Surface representation accuracy was found to be on the order of 1 mm for reconstructions of the kidney and inferior vena cava, and disparity maps were produced in under 2 s when inference was performed on a standard modern GPU. These preliminary results suggest that PSMNet and similar trained models may be useful tools for integrating intraoperative stereo reconstruction into advanced navigation platforms, and they warrant further development of the overall data pipeline and testing with biological tissues in representative surgical conditions.
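Once a disparity map is predicted, converting it to metric depth for a rectified stereo endoscope uses the standard relation depth = focal length × baseline / disparity. A small hedged sketch, assuming the focal length (in pixels) and baseline (in millimetres) are known from stereo calibration:

import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm):
    depth = np.full_like(disparity, np.nan, dtype=float)
    valid = disparity > 0                       # ignore unmatched / zero-disparity pixels
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth                                # depth map in mm, same shape as the disparity map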
Up to 35% of breast-conserving surgeries fail to resect all the tumors completely. Ideally, machine learning methods using iKnife data, which is based on Rapid Evaporative Ionization Mass Spectrometry (REIMS), can be utilized to predict tissue type in real time during surgery, resulting in better tumor resections. As REIMS data is heterogeneous and weakly labeled, and datasets are often small, model performance and reliability can be adversely affected. Self-supervised training and uncertainty estimation of the prediction can be used to mitigate these challenges by learning the signatures of input data without their labels and by including predictive confidence in output reporting. We first design an autoencoder model using a reconstruction pretext task as a self-supervised pretraining step without considering tissue type. Next, we construct our uncertainty-aware classifier using the encoder part of the model with Masksembles layers to estimate the uncertainty associated with its predictions. The pretext task was trained on 190 burns collected from 34 patients from Basal Cell Carcinoma iKnife data. The model was further trained on breast cancer data comprising 200 burns collected from 15 patients. Our proposed model shows improvements in sensitivity and uncertainty metrics of 10% and 15.7% over the baseline, respectively. The proposed strategies lead to improvements in uncertainty calibration and overall performance, toward reducing the likelihood of incomplete resection, supporting removal of minimal non-neoplastic tissue, and improving model reliability during surgery. Future work will focus on further testing the model on intraoperative data and additional ex vivo data following collection of more breast samples.
Computer-based skill assessment relies on accurate metrics to provide comprehensive feedback to trainees. Improving the accuracy of video-based metrics computed using object detection is generally done by improving the performance of the object detection network; however, increasing its performance requires resources that cannot always be obtained. This study aims to improve the accuracy of metrics in central venous catheterization without requiring a high-performing object detection network, by removing false positive predictions identified using uncertainty quantification. The uncertainty for each bounding box was calculated using an entropy equation. The uncertainties were then compared to an uncertainty threshold computed using the optimal point of a Receiver Operating Characteristic curve, and predictions were removed if the uncertainty fell below the predefined threshold. Fifty videos were recorded and annotated with ground truth bounding boxes. These bounding boxes were used to train an object detection network, which was used to produce predicted bounding boxes for the test set. The method was evaluated by computing metrics for the predicted bounding boxes, with and without false positives removed, and comparing them to ground truth labels using a Pearson Correlation. The Pearson Correlations for the baseline comparisons and for the comparisons made after false positive removal were 0.922 and 0.816 for syringe path lengths, 0.753 and 0.510 for ultrasound path lengths, 0.831 and 0.489 for ultrasound usage times, and 0.857 and 0.805 for syringe usage times. This method consistently reduced inflated metrics, making it promising for improving metric accuracy.
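As an illustration of the entropy-based filtering idea (the exact equation, threshold, and comparison direction in the paper come from the ROC analysis and may differ), a hedged sketch: each detection carries a class-probability vector, Shannon entropy serves as its uncertainty score, and detections on the uncertain side of a chosen threshold are discarded.

import numpy as np

def shannon_entropy(probs):
    p = np.clip(np.asarray(probs, float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def filter_detections(detections, threshold):
    # detections: list of dicts {"box": (x1, y1, x2, y2), "probs": [...]} (illustrative layout)
    return [d for d in detections if shannon_entropy(d["probs"]) < threshold]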
Treatment for Basal Cell Carcinoma (BCC) includes excisional surgery to remove cancerous tissue, using a cautery tool to make burns along a defined resection margin around the tumor. Margin evaluation occurs post-surgically, requiring repeat surgery if positive margins are detected. Rapid Evaporative Ionization Mass Spectrometry (REIMS) can help distinguish healthy and cancerous tissue but does not provide spatial information about the cautery tool location where the spectra are acquired. We propose using intraoperative surgical video recordings and deep learning to provide surgeons with guidance to locate sites of potential positive margins. Frames from 14 intraoperative videos of BCC surgery were extracted and used to train a sequence of networks. The first network extracts frames showing surgery in progress; then an object detection network localizes the cautery tool and resection margin. Finally, our burn prediction model leverages both a Long Short-Term Memory (LSTM) network and a Receiver Operating Characteristic (ROC) curve to predict when the surgeon is cutting. The cut identifications will be used in the future for synchronization with iKnife data to provide localizations when cuts are predicted. The model was trained with four-fold cross-validation on a patient-wise split between training, validation, and testing sets. Average recall over the four testing folds was 0.80 for the LSTM and 0.73 for the ROC approach. The video-based approach is simple yet effective at identifying tool-to-skin contact instances and may help guide surgeons, enabling them to deliver precise treatments in combination with iKnife data.
Following the shift from time-based medical education to a competency-based approach, a computer-assisted training platform would help relieve some of the new time burden placed on physicians. A vital component of these platforms is the computation of competency metrics based on surgical tool motion. Recognizing the class and motion of surgical tools is one step in the development of such a training platform, and object detection can achieve this tool recognition. While previous literature has reported on tool recognition in minimally invasive surgeries, open surgeries have not received the same attention. Open Inguinal Hernia Repair (OIHR), a common surgery that general surgery residents must learn, is an example of such surgeries. We present an object detection method to recognize surgical tools in simulated OIHR. Images were extracted from six video recordings of OIHR performed on phantoms, and tools were labelled with bounding boxes. A YOLOv3 object-detection model was trained to recognize the tools used in OIHR. The Average Precision score per class and the mean Average Precision (mAP) were reported to benchmark the model’s performance. The mAP over the tool classes was 0.61, with individual Average Precision scores reaching up to 0.98. Tools with poor visibility or similar shapes, such as the forceps or scissors, achieved lower precision scores. With an object detection network that can identify tools, research can be done on tissue-tool interactions to achieve workflow recognition, which would allow a training platform to detect the tasks performed in hernia repair surgeries.
Surgical instrument tracking is an active research area that can provide surgeons with feedback about the location of their tools relative to anatomy. Recent tracking methods are mainly divided into two categories: segmentation and object detection. However, both can only predict 2D information, which is limiting for application to real-world surgery. An accurate 3D surgical instrument model is a prerequisite for precise predictions of the pose and depth of the instrument. Recent single-view 3D reconstruction methods have only been used for natural object reconstruction and do not achieve satisfactory reconstruction accuracy without 3D attribute-level supervision. Further, those methods are not suitable for surgical instruments because of their elongated shapes. In this paper, we propose an end-to-end surgical instrument reconstruction system — Self-supervised Surgical Instrument Reconstruction (SSIR). With SSIR, we propose a multi-cycle-consistency strategy to help capture the texture information of a slim instrument while requiring only a binary instrument label map. Experiments demonstrate that our approach improves the reconstruction quality of surgical instruments compared to other self-supervised methods and achieves promising results.
Estimation of surgical tool pose is essential for surgical image guidance. Near real-time position and angle estimation is crucial, for example, for intraoperative optical coherence tomography tracking in retinal microsurgery. The current state-of-the-art algorithm for surgical tool tracking in posterior eye surgery was first introduced by Alsheakhali et al.1 We propose the Dual Color Space Algorithm, an improved tool segmentation method based on combined color space masks, set thresholds, a shadow-insensitive detector for the tool edge, and more robust detection of the surgical tool tip. The presented algorithms are benchmarked on a series of manually annotated images from posterior eye surgery video. The video frames suffer from the confounding effects of the tool’s shadow and of spot illumination occurring several times. A severalfold improvement in the algorithm’s accuracy is reported.
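The combined color-space masking idea can be illustrated with a short OpenCV sketch; the channels and thresholds below are assumptions for a bright, low-saturation metallic tool and are not the paper’s exact settings.

import cv2
import numpy as np

def dual_color_space_mask(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    low_saturation = hsv[:, :, 1] < 60           # grey/metallic pixels (HSV saturation channel)
    bright = lab[:, :, 0] > 140                   # bright pixels (LAB lightness channel)
    mask = (low_saturation & bright).astype(np.uint8) * 255   # keep pixels flagged by both spaces
    # small morphological opening to suppress speckle
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))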
Validation of deformable image registration techniques is extremely important, but hard, especially when complex deformations or content mismatch are involved. These complex deformations and content mismatch, for example, occur after the placement of an applicator for brachytherapy for cervical cancer. Virtual phantoms could enable the creation of validation data sets with ground truth deformations that simulate the large deformations that occur between image acquisitions. However, the quality of the multi-organ Finite Element Method (FEM)-based simulations is dependent on the patient-specific external forces and mechanical properties assigned to the organs. A common approach to calibrate these simulation parameters is through optimization, finding the parameter settings that optimize the match between the outcome of the simulation and reality. When considering inherently simplified organ models, we hypothesize that the optimal deformations of one organ cannot be achieved with a single parameter setting without compromising the optimality of the deformation of the surrounding organs. This means that there will be a trade-off between the optimal deformations of adjacent organs, such as the vagina-uterus and bladder. This work therefore proposes and evaluates a multi-objective optimization approach where the trade-off between organ deformations can be assessed after optimization. We showcase what the extent of the trade-off looks like when bi-objectively optimizing the patient-specific mechanical properties and external forces of the vagina-uterus and bladder for FEM-based simulations.
High-dose-rate brachytherapy is an accepted standard-of-care treatment for prostate cancer. In this procedure, catheters are inserted under three-dimensional (3D) transrectal ultrasound image guidance, and their positions are manually segmented for treatment planning and delivery. The transverse ultrasound sweep, which is subject to tip and depth error for catheter localization, is a commonly used ultrasound imaging option for image acquisition. We propose a two-step pipeline that uses a deep-learning network and curve fitting to automatically localize and model catheters in transversely reconstructed 3D ultrasound images. In the first step, a 3D U-Net was trained to automatically segment all catheters in a 3D ultrasound image. Following this step, curve fitting was implemented to detect the shapes of individual catheters using polynomial fitting. Of the 343 catheters (from 20 patients) in the testing data, the pipeline detected 320 (93.29%), with 7 false positives (2.04%) and 13 false negatives (3.79%). The average distance ± one standard deviation between the ground truth and predictions for each catheter shaft was 1.9 ± 0.3 mm, and the average difference for each catheter tip was 3.0 ± 0.4 mm. The proposed pipeline provides a method for reducing the time spent on verification of catheter positions, minimizing uncertainties, and improving clinical workflow during the procedure. Reducing human variability in catheter placement predictions may increase the accuracy of tracking and radiation dose modelling.
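The curve-fitting step can be illustrated as follows: given the voxel coordinates that the segmentation network assigned to one catheter, low-order polynomials of x and y as functions of depth z give a smooth shaft model. The polynomial degree and sampling density below are assumptions for illustration.

import numpy as np

def fit_catheter(points_xyz, degree=2, n_samples=100):
    pts = np.asarray(points_xyz, float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    px = np.polyfit(z, x, degree)              # coefficients of x(z)
    py = np.polyfit(z, y, degree)              # coefficients of y(z)
    zs = np.linspace(z.min(), z.max(), n_samples)
    # smooth shaft model sampled along the insertion (z) direction
    return np.column_stack([np.polyval(px, zs), np.polyval(py, zs), zs])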
This paper describes a perineal access tool for MRI-guided prostate interventions and evaluates it using a phantom study. The development of this device has been driven by the clinical need and a close collaboration effort. The device seamlessly fits into the workflow of MRI-guided prostate procedures such as cryoablation and biopsies. It promises a significant cut in the procedure time, accurate needle placement, lower number of insertions, and a potential for better patient outcomes. The current embodiment includes a frame which is placed next to the perineum and incorporates both visual and MRI-visible markers. These markers are automatically detected both in MRI and by a pair of stereo cameras (optical head) allowing for automatic optical registration. The optical head illuminates the procedure area and can track instruments and ultrasound probes. The frame has a window to access the perineum. Multiple swappable grids may be placed in this window depending on the application. It is also possible to entirely remove the grid for freehand procedures. All the components are designed to be used inside the MRI suite. To test this system, we built a custom phantom with MRI visible targets and planned 21 needle insertions with three grid types using the SCENERGY software. With an average insertion depth of about 85 mm, the average error of needle tip placement was 2.74 mm. We estimated the error by manually segmenting the needle tip in post-insertion MRIs of the phantom and comparing that to the plan.
Radiomic analysis has shown significant potential for predicting treatment response to neoadjuvant therapy in rectal cancers via routine MRI, though primarily based on a single acquisition plane or single region of interest. To exploit intuitive clinical and biological aspects of tumor extent on MRI, we present a novel multi-plane, multi-region radiomics framework to more comprehensively characterize and interrogate treatment response on MRI. Our framework was evaluated on a cohort of 71 T2-weighted axial and coronal MRIs from patients diagnosed with rectal cancer who underwent chemoradiation. 2D radiomic features were extracted from three regions of interest (tumor, fat proximal to tumor, and perirectal fat) across axial and coronal planes, with a two-stage feature selection scheme designed to identify descriptors associated with pathologic complete response. When evaluated via a quadratic discriminant analysis classifier, our multi-plane, multi-region radiomics model outperformed single-plane or single-region feature sets with an area under the ROC curve (AUC) of 0.80 ± 0.03 in discovery and AUC = 0.65 in hold-out validation. Uniquely, the optimal feature set comprised descriptors from multiple planes (axial, coronal) as well as multiple regions (tumor, proximal fat, perirectal fat). Our multi-plane, multi-region radiomics framework may thus enable more comprehensive phenotyping of treatment response on MRI, potentially finding application in improved personalization of therapeutic and surgical interventions for rectal cancers.
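The classification stage pairs selected radiomic features with a quadratic discriminant analysis classifier evaluated by ROC AUC. A minimal scikit-learn sketch, where the feature matrices and labels are placeholders for the selected descriptors and pathologic complete response status:

from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)                        # y: pathologic complete response (0/1), assumed available
scores = qda.predict_proba(X_test)[:, 1]         # predicted probability of complete response
print("hold-out AUC:", roc_auc_score(y_test, scores))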
Liver tumor involvement by either primary or secondary cancers is responsible for over 1 million deaths per year worldwide. Image-guided percutaneous thermal ablation (PTA) has become a widely utilized option for patients not eligible for surgery, demonstrating similar 5-year overall survival rates between surgery and PTA. Achieving a 5 mm ablation margin has been shown to correlate with improved survival, however, achieving accurate needle placement and confirming sufficient ablation is challenging in the presence of liver deformation, needle artifacts, and inability to distinguish between the tumor boundary and ablation region post-PTA. This presentation will describe data demonstrating the need for accurate ablation measurement for improved outcomes, the emerging role of deep learning to provide segmentation of the liver, tumor, and ablation region, and the advances in precision of targeting the tumor and assessing the outcomes of the PTA through the use of biomechanical modeling of the liver.
The ability to accurately account for intraoperative soft tissue deformations has been a longstanding barrier to efficacious translation of image-guided frameworks into abdominal interventions. In surgical applications, few data acquisition systems are amenable to stringent operative workflow constraints, and many are too costly for widespread adoption. Consequently, computational methods for surgical guidance based on sparse data obtained over the organ surface have become prevalent within approaches for image-to-patient alignment of soft tissue. However, the sparse data environment presents an especially challenging algorithmic landscape for accurately inferring deformable anatomical alignments between preoperative and intraoperative organ states from incomplete information sources. This work, presented as a preliminary conclusion to the image-to-physical liver registration sparse data challenge introduced at SPIE Medical Imaging 2019, seeks to characterize the potential for sparse data registration algorithms to achieve high fidelity predictions of intraoperative organ deformations from sparse descriptors of organ surface shape. A total of seven rigid and nonrigid biomechanical and deep learning registration techniques are compared, and the findings suggest that the family of biomechanically simulated boundary condition reconstruction techniques offers a promising opportunity for accurately estimating intraoperative organ deformation states from sparse intraoperative point clouds. Over a common dataset of 112 registration scenarios, this family of deformable registration techniques was found to outperform globally optimal rigid registrations, was robust to varying degrees of surface data coverage, and maintained good performance under added sources of measurement noise. Further analysis investigates error correlations among methods to illuminate sparse data performance within state-of-the-art image-to-physical registration algorithms.
In select patients for whom transplant is not possible, liver surgery is a preferred treatment for liver cancer and is performed with curative intent. Currently, only about 20% of patients are eligible for resection due to the complexity of the procedure. Image-Guided Liver Surgery (IGLS) that utilizes preoperative Computed Tomographic (CT) imaging data is not yet standard of care, and one of the confounding factors toward its realization in the liver is the presence of soft-tissue deformations that compromise the fidelity of these systems. IGLS systems involve intraoperative data collection to achieve image-to-physical alignments, and realizations to date have used an optically tracked stylus that requires physical contact with the liver. One source of error in this process is contact pressure, which may cause inaccuracies during the acquisition of liver shape data. In this study, we use a non-contact Conoprobe digitization method for comparison against stylus-based acquisition. We developed a novel Conoprobe device attachment and sterilization process to enable prospective data acquisition in the operating room. The goal of this work is to study the difference between rigid registration and non-rigid registration with respect to two forms of digitization (contact and non-contact) in vivo. For this preliminary work, data from one patient undergoing liver resection were analyzed with our novel prototype under an IRB-approved study at Memorial Sloan Kettering Cancer Center. The organ surface coverage of the two digitization methods was compared. Rigid and model-based non-rigid registration were performed and evaluated for a patient undergoing liver surgery. Segmented contours of the ultrasound-identified targets were compared to their registered preoperative counterparts for accuracy validation. The findings indicate that the surface coverage of the Conoprobe is less than that of the stylus, suggesting that non-contact accuracy benefits may be obscured by inferior data coverage. However, more investigation is needed to potentially increase the extent of Conoprobe surface data during acquisition.
We developed a deep learning (DL)-based framework, Surf-X, to estimate real-time 3D liver motion. Surf-X synergizes two imaging modalities, optical surface imaging and x-ray imaging, to track the 3D liver motion. By incorporating prior knowledge of motion learnt from patient-specific 4D-CTs, Surf-X progressively solves the liver motion in two steps: firstly from an optical surface image via learnt internal-external correlations; and secondly from directly-observed motion on an x-ray projection. Surf-X combines the complementary information from surface and x-ray imaging and solves liver motion more accurately and robustly than either modality alone, all at a temporal resolution of <100 milliseconds.
Cone-Beam CT (CBCT) is used in Interventional Radiology (IR) for identification of complex vascular anatomy, difficult to visualize in 2D fluoroscopy. However, long acquisition time makes CBCT susceptible to soft-tissue deformable motion that degrades visibility of fine vessels. We propose a targeted framework to compensate for deformable intra-scan motion via learned full-sequence models for identification of vascular anatomy coupled to an autofocus function specifically tailored to vascular imaging. The vessel-targeted autofocus acts in two stages: (i) identification of vascular and catheter targets in the projection domain; and (ii) autofocus optimization for a 4D vector field through an objective function that quantifies vascular visibility. Target identification is based on a deep learning model that operates on the complete sequence of projections, via a transformer encoder-decoder architecture that uses spatial-temporal self-attention modules to infer long-range feature correlations, enabling identification of vascular anatomy with highly variable conspicuity. The vascular autofocus function is derived through eigenvalues of the local image Hessian, which quantify the local image structure for identification of bright tubular structures. Motion compensation was achieved via spatial transformer operators that impart time dependent deformations to NPAR = 90 partial angle reconstructions, allowing for efficient minimization via gradient backpropagation. The framework was trained and evaluated in synthetic abdominal CBCTs obtained from liver MDCT volumes and including realistic models of contrast-enhanced vascularity with 15 to 30 end branches, 1 – 3.5 mm vessel diameter, and 1400 HU contrast. The targeted autofocus resulted in qualitative and quantitative improvement in vascular visibility in both simulated and clinical intra-procedural CBCT. The transformer-based target identification module resulted in superior detection of target vascularity and a lower number of false positives, compared to a baseline U-Net model acting on individual projection views, reflected as a 1.97x improvement in intersection-over-union values. Motion compensation in simulated data yielded improved conspicuity of vascular anatomy, and reduced streak artifacts and blurring around vessels, as well as recovery of shape distortion. These improvements amounted to an average 147% improvement in cross correlation computed against the motion-free ground truth, relative to the un-compensated reconstruction. Targeted autofocus yielded improved visibility of vascular anatomy in abdominal CBCT, providing better potential for intra-procedural tracking of fine vascular anatomy in 3D images. The proposed method poses an efficient solution to motion compensation in task-specific imaging, with future application to a wider range of imaging scenarios.
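The autofocus objective described above quantifies bright tubular structures through eigenvalues of the local image Hessian. Below is a hedged 2D sketch of that idea using Gaussian-derivative Hessians; the exact functional form used in the paper may differ, and the response formula here is only illustrative of a vesselness-style score.

import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_score(image, sigma=2.0):
    img = image.astype(float)
    Hxx = gaussian_filter(img, sigma, order=(0, 2))   # second derivative along x (columns)
    Hyy = gaussian_filter(img, sigma, order=(2, 0))   # second derivative along y (rows)
    Hxy = gaussian_filter(img, sigma, order=(1, 1))   # mixed derivative
    # eigenvalues of the 2x2 symmetric Hessian at every pixel
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    lam1 = (Hxx + Hyy) / 2.0 + tmp
    lam2 = (Hxx + Hyy) / 2.0 - tmp
    # bright tube on dark background: one strongly negative eigenvalue, the other near zero
    response = np.maximum(-lam2, 0.0) * np.exp(-np.abs(lam1) / (np.abs(lam2) + 1e-6))
    return float(response.sum())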
Ventricular tachycardia (VT) can be treated with catheter ablation therapy, a technique in which catheters are guided into the ventricle and radiofrequency energy is delivered into the myocardial tissue to stop arrhythmic electrical pathways. These procedures are invasive and come with associated risks; therefore, recent efforts have investigated the use of noninvasive proton beam therapy for treatment of VT. In this approach, target regions are identified in a pre-treatment computed tomography scan of the left ventricle followed by proton beam ablation therapy. The effects of beam ablation therapy in myocardial tissue can be characterized using imaging, electroanatomic mapping, and histology. These data are also important for determining the appropriate dose for effective treatment of VT while minimizing collateral damage to surrounding healthy tissue. Studies conducted to date demonstrate that proton beam ablation is a promising new approach for treatment of VT.
Radiotherapy treatment necessitates accurate tracking of the tumor in real time, often during free breathing. In lung cancer, however, respiration entails a significant displacement of the tumor during radiation. This movement, if not well accounted for, can lead to under-irradiation of the tumor or damage to surrounding healthy regions. It is therefore paramount to be able to follow the displacement of the tumor over the entire respiratory cycle. In deep learning applications, it is important to have enough data to capture reliable and representative motion patterns. However, obtaining large amounts of dynamic images is known to be difficult, especially when manually annotated images are needed; consequently, even incomplete data are worth utilizing. In this work, we propose a model capable of predicting lung deformations to recover missing phases in a 4D CT lung dataset, based on probabilistic motion auto-encoders. The model uses the information from a reference 3D volume obtained at the beginning of treatment and a set of 2D surrogate images to predict the next 3D respiratory volumes. The proposed model was evaluated on a free-breathing 4DCT dataset of 165 patients treated for lung cancer. We achieve a mean structural similarity of 81.70%, a mean square error of 3.02%, and a negative local cross correlation of 81.43% on a hold-out test set of 34 patients. The proposed model can also be used to complete missing respiratory phases in datasets of 4DCT lung scans.
Accurate lung nodule localization during Video-Assisted Thoracic Surgery (VATS) for the treatment of early-stage lung cancer is a surgical challenge. Recently, a new minimally invasive approach for nodule localization during VATS has been proposed, which consists of using a biomechanical model to compensate for the very large lung deformations occurring before and during surgery. This estimation of the deformations allows the position of the nodule visible on the preoperative CT to be transferred to an acquisition of the lung performed during the operation with a Cone-Beam CT scanner (CBCT two). In this approach, however, an additional CBCT acquisition (CBCT one) must also be acquired just after the patient is placed in the operative position in order to estimate the deformations due to the change of the patient’s position, from supine during the CT acquisition to lateral decubitus in the operating room. Our goal is to simplify this procedure and thus reduce the radiation dose to the patient. To this end, we propose to improve this solution by replacing the CBCT one acquisition with a model that predicts these deformations. This model is defined using the lung state information from CBCT two and a general statistical motion model built from the position-change deformations already observed in other patients. We have data from 17 patients. The method is evaluated with leave-one-out cross-validation on its ability to reproduce the observed deformations. The method reduces the average prediction error from 12.12 mm without prediction to 8.09 mm for an average prediction, and finally to 6.33 mm for a prediction with our model fitted to CBCT two only.
Quantitative analysis of the dynamic properties of thoraco-abdominal organs such as lungs during respiration could lead to more accurate surgical planning for disorders such as Thoracic Insufficiency Syndrome (TIS). This analysis can be done from semi-automatic delineations of the aforesaid organs in scans of the thoraco-abdominal body region. Dynamic magnetic resonance imaging (dMRI) is a practical and preferred imaging modality for this application, although automatic segmentation of the organs in these images is very challenging. In this paper, we describe an auto-segmentation system we built and evaluated based on dMRI acquisitions from 95 healthy subjects. For the three recognition approaches, the system achieves a best average location error (LE) of about one voxel for the lungs. The standard deviation (SD) of LE is about one to two voxels. For the delineation approach, the average Dice Coefficient (DC) is about 0.95 for the lungs. The standard deviation of DC is about 0.01 to 0.02 for the lungs. The system seems to be able to cope with the challenges posed by low resolution, motion blur, inadequate contrast, and image intensity non-standardness quite well. We are in the process of testing its effectiveness on TIS patient dMRI data and on other thoraco-abdominal organs including liver, kidneys, and spleen.
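The delineation results above are reported as Dice Coefficients (DC). For reference, a minimal computation of the Dice coefficient between binary prediction and ground-truth masks of the same shape:

import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # convention: two empty masks count as a perfect match
    return 2.0 * intersection / denom if denom > 0 else 1.0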
This study investigates a method of time-resolved 3D (4D) x-ray imaging of contrast dynamics internal to a vascular structure (e.g., an intracranial aneurysm) to enable evaluation of blood flow patterns during an interventional procedure. The proposed method employs repetitive short-pulse injection of small contrast boluses, rotational x-ray imaging with a C-arm, and retrospectively gated iterative image reconstruction. Under conditions where the passage of each contrast pulse through a vascular region is repeatable and the C-arm rotation is slow compared to the injection cycle, each flow state (the spatial distribution of contrast agent at an instant) is imaged at multiple projection angles. After partitioning the projections by flow state, a sequence of 3D volumes corresponding to different states of contrast passage can be reconstructed. Feasibility was demonstrated in a patient-specific 3D-printed aneurysm phantom with a 1 Hz simulated cardiac flow waveform. A custom-built power injector was programmed to produce repetitive 100 ms injections of iodinated contrast agent upstream of the aneurysm, synchronized to the mid-diastolic phase of the simulated cardiac cycle (1 Hz, 0.4 mL/pulse, 20 pulses, 8 mL total). An interventional C-arm short-scan was performed with an 11.3 s rotation time and a 27 fps frame rate. Modified PICCS reconstruction was used to generate the 4D images. The temporal evolution of contrast agent in the 4D x-ray images was visually similar to the flow patterns observed in MR imaging and CFD simulation of the same phantom. 95% of the surface deviations between the 4D aneurysm volume and the traditional 3D-DSA aneurysm volume were within -0.02 ± 0.24 mm.
Non-invasive cardiac radioablation is an emerging therapy for the treatment of ventricular tachycardia (VT). Electrophysiologic, anatomic, and molecular imaging studies are used to localize the breakout region of the VT, but current therapy planning is tedious and prone to error due to a lack of data integration. In this work we present the design and development of a software platform and workflow to facilitate precision-targeted therapy planning, including affine and non-rigid multimodality image registration and 2D-3D-4D visualization across modalities. Registration accuracy was measured using the Dice Similarity and Hausdorff Distance of total left ventricle tissue volumes, which were 0.914 ± 0.013 and 2.65 ± 0.34 mm, respectively (average ± standard deviation). Electrocardiographic maps of VT parameters were registered temporally to surface electrode data to recreate familiar ECG tracings. 2D polar maps, 3D slice views, and 4D cine renderings were used for hybrid fusion displays of molecular and electroanatomic images. Segmentations of the cardiac-gated contrast CT blood pool and molecular images of perfusion and glucose metabolism were used to identify regions of fibrotic scar tissue and hibernating myocardium in the 3D scene. Ablation targets were painted onto the 2D polar map, 3D slice, or 4D cine views, and exported as DICOM for import into radiotherapy planning software. We anticipate that the combination of accurate multimodality image registration and visualization will enable more reliable therapy planning, expedite treatment, and may improve understanding of the underlying pathophysiology of these lethal arrhythmias.
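Registration accuracy above is reported in part with the Hausdorff Distance between left-ventricle volumes. A small illustrative computation of the symmetric Hausdorff distance between two segmented surfaces, assuming their boundary points are available as N x 3 coordinate arrays in millimetres (scipy provides only the directed distance, so both directions are combined):

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(points_a, points_b):
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)       # symmetric Hausdorff distance in mm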
Cochlear implants (CIs) are widely successful neural-prosthetic devices for improving quality of life for individuals experiencing severe to profound hearing loss. A minimally invasive technique for inserting the CI electrode, percutaneous cochlear access, typically involves a surgical trajectory through the facial recess. Image-based surgical planning techniques rely heavily on accurate segmentation of the chorda tympani, since it is one of the delineating structures of the facial recess. Furthermore, damage to this structure can lead to loss of taste for the patient. However, the chorda's thin nature and the surrounding appearance of pneumatized bone pose difficulties when segmenting this structure in conventional CT. Our previous automatic method still leads to unacceptable inaccuracies in difficult images. In this work, we propose the use of a conditional generative adversarial network for automatic segmentation of this structure. We use a weakly supervised approach, leveraging a dataset of sixteen hand-labelled images and 130 weakly-labelled images acquired through automatic atlas-based techniques. Our resulting network displays a 49% increase in segmentation performance over our previous automatic method, with a mean localization error of 0.49 mm. Even in the worst case, our method still provides sub-millimeter localization errors of 0.82 mm. These results are encouraging for potential use in clinical settings, as safe trajectory planning typically involves 1 mm error margins to sensitive structures.
AI-assisted surgeries have drawn the attention of the medical image research community due to their real-world impact on improving surgery success rates. For image-guided surgeries, such as cochlear implant (CI) procedures, accurate object segmentation can provide useful information for surgeons before an operation. Recently published image segmentation methods that leverage machine learning usually rely on a large number of manually annotated ground truth labels; however, preparing such datasets is a laborious and time-consuming task. This paper presents a novel technique using a self-supervised 3D-UNet that produces a dense deformation field between an atlas and a target image, which can be used for atlas-based segmentation of the ossicles. Our results show that our method outperforms traditional image segmentation methods and generates a more accurate boundary around the ossicles based on Dice similarity coefficient and point-to-point error comparisons. The mean Dice coefficient is improved by 8.51% with our proposed method.
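To make the atlas-based segmentation step concrete, the hedged sketch below applies a dense deformation field to an atlas label map with nearest-neighbour interpolation; it assumes the field stores, for every target voxel, the atlas coordinate to sample, and it is not the authors' implementation.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_labels(atlas_labels, deformation):
        """Warp an atlas label volume (e.g. ossicle labels) with a dense deformation field.

        atlas_labels : (Z, Y, X) integer label volume
        deformation  : (3, Z, Y, X) array; deformation[:, z, y, x] is the atlas coordinate
                       sampled for target voxel (z, y, x)
        """
        return map_coordinates(atlas_labels, deformation, order=0)  # order=0 keeps labels integral

    # Sanity check: the identity field leaves the atlas unchanged.
    lab = np.zeros((8, 8, 8), dtype=np.int32)
    lab[3:5, 3:5, 3:5] = 1
    identity = np.stack(np.meshgrid(*[np.arange(8)] * 3, indexing="ij")).astype(float)
    assert np.array_equal(warp_labels(lab, identity), lab)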
As the expansion of Cone Beam CT (CBCT) to new interventional procedures continues, the burdensome challenge of metal artifacts remains. Photon starvation and beam hardening from metallic implants and surgical tools in the field of view can result in the anatomy of interest being partially or fully obscured by imaging artifacts. Leveraging the flexibility of modern robotic CBCT imaging systems, implementing non-circular orbits designed to reduce metal artifacts by ensuring data completeness during acquisition has become a reality. Here, we investigate using non-circular orbits to reduce metal artifacts arising from metallic hip prostheses when imaging pelvic anatomy. As a first proof of concept, we implemented a sinusoidal and a double-circle-arc orbit on a CBCT test bench, imaging a physical pelvis phantom, with two metal hip prostheses, housing a 3D-printed iodine-filled radial line-pair target. A standard circular orbit implemented with the CBCT test bench acted as the comparator. Imaging data collection and processing, geometric calibration, and image reconstruction were completed using in-house developed software. With the standard circular orbit, image artifacts were observed in the pelvic bones and only 33 of the possible 45 line-pairs of the radial line-pair target were partially resolvable in the reconstructed images. Comparatively, imaging with both the sinusoidal and double-circle-arc orbits reduced artifacts in the surrounding anatomy and enabled all 45 line-pairs to be visibly resolved in the reconstructed images. These results indicate the potential of non-circular orbits to help reveal previously obstructed structures in the pelvic region in the presence of metal hip prostheses.
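As a hedged sketch of what a non-circular acquisition geometry looks like, the snippet below generates source positions for a sinusoidal orbit, i.e., a circular sweep whose out-of-plane position oscillates with gantry angle; the radius, amplitude, and number of oscillation cycles are illustrative values, not the test-bench parameters.

    import numpy as np

    def sinusoidal_orbit(n_views=360, radius_mm=750.0, amplitude_mm=100.0, cycles=3):
        """Source positions (mm) for a sinusoidal CBCT orbit.

        The source sweeps a circle of the given radius in the axial plane while its
        axial (z) position oscillates as amplitude_mm * sin(cycles * theta).
        """
        theta = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
        x = radius_mm * np.cos(theta)
        y = radius_mm * np.sin(theta)
        z = amplitude_mm * np.sin(cycles * theta)
        return np.stack([x, y, z], axis=1)  # shape (n_views, 3)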
Up to 40% of breast-conserving surgery (BCS) patients must undergo repeat surgery because cancer is left behind in the resection cavity. The mobility of the breast resection cavity makes it difficult to localize residual cancer and, therefore, cavity shaving is a common technique for cancer removal. Cavity shaving involves removing an additional layer of tissue from the entire resection cavity, often resulting in unnecessary healthy tissue loss. In this study, we demonstrated a navigation system and open-source software module that facilitate visualization of the breast resection cavity for targeted localization of residual cancer.
Selective amygdalohippocampectomy (SelAH) for mesial temporal lobe epilepsy (mTLE) involves the resection of the anterior hippocampus and the amygdala. A recent study related to SelAH reports that among 168 patients for whom two-year Engel outcome data were available, 73% had Engel I outcomes (free of disabling seizures), 16.6% had Engel II outcomes (rare disabling seizures), 4.7% had Engel III outcomes (worthwhile improvement), and 5.3% had Engel IV outcomes (no worthwhile improvement). Success rates also vary greatly among sites. Possible explanations for the variability in outcomes are the resected volume and/or the subregion of the hippocampus and amygdala that has been resected. To explore this hypothesis, accurate segmentation of the resection cavity needs to be performed on a large scale. This is, however, a difficult and time-consuming task that requires expertise. Here we explore using nnU-Net to perform the task. Inspired by Youngeun, a level set loss is used in addition to the original Dice and cross-entropy losses in nnU-Net to better capture the cavity boundaries. We show that, even with a modest-sized training set (25 volumes), the median Dice value between automated and manual segmentations is 0.88, which suggests that automatic and accurate segmentation of the resection cavity is achievable.
Glioblastoma Multiforme (GBM) is the most common and most lethal primary brain tumor in adults, with a five-year survival rate of 5%. The current standard of care and survival rate have remained largely unchanged due to the difficulty of surgically removing these tumors; the extent of resection plays a crucial role in survival, as better surgical resection leads to longer survival times. Thus, novel technologies need to be identified to improve resection accuracy. Our study features a curated database of GBM and normal brain tissue specimens, which we used to train and validate a multi-instance learning model for GBM detection via Rapid Evaporative Ionization Mass Spectrometry (REIMS). This method enables real-time tissue typing. The specimens were collected by a surgeon, reviewed by a pathologist, and sampled with an electrocautery device. The dataset comprised 276 normal tissue burns and 321 GBM tissue burns. Our multi-instance learning model was adapted to identify the molecular signatures of GBM, and we employed a patient-stratified four-fold cross-validation approach for model training and evaluation. Our models demonstrated robustness and outperformed baseline models, with an improved AUC of 0.95 and accuracy of 0.95 in correctly classifying GBM and normal brain. This study marks the first application of deep learning to REIMS data for brain tumor tissue characterization and sets the foundation for investigating more clinically relevant questions where intraoperative tissue detection in neurosurgery is pertinent.
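The patient-stratified four-fold cross-validation mentioned above corresponds to grouped splitting, where all burns from one patient stay in the same fold; a minimal scikit-learn sketch with placeholder arrays (hypothetical names and shapes) is shown below.

    import numpy as np
    from sklearn.model_selection import GroupKFold

    # Placeholder data: one feature vector per cautery burn, 0 = normal, 1 = GBM.
    X = np.random.rand(597, 1024)                     # 276 normal + 321 GBM burns
    y = np.concatenate([np.zeros(276), np.ones(321)])
    patient_ids = np.random.randint(0, 40, size=597)  # placeholder patient labels

    for fold, (train_idx, test_idx) in enumerate(
            GroupKFold(n_splits=4).split(X, y, groups=patient_ids)):
        # No patient contributes burns to both the training and the test fold.
        assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
        # Train the multi-instance model on X[train_idx]; evaluate on X[test_idx].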
Brain deformations associated with burr hole and dura opening during Deep Brain Stimulation (DBS) surgeries can significantly affect electrode placement, directly impacting optimal treatment response. Enhanced interpretation of clinical outcomes, as well as study of the effects of shifting electrode leads on neural pathways, can be accomplished by coupling patient-specific finite element biomechanical/bioelectric tissue models with conventional neurophysiological models. A dataset of six patients who had undergone intraoperative Magnetic Resonance (iMR)-guided DBS procedures is considered in this study. To realistically predict soft tissue deformation during DBS surgery, biomechanical models were constructed based on patient-specific imaging data and driven with iMR data. In addition, bioelectric finite element models for both undeformed (no shift) and deformed states were used to estimate the effect of electric fields using two conventional neuromodulation prediction approaches. In the first approach, successful neuron pathway recruitment was established using a neurophysiological simulation estimating the likelihood that a given field would influence action potential dynamics. In the second approach, recruitment was based on the direct use of an electric-field norm threshold to establish an activation volume. Results showed a difference of about 49% in recruited neuronal pathways when comparing the neurophysiological model and the electric-field norm threshold neural activation model.
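The second activation criterion above, thresholding the electric-field norm, reduces to a voxel-wise operation on the solved field; the sketch below (hypothetical array names, illustrative threshold) returns the activation volume and mask for a field defined on a regular grid.

    import numpy as np

    def activation_volume(E, voxel_size_mm, threshold_v_per_m):
        """Activation volume from an electric-field norm threshold.

        E                 : (3, Z, Y, X) electric-field components (V/m) on a regular grid
        voxel_size_mm     : (dz, dy, dx) voxel dimensions in mm
        threshold_v_per_m : field-norm threshold defining activation
        Returns (volume in mm^3, boolean activation mask).
        """
        field_norm = np.linalg.norm(E, axis=0)
        active = field_norm >= threshold_v_per_m
        return active.sum() * float(np.prod(voxel_size_mm)), active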
In open cranial procedures, the accuracy of image guidance using preoperative MR (pMR) images can be degraded by intraoperative brain deformation. Intraoperative stereovision (iSV) has been used to acquire 3D surface profiles of the exposed cortex at different surgical stages, and surface displacements can be extracted to drive a biomechanical model as sparse data to provide updated MR (uMR) images that match the surgical scene. In previous studies, we employed an Optical Flow (OF) based registration technique to register iSV surfaces acquired at different surgical stages and estimate cortical surface shift throughout surgery. The technique was efficient and accurate but required manually selected Regions of Interest (ROI) in each image after resection began. In this study, we present a registration technique based on the Scale Invariant Feature Transform (SIFT) algorithm and illustrate the methods using an example patient case. Stereovision images of the cortical surface were acquired and reconstructed at different time points during surgery. Both SIFT and OF based registration techniques were used to estimate cortical shift, and the extracted displacements were compared against ground truth data. Results show that the overall errors of the SIFT and OF based techniques were 0.65±0.53 mm and 2.18±1.35 mm in magnitude, respectively, on the intact cortical surface. The OF-based technique generated inaccurate sparse data near the resection cavity, whereas the SIFT-based technique generated only accurate sparse data. The computation time was approximately 0.5 s for the SIFT-based technique and more than 20 s for the OF-based technique. Thus, the SIFT-based registration technique shows promise for OR applications.
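A minimal sketch of the SIFT correspondence step is shown below using OpenCV (where SIFT is exposed as cv2.SIFT_create in recent releases) with Lowe's ratio test; mapping the matched pixels onto the reconstructed 3D cortical surfaces, as done in the full pipeline, is omitted here.

    import cv2

    def sift_correspondences(img1, img2, ratio=0.75):
        """Match SIFT keypoints between two grayscale images using Lowe's ratio test."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        # Each matched pixel pair provides one sparse displacement sample.
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]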
Image-guided neuronavigation systems still rely on the rigid alignment of preoperative tomographic imaging information (usually magnetic resonance imaging data) to the intraoperative patient anatomy. It is well understood that soft tissue deformations during surgery can compromise that alignment, leading to significant discrepancies between imaged neuroanatomy and its physical-space counterpart. While intraoperative MRI/CT are available, their encumbrance, cost, and workflow burden have inhibited widespread adoption as a standard of care. As a result, computational imaging efforts to adjust for deformations have been under investigation. The goal of this work was to perform a feasibility study to evaluate a model-based strategy for correcting deformations and to evaluate it in real time. For this study, n=8 subjects were enrolled in an IRB-approved study at Vanderbilt University Medical Center. Model-based deformation correction was performed and evaluated for six of the eight surgeries, with a total of seven evaluations performed (in one case, both the attending and the resident evaluated the correction separately). With respect to evaluation, at the end of each surgery the model-deformed images were displayed next to the gold-standard preoperative images in the operating room. A stylus was used by the surgeon to interrogate the surgical field and evaluate the alignments between the model-based corrected guidance system and the standard-of-care conventional IGS system. For all cases with substantial shift, which was six of the seven evaluations, surgeons preferred the model-based approach in terms of image alignment to patient anatomy. For the case with minimal shift, the surgeon found no difference between the two systems. For six of the seven evaluations, surgeons had an overall preference for the model-based approach. In conclusion, we demonstrated initial feasibility of using a model-based deformation correction scheme during brain tumor resection surgeries. Additionally, based on the surgeon consensus of improved image alignment to intraoperative anatomy, this study demonstrates the potential benefit of our approach for evaluating resection boundaries intraoperatively.
Ultrasound + Image-Guided Procedures: Joint Session with Conferences 12466 and 12470
Ultrasound holds promise for use in spinal cord injury cases for both diagnostic and therapeutic purposes. Focused ultrasound applications demand an added threshold of study to ensure the safety and efficacy of the therapy. For optimal treatment outcomes, it is crucial to understand whether relevant structures are being targeted with sufficient energy without damaging neighboring tissue and vasculature. However, it is difficult to predict the expected displacement and pressure profile of the ultrasound wavefront due to challenges with visualizing an acoustic beam in real time and complex patient-specific anatomy. This challenge is particularly prominent in anatomies with varying medium acoustic properties that cause reflection and distortion of the signal, which is inherent to the composition of the spinal cord and is exacerbated by the formation of injury-induced hematomas. Incorrect placement of focused ultrasound transducers can be detrimental to patient health, specifically if therapeutic ultrasound is used at higher intensities, as the beam can propagate into healthy tissue and important structures, which could lead to tissue damage and death. We study how computational tools can be leveraged to aid placement of the transducer using the ultrasound simulation software Wave 3000 Plus, which allows visualization of ultrasound propagation through anatomical structures. By simulating the propagation of ultrasound beams through patient-specific Digital Imaging and Communications in Medicine (DICOM) images, we study computational approaches to determine the optimal placement of devices. In this study, we use in vivo porcine spinal cord images following spinal cord injury (as an example medical use case) to determine whether the injury site is being targeted appropriately and to visualize the distribution of pressure throughout the simulation. We demonstrate that Wave 3000 Plus is a viable approach for visualizing ultrasound propagation through patient-specific anatomies.
Vascular navigation is an essential component of transcatheter cardiovascular interventions, conventionally performed using either 2D fluoroscopic imaging or CT-derived vascular roadmaps, which can lead to many complications for patients as well as clinicians. This study presents an open-source and user-friendly 3D Slicer module that performs vessel reconstruction from tracked intracardiac echocardiography (ICE) imaging using deep learning-based methods. We also validate the methods by performing a vessel-phantom study. The results indicate that our Slicer module is able to reconstruct vessels with sufficient accuracy, with an average distance error of 0.86 mm. Future work involves improving the speed of the methods as well as testing the module in an in vivo setting. Clinical adoption of this platform will allow clinicians to navigate vessels in 3D and will potentially enhance their spatial awareness as well as improve procedural safety.
Haptic devices allow touch-based information transfer between humans and their environment. In minimally invasive surgery, a human teleoperator benefits from both visual and haptic feedback regarding the interaction forces between instruments and tissues. In this talk, I will discuss mechanisms for stable and effective haptic feedback, as well as how surgeons and autonomous systems can use visual feedback in lieu of haptic feedback. For haptic feedback, we focus on skin deformation feedback, which provides compelling information about instrument-tissue interactions with smaller actuators and larger stability margins compared to traditional kinesthetic feedback. For visual feedback, we evaluate the effect of training on human teleoperators’ ability to visually estimate forces through a telesurgical robot. In addition, we design and characterize multimodal deep learning-based methods to estimate interaction forces during tissue manipulation for both automated performance evaluation and delivery of haptics-based training stimuli. Finally, we describe the next generation of soft, flexible surgical instruments and the opportunities and challenges they present for seeing and feeling in robot-assisted surgery.
Computational tools, such as "digital twin" modeling, are beginning to enable patient-specific surgical planning of ablative therapies to treat hepatocellular carcinoma. Digital twin models use patient functional data and biomarker imaging to build anatomically accurate models that forecast therapeutic outcomes through simulation, providing accurate information for guiding clinical decision-making. In microwave ablation (MWA), tissue-specific factors (e.g., tissue perfusion, material properties, disease state) can affect ablative therapies, but current thermal dosing guidelines do not account for these differences. This study establishes an imaging-data-driven framework to construct digital twin biophysical models to predict MWA ablation extents in livers with varying fat content. Patient anatomic scans were segmented to develop customized three-dimensional computational biophysical models, and fat-quantification images were acquired to reconstruct spatially accurate biophysical material properties. Simulated patient-specific microwave ablations of homogeneous digital-twin models (control) and enhanced digital-twin models were performed at four levels of fatty liver disease. Considering the short diameter (SD), long diameter (LD), ablation volume, and spherical index of the ablation margins, the heterogeneous digital-twin models did not produce significantly different ablation margins compared to the control models. Both models indicated that ablation margins for patients with high-fat livers are larger than those for low-fat livers (LD of 6.17 cm vs. 6.30 cm and SD of 2.10 vs. 1.99, respectively). Overall, the results suggest that modeling heterogeneous clinical fatty liver disease using fat-quantitative imaging data has the potential to improve patient specificity for this treatment modality.
Interventional Radiology (IR) is a rapidly advancing field, with complex procedures and techniques being developed at increasingly high rates. As these procedures and the underlying imaging technology continue to evolve, one of the challenges for physicians lies in maintaining optimal visualization of the various displays used to guide the procedure. Many Augmented Reality Surgical Navigation Systems (AR-SNS) have been proposed in the literature that aim to improve the way physicians visualize their patient's anatomy, but few address the problem of space within the IR suite. Our solution is an Augmented Reality "cockpit", which streams and renders image data inside virtual displays visualized within the HoloLens 2, eliminating the need for physical displays. The benefit of our approach is that sterile, touch-free interaction and customization can be performed using hand gestures and voice commands, and the physician can optimize the positioning of the display without needing to worry about physical interference from other equipment. As a proof of concept, we performed a user study to validate the suitability of our approach in the context of liver tumor ablation procedures. We found no significant differences in insertion accuracy or time between the proposed approach and the traditional method. This indicates that visualization of US imaging using our approach is an adequate replacement for the traditional physical display and paves the way for the next iteration of the system, which is to quantify the benefits of our approach when used in multi-modality procedures.
Studying the impact of diagnostic x-ray dose on human tissue has historically been problematic; it is unethical to subject patients to unnecessary procedures and impossible to implant dosimetry sensors in vivo. Further, the rapid growth of machine learning in early disease detection is creating enormous demand for radiological screening data. We are adapting additive manufacturing methods to create x-ray CT phantoms with smooth gradients of x-ray attenuation coefficients, mimicking invasive disease, edema, and perfusion events. Using a Crane Quad fused deposition modeling printer equipped with an M3D QuadFusion print head capable of blending materials from up to four different filaments, we are constructing solid models from mixed media that have x-ray attenuation characteristics that mimic human tissue when imaged with a CT scanner. Our work includes exploring the design and production of solid models with human x-ray characteristics and embedded dosimetry sensors. Using four different filaments, polylactic acid (PLA), lightweight PLA, and copper-filled and bronze-filled PLA composite filaments, we constructed phantoms with progressive densities. The resulting phantoms were scanned at three different x-ray energies, and the resulting Hounsfield unit signatures were analyzed. We demonstrate our ability to express gradients of x-ray attenuation in solid models. Lastly, we have also produced models of 2D images. This work is the first step in generating reproducible phantoms that mimic the radiological responses of human anatomy and pathology. Future studies will linearize our printing scale and later embed photodiode-based dosimetry sensors in 3D phantoms of the human body.
Patient-specific organ and tissue mimicking phantoms are used routinely to develop and assess new image-guided intervention tools and techniques in laboratory settings, enabling scientists to maintain acceptable anatomical relevance while avoiding animal studies when the developed technology is still in its infancy. Gelatin phantoms, specifically, offer a cost-effective and readily available alternative to the traditional manufacturing of anatomical phantoms, and also provide the versatility needed to mimic the stiffness properties of various organs or tissues. In this study, we describe a protocol to develop patient-specific anthropomorphic gelatin kidney phantoms and assess the faithfulness of the developed phantoms against the patient-specific CT images and corresponding virtual anatomical models used to generate them. We built the gelatin phantoms by first using additive manufacturing to generate a kidney mold based on patient-specific CT images, into which the gelatin was poured. We then evaluated the fidelity of the mold and gelatin phantoms (i.e., children) against the virtual kidney model generated from the patient-specific CT image (i.e., parent) by comparing their surface models, obtained from post-manufacturing CT imaging, using various registration metrics. Our experiments showed a 0.58 ± 0.48 mm surface-to-surface distance between the phantom and mold models following landmark-based registration, and a 0.52 ± 0.40 mm surface-to-surface distance between the phantoms and the mold model following Iterative Closest Point (ICP) registration. These experiments confirm that the described protocol provides a reliable, fast, and cost-effective method for manufacturing faithful patient-specific organ-emulating gelatin phantoms and can be applied or extended to other image-guided intervention applications.
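The ICP alignment and surface-to-surface distance used above can be reproduced, in principle, with a general-purpose point-cloud library; the sketch below uses Open3D as an assumed tool (the authors' software is not specified) to register a phantom surface to the mold model and summarize the residual distances.

    import numpy as np
    import open3d as o3d

    def icp_surface_distance(source_path, target_path, max_corr_mm=5.0):
        """Rigid ICP of source onto target, then per-point surface distances (mm)."""
        src = o3d.io.read_point_cloud(source_path)   # e.g. CT-derived gelatin phantom surface
        tgt = o3d.io.read_point_cloud(target_path)   # e.g. mold or parent kidney model surface
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_corr_mm, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        src.transform(result.transformation)
        distances = np.asarray(src.compute_point_cloud_distance(tgt))
        return distances.mean(), distances.std()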
Breast cancer commonly requires surgical treatment. A procedure used to remove breast cancer is lumpectomy, which removes the tumor along with a minimal margin of surrounding healthy tissue, called a negative margin. A cancer-free margin is difficult to achieve because tumors are not visible or palpable, and the breast deforms during surgery. One notable solution is Rapid Evaporative Ionization Mass Spectrometry (REIMS), which differentiates tumor from healthy tissue with high accuracy from the vapor generated by the surgical cautery. REIMS combined with navigation could detect where the surgical cautery breaches tumor tissue. However, fusing position tracking and REIMS data for navigation is challenging, because REIMS has a time delay that depends on a series of factors. Our objective was to evaluate the REIMS time delay for surgical navigation. The average time delay of REIMS classifications was measured by video recording: incisions and corresponding REIMS classifications were measured in tissue samples, and we measured the time delay between physical incision of the tissue and tissue classification. We also measured the typical timing of incisions by tracking the cautery in five lumpectomy procedures. The average REIMS time delay was found to be 2.1 ± 0.36 s (average ± SD), with a 95% confidence interval of 0.08 s. The average time between incisions was 2.5 ± 0.87 s. In conclusion, the variation in REIMS tissue classification time delay allows localization of the tracked incision where the tissue sample originates. REIMS could be used to update surgeons about the location of cancerous tissue with only a few seconds of delay.
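One simple way to use the measured delay for navigation, mapping each delayed classification back onto the tracked cautery path, is sketched below under the assumptions of a constant delay and timestamped tracking samples; all names are hypothetical.

    import numpy as np

    def localize_classifications(class_times, cautery_times, cautery_xyz, delay_s=2.1):
        """Estimate where each REIMS classification originated on the cautery trajectory.

        class_times   : timestamps (s) at which classifications were returned
        cautery_times : timestamps (s) of tracked cautery samples
        cautery_xyz   : (N, 3) tracked cautery tip positions (mm)
        delay_s       : measured classification delay (s)
        """
        incision_times = np.asarray(class_times) - delay_s  # back-date by the delay
        return np.column_stack([
            np.interp(incision_times, cautery_times, cautery_xyz[:, k]) for k in range(3)])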
This paper advances a new paradigm of minimally invasive neurosurgical interventions through skull foramina, which promise to improve patient outcomes by reducing postoperative pain and recovery times, and perhaps even complication rates. The foramen ovale, a small opening in the base of the skull, is currently used to insert recording electrodes into the brain for diagnosing epilepsy and as a pathway for ablating the trigeminal nerve for facial pain. An MRI-compatible robotic platform to position neurosurgical tools along a prescribed trajectory through the foramen ovale can enable access to deep brain targets for diagnosis or intervention. In this paper, we describe design goals and constraints, determined both heuristically and empirically, for such a robotic system. These include the space available within the scanner around the patient, the set of possible needle angles of approach to the foramen ovale, patient positioning options within the scanner, and the force needed to tilt the needle to desired angles. These design considerations can be used to inform future work on the design of MRI-conditional robots to access the brain through the foramen ovale.
Existing methods to improve the accuracy of tibiofibular joint reduction present workflow challenges, high radiation exposure, and a lack of accuracy and precision, leading to poor surgical outcomes. To address these limitations, we propose a method to perform robot-assisted joint reduction using intraoperative imaging to align the dislocated fibula to a target pose relative to the tibia. The approach (1) localizes the robot via 3D-2D registration of a custom plate adapter attached to its end effector, (2) localizes the tibia and fibula using multi-body 3D-2D registration, and (3) drives the robot to reduce the dislocated fibula according to the target plan. The custom robot adapter was designed to interface directly with the fibular plate while presenting radiographic features to aid registration. Registration accuracy was evaluated on a cadaveric ankle specimen, and the feasibility of robotic guidance was assessed by manipulating a dislocated fibula in a cadaveric ankle. Using standard AP and mortise radiographic views, registration errors were measured to be less than 1 mm and 1° for the robot adapter and the ankle bones. Experiments in a cadaveric specimen revealed up to 4 mm deviations from the intended path, which were reduced to ⪅2 mm using corrective actions guided by intraoperative imaging and 3D-2D registration. Preclinical studies suggest that significant robot flex and tibial motion occur during fibula manipulation, motivating the use of the proposed method to dynamically correct the robot trajectory. Accurate robot registration was achieved via fiducials embedded within the custom adapter design. Future work will evaluate the approach on a custom radiolucent robot currently under construction and verify the solution on additional cadaveric specimens.
A new imaging modality (viz., Long-Film [LF]) for acquiring long-length tomosynthesis images of the spine was recently enabled on the O-arm™ system and used in an IRB-approved clinical study at our institution. The work presented here implements and evaluates a combined image synthesis and registration approach to solve multi-modality registration of MR and LF images. The approach is well suited for pediatric cases that use MR for preoperative diagnosis and aim for lower levels of intraoperative radiation exposure. A patch-based conditional GAN was used to synthesize 3D CT images from MR. The network was trained on deformably co-registered MR and CT image pairs. Synthesized images were registered to LF images using a model-based 3D-2D registration algorithm. Images from our clinical study were manually labeled, and the intra-user variability in anatomical landmark definition was measured in a simulation study. Geometric accuracy of the registrations was evaluated on anatomical landmarks in separate test cases from the clinical study. The synthesis process generated CT images with clear bone structures. Analysis of manual labeling revealed a 3.1±2.2 mm projection distance error between 3D and 2D anatomical landmarks. Anatomical MR landmarks projected onto lateral LF images demonstrated a median projection distance error of 3.6 mm after registration. This work constitutes the first reported approach for MR to LF registration based on deep image synthesis. Preliminary results demonstrate the feasibility of globally rigid registration for aligning preoperative MR and intraoperative LF images. Work currently underway extends this approach to vertebra-level, locally rigid / globally deformable registrations, with initialization based on automatically labeled vertebral levels.
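The projection distance error reported above is simply the 2D distance between a projected 3D landmark and its 2D annotation; a hedged sketch assuming a 3x4 projection matrix in homogeneous coordinates (hypothetical inputs, not the study's geometry files) is given below.

    import numpy as np

    def projection_distance_error(P, landmark_3d, landmark_2d):
        """2D distance between a projected 3D landmark and its 2D annotation.

        P           : (3, 4) projection matrix mapping homogeneous 3D points to the detector
        landmark_3d : (3,) landmark in volume/world coordinates
        landmark_2d : (2,) annotated landmark in detector coordinates
        """
        p = P @ np.append(landmark_3d, 1.0)
        projected = p[:2] / p[2]
        return float(np.linalg.norm(projected - np.asarray(landmark_2d)))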
Registration of preoperative or intraoperative imaging is necessary to facilitate surgical navigation in spine surgery. After image acquisition, intervertebral motion and spine pose changes can occur during surgery from instrumentation, decompression, physician manipulation, or correction. This causes deviations from the reference imaging, reducing navigation accuracy. The aim of this work was to evaluate, through a simulation study, the ability of registration between stereovision surfaces to account for this intraoperative spine motion. Co-registered CT and stereovision surface data were obtained from a swine cadaver's exposed lumbar spine in the prone position. Data were segmented and labeled by vertebral level. A simulation of biomechanically bounded motion was applied to each vertebral level to move the prone spine to a new position. A reduced surface data set was then registered level-wise back to the prone spine's original position. The average surface-to-surface distance was recorded between simulated and prone positions, and localized targets on these surfaces were used to calculate target registration error. Target registration error increases with the distance between surfaces: movement exceeding 2.43 cm between stereovision acquisitions pushes the registration error above 2 mm. Lateral bending of the spine contributes most to this effect compared to axial rotation and flexion-extension. In conclusion, this simulation demonstrates the viability of using stereovision-to-stereovision registration to account for intraoperative motion of the spine, provided that the spine movement between corresponding points does not exceed 2.43 cm between stereovision acquisitions.
Cochlear implant (CI) surgery requires manual or robotic insertion of an electrode array into the patient's cochlea. At the vast majority of institutions, including ours, preoperative CT scans are acquired and used to plan the procedure because they permit visualization of the bony anatomy of the temporal bone. However, CT imaging involves ionizing radiation, and some institutions and surgeons prefer preoperative MRI, especially for children. To expand, without additional radiation exposure, the number of patients who can benefit from the computer-assisted CT-based planning system we are developing, we propose to use a conditional generative adversarial network (cGAN)-based method to generate synthetic CT (sCT) images from multi-sequence MR images. We use image quality-based, segmentation-based, and planning-based metrics to compare the sCTs with the corresponding real CTs (rCTs). Loss terms were used to improve the quality of the overall image and of the local regions containing critical structures used for planning. We found very good agreement between the segmentations of structures in the sCTs and the corresponding rCTs, with Dice values of 0.94 for the labyrinth, 0.79 for the ossicles, and 0.81 for the facial nerve. Such a high Dice value for the ossicles is noteworthy because they cannot be seen in the MR images. Furthermore, we found that the mean errors for quantities used for preoperative insertion plans were smaller than what is humanly perceivable. Our results strongly suggest that potential CI recipients who only have MR scans can benefit from CT-based preoperative planning through sCT generation.
Depth perception is a major issue in surgical Augmented Reality (AR), with limited research conducted in this area. This study establishes a relationship between luminance and depth perception, which can be used to improve visualization design for AR overlays in laparoscopic surgery, providing surgeons with a more accurate perception of the anatomy intraoperatively. Two experiments were conducted to determine this relationship: first, an online study with 59 participants from the general public, and second, an in-person study with ten surgeons as participants. We developed two open-source software tools utilizing SciKit-Surgery libraries to enable these studies and any future research. Our findings demonstrate that the higher the relative luminance, the closer a structure is perceived to be to the operating camera. Furthermore, the higher the luminance contrast between two structures, the larger the perceived depth distance between them. The quantitative results from both experiments are in agreement, indicating that online recruitment of the general public can be helpful in similar studies. An observation made by the surgeons in the in-person study was that the light source used in laparoscopic surgery plays a role in depth perception, due to its varying position and brightness, which could affect perception of the overlaid AR content. We found that luminance directly correlates with depth perception for both surgeons and the general public, regardless of other depth cues. Future research may focus on comparing different colors used in surgical AR and using a mock Operating Room (OR) with varying light sources and positions.
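For reference, one standard way to operationalize the relative luminance of an sRGB color (the quantity varied in the study) uses the Rec. 709 channel weights after linearization, as sketched below; this is offered as a plausible definition, not necessarily the exact formula used by the authors.

    def relative_luminance(r, g, b):
        """Relative luminance of an sRGB color with channel values in [0, 1]."""
        def linearize(c):
            # Undo the sRGB transfer function.
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        rl, gl, bl = (linearize(c) for c in (r, g, b))
        return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl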
Up to 30% of breast-conserving surgery patients require secondary surgery to remove cancerous tissue missed in the initial intervention. We hypothesize that tracked tissue sensing can improve the success rate of breast-conserving surgery. Tissue sensor tracking allows the surgeon to intraoperatively scan the tumor bed for leftover cancerous tissue. In this study, we characterize the performance of our tracked optical scanning testbed using an experimental pipeline. We assess the Dice similarity coefficient, accuracy, and latency of the testbed.
Image-guided therapies rely on the spatial tracking of surgical tools for navigation, so ensuring that tracking is non-intrusive and accurate is important. As tracking sensors become smaller, it is important to determine their effective range in comparison to sensors that have been evaluated previously. We tested three electromagnetic sensor sizes in the context of a surgical navigation system, using optical tracking as the ground truth. An algorithm was developed to calculate the error of the data collected from the electromagnetic sensors with respect to the ground-truth measurements. Contours were generated to visualize the areas where tracking error is under certain threshold values, and multiple contours from electromagnetic sensors of different sizes were generated. To reduce noise in the measurements, repeated results were averaged. The 8 mm and 2 mm length sensors performed comparably, both within acceptable error in the center of the tracking system's workspace (50 cm away from the transmitter). The accuracy of the 0.5 mm sensor was acceptable up to 40 cm away from the transmitter, although distances greater than 20 cm led to a loss of consistent accuracy. The 8 mm and 2 mm sensors shared similar iso-surface volumes, establishing that the 2 mm sensor could be substituted for the 8 mm sensor, which would typically be clinically beneficial: it would allow electromagnetic sensors to be less intrusive in the operating room when tracking surgical and percutaneous intervention tools. The 0.5 mm sensor did not achieve the clinically required accuracy range.
Childhood musculoskeletal pain is a common complaint, experienced by approximately 5% to 30% of school-age children. This musculoskeletal pain can be difficult to diagnose and assess. When present in long bones such as the legs, this pain can decrease patients' quality of life. The current standard of care for work-up of persistent bone pain usually requires imaging and lab work. If suspicious bone lesions are seen on imaging, an image-guided needle bone biopsy may be needed to confirm the tissue diagnosis and the cause of the pain. This pain can sometimes be caused by infection of the bone, known as osteomyelitis. Osteomyelitis is a common pathological response to bacterial or fungal infections in the bone, and it is estimated that 50% of patients who suffer from this condition are children under the age of 6 years. Childhood cancers such as leukemia or primary bone tumors such as osteosarcoma are also associated with bone pain. Leukemia is the most common type of childhood cancer, representing 25% of all new childhood cancer cases. Appropriate diagnosis of these various conditions is important for appropriate targeted treatment plans. Therefore, obtaining a quality biopsy sample for pathological testing is of the utmost importance.
Because Radial-Probe Endobronchial Ultrasound (RP-EBUS) can provide real-time confirmation of a suspect peripheral nodule situated outside of the airways, it is widely used during bronchoscopy for lung cancer diagnosis. RP-EBUS, however, tends to be difficult to use effectively without some form of guidance. Previously, we prototyped a multimodal image-guided bronchoscopy system that provides guidance during both bronchoscopic navigation and RP-EBUS localization. To use the system, the user first generates a guidance plan offline prior to the live procedure. Later, in the surgical suite, the user employs the image-guided system to perform the desired multimodal RP-EBUS bronchoscopy, driven by the procedure plan. We now validate this system in a series of live studies. As a first set of end-to-end live system studies, we tested the system in controlled animal studies, which allowed us to verify the functionality and feasibility of the system prototype over the standard clinical workflow without the usual risks associated with live patient procedures; through these studies, we also sharpened the prototype's workflow and improved user interaction. We then tested the refined system over the standard clinical workflow in our University Hospital's lung cancer management clinic. This study demonstrated the safety, feasibility, and functionality of our complete system for guiding RP-EBUS bronchoscopy during peripheral nodule diagnosis, proving its potential for live clinical usage.
Computed Tomography (CT)-guided procedures are a common minimally invasive technique used to perform diagnostic and therapeutic interventions, including obtaining biopsy samples, delivering medications, aspirating or draining fluids, and ablating regions of interest. This minimally invasive approach is especially common in pediatrics: approximately five to nine million children receive CT scans each year. Despite the excellent bony-region discrimination and high resolution possible with CT, there are concerns regarding the risks of ionizing radiation exposure. Exposure is often minimized, as CT exposure in children has been linked to the development of cancer later in life.
To support the development of an automatic path-planning procedure for bronchoscopy, semantic segmentation of pulmonary nodules and airways is required. The segmentation should happen simultaneously and automatically to save time and effort during the intervention. The challenges of the combined segmentation are the different shapes, frequencies, and sizes of airways, lungs, and pulmonary nodules. Therefore, a sampling strategy is explored that uses especially relevant crops of the volumes during training and weights the classes differently to counteract class imbalance. For the segmentation, a 3D U-Net is used. The proposed algorithm is compared to nnU-Net, first trained as a one-class problem on each class individually and, in a second approach, as a multi-label problem. The developed Multi-Label Segmentation network (MLS) is trained with full supervision. The results of the experiments show that, without further adaptation, a combined segmentation of nodules, airways, and lungs is complex; the multi-label nnU-Net failed to find nodules. By accounting for the different properties of the three classes, MLS accomplishes segmenting all classes simultaneously.
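A hedged PyTorch sketch of the two imbalance counter-measures named above, nodule-biased crop sampling and per-class loss weighting, is given below; the class indices, weights, crop size, and sampling probability are illustrative choices rather than the values used for MLS.

    import torch
    import torch.nn as nn

    # Per-class weights: background, lung, airway, nodule (illustrative values).
    NODULE = 3
    loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.1, 0.5, 1.5, 4.0]))

    def sample_crop(volume, labels, size=(96, 96, 96), p_nodule=0.5):
        """With probability p_nodule, center the training crop on a random nodule voxel."""
        if torch.rand(1).item() < p_nodule and (labels == NODULE).any():
            nodule_voxels = torch.nonzero(labels == NODULE)
            center = nodule_voxels[torch.randint(len(nodule_voxels), (1,)).item()]
        else:
            center = torch.tensor([torch.randint(d, (1,)).item() for d in labels.shape])
        # Clamp the crop so it stays inside the volume (assumes volume dims >= crop size).
        lo = [min(max(int(c) - s // 2, 0), d - s) for c, s, d in zip(center, size, labels.shape)]
        window = tuple(slice(l, l + s) for l, s in zip(lo, size))
        return volume[window], labels[window]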
There are several lung diseases that lead to alterations in regional lung mechanics, including acute respiratory distress syndrome. Such alterations can lead to localized underventilation of the affected areas, resulting in overdistension of the surrounding healthy regions; they can also cause the surrounding alveoli to expand unevenly or distort. Therefore, quantification of regional deformation in the lungs offers insights into identifying the regions at risk of lung injury. Although a few recent studies have developed image processing techniques to quantify regional volumetric deformation in the lung from dynamic imaging, the presence and extent of distortional deformation in the lung, and its correlation with volumetric deformation, remain poorly understood. In this study, we present a method that uses the four-dimensional displacement field obtained from image registration to quantify both regional volumetric and distortional deformation in the lung. We used dynamic computed tomography scans acquired in a healthy rat over the course of one respiratory cycle during free breathing. Non-rigid image registration was performed to quantify voxel displacement during respiration. The deformation gradient was calculated from the displacement field, and its determinant was used to quantify regional volumetric deformation. Regional distortion was calculated as the ratio of maximum to minimum principal stretches using the isochoric part of the Cauchy–Green tensor. We found an inverse correlation between volumetric strains and distortion, indicating that poorly expanding alveoli tend to experience larger distortion. The combination of regional volumetric strains and distortion may serve as a high-fidelity biomarker to identify the regions at risk of the most adverse lung injuries.
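Both deformation measures follow directly from the displacement field; the NumPy sketch below (a generic voxel-wise implementation assuming a displacement field on a regular grid with known spacing, using the right Cauchy–Green tensor) computes J = det(F) as the volumetric measure and the ratio of the largest to smallest principal stretch of its isochoric part as the distortion measure.

    import numpy as np

    def volumetric_and_distortion(u, spacing):
        """Regional volume change and distortion from a displacement field.

        u       : (3, Z, Y, X) displacement field (same length units as spacing)
        spacing : (dz, dy, dx) voxel spacing
        Returns J = det(F) and distortion = lambda_max / lambda_min of the isochoric
        right Cauchy-Green tensor C_bar = J**(-2/3) * F^T F, both per voxel.
        """
        grads = [np.gradient(u[i], *spacing) for i in range(3)]        # du_i/dx_j
        F = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)   # (..., i, j) = du_i/dx_j
        F = F + np.eye(3)                                              # F = I + grad(u)
        J = np.linalg.det(F)
        C = np.einsum('...ki,...kj->...ij', F, F)                      # C = F^T F
        C_bar = C * J[..., None, None] ** (-2.0 / 3.0)
        stretches = np.sqrt(np.linalg.eigvalsh(C_bar))                 # ascending principal stretches
        return J, stretches[..., -1] / stretches[..., 0]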
Percutaneous epicardial access for epicardial ablation and mapping of cardiac arrhythmias is being performed increasingly often. Unfortunately, complications such as injury to surrounding structures have been reported, and despite current imaging techniques it remains difficult to guarantee sufficient ablation accuracy. Head-Mounted-Display (HMD) Augmented Reality (AR) overlay and guidance has the potential to reduce the risk of complications. The objective of this study was to evaluate the accuracy and performance of an AR-guided epicardial puncture for catheter ablation of ventricular tachycardia. An AR software tool was designed to render real-time needle trajectories and 3D patient-specific organs. Registration of preoperative data is realized by attaching four AR patterns to the skin of the patient, and needle tracking is realized by attaching one AR pattern to the end of the needle's base. The ideal trajectory through the pericardial space and the patient-specific organs were planned and segmented on preoperative CT. The application's accuracy was evaluated in a phantom study in which seven operators performed needle punctures with and without the use of the AR system; placement errors were measured on post-procedural CT. With the proposed AR-based guidance, post-procedure CT revealed an error at the puncture site of 3.67±2.78 mm. At the epicardial interface, the error increased to 7.78±2.36 mm. The angle of the actual trajectory deviated on average 4.82±1.48° from the planned trajectory. The execution time was on average 34.0 ± 25.1 s, introducing no significant delay while achieving an overall superior performance level compared to puncturing without AR guidance. The proposed AR platform has the potential to facilitate percutaneous epicardial access for epicardial ablation and mapping of cardiac arrhythmias by improving needle insertion accuracy.
Augmented Reality (AR) is becoming a more common addition to physicians’ repertoire for aiding in resident training and patient interactions. However, the use of augmented reality in clinical settings is still beset with many complications, including the lack of physician control over the systems, fixed modes of interaction within the system, and physicians’ lack of familiarity with such AR systems. In this paper, we expand on our previous prostate biopsy AR system by adding improved user interface elements within the virtual world in order to allow the user to visualize only the parts of the system they consider useful at a given time. To accomplish this, we have incorporated three-dimensional virtual sliders, built from the ground up in Unity, to afford control over each model’s RGB values as well as its transparency. This means that the user can fully edit the color and transparency of each individual model in real time, quickly and easily, while still being immersed in the augmented space. This allows users to view internal holograms without sacrificing the capability to view the external structure. Such flexibility could be invaluable when visualizing a tumor within a prostate and would provide the physician with the capability to view as much or as little of the surrounding virtual models as desired, while providing the option to reinstate the surrounding models at will. The AR system can provide a new approach for potential uses in image-guided interventions, including targeted biopsy of the prostate.
This work proposes a novel U-shaped neural network, Shifted-window MLP (Swin-MLP), that incorporates a Convolutional Neural Network (CNN) and a Multilayer Perceptron Mixer (MLP-Mixer) for automatic CT multi-organ segmentation. The network has a V-net-like structure: 1) a Shifted-window MLP-Mixer encoder learns semantic features from the input CT scans, and 2) a decoder, which mirrors the architecture of the encoder, then reconstructs segmentation maps from the encoder’s features. Novel to the proposed network, we apply a Shifted-window MLP-Mixer rather than convolutional layers to better model both global and local representations of the input scans. We evaluate the proposed network using an institutional pelvic dataset comprising 120 CT scans and a public abdomen dataset containing 30 scans. The network’s segmentation accuracy is evaluated in two domains: 1) volume-based accuracy is measured by Dice Similarity Coefficient (DSC), segmentation sensitivity, and precision; 2) surface-based accuracy is measured by Hausdorff Distance (HD), Mean Surface Distance (MSD), and Residual Mean Square distance (RMS). The average DSC achieved by MLP-Vnet on the pelvic dataset is 0.866; sensitivity is 0.883, precision is 0.856, HD is 11.523 mm, MSD is 3.926 mm, and RMS is 6.262 mm. The average DSC on the public abdomen dataset is 0.903, and HD is 5.275 mm. The proposed MLP-Mixer-Vnet demonstrates significant improvement over CNN-based networks. The automatic multi-organ segmentation tool may potentially facilitate the current radiotherapy treatment planning workflow.
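As an illustration of the two evaluation domains listed above, the following hedged sketch (binary numpy masks assumed; not the paper's evaluation code) computes a volume-based score, the Dice Similarity Coefficient, and a surface-based score, the symmetric Hausdorff Distance, using scipy's directed Hausdorff routine on extracted surface voxels.

import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def hausdorff_distance(pred, gt, spacing=(1.0, 1.0, 1.0)):
    def surface_points(mask):
        surface = mask & ~binary_erosion(mask)          # boundary voxels of the mask
        return np.argwhere(surface) * np.asarray(spacing)
    p, g = surface_points(pred.astype(bool)), surface_points(gt.astype(bool))
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])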
Breast cancer is the most common type of cancer among women. Despite the crucial role that digital mammography plays in the early identification of breast cancer, many tumors cannot be discriminated on mammography, especially in women with dense breast tissue. Contrast-enhanced magnetic resonance imaging (CE-MRI) of the breast is routinely used to find lesions that are invisible on mammography. MRI-guided biopsies must be used to further analyze these lesions, but MRI-guided biopsy is expensive, time-consuming, and not widely accessible. In our earlier work, we introduced a novel approach using two registration methods, biomechanical and image-based, to transfer lesions from MRI to spot mammograms to allow x-ray guided biopsy. In this paper, we focus on enhancing and developing the image-based registration between full and spot mammograms and analyzing the correlation between the accuracy of our method and features such as views, location of the lesion, breast area, size of the lesion in each modality, and age. Results for 48 patients from the Medical University of Vienna are provided. The median target registration error is 20.9 mm and the standard deviation is 23.9 mm.
This study aims to develop a deep learning method that effectively accounts for respiratory motion, enabling the generation of realistic dose-volume histograms through more accurate and efficient propagation of organ and tumor contours from a target phase to all phases in lung 4DCT patient datasets. Our proposed method is a platform that performs Deformable Image Registration (DIR) of individual phase datasets in a simulation 4DCT and comprises a generator and a discriminator. The generator accepts moving and target CTs as input and outputs the Deformation Vector Fields (DVFs) needed to match the two CTs. The generator is optimized along both forward and backward paths to enhance the bidirectionality of DVF generation. Further, landmarks are used to weakly supervise the generator network through a landmark-driven loss. The discriminator then judges the realism of the deformed CT to provide additional DVF regularization. The publicly available DIR-Lab dataset was used to evaluate the performance of the proposed method against other methods in the literature by calculating the DIR-Lab Target Registration Error (TRE). The proposed method outperformed other deep learning-based methods on the DIR-Lab datasets in terms of TRE. The bidirectional and landmark-driven losses were shown to be effective for obtaining high registration accuracy. The mean TRE for the DIR-Lab datasets was 1.03±0.66 mm. These results demonstrate the feasibility and efficacy of our proposed method, which provides a potential approach for realistic registration of phases in 4DCT lung datasets.
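The landmark-driven and bidirectional ideas described above can be sketched as loss terms; the following PyTorch-style snippet is illustrative only (function names, inputs, and any weighting are assumptions, not the paper's implementation).

import torch

def landmark_loss(dvf_at_landmarks, moving_pts, target_pts):
    # penalize residual distance between warped moving landmarks and target landmarks
    warped = moving_pts + dvf_at_landmarks
    return torch.mean(torch.linalg.norm(warped - target_pts, dim=1))

def bidirectional_loss(dvf_fwd, dvf_bwd_at_warped):
    # forward displacement plus backward displacement at the warped point should cancel
    return torch.mean(torch.linalg.norm(dvf_fwd + dvf_bwd_at_warped, dim=1))

# In training, these terms would be added to an image-similarity loss with tunable weights,
# e.g. total = similarity + w_lm * landmark_loss(...) + w_bi * bidirectional_loss(...).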
Computer-aided detection systems for lung nodules play an important role in early diagnosis and treatment. False positive reduction is a significant component of pulmonary nodule detection. To address the visual similarities between nodules and false positives in CT images and the problem of two-class imbalanced learning, we propose a Central Attention Convolutional Neural Network on Imbalanced Data (CACNNID) to distinguish nodules from a large number of false positive candidates. To solve the imbalanced data problem, we consider density distribution, data augmentation, noise reduction, and balanced sampling to help the network learn effectively. During network training, we design the model to pay high attention to central information and to minimize the influence of irrelevant edge information when extracting discriminative features. The proposed model was evaluated on the public LUNA16 dataset and achieved a mean sensitivity of 92.64%, specificity of 98.71%, accuracy of 98.69%, and AUC of 95.67%. The experimental results indicate that our model can achieve satisfactory performance in false positive reduction.
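One way to realize the balanced sampling mentioned above is sketched below (an assumption-laden illustration, not the CACNNID code): a PyTorch WeightedRandomSampler draws nodule and false-positive candidates with probabilities inversely proportional to their class counts, so each mini-batch sees both classes at comparable rates.

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, labels, batch_size=32):
    # labels: 0 for false-positive candidates, 1 for true nodules
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels)                 # typically many FPs, few nodules
    weights = 1.0 / class_counts[labels].float()          # rarer class drawn more often
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)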
A competency-based approach to colonoscopy training is particularly important, since the amount of practice required for proficiency varies widely between trainees. Though numerous objective proficiency assessment frameworks have been validated in the literature, these frameworks rely on expert observers. This process is time-consuming, and as a result, there has been increased interest in automated proficiency rating of colonoscopies. This work investigates sixteen automatically computed performance metrics and whether they can measure improvements in novices following a series of practice attempts. This involves calculating motion-tracking parameters for three groups: untrained novices, those same novices after undergoing training exercises, and experts. All participants had electromagnetic tracking markers fixed to their hands and the scope tip. Each participant performed eight testing sequences designed by an experienced clinician. Novices were then trained on 30 phantoms and re-tested. The tracking data of these groups were analyzed using sixteen metrics computed by the Perk Tutor extension for 3D Slicer. Statistical differences were calculated using a series of three t-tests, adjusting for multiple comparisons. All sixteen metrics were statistically different between pre-trained novices and experts, which provides evidence of their validity as measures of performance. Experts had fewer translational and rotational movements, a shorter and more efficient path, and performed the procedure faster. Pre- and post-trained novices did not significantly differ in average velocity, motion smoothness, or path inefficiency.
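Two of the tracking-derived metrics named above, path length and path inefficiency, can be computed directly from a time series of 3D marker positions; the sketch below uses illustrative definitions that may differ in detail from the Perk Tutor implementation.

import numpy as np

def path_length(positions):
    # positions: (T, 3) array of tracked coordinates in mm, ordered in time
    return float(np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1)))

def path_inefficiency(positions):
    # total path length relative to the straight-line distance from start to end
    straight = np.linalg.norm(positions[-1] - positions[0])
    return path_length(positions) / max(float(straight), 1e-9)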
This study aims to develop and test a computer vision-based method for tracking and navigating a bronchoscope during lung procedures, such as biopsy or ablation. A vision-based algorithm was developed to track bronchoscope rotation and identify airway branches for navigation. The algorithm was tested on a phantom and a preclinical swine subject. The 3D airway tree was segmented from pre-procedural cone-beam CT. The airway tree was subdivided into segments and the 3D coordinates were stored using centerline extraction. Feature-based rotational tracking was computed using SURF features and a brute-force matcher. Bifurcation detection was accomplished through image processing and blob detection. Localization of the bronchoscope within the airway tree was performed based on the projection of the child branches relative to the parent branch and related back to the 3D image. A sufficient number of features to identify the rotational positioning of the bronchoscope was found in 720 out of 811 (89%) video frames, with an error of 3.2±2.2 degrees. Airway bifurcations were correctly identified in 29 out of 31 (90%) cases, and the bronchoscope was correctly localized within a segment in seven out of seven (100%) cases. In conclusion, a computer vision-based method for tracking in the airways accurately identified the rotation of a bronchoscope and classified bifurcations to assist in navigation without the use of electromagnetic position detection or fiber-optic shape-sensing technologies. Implementation of this technology could enable cost-controlled adoption of bronchoscopic technologies for trainees and might be utilized in low-resource settings unequipped with expensive robotic and tracking systems for the diagnosis and management of suspected lung cancer.
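The feature-based rotational tracking step can be sketched with OpenCV as below. The study used SURF with a brute-force matcher; this hedged example substitutes ORB (since SURF requires OpenCV's non-free contrib build) and recovers the in-plane rotation between consecutive frames from a RANSAC-fitted partial affine transform.

import cv2
import numpy as np

def frame_rotation_deg(prev_gray, curr_gray, max_features=500):
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # brute-force matching
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    if len(matches) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return None
    return float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))        # in-plane rotation angle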
As medical education adopts a competency-based training approach, assessment of skills and timely provision of formative feedback is required. Provision of such assessment and feedback places a substantial time burden on surgeons. To reduce this time burden, we look to develop a computer-assisted training platform to provide both instruction and feedback to residents learning open Inguinal Hernia Repairs (IHR). To provide feedback on residents’ technical skills, we must first find a method of workflow recognition of the IHR. We thus aim to recognize and distinguish between workflow steps of an open IHR based on the presence and frequencies of different tool-tissue interactions occurring during each step. Based on ground truth tissue segmentations and tool bounding boxes, we identify the visible tissues within a bounding box. This provides an estimation of which tissues a tool is interacting with. The presence and frequencies of the interactions during each step are compared to determine whether this information can be used to distinguish between steps. Based on the ground truth tool-tissue interactions, the presence and frequencies of interactions during each step in the IHR show clear, distinguishable patterns. In conclusion, due to the distinct differences in the presence and frequencies of the tool-tissue interactions between steps, this offers a viable method of step recognition of an open IHR performed on a phantom.
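The interaction-estimation idea above reduces to checking which tissue labels fall inside a tool's bounding box in each frame; a hedged sketch follows (label map format, box convention, and label names are assumptions).

import numpy as np

def tissues_in_box(tissue_mask, box, label_names):
    # tissue_mask: (H, W) integer label image; box: (x_min, y_min, x_max, y_max)
    x0, y0, x1, y1 = box
    roi = tissue_mask[y0:y1, x0:x1]
    labels, counts = np.unique(roi, return_counts=True)
    total = roi.size
    return {label_names.get(int(l), f"label_{l}"): c / total
            for l, c in zip(labels, counts) if l != 0}     # 0 assumed to be background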
Mass Spectrometry Imaging (MSI) is a powerful tool capable of visualizing molecular patterns to identify disease markers in tissue analysis. However, data analysis is computationally heavy and currently time-consuming as there is no single platform capable of performing the entire preprocessing, visualization, and analysis pipeline end-to-end. Using different software tools and file formats required for such tools also makes the process prone to error. The purpose of this work is to develop a free, open-source software implementation called “Visualization, Preprocessing, and Registration Environment” (ViPRE), capable of end-to-end analysis of MSI data. ViPRE was developed to provide various functionalities required for MSI analysis including data import, data visualization, data registration, Region of Interest (ROI) selection, spectral data alignment and data analysis. The software implementation is offered as an open-source module in 3D Slicer, a medical imaging platform. It is also designed for flexibility and usability throughout the user experience. ViPRE was tested using sample MSI data to evaluate the computational pipeline, with the results showing successful implementation of its functionalities and end-to-end usage. A preliminary usability test was also performed to assess user experience, with findings showing positive results. ViPRE aspires to satisfy the need for a single-stop comprehensive interface for MSI data analysis. The source code and documentation will be made publicly available.
After breast-conserving surgery, positive margins occur when breast cancer cells are found on the resection margin, leading to a higher chance of recurrence and the need for repeat surgery. The NaviKnife is an electromagnetic tracking-based surgical navigation system that provides visual and spatial feedback to the surgeon. In this study, we conduct a gross evaluation of this navigation system with respect to resection margins. The trajectory of the surgical cautery relative to the ultrasound-visible tumor is visualized, and its distance and location from the tumor are compared with pathology reports. Six breast-conserving surgery cases that resulted in positive margins were performed using the NaviKnife system. Trackers were placed on the surgical tools, and their positions in three-dimensional space were recorded throughout the procedure. The closest distance between the cautery and the tumor throughout the procedure was measured. The trajectory of the cautery when it came closest to the tumor model was plotted in 3D Slicer and compared with pathology reports. In two of the six cases, the side at which the cautery came closest to the tumor model coincided with the side at which positive margins were found in the pathology reports. Our results suggest that positive margins occur mainly in areas that are not visible in ultrasound imaging. Our system will need to be used in combination with intraoperative tissue characterization methods to effectively predict the occurrence and location of positive margins.
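The closest-approach measurement described above can be sketched as a nearest-neighbor query between the tracked cautery tip samples and a tumor model point cloud (both assumed to be expressed in the same coordinate frame; this is not the NaviKnife code).

import numpy as np
from scipy.spatial import cKDTree

def closest_approach(cautery_positions, tumor_points):
    # returns (minimum cautery-to-tumor distance, index of the time sample where it occurred)
    tree = cKDTree(tumor_points)
    distances, _ = tree.query(cautery_positions)
    i = int(np.argmin(distances))
    return float(distances[i]), i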
Efficient and accurate segmentation of the rectum in images acquired with a low-field (58–74 mT) prostate Magnetic Resonance Imaging (MRI) scanner may be advantageous for MRI-guided prostate biopsy and focal treatment guidance. However, automated rectum segmentation on low-field MRI images is challenging due to spatial resolution and Signal-to-Noise Ratio (SNR) constraints. This study aims to develop a deep learning model to automatically segment the rectum in low-field MRI prostate images. 132 3D images from ten patients were assembled. A 3D U-Net model with an input matrix size of 120×120×40 voxels was trained to detect and segment the rectum. The 3D U-Net can learn and integrate the relative information between adjacent MRI slices, which can enforce 3D patterns such as rectal wall smoothness and thus compensate for slice-to-slice variability in SNR and rectal boundary fuzziness. Contrast stretching, histogram equalization, and brightness enhancement were also investigated and applied to normalize intra- and inter-image intensity heterogeneity. Data augmentation methods such as elastic deformation, flipping, rotation, and scaling were also applied to reduce the risk of overfitting in model training. The model was trained and tested using a 4-fold cross-validation method with a 3:1:2 split for training, validation, and testing. Study results show that the mean Intersection over Union (IoU) score for the rectum on the testing dataset is 0.63. Additionally, visual examination suggests that the displacement between the centroids of the ground-truth and inferred volumetric segmentations is less than 3 mm. Thus, this study demonstrates that (1) a 3D U-Net model can effectively segment the rectum on low-field MRI scans and (2) applying image processing and data augmentation can boost model performance.
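The centroid-displacement check mentioned above can be made explicit with a short sketch (an assumed implementation, not the study's code): the distance between the centers of mass of the ground-truth and predicted segmentations, converted to millimeters using the voxel spacing.

import numpy as np
from scipy.ndimage import center_of_mass

def centroid_displacement_mm(pred_mask, gt_mask, spacing):
    # pred_mask, gt_mask: binary 3D arrays; spacing: voxel size per axis in mm
    c_pred = np.asarray(center_of_mass(pred_mask.astype(float)))
    c_gt = np.asarray(center_of_mass(gt_mask.astype(float)))
    return float(np.linalg.norm((c_pred - c_gt) * np.asarray(spacing)))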
This paper presents surgical area recognition from laparoscopic images in laparoscopic gastrectomy. Laparoscopic gastrectomy is performed as a minimally invasive procedure for removing gastric cancer. In this surgery, surgeons must cut the blood vessels around the stomach before resecting the cancer. Since this type of surgery requires higher surgical skill, a surgical assistance system has been developed to enhance surgeons’ abilities. Recognition of the surgical area related to the blood vessels from laparoscopic videos provides essential information about the operative field to the surgical assistance system. Therefore, we develop a method for classifying laparoscopic images into surgical areas. The proposed method classifies the laparoscopic images into seven scenes using deep neural networks. We introduce label smoothing in the time direction to obtain soft labels. Bayesian neural networks are used to classify the laparoscopic images and estimate the uncertainty. After classification, we refine the prediction for each laparoscopic image using the estimated uncertainty and temporal information. We evaluated the proposed method using 10,818 images from ten videos recorded during laparoscopic gastrectomy for gastric cancer. Five-fold cross-validation was performed for the performance evaluation. The mean classification accuracy was 84.0%. The experimental results showed that the proposed method could recognize the surgical area from laparoscopic images.
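The time-direction label smoothing described above can be sketched as follows (window size and uniform weighting are assumptions): each frame's one-hot scene label is averaged with its temporal neighbors to produce a soft label, so scene transitions are not treated as hard steps.

import numpy as np

def temporal_soft_labels(frame_labels, num_classes, window=5):
    # frame_labels: (T,) integer scene labels ordered in time; returns (T, num_classes)
    one_hot = np.eye(num_classes)[np.asarray(frame_labels)]
    kernel = np.ones(window) / window
    soft = np.stack([np.convolve(one_hot[:, c], kernel, mode="same")
                     for c in range(num_classes)], axis=1)
    return soft / soft.sum(axis=1, keepdims=True)          # renormalize each frame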
Trans-Oral Robotic Surgery (TORS) is an alternative surgical technique used to treat head-and-neck cancer. Compared with conventional surgery, robot assistance allows surgeons to operate within areas with restricted access, such as the oropharynx, reducing operative morbidity and the risk of reconstructive surgery and improving patient outcomes. TORS is a challenging procedure, and intra-operative Ultrasound (US) has the potential to improve anatomy visualization and lessen the cognitive load on surgeons. To date, only intra-oral US has been used in exploratory studies, but intra-oral US can interfere with the robot tools. In this study, we assess the feasibility of using transcervical 3D US with TORS: we propose to place the US probe on the patient’s neck to evaluate oropharyngeal anatomy intra-operatively. We also perform the first feasibility study of image registration between transcervical 3D US and Magnetic Resonance Imaging (MRI) for the oropharynx. We collected 3D US and MRI data from five healthy volunteers and four patients with oropharyngeal cancer, and we use a semi-automatic MRI-US registration algorithm to estimate an affine transformation between the two image spaces. The average Target Registration Error (TRE) is 8.26 ± 7.41 mm for healthy volunteers and 9.63 ± 5.91 mm for patients, and our case studies show that image quality is the key factor for good registration. Our work shows that 3D transcervical US has the clinical potential to enable intraoperative oropharynx imaging and interventional MR guidance during TORS.
Recently, deep learning networks have achieved considerable success in segmenting organs in medical images. Several methods have used volumetric information with deep networks to achieve high segmentation accuracy. However, these networks suffer from interference, risk of overfitting, and low accuracy as a result of artifacts when segmenting very challenging objects such as the brachial plexuses. In this paper, to address these issues, we synergize the strengths of high-level human knowledge (i.e., Natural Intelligence (NI)) with deep learning (i.e., Artificial Intelligence (AI)) for recognition and delineation of the thoracic Brachial Plexuses (BPs) in Computed Tomography (CT) images. We formulate an anatomy-guided deep learning hybrid intelligence approach for segmenting the thoracic right and left brachial plexuses consisting of two key stages. In the first stage (AAR-R), objects are recognized based on a previously created fuzzy anatomy model of the body region with its key organs relevant for the task at hand, wherein high-level human anatomic knowledge is precisely codified. The second stage (DL-D) uses information from AAR-R to limit the search region to just where each object is most likely to reside and performs encoder-decoder delineation in slices. The proposed method is tested on a dataset that consists of 125 images of the thorax acquired for radiation therapy planning of tumors in the thorax and achieves a Dice coefficient of 0.659.
Cochlear Implants (CIs) are neural prosthetics which use an array of implanted electrodes to improve hearing in patients with severe-to-profound hearing loss. After implantation, the CI is programmed by audiologists who adjust various parameters to optimize hearing performance for the patient. Without knowing which Auditory Nerve Fibers (ANFs) are being stimulated by each electrode, this process can require dozens of programming sessions and often does not lead to optimal programming. The Internal Auditory Canal (IAC) houses the ANFs as they travel from the implantation site, the cochlea, to the brain. In this paper, we present a method for localizing the IAC in a CT image by deforming an atlas IAC mesh to the image using a 3D U-Net. Our results suggest this method is more accurate than an active shape model-based method when tested on a set of 20 images with ground truth. This IAC segmentation can be used to infer the position of the invisible ANFs to assist with patient-specific CI programming.
Breast conserving surgery (BCS) is a common treatment option for early-stage breast cancer, and it relies on complete tumor excision such that no residual cancer is left in the resection cavity. However, the use of preoperative imaging to inform excision is compromised by intraoperative deformations that change the location, volume, and shape of the tumor compared to the imaging configuration. For intra-procedural guidance specifically, incision and retraction alter the tumor presentation and geometry. Being able to compensate for retraction deformations intraoperatively may increase the utility of image guidance technologies. In this work, a breast retraction phantom and deformation modeling approach are developed to explore the potential of modeling retraction for image guidance during BCS. Surface and subsurface beads were embedded in a realistic silicone breast phantom, and CT images were acquired in undeformed and retracted states. A reconstructive, sparse-data registration method was used to model retraction. Modeling accuracy was evaluated by comparing model-predicted and ground-truth bead displacements. The average surface bead registration error after retraction modeling in a region of interest was 0.5 ± 0.1 mm (maximum 0.5 mm). The average subsurface bead registration error in a region of interest was 1.2 ± 0.6 mm (maximum 2.6 mm). A biomechanical modeling method that includes retraction may improve the accuracy of image guidance for breast conserving surgery, but more work is needed to evaluate its utility.
Breast cancer is a leading cause of death among women in the United States, and Breast Conserving Surgery (BCS) is a common treatment option for women with early-stage breast cancer. The goal of BCS is to localize and remove the tumor with negative margins, and incomplete tumor excision can lead to reoperation and recurrence. To aid with intraoperative tumor localization, a BCS image guidance system that features an optical tracking camera (NDI Polaris Vicra) and stereocameras (FLIR Grasshopper stereocamera pair) for tracking surgical tools and surgical scene monitoring, respectively, is being investigated. However, sensor data collected from the two stereovision systems are not inherently co-registered. Thus, this paper proposes utilizing a tracked checkerboard calibration object paired with a custom 3D Slicer module for optical tracker and stereo camera co-registration. The module features an easy-to-use user interface to streamline calibration. The calibration process was validated with a tissue-mimicking breast phantom embedded with beads representing surface fiducials and undergoing deformations (n=5 mock phantom tissue states). Across five independent calibration trials, the average Fiducial Registration Error (FRE) was 0.68 ± 0.11 mm and the average Target Registration Error (TRE) was 0.35 ± 0.14 mm. With respect to the challenge test conditions using the mock breast phantom under differing deformations, TRE values averaged 2.58 ± 0.26 mm over all breast phantom states. Thus, a tracked checkerboard tool paired with the custom 3D Slicer module provided the ability to localize and track points of interest for accurate registration of the two optical tracking systems.
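Co-registration from corresponding fiducial positions reported by the two tracking systems amounts to a point-based rigid registration; the sketch below (not the module's code) solves it in closed form with an SVD (Kabsch/Horn approach) and evaluates the Fiducial Registration Error.

import numpy as np

def rigid_register(src_pts, dst_pts):
    # src_pts, dst_pts: (N, 3) corresponding fiducial positions; returns R (3x3), t (3,)
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def fiducial_registration_error(src_pts, dst_pts, R, t):
    residuals = (R @ src_pts.T).T + t - dst_pts
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))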
The standard-of-care treatment to restore sound perception for individuals with severe-to-profound sensorineural hearing loss is the Cochlear Implant (CI) — a small, surgically-inserted electronic device that bypasses most of the mechanism of unaided acoustic hearing to directly stimulate Auditory Nerve Fibers (ANFs). Although many individuals experience success with these devices, a significant portion of recipients receive only marginal benefits. Biophysical models of ANFs have been developed that could be used in an image-guided treatment pipeline for patient-customized CI interventions. However, due to the difficult nature of determining neuron properties in humans, existing models rely on parameters derived from animal studies that were subsequently adapted to human models. Additionally, it is well-established that individual neurons of a single type can be non-homogeneous. In this research, we present a sensitivity analysis of a set of parameters used in one existing fiber model to (1) establish the influence of these parameters on predicted neural activity and (2) explore whether incorporation of these properties as patient-specific tunable parameters in a neural health optimization algorithm can produce a more comprehensive picture of ANF health when used in an image-guided treatment pipeline.
Conventional image guidance systems have become the standard for planning in neurosurgical procedures. While these systems have been shown to increase accuracy and precision for a variety of cases, there are still drawbacks. Such limitations revolve around the use of multiple physical components to separately display the information and the substantial amount of training time required for residents to become familiar with these systems. These disadvantages can increase the difficulty of the surgery unnecessarily, and thus affect quality. To address these concerns, a mixed reality application was developed on the Microsoft HoloLens as a surgical planning and teaching tool. Neurosurgeons could see head, brain, and tumor holograms and create a craniotomy plan through hologram interaction. These interactions included the ability to rotate the head to its desired location, perform an image-to-physical registration, and utilize a virtual stylus to collect points for the surgical plan. Furthermore, the app was expanded to also track physical tools with an optical tracker to provide a more realistic surgical planning scenario. With a tracked stylus, the user was capable of selecting points on the physical head to perform a registration where the 3D neuroanatomy models properly appeared in the physical head as a virtually augmented structure. To evaluate the prototype, practicing neurosurgeons were provided a demonstration and then promptly interviewed to assess design and efficacy. The tool alignment procedure was also evaluated to quantify the calibration error. Initial responses indicated that the prototype could be an effective surgical teaching and planning tool for less-experienced neurosurgeons because it could allow neurosurgeons to view physical and image space simultaneously.
Vagus Nerve Stimulation (VNS) is an effective technique for treating epilepsy, and it is a promising method for treating many other health conditions, such as depression, cardiovascular disease, chronic pain, and diabetes. Due to this wide range of applications, many researchers have developed VNS devices and stimulation techniques over the past decades. However, a common practice is to implant an electrode that has a rather broad stimulation field across the Vagus Nerve (VN) and as a result has limited anatomical specificity and may lead to adverse side effects. The efficacy and breadth of VNS therapy can be improved by selectively modulating only the regions associated with a given function. Additionally, enhanced precision should also facilitate uncovering functional vagotopy. In this work, stimulation levels, the amount of injected current, and the electrode configuration are investigated to determine the extent to which activation of the vagus nerve can be precisely controlled. A simple quantitative method to optimize activation is also proposed.
Cone-Beam CT (CBCT) is a valuable imaging modality for the intraoperative localization of pulmonary nodules during Video-Assisted Thoracoscopic Surgery (VATS). However, inferring the nodule position from the CBCT to the operative field remains challenging and could greatly benefit from computer-aided guidance. As a first step towards an Augmented Endoscopy guiding system, we propose to register 2D monocular endoscopic views into the 3D CBCT space. Ribs and wound protectors are segmented in both imaging modalities, then registered using an image-to-cloud Iterative Closest Point variant. The method is evaluated qualitatively on clinical VATS video sequences from three patients. The promising results validate this first step towards seamless monocular VATS navigation.
Fractures of the acetabulum, the cavity of the hip that hosts the femoral head, are complex to understand, plan, and surgically reduce. Segmenting bone fragments in CT scans is fundamental for assisting surgeons in their therapeutic process and can benefit from recent learning-based advances. In this paper, we extended a learning-based network for the semantic segmentation of six pelvic bones: left and right hip, left and right femur, sacrum, and lumbar spine. This semantic segmentation is then processed by a surgeon to separate fracture fragments, similarly to an existing baseline process. Results on 6 fracture cases show a qualitative improvement in the final fragment segmentation quality. Most notably, the mean segmentation time is reduced, with statistical significance, from 94 min to 18 min, which is a promising step towards using such a learning-based method in the preoperative clinical routine.
This conference presentation was prepared for SPIE Medical Imaging, 2023.
Cardiac motion remains a challenge in the treatment of ventricular tachycardia with external beam ablation therapy. Current techniques involve expansion of the treatment area, which can lead to unwanted collateral damage. Surrounding healthy tissue could be spared by gating the delivery of the beam to the cardiac cycle. In prior work, we assessed cardiac motion using in vivo fiducial markers and demonstrated that motion would be reduced if treatment were gated to half of the cardiac cycle, approximately corresponding to diastole. In the current work, we extend our prior analysis by quantitatively assessing the optimal gating window for motion reduction in the left ventricle. Motion was assessed in five porcine models with two fiducial clips per animal, for a total of ten clips. The minimal cardiac motion occurred when the gating window started at 70% of the cardiac cycle. Without gating, three-dimensional cardiac motion was 7.0 ± 3.9 mm in x (left/right), 5.3 ± 2.5 mm in y (anterior/posterior), and 5.6 ± 2.3 mm in z (superior/inferior). Using an optimal gating window, cardiac motion was 3.1 ± 1.8 mm in x (left/right), 2.5 ± 1.2 mm in y (anterior/posterior), and 3.1 ± 1.7 mm in z (superior/inferior). The percentage reduction in motion with optimal gating was 51 ± 23% in x (left/right), 49 ± 21% in y (anterior/posterior), and 45 ± 24% in z (superior/inferior). This work demonstrates that gating shows significant promise for reducing the effects of left ventricular motion when treating ventricular tachycardia with external beam ablation therapy.
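The gating-window analysis above can be sketched as a search over window start phases: for each candidate start, keep only the fiducial-clip samples whose cardiac phase falls inside a half-cycle window and measure the per-axis motion range. The definitions below are illustrative assumptions, not the study's analysis code.

import numpy as np

def motion_range(positions):
    # per-axis extent (max minus min) of clip positions, in mm
    return positions.max(axis=0) - positions.min(axis=0)

def best_gating_window(phases, positions, width=0.5, step=0.05):
    # phases: (T,) cardiac phase in [0, 1); positions: (T, 3) clip coordinates in mm
    best_start, best_extent = None, None
    for start in np.arange(0.0, 1.0, step):
        in_window = ((phases - start) % 1.0) < width        # window wraps around the cycle
        if not np.any(in_window):
            continue
        extent = motion_range(positions[in_window])
        if best_extent is None or extent.sum() < best_extent.sum():
            best_start, best_extent = start, extent
    return best_start, best_extent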
Flexible ureteroscopes (fURS) are the most commonly used surgical device for endoscopic management of upper urinary tract conditions, including nephrolithiasis. Single-use flexible ureteroscopes (su-fURS) were introduced in 2015 with purported decreases in maintenance costs and sterility concerns compared to reusable devices; however, the ergonomic impact of these devices on surgeons is not well characterized. This study aims to investigate su-fURS ergonomics by developing a biomechanical feedback system for use during ureteroscopy. The study is designed for use with numerous su-fURS models, with an initial focus on LithoVue™ (Boston Scientific). Two experimental models mimicking in-vivo fURS use were selected: an anatomically correct kidney-ureter-bladder (KUB) model, and a fURS training model simulating varying physical complexity and operator strain. Selection of clinically relevant testing metrics was informed by fellowship-trained endourologist consultation. A series of representative fURS tasks was developed for testing: endoluminal navigation, kidney stone manipulation, basketing, and extraction. The dominant hand’s thumb, index finger distal interphalangeal joint, extensor digitorum tendons, and flexor digitorum muscle were identified as most relevant for monitoring and at highest risk of strain during fURS operation. A biomechanical feedback system was developed using a prototypical set of inertial measurement units provided by the Mayo Clinic special purpose processor development group to provide live readings during endoscopic movements. Pilot testing demonstrated reliable hand kinematic measurements during simulated ureteroscopy. We developed and pilot-tested a novel biomechanical feedback system for flexible ureteroscopy to provide live feedback during ureteroscopy. Following further validation, this system may be applied to improve surgical training, decrease physician fatigue and injury, and ultimately improve patient care.
Positive surgical margins are a common complication of trans-oral tumor resection, and implementation of image guidance is typically hindered by significant tissue deformation introduced by oral retractors. Recent advances have produced multiple pathways for developing intraoperative trans-oral image guidance, which must ultimately be displayed to the surgeon in real time. This work presents a pipeline for automatically displaying CT-registered three-dimensional surface structures in the surgeon console of a da Vinci surgical system and assesses image-plane projection accuracy using Dice coefficient and intersection over union metrics. While coarse accuracy is acceptable (metric averages ⪆0.5), more accurate projections were obtained using registration methods based on optically tracking the endoscope shaft. Further improvement of registration, kinematic modeling, and endoscope calibration is necessary prior to use in preclinical evaluation of image guidance strategies for trans-oral robotic surgery.
Miniature screws, often used as fiducials, are currently localized manually on DICOM images. This time-consuming process can add tens of minutes to the computational workflow for registration or error analysis. Through a series of morphological operations, this localization task can be completed in well under a second on a standard laptop. Two image sets were analyzed. The first dataset consisted of six intraoperative CT (iCT) scans of the lumbar spine of both cadaver and live porcine samples; this dataset includes not only implanted mini-screws but also other metal instrumentation. The second dataset consists of 6 semi-rigidly deformed CT (uCT) scans of the lumbar spine of the same animals. This dataset was intensity-downsampled from 16 bits to 8 bits as a pre-processing step, and additional artifacts are apparent due to the deformation steps. Both datasets show at least 18 mini-screws which were rigidly implanted in the lumbar vertebrae, with at least three mini-screws implanted in each vertebra. These images were processed as follows: a projection image was formed via maximum row values, thresholded, and opened; non-circular regions were removed; and the remaining circular regions were eroded, leaving the voxel location of the center of each mini-screw. The aforementioned steps can be completed with a mean computation time of 0.0365 seconds, which is unobtainable for manual localization, even by the most skilled operator. The true positive rates of the iCT and uCT datasets were 96.
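The morphological pipeline described above can be approximated with a short sketch (threshold, footprint size, and circularity cutoff are assumed parameters; blob centroids stand in here for the erosion step that yields the screw-center locations).

import numpy as np
from skimage import measure, morphology

def localize_screws(volume, threshold, min_circularity=0.7):
    mip = volume.max(axis=0)                                   # projection via maximum values
    binary = mip > threshold                                   # keep bright metal screws
    binary = morphology.binary_opening(binary, morphology.disk(2))
    labels = measure.label(binary)
    centers = []
    for region in measure.regionprops(labels):
        circularity = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        if circularity >= min_circularity:                     # discard non-circular regions
            centers.append(region.centroid)                    # (row, col) in the projection
    return centers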
Two main features of near-infrared (NIR) light are that it enables component analysis based on spectral differences and that it penetrates biological tissue. These features enable a technology, called hyperspectral imaging (HSI), that acquires NIR spectra of deep lesions and analyzes the components at each pixel. Mounting this technology on a laparoscope enables visualization of tissues that are invisible, or that look similar, under visible light during laparoscopic surgery. In this research, the developed NIR-HSI laparoscopic device acquired NIR spectral images of in vivo pig specimens. Through the experiments, the difference in spectra between the artery and the surrounding tissues was confirmed. Additionally, a machine learning procedure provided high-accuracy detection of the artery area, with accuracy, precision, and recall of 0.868, 0.921, and 0.637, respectively.
The success of deep brain stimulation (DBS) is dependent on the accurate placement of electrodes in the operating room (OR). However, due to intraoperative brain shift, the accuracy of pre-operative scans and pre-surgical planning is often degraded. To compensate for brain shift, we created a finite element biomechanical brain model that updates preoperative images by assimilating intraoperative sparse data from the brain surface or deep brain targets. Additionally, we constructed an artificial neural network (ANN) that leveraged a large number of ventricle nodal displacements to estimate brain shift. The machine learning method showed potential in incorporating ventricle sparse data to accurately compute shift at the brain surface. Thus, in this paper, we propose using this machine learning model to estimate brain shift at deep brain targets such as the anterior commissure (AC) and the posterior commissure (PC). The ANN consists of an input layer with nine hand-engineered features, such as the distance between the deep brain target and the ventricle node, two hidden layers, and an output layer. This model was trained using eight patient cases and tested on two patient cases.
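The network topology described above (nine hand-engineered input features, two hidden layers, one output layer) can be sketched in PyTorch; the hidden-layer width, activation, and output dimensionality below are assumptions not stated in the abstract.

import torch
import torch.nn as nn

class BrainShiftANN(nn.Module):
    def __init__(self, n_features=9, hidden=64, n_outputs=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),     # first hidden layer
            nn.Linear(hidden, hidden), nn.ReLU(),         # second hidden layer
            nn.Linear(hidden, n_outputs),                 # predicted displacement at the target
        )

    def forward(self, x):
        return self.net(x)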
To address the multitask problem of segmenting, detecting, and classifying colon polyps, and to tackle the clinical challenges of small polyps with similar backgrounds, missed detections, and difficult classification, we developed a computer-based method to support early diagnosis and correct treatment in gastrointestinal endoscopy. We apply a residual U-structure network combined with image processing to segment polyps, and a Dynamic Attention Deconvolutional Single Shot Detector (DAD-SSD) to classify polyps in colonic narrow-band images. The residual U-structure network is a two-level nested U-structure that is able to capture more contextual information, and the image processing step improves segmentation. DAD-SSD consists of an Attention Deconvolutional Module (ADM) and a Dynamic Convolutional Prediction Module (DCPM) to extract and fuse context features. We evaluated the method on narrow-band images, and the experimental results validate its effectiveness for such multi-task detection and classification. In particular, the mean Average Precision (mAP) of 76.55% and accuracy of 74.4% are superior to the other methods in our experiment.