Purpose: Surgical training could be improved by automatic detection of workflow steps and similar image-processing applications. A platform to collect and organize tracking and video data would enable rapid development of image-processing solutions for surgical training. The purpose of this research is to demonstrate 3D Slicer / PLUS Toolkit as a platform for automatic labelled data collection and model deployment. Methods: We use PLUS and 3D Slicer to collect a labelled dataset of tools interacting with tissues in simulated hernia repair, comprising optical tracking data and video data from a camera. To demonstrate the platform, we train a neural network on these data to automatically identify tissues, and use the tracking data to identify which tool is in use. The solution is deployed as a custom Slicer module. Results: This platform allowed the collection of 128,548 labelled frames, 98.5% of which were correctly labelled. A CNN trained on these data was applied to new data with an accuracy of 98%. With minimal code, this model was deployed in 3D Slicer on real-time data at 30 fps. Conclusion: We found 3D Slicer and the PLUS Toolkit to be a viable platform for collecting labelled training data and deploying a solution that combines automatic video processing and optical tool tracking. We built an accurate proof-of-concept system that identifies tissue-tool interactions with a trained CNN and optical tracking.
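The tool-identification step described in the Methods could, for instance, reduce to a nearest-tool query on the optical tracking stream. The following is a minimal sketch of that idea, not the authors' code; the function name, the tool names, and the 50 mm threshold are all hypothetical:

```python
import numpy as np

def active_tool(tool_positions, tissue_position, threshold_mm=50.0):
    """Pick the tracked tool closest to the tissue of interest.

    tool_positions: dict mapping tool name -> 3D position (mm) from
    optical tracking. Returns the nearest tool name, or None if no
    tool is within threshold_mm (hypothetical cutoff).
    """
    best, best_dist = None, threshold_mm
    for name, pos in tool_positions.items():
        d = np.linalg.norm(np.asarray(pos) - np.asarray(tissue_position))
        if d < best_dist:
            best, best_dist = name, d
    return best
```

In a deployed Slicer module, a query like this would run per frame alongside the CNN's tissue prediction to label the tissue-tool interaction.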
PURPOSE: The segmentation of Computed Tomography (CT) colonography images is important to both colorectal research and diagnosis. This process often relies on manual interaction, and therefore depends on the user. Consequently, there is unavoidable interrater variability. An accurate method which eliminates this variability would be preferable. Current barriers to automated segmentation include discontinuities of the colon, liquid pooling, and the fact that all air appears at the same intensity on the scan. This study proposes an automated approach to segmentation which employs a 3D implementation of U-Net. METHODS: This research is conducted on 76 CT scans. The U-Net comprises an analysis and a synthesis path, both with 7 convolutional layers. By the nature of the U-Net, the output segmentation resolution matches the input resolution of the CT volumes. K-fold cross-validation is applied to avoid evaluation bias, and accuracy is assessed by the Sørensen-Dice coefficient. Binary cross-entropy is employed as the loss metric. RESULTS: Average network accuracy is 98.81%, with maximum and minimum accuracies of 99.48% and 97.03% respectively. The standard deviation across the K folds is 0.5%. CONCLUSION: The network performs with considerable accuracy, and can reliably distinguish between colon, small intestine, lungs, and ambient air. The low standard deviation indicates high consistency. This method for automatic segmentation could serve as a supplement or alternative to threshold-based segmentation. Future studies will include an expanded dataset and a further optimized network.
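The two evaluation quantities named in the Methods, the Sørensen-Dice coefficient and the binary cross-entropy loss, can be written out directly. A minimal NumPy sketch of the standard definitions (not the study's implementation):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Sørensen-Dice: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

def binary_cross_entropy(prob, truth, eps=1e-7):
    """Mean binary cross-entropy between predicted probabilities and labels."""
    prob = np.clip(np.asarray(prob, dtype=float), eps, 1.0 - eps)
    truth = np.asarray(truth, dtype=float)
    return float(-np.mean(truth * np.log(prob) + (1.0 - truth) * np.log(1.0 - prob)))
```

In K-fold cross-validation, the Dice score would be computed on each held-out fold and the reported mean and standard deviation taken over the K resulting accuracies.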
Purpose: Colonoscopy is a complex procedure with considerable variation among patients, requiring years of experience to become proficient. Understanding the curvature of colons could enable practitioners to be more effective. The purpose of this research is to develop methods to analyze the curvature of patients’ colons, and to compare key segments of colons between supine and prone positions. Methods: The colon lumen in CT scans of ten patients is segmented. The following steps are automated by Python scripts in the 3D Slicer application: a set of center points along the colon is generated, and a curve is fit to these points. By identifying local maxima and minima in curvature, a curve can be defined between two consecutive local curvature minima. The angle of each curve is calculated over the length of the curve. Results: This automated process was used to identify and quantitatively analyze curves on the colon centerline in different patient positions. On average, there are 4.6 ± 3.8 more curves in the supine position than in the prone position. In the descending colon, there are more curves in the supine position, but curves in the prone position are larger. Conclusion: This process can quantify the curvature of colons, and can be adapted to consider other patient groups. Descriptive statistics indicate that the supine position has more curves in the descending colon, and that the prone position has sharper curves in the descending colon. These preliminary results motivate further work with a larger sample size, which may reveal additional significant differences.
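The curve-definition step in the Methods, delimiting individual curves between consecutive local curvature minima along the centerline, can be sketched as follows. The helper names are hypothetical and this is not the authors' 3D Slicer script:

```python
import numpy as np

def local_minima(curvature):
    """Indices of strict local minima along a sampled curvature profile."""
    c = np.asarray(curvature, dtype=float)
    return [i for i in range(1, len(c) - 1) if c[i] < c[i - 1] and c[i] < c[i + 1]]

def curve_segments(curvature):
    """Pair consecutive curvature minima to delimit individual curves.

    Each (start, end) index pair brackets one curve, whose peak
    curvature lies at a local maximum between the two minima.
    """
    minima = local_minima(curvature)
    return list(zip(minima[:-1], minima[1:]))
```

Given a curvature profile sampled along the fitted centerline curve, each returned segment would then be characterized by its angle and arc length, as in the Results.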