A competency-based approach to colonoscopy training is particularly important because the amount of practice required to reach proficiency varies widely between trainees. Although numerous objective proficiency assessment frameworks have been validated in the literature, these frameworks rely on expert observers, a time-consuming process that has driven interest in automated proficiency rating of colonoscopies. This work investigates whether sixteen automatically computed performance metrics can measure improvement in novices over a series of practice attempts. Motion-tracking parameters were calculated for three groups: untrained novices, the same novices after completing training exercises, and experts. Participants had electromagnetic tracking markers fixed to their hands and to the scope tip, and each performed eight testing sequences designed by an experienced clinician. Novices were then trained on 30 phantoms and re-tested. The tracking data were analyzed using sixteen metrics computed by the Perk Tutor extension for Slicer, and statistical differences were assessed with a series of three t-tests, adjusted for multiple comparisons. All sixteen metrics differed significantly between untrained novices and experts, providing evidence of their validity as measures of performance: experts made fewer translational and rotational movements, followed a shorter and more efficient path, and completed the procedure faster. Pre- and post-training novices did not differ significantly in average velocity, motion smoothness, or path inefficiency.
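Several of the metrics named above (path length, average velocity, path inefficiency) and the multiple-comparison-adjusted t-tests can be sketched from raw marker positions as follows. This is an illustrative sketch under stated assumptions, not the Perk Tutor implementation: the function names, the fixed sampling interval `dt`, and the choice of a Bonferroni adjustment (the abstract does not name the correction used) are all assumptions.

```python
import numpy as np
from scipy import stats

def path_length(positions):
    """Total distance travelled by a tracked marker: sum of step lengths
    over an (N, 3) array of sampled positions."""
    steps = np.diff(positions, axis=0)
    return np.linalg.norm(steps, axis=1).sum()

def average_velocity(positions, dt):
    """Mean speed over the recording, assuming a fixed sampling interval dt."""
    return path_length(positions) / (dt * (len(positions) - 1))

def path_inefficiency(positions):
    """Ratio of travelled distance to the straight-line start-to-end
    distance; 1.0 is a perfectly direct path."""
    direct = np.linalg.norm(positions[-1] - positions[0])
    return path_length(positions) / direct

def bonferroni_ttests(groups_a, groups_b, alpha=0.05):
    """Welch t-test per metric, with a Bonferroni adjustment for the
    number of comparisons (an assumed correction, for illustration).
    Returns (t, adjusted p, significant) per metric."""
    m = len(groups_a)
    results = []
    for a, b in zip(groups_a, groups_b):
        t, p = stats.ttest_ind(a, b, equal_var=False)
        results.append((t, min(p * m, 1.0), p * m < alpha))
    return results
```

In practice each of the sixteen metrics would be computed per testing sequence, then compared between groups (e.g. untrained novices vs. experts) with one adjusted t-test per metric.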
PURPOSE: Segmentation of Computed Tomography (CT) colonography images is important to both colorectal research and diagnosis. The process often relies on manual interaction and therefore depends on the user, introducing unavoidable interrater variability. An accurate method that eliminates this variability would be preferable. Current barriers to automated segmentation include discontinuities of the colon, liquid pooling, and the fact that all air appears at the same intensity on the scan. This study proposes an automated segmentation approach employing a 3D implementation of U-Net. METHODS: The research is conducted on 76 CT scans. The U-Net comprises an analysis path and a synthesis path, each with 7 convolutional layers. By the nature of the U-Net, the output segmentation resolution matches the input resolution of the CT volumes. K-fold cross-validation is applied to avoid evaluative bias, accuracy is assessed by the Sørensen-Dice coefficient, and binary cross-entropy is employed as the loss function. RESULTS: Average network accuracy is 98.81%, with maximum and minimum accuracies of 99.48% and 97.03%, respectively. The standard deviation across the K fold accuracies is 0.5%. CONCLUSION: The network performs with considerable accuracy and reliably distinguishes the colon from the small intestine, lungs, and ambient air. The low standard deviation indicates high consistency. This method for automatic segmentation could serve as a supplement or alternative to threshold-based segmentation. Future studies will include an expanded dataset and a further optimized network.
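The two evaluation quantities named in METHODS, the Sørensen-Dice coefficient and the binary cross-entropy loss, can be sketched for volumetric masks as follows. This is a minimal NumPy illustration, not the study's code; the function names and the `eps` smoothing/clipping terms are assumptions added for numerical stability.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Sørensen-Dice overlap between two binary masks:
    2|A ∩ B| / (|A| + |B|). Works on volumes of any shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

def binary_cross_entropy(probs, target, eps=1e-7):
    """Mean binary cross-entropy between predicted voxel probabilities
    and a binary ground-truth mask; probabilities are clipped away from
    0 and 1 to keep the logarithms finite."""
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(target * np.log(probs)
                    + (1.0 - target) * np.log(1.0 - probs))
```

During training, the cross-entropy would be minimized on predicted probability volumes, while the Dice coefficient would score the thresholded segmentations against ground truth in each cross-validation fold.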