Image segmentation is increasingly applied in medical settings as recent advances in deep learning have greatly expanded its potential applications. Urology in particular is primed for the adoption of a real-time image segmentation system, with the long-term aim of automating endoscopic stone treatment. In this project, we explored supervised deep learning models for annotating kidney stones in surgical endoscopic video feeds. In this paper, we describe how we built a dataset from the raw videos and how we developed a pipeline that automates as much of the process as possible. For the segmentation task, we adapted and analyzed three baseline deep learning models (U-Net, U-Net++, and DenseNet) to predict annotations on the frames of the endoscopic videos, with the best model achieving accuracy above 90%. To show clinical potential for real-time use, we also confirmed that our best trained model can accurately annotate new videos at 30 frames per second. Our results demonstrate that the proposed method justifies continued development and study of image segmentation for annotating ureteroscopic video feeds.
KEYWORDS: 3D modeling, Data modeling, 3D image reconstruction, Surgery, Imaging systems, 3D image processing, Luminescence, Image segmentation, Endoscopes, Kidney, Robotic surgery
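The abstract reports accuracy above 90% for the best segmentation model. A common way to score such predictions against annotated ground truth is the Dice coefficient over binary masks. The sketch below is illustrative only, not the authors' evaluation code; the function name and the toy 1-D masks are assumptions.

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 sequences.

    Dice = 2 * |P intersect T| / (|P| + |T|); 1.0 means perfect overlap.
    """
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    # Two empty masks agree perfectly by convention.
    return 1.0 if total == 0 else 2.0 * intersection / total


# Toy example: a predicted stone mask vs. the annotated ground truth.
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice_score(pred, truth))  # 0.8
```

In practice the masks would be per-frame 2-D arrays flattened before scoring, and the same quantity (or the closely related IoU) is typically averaged over all frames of a held-out video.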
Over the past several years, researchers have made significant progress toward providing image guidance for the da Vinci system, utilizing data sources such as robot kinematic data, endoscope image data, and preoperative medical images. One data source that could provide additional subsurface information for use in image guidance is the da Vinci's FireFly camera system. FireFly is a fluorescence imaging feature for the da Vinci system that uses injected indocyanine green dye and special endoscope filters to illuminate subsurface anatomical features as the surgeon operates. FireFly is now standard for many surgical procedures with the da Vinci robot; however, it is currently challenging to understand spatial relationships between preoperative CT images and intraoperative fluorescence images. Here, we extend our image guidance system to incorporate FireFly information, so that the surgeon can view FireFly data in the image guidance display while operating with the da Vinci robot. We present a method for reconstructing 3D models of the FireFly fluorescence data from endoscope images and mapping the models into an image guidance display that also includes segmented, registered preoperative CT images. We analyze the accuracy of our reconstruction and mapping method and present a proof-of-concept application in which we reconstruct a fluorescent subsurface blood vessel and map it into our image guidance display. Our method could be used to provide surgeons with additional context for the FireFly fluorescence imaging data, or to provide additional data for computing or verifying the registration between the robot and preoperative images.
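The mapping step described above, placing a reconstructed fluorescence model into the same coordinate frame as the registered preoperative CT, amounts to applying a rigid transform (rotation plus translation) to each reconstructed 3D point. The sketch below illustrates only that geometric step; the specific rotation, translation, and vessel points are hypothetical, not values from the paper.

```python
def apply_rigid_transform(R, t, points):
    """Map 3D points through a rigid transform p' = R @ p + t.

    R is a 3x3 rotation matrix as nested lists; t is a translation [tx, ty, tz].
    Returns the transformed points as a list of [x, y, z] lists.
    """
    return [
        [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
        for p in points
    ]


# Hypothetical registration: a 90-degree rotation about z plus a translation,
# applied to two reconstructed fluorescence points (units: millimetres).
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
t = [10.0, 0.0, 5.0]
vessel_points = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
print(apply_rigid_transform(R, t, vessel_points))
# [[10.0, 1.0, 5.0], [8.0, 0.0, 5.0]]
```

In a real pipeline the transform would come from the robot-to-CT registration, and accuracy would be assessed by comparing mapped points against landmarks visible in both modalities.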