16 March 2020
Deep learning-based automatic prostate segmentation in 3D transrectal ultrasound images from multiple acquisition geometries and systems
Transrectal ultrasound (TRUS) fusion-guided biopsy and brachytherapy (BT) offer promising diagnostic and therapeutic improvements to conventional practice for prostate cancer. One key component of these procedures is accurate segmentation of the prostate in three-dimensional (3D) TRUS images to define the margins used for accurate targeting and guidance. However, manual prostate segmentation is a time-consuming and difficult process that must be completed by the physician intraoperatively, often while the patient is under sedation (biopsy) or anesthesia (BT). Providing physicians with a quick and accurate prostate segmentation immediately after acquiring a 3D TRUS image could benefit multiple minimally invasive prostate interventional procedures and greatly reduce procedure time. Our solution to this limitation is a convolutional neural network that segments the prostate in 3D TRUS images acquired with multiple commercial ultrasound systems. A modified U-Net was trained on 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and BT procedures. Our approach to 3D segmentation involved predicting on 2D radial slices, which were then reconstructed into a 3D geometry. Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 unseen 3D side-fire images. Pixel-map comparisons [Dice similarity coefficient (DSC), recall, and precision] and volume percent difference (VPD) were computed to assess error in the segmentation algorithm. Our algorithm achieved a median DSC of 93.5% and a median VPD of 5.89% with a computation time under 0.7 s, offering the potential to reduce treatment time during prostate interventional procedures.
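To make the radial-slice strategy concrete, the sketch below extracts 2D slices by rotating a sampling plane about a volume's central z-axis and writes binary 2D predictions back into a 3D mask. This is a minimal illustration only: the function names (extract_radial_slice, reconstruct_volume), the assumed axis orientation, the linear interpolation order, and the nearest-voxel write-back are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_radial_slice(volume: np.ndarray, angle_rad: float) -> np.ndarray:
    """Sample a 2D slice on a plane rotated by angle_rad about the central z-axis."""
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    half_width = int(np.hypot(cy, cx))             # cover the full in-plane extent
    r = np.arange(-half_width, half_width)         # signed radial positions
    zz, rr = np.meshgrid(np.arange(nz), r, indexing="ij")
    yy = cy + rr * np.sin(angle_rad)
    xx = cx + rr * np.cos(angle_rad)
    coords = np.stack([zz.astype(float), yy, xx])  # shape (3, nz, 2 * half_width)
    return map_coordinates(volume, coords, order=1, mode="constant")

def reconstruct_volume(slice_preds, angles, shape):
    """Write binary 2D radial predictions back into a 3D mask (nearest voxel).

    Voxels lying between radial planes stay zero here; in practice a
    hole-filling or interpolation step would follow (not shown).
    """
    nz, ny, nx = shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    half_width = int(np.hypot(cy, cx))
    r = np.arange(-half_width, half_width)
    zz, rr = np.meshgrid(np.arange(nz), r, indexing="ij")
    mask = np.zeros(shape, dtype=np.uint8)
    for pred, angle in zip(slice_preds, angles):
        yy = np.rint(cy + rr * np.sin(angle)).astype(int)
        xx = np.rint(cx + rr * np.cos(angle)).astype(int)
        ok = (yy >= 0) & (yy < ny) & (xx >= 0) & (xx < nx)
        # maximum.at handles duplicate voxel indices without overwriting 1s with 0s
        np.maximum.at(mask, (zz[ok], yy[ok], xx[ok]), pred[ok].astype(np.uint8))
    return mask

# Illustrative usage: 30 radial slices spanning 180 degrees.
# angles = np.deg2rad(np.arange(0, 180, 6))
# slices = [extract_radial_slice(volume, a) for a in angles]  # fed to the 2D network
# mask3d = reconstruct_volume(network_predictions, angles, volume.shape)
```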
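The reported error metrics follow standard definitions, which a short sketch can make explicit. The snippet below computes DSC, recall, precision, and VPD from a pair of equally shaped binary masks; variable names are illustrative, and the formulas are the conventional ones rather than code from the paper.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-map agreement and volume error between two binary masks.

    Assumes a non-empty ground-truth mask of the same shape as the prediction.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)    # true-positive voxels
    fp = np.count_nonzero(pred & ~truth)   # false positives
    fn = np.count_nonzero(~pred & truth)   # false negatives
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),  # Dice similarity coefficient
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
        # Volume percent difference: absolute volume error relative to ground truth
        "VPD": 100.0 * abs(int(pred.sum()) - int(truth.sum())) / int(truth.sum()),
    }
```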
Nathan Orlando, Derek J. Gillies, Igor Gyacskov, and Aaron Fenster
"Deep learning-based automatic prostate segmentation in 3D transrectal ultrasound images from multiple acquisition geometries and systems", Proc. SPIE 11315, Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, 113152I (16 March 2020); https://doi.org/10.1117/12.2549804