We propose a method to automatically segment the prostate from transrectal ultrasound (TRUS) images based on a multi-derivate deeply supervised network and multi-directional contour refinement. A 3D multi-derivate V-Net is introduced to enable end-to-end segmentation. A deep supervision mechanism is integrated into the hidden layers to cope with the optimization difficulties of training such a network with limited training data. The trained network generates a probability map of the prostate, which is fused by multi-directional contour refinement to reconstruct the prostate contour. The proposed algorithm was evaluated on data from 30 patients with TRUS images and manual contours. The mean Dice similarity coefficient (DSC) and mean surface distance (MSD) were 0.92 and 0.60 mm, respectively, demonstrating the high accuracy of the proposed segmentation method. We have developed a novel deep learning-based method and demonstrated that it can significantly improve contour accuracy, especially around the apex and base regions. This segmentation technique could be a useful tool in ultrasound-guided interventions for prostate-cancer diagnosis and treatment.
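To illustrate the deep-supervision idea described in the abstract, the following is a minimal sketch (not the authors' implementation): auxiliary predictions from hidden decoder stages of a 3D V-Net are upsampled to full resolution and their Dice losses are added, with decaying weights, to the loss of the final output. The tensor shapes, weight schedule, and helper names are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss for a binary probability map of shape (N, 1, D, H, W)."""
    inter = (prob * target).sum(dim=(2, 3, 4))
    union = prob.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def deeply_supervised_loss(main_logits, aux_logits_list, target, aux_weight=0.3):
    """Combine the main-output loss with losses on auxiliary (hidden-layer) outputs.

    main_logits:     (N, 1, D, H, W) logits from the final V-Net layer
    aux_logits_list: logits from hidden decoder stages at coarser resolutions
    target:          (N, 1, D, H, W) binary ground-truth mask
    """
    loss = soft_dice_loss(torch.sigmoid(main_logits), target)
    for k, aux in enumerate(aux_logits_list):
        # Upsample each auxiliary prediction to the target resolution so it
        # can be supervised by the same manual contour.
        aux_up = F.interpolate(aux, size=target.shape[2:],
                               mode="trilinear", align_corners=False)
        # Deeper (coarser) side outputs receive smaller weights.
        loss = loss + aux_weight * (0.5 ** k) * soft_dice_loss(
            torch.sigmoid(aux_up), target)
    return loss
```

In this sketch, supervising the hidden-layer outputs gives the deeper stages a direct gradient signal, which is one common way such a mechanism eases optimization when training data are limited.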
Yang Lei, Tonghe Wang, Bo Wang, Xiuxiu He, Sibo Tian, Ashesh B. Jani, Hui Mao, Walter J. Curran, Pretesh Patel, Tian Liu, Xiaofeng Yang, "Ultrasound prostate segmentation based on 3D V-Net with deep supervision," Proc. SPIE 10955, Medical Imaging 2019: Ultrasonic Imaging and Tomography, 109550V (15 March 2019); https://doi.org/10.1117/12.2512558