Transrectal ultrasound (TRUS) imaging offers real-time, low-cost visualization, and segmenting the prostate from TRUS images is essential for both preoperative diagnosis and intraoperative treatment. In this paper, an Adaptive Detail Compensation Network (ADC-Net) for 3D prostate segmentation is proposed, which uses convolutional neural networks (CNNs) to segment TRUS images automatically. The proposed method consists of a U-Net-based backbone network, a detail compensation module, three spatial-based attention modules, and an aggregation fusion module. A pre-trained ResNet-34 serves as the detail compensation module, compensating for the detailed information lost during down-sampling in the U-Net encoder. The spatial-based attention modules introduce multilevel features to refine single-layer features, thereby suppressing irrelevant background and enriching the contextual information of the foreground. Finally, the aggregation fusion module fuses the refined single-layer features to further enrich the prostate semantic information and filter out other irrelevant information in TRUS images, yielding the predicted prostate. In addition, a deep supervision mechanism is applied in our method and contributes substantially to network training. Experimental results show that the proposed ADC-Net achieves satisfactory results in 3D TRUS prostate segmentation, providing accurate detection of prostate regions.
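The spatial-based attention idea described above, refining a single-layer feature map with context drawn from multilevel features, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the channel-mean attention map, and the residual gating are illustrative assumptions, and the multilevel features are assumed to be pre-resampled to the target resolution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_refine(single_layer, multilevel):
    """Refine a single-layer feature map with a spatial attention map
    derived from multilevel context features (illustrative sketch).

    single_layer: (C, H, W) features from one encoder/decoder level.
    multilevel:   (C, H, W) aggregated multilevel context features,
                  assumed already resampled to the same resolution.
    """
    # Collapse channels to one spatial map (1, H, W) and squash to (0, 1):
    # high values mark likely foreground (prostate) regions.
    attn = sigmoid(multilevel.mean(axis=0, keepdims=True))
    # Gate the single-layer features, keeping a residual path so that
    # regions with low attention are attenuated rather than erased.
    return single_layer * attn + single_layer

# Toy usage with random features
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
ctx = rng.standard_normal((8, 16, 16))
refined = spatial_attention_refine(feat, ctx)
print(refined.shape)  # (8, 16, 16)
```

Because the attention map lies in (0, 1) and is added on top of the identity path, the refined features scale each spatial location of the input by a factor between 1 and 2, so background is de-emphasized relative to foreground without being discarded.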