15 March 2019 Improving V-Nets for multi-class abdominal organ segmentation
Segmentation is one of the most important tasks in medical image analysis. With the development of deep learning, fully convolutional networks (FCNs) have become the dominant approach for this task, and their extension to 3D has achieved considerable improvements in automated organ segmentation on volumetric imaging data such as computed tomography (CT). One popular FCN architecture for 3D volumes is V-Net, originally proposed for single-region segmentation. V-Net effectively addressed the imbalance between foreground and background voxels by introducing a loss function based on the Dice similarity metric. In this work, we extend the depth of the original V-Net to learn richer features that model the increased complexity of multi-class segmentation at the higher input/output resolutions made feasible by modern large-memory GPUs. Furthermore, we markedly improve the training behaviour of V-Net by employing batch normalization layers throughout the network, which stabilizes the optimization and yields faster, more stable convergence. We show that these architectural changes and refinements dramatically improve segmentation performance on a large abdominal CT dataset, reaching close to 90% average Dice score.
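To make the Dice-based training objective concrete, the following is a minimal NumPy sketch of a multi-class soft Dice loss of the kind the abstract refers to. This is a hypothetical illustration, not the authors' implementation: the function name, tensor layout (class channel first, then spatial dimensions), and the smoothing constant `eps` are assumptions for the example.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Multi-class soft Dice loss (illustrative sketch, not the paper's code).

    pred:   softmax probabilities, shape (C, D, H, W)
    target: one-hot ground-truth labels, same shape as pred
    """
    # Sum over the spatial dimensions, keeping one score per class.
    axes = tuple(range(1, pred.ndim))
    intersection = (pred * target).sum(axis=axes)
    cardinality = (pred + target).sum(axis=axes)
    # Per-class Dice coefficient; eps avoids division by zero for empty classes.
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    # Loss is 1 minus the mean Dice over classes, so a perfect match gives ~0.
    return 1.0 - dice_per_class.mean()
```

Because the Dice coefficient is a ratio of overlap to total volume, rare foreground classes contribute as much to the mean as the dominant background class, which is what mitigates the voxel-level class imbalance.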
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chen Shen, Fausto Milletari, Holger R. Roth, Hirohisa Oda, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, and Kensaku Mori "Improving V-Nets for multi-class abdominal organ segmentation", Proc. SPIE 10949, Medical Imaging 2019: Image Processing, 109490B (15 March 2019);
