KEYWORDS: Kidney, Image segmentation, Data modeling, Magnetic resonance imaging, 3D modeling, Performance modeling, Statistical modeling, 3D image processing, Tumor growth modeling, 3D acquisition
Purpose: Multiparametric magnetic resonance imaging (mp-MRI) is being investigated for kidney cancer because of its superior soft tissue contrast. The need for manual labels makes developing a supervised kidney segmentation algorithm for each mp-MRI protocol challenging. Here, we developed a transfer learning-based approach to improve kidney segmentation on small datasets of five other mp-MRI sequences.
Approach: We proposed a fully automated two-dimensional (2D) attention U-Net model for kidney segmentation on a T1-weighted nephrographic phase contrast-enhanced (CE) MRI (T1W-NG) dataset (N = 108). The pretrained weights of the T1W-NG kidney segmentation model were transferred to models for five other distinct mp-MRI sequences (T2W, T1W in-phase (T1W-IP), T1W out-of-phase (T1W-OP), T1W precontrast (T1W-PRE), and T1W corticomedullary CE (T1W-CM); N = 50) and fine-tuned by unfreezing the layers. The individual model performances were evaluated, with and without transfer learning, using fivefold cross-validation on average Dice similarity coefficient (DSC), absolute volume difference, Hausdorff distance (HD), and center-of-mass distance (CD) between algorithm-generated and manually segmented kidneys.
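Two of the evaluation metrics above, DSC and absolute volume difference, can be sketched in NumPy as follows. This is a minimal illustration, not the paper's evaluation code; the toy masks and the `voxel_volume_ml` parameter are hypothetical.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def absolute_volume_difference(pred: np.ndarray, gt: np.ndarray,
                               voxel_volume_ml: float = 1.0) -> float:
    """Absolute difference in segmented volume, scaled by an assumed voxel volume."""
    return abs(int(pred.sum()) - int(gt.sum())) * voxel_volume_ml

# Toy 3D masks for illustration
gt = np.zeros((4, 4, 4), dtype=np.uint8)
gt[1:3, 1:3, 1:3] = 1   # 8 foreground voxels
pred = gt.copy()
pred[1, 1, 1] = 0       # prediction misses one voxel -> 7 voxels
print(round(dice_similarity(pred, gt), 4))   # 2*7/(7+8) = 0.9333
print(absolute_volume_difference(pred, gt))  # 1.0
```

In practice the masks would come from the network output (thresholded probability maps) and the manual segmentations, with the voxel volume taken from the image header.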
Results: The developed 2D attention U-Net model for T1W-NG produced a kidney segmentation DSC of 89.34 ± 5.31%. Compared with randomly initialized weight models, the transfer learning-based models of the five mp-MRI sequences showed an average increase of 2.96% in kidney segmentation DSC (p = 0.001 to 0.006). Specifically, the transfer-learning approach increased average DSC on T2W from 87.19% to 89.90%, T1W-IP from 83.64% to 85.42%, T1W-OP from 79.35% to 83.66%, T1W-PRE from 82.05% to 85.94%, and T1W-CM from 85.65% to 87.64%.
Conclusions: We demonstrate that a model pretrained for automated kidney segmentation on one mp-MRI sequence improved automated kidney segmentation on five additional sequences.
KEYWORDS: Kidney, Image segmentation, Magnetic resonance imaging, 3D modeling, Data modeling, 3D image processing, Tumor growth modeling, Algorithm development, Tissues, Cancer
Multi-parametric magnetic resonance imaging (mp-MRI) is a promising tool for the diagnosis of renal masses and may outperform computed tomography (CT) in differentiating benign from malignant renal masses due to its superior soft tissue contrast. Deep learning (DL)-based methods for kidney segmentation are under-explored in mp-MRI, which consists of several pulse sequences, primarily T2-weighted (T2W) and contrast-enhanced (CE) images. Multi-parametric MRI images exhibit domain shift due to differences in acquisition systems and imaging protocols, which limits the generalizability of image segmentation methods. To perform similar automated kidney segmentation on another mp-MRI sequence, a model trained from scratch needs a large dataset with manual segmentations, which is labor intensive and time consuming. In this paper, we first trained a DL-based method using 108 cases of labeled data to contour kidneys on T1-weighted nephrographic phase CE-MRI (T1W-NG). We then applied a transfer learning approach to other mp-MRI images using pretrained weights from the source domain, thus eliminating the need for large manually annotated datasets in the target domain. The fully automated 2D U-Net for kidney segmentation in the source domain, trained on a total of 108 3D T1W-NG images, yielded a Dice similarity coefficient (DSC) of 0.91 ± 0.07 on test cases. Transferring the pretrained T1W-NG model weights to the smaller target-domain T2W dataset, containing a total of 50 3D images, produced a DSC of 0.90 ± 0.06 (p < 0.05) for automated kidney segmentation, an improvement of 3.43% in DSC compared with the model trained without transfer learning (T2W-UNet model).
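The weight-transfer and fine-tuning step described above can be illustrated with a minimal PyTorch sketch. The two-layer stand-in network is purely illustrative (the paper uses an attention U-Net), and the in-memory buffer stands in for a saved checkpoint:

```python
import io

import torch
import torch.nn as nn

def build_model() -> nn.Sequential:
    """Tiny stand-in for a segmentation network: an 'encoder' conv and a 1x1 head."""
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),  # encoder stand-in
        nn.Conv2d(8, 1, kernel_size=1),                        # decoder/head stand-in
    )

# 1) Train the source model on T1W-NG (training loop omitted), then save its weights.
source = build_model()
checkpoint = io.BytesIO()
torch.save(source.state_dict(), checkpoint)

# 2) Initialize the target-domain model (e.g., T2W) from the pretrained weights.
checkpoint.seek(0)
target = build_model()
target.load_state_dict(torch.load(checkpoint))

# 3) Fine-tune with all layers unfrozen, as in the paper; a common variant is to
#    freeze early layers first and unfreeze them gradually.
for p in target.parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in target.parameters() if p.requires_grad)
print(trainable)  # all parameters remain trainable
```

Because every layer is unfrozen, the fine-tuning stage optimizes the full network on the small target-domain dataset, starting from the source-domain solution rather than a random initialization.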