Splenomegaly segmentation on computed tomography (CT) abdominal scans is essential for identifying spleen biomarkers and has applications for quantitative assessment in patients with liver and spleen disease. Automated segmentation with deep convolutional neural networks has shown promising performance for splenomegaly segmentation. However, manual labeling of abdominal structures is resource intensive, so labeled abdominal imaging data are a rare resource despite their essential role in algorithm training. Hence, the set of annotated labels (e.g., spleen only) is typically limited within a single study. With the development of data sharing techniques, however, more and more publicly available labeled cohorts are becoming available from different sources. A key new challenge is to co-learn from such multi-source data, even when each study labels a different number of abdominal organs. Thus, it is appealing to design a co-learning strategy to train a deep network from heterogeneously labeled scans. In this paper, we propose a new deep convolutional neural network (DCNN) based method that integrates heterogeneous multi-source labeled cohorts for splenomegaly segmentation. To enable the proposed approach, a novel loss function based on the Dice similarity coefficient is introduced to adaptively learn multi-organ information from different sources. Three cohorts were employed in our experiments: the first cohort (98 CT scans) has only splenomegaly labels, while the second training cohort (100 CT scans) has 15 distinct anatomical labels with normal spleens. A separate, independent cohort of 19 splenomegaly CT scans with labeled spleens was used as the testing cohort.
The proposed method achieved the highest median Dice similarity coefficient value (0.94), which is superior (p < 0.01 against each other method) to the baselines of multi-atlas segmentation (0.86), SS-Net segmentation with only spleen labels (0.90), and U-Net segmentation with multi-organ training (0.91). Our approach for adapting the loss function and training structure is not specific to the abdominal context and may be beneficial in other situations where datasets with varied label sets are available.
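The abstract above describes a Dice-based loss that adaptively learns only from the organs each cohort actually annotates. A minimal sketch of one plausible realization is a masked Dice loss in which organ channels unlabeled in a given source are simply excluded from the average; the masking rule and function signature here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def masked_dice_loss(pred, target, label_mask, eps=1e-6):
    """Dice loss averaged only over organs labeled in a given cohort.

    pred:       (C, ...) per-organ probability maps
    target:     (C, ...) one-hot ground-truth maps
    label_mask: length-C indicator, 1 if organ c is annotated in this
                cohort, 0 otherwise (hypothetical interface).
    """
    dice_per_organ = []
    for c in range(pred.shape[0]):
        if label_mask[c] == 0:
            continue  # skip organs unlabeled in this cohort
        inter = 2.0 * np.sum(pred[c] * target[c])
        denom = np.sum(pred[c]) + np.sum(target[c]) + eps
        dice_per_organ.append(inter / denom)
    # loss = 1 - mean Dice over the annotated organs only
    return 1.0 - float(np.mean(dice_per_organ))
```

With this kind of masking, a spleen-only cohort contributes gradients only through the spleen channel, while the 15-organ cohort supervises all of its labeled channels.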
Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.
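The PatchGAN discriminator mentioned above scores local patches rather than the whole image, so each region of the generated segmentation is judged for plausibility separately, which is what discourages locally implausible (false positive) regions. A hedged sketch of such a patch-wise discriminator objective follows; the least-squares form is an assumption (pix2pix-style cGANs commonly use cross-entropy instead), and `d_out_real`/`d_out_fake` stand for the discriminator's per-patch score maps.

```python
import numpy as np

def patchgan_real_fake_loss(d_out_real, d_out_fake):
    """Least-squares discriminator loss over per-patch score maps.

    d_out_real / d_out_fake are 2D (or 3D) maps of scores, one per
    local patch, rather than a single scalar per image. Real patches
    are pushed toward 1, fake (generated) patches toward 0.
    """
    loss_real = np.mean((d_out_real - 1.0) ** 2)  # real patches -> 1
    loss_fake = np.mean(d_out_fake ** 2)          # fake patches -> 0
    return 0.5 * (loss_real + loss_fake)
```

In a full cGAN setup, the generator would minimize a weighted sum of a segmentation loss and the adversarial term computed from these patch scores.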
Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2 weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with the FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with the FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model, and illustrate the potential of deep learning approaches to transcend imaging modalities.
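All of the abstracts above report the Dice similarity coefficient (DSC) as the primary overlap metric. For concreteness, a minimal implementation of DSC between two binary masks is:

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    seg = np.asarray(seg).astype(bool)
    gt = np.asarray(gt).astype(bool)
    total = seg.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, gt).sum() / total
```

A DSC of 0.930 for spleens, for example, means the automated and manual masks overlap in roughly 93% of their combined volume.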
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE) approach. To further control the outliers, semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved 0.9713 Pearson correlation compared with the manual segmentation. The results demonstrated that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.
Automatic spleen segmentation on CT is challenging due to the complexity of abdominal structures. Multi-atlas segmentation (MAS) has been shown to be a promising approach to conduct spleen segmentation. To deal with the substantial registration errors between heterogeneous abdominal CT images, the context learning for performance level estimation (CLSIMPLE) method was previously proposed. The context learning method generates a probability map for a target image using a Gaussian mixture model (GMM) as the prior in a Bayesian framework. However, CLSIMPLE typically trains a single GMM from the entire heterogeneous training atlas set. Therefore, the estimated spatial prior maps might not represent specific target images accurately. Rather than using all training atlases, we propose an adaptive GMM based context learning technique (AGMMCL) to train the GMM adaptively using subsets of the training data, with the subsets tailored for different target images. Training sets are selected adaptively based on the similarity between atlases and the target images using cranio-caudal length, which is derived manually from the target image. To validate the proposed method, a heterogeneous dataset with a large variation of spleen sizes (100 cc to 9000 cc) is used. We designate a metric of size to differentiate each group of spleens, with 0 to 100 cc as small, 200 to 500 cc as medium, 500 to 1000 cc as large, 1000 to 2000 cc as XL, and 2000 cc and above as XXL. From the results, AGMMCL leads to more accurate spleen segmentations by training GMMs adaptively for different target images.
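The core adaptive step described above is choosing a size-matched training subset before fitting the GMM spatial prior. A minimal sketch of that selection by cranio-caudal length follows; the nearest-neighbour rule and the choice of `k` are assumptions for illustration, not the paper's stated criterion.

```python
import numpy as np

def select_atlases_by_cc_length(atlas_cc_lengths, target_cc_length, k=5):
    """Adaptive atlas subset selection (sketch of AGMMCL's selection
    step): keep the k atlases whose manually measured cranio-caudal
    spleen length is closest to the target's, so the GMM spatial
    prior is trained only on size-matched examples.
    """
    diffs = np.abs(np.asarray(atlas_cc_lengths, dtype=float)
                   - target_cc_length)
    return list(np.argsort(diffs)[:k])  # indices of the k best matches
```

The GMM would then be fit only on the voxel coordinates/intensities of the selected atlases, yielding a spatial prior tuned to the target's spleen size class (small through XXL).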