KEYWORDS: Data modeling, Magnetic resonance imaging, Performance modeling, Education and training, Data privacy, Cross validation, Feature extraction, Tumors, Radiomics, Mixtures
Deep learning models have shown potential in medical image analysis tasks. However, training a generalizable deep learning model requires large amounts of patient data, usually gathered from multiple institutions, which may raise privacy concerns. Federated learning (FL) provides an alternative to sharing data across institutions. Nonetheless, FL is susceptible to several challenges, including inversion attacks on model weights, heterogeneous data distributions, and bias. This study addresses heterogeneity and bias in multi-institution patient data by proposing domain-adaptive FL modeling that uses several radiomics (volume, fractal, texture) features for O6-methylguanine-DNA methyltransferase (MGMT) classification across multiple institutions. The proposed domain-adaptive FL MGMT classification inherently offers differential privacy (DP) for the patient data. For domain adaptation, two techniques are compared: a mixture of experts (ME) with a gating network and adversarial alignment. The proposed method is evaluated on a publicly available multi-institution dataset (UPENN-GBM, UCSF-PDGM, RSNA-ASNR-MICCAI BraTS-2021) with a total of 1007 patients. Our experiments with 5-fold cross-validation suggest that domain-adaptive FL offers improved performance, with a mean accuracy of 69.93% ± 4.8% and an area under the curve of 0.655 ± 0.055 across multiple institutions. In addition, analysis of the probability density of the gating network for domain-adaptive FL identifies the institution that may bias the global model prediction due to increased heterogeneity for a given input. Our comparative analysis shows that the proposed method with bias identification offers the best predictive performance compared with different commonly employed FL and baseline methods in the literature.
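The mixture-of-experts idea above can be sketched as follows: a gating network weights per-institution experts, and the gating probabilities themselves expose which institution dominates a given prediction. This is a minimal NumPy sketch, not the paper's implementation; the expert and gate parameters, their shapes, and the `predict` helper are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-institution expert weights for a linear MGMT classifier
# (radiomics features -> 2 classes: methylated / unmethylated).
n_institutions, n_features, n_classes = 3, 8, 2
experts = [rng.normal(size=(n_features, n_classes)) for _ in range(n_institutions)]

# Gating network: maps input features to a probability over experts.
gate_w = rng.normal(size=(n_features, n_institutions))

def predict(x):
    gate = softmax(x @ gate_w)                    # (n_institutions,)
    logits = np.stack([x @ w for w in experts])   # (n_institutions, n_classes)
    mixed = (gate[:, None] * logits).sum(axis=0)  # gate-weighted combination
    return softmax(mixed), gate

x = rng.normal(size=n_features)
probs, gate = predict(x)
# Inspecting the gating density per input reveals which institution's
# expert dominates; a persistently dominant expert can flag bias.
dominant = int(np.argmax(gate))
```

In this toy setup the gating output plays the role of the "probability density of the gating network" the abstract analyzes for bias identification.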
KEYWORDS: Wavelets, Data modeling, Histopathology, Tumors, Image segmentation, Machine learning, Tissues, Education and training, Medical imaging, Breast cancer
Federated learning (FL) is a promising machine learning approach for developing a data-driven global model from collaborative local models across multiple institutions. However, the heterogeneity of medical imaging data is one of the challenges within FL. This heterogeneity is caused by variation in imaging scanner protocols across institutions, which may result in weight shift among local models, leading to deterioration in the predictive accuracy of the global model. Prevailing approaches apply different FL averaging techniques to enhance the performance of the global model, ignoring the distinct imaging features of the local domain. In this work, we address both local and global model weight shift by introducing multiscale amplitude harmonization of the imaging in the local models using Haar and harmonic wavelets. First, we tackle the local model weight shift by transforming the image feature space into a multiscale frequency space using multiscale-based harmonization, which aims to achieve a harmonized image feature space across local models. Second, based on the harmonized image feature space, a weighted regularization term is applied to the local models, effectively mitigating weight shift within them. This weighted regularization also helps manage global model shift by aggregating the optimized local models. We evaluate the proposed method on the publicly available histopathological datasets MoNuSAC2018 and TNBC for nuclei segmentation, and the Camelyon17 dataset for tumor tissue classification. The average testing accuracies are 96.55% and 92.47% for tumor tissue classification, while Dice coefficients are 84.33% and 84.46% for nuclei segmentation, with Haar and harmonic multiscale-based harmonization, respectively. Comparison results for nuclei segmentation and tumor tissue classification on histopathological data show that our proposed methods outperform state-of-the-art FL methods.
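The harmonization step above can be illustrated with a one-level 2D Haar decomposition whose subband amplitudes are rescaled to a shared norm, so patches from scanners with different gains land in a comparable feature space. This is a simplified stand-in for the paper's multiscale amplitude harmonization, assuming square even-sized patches; the function names and the unit target norm are illustrative choices, not the authors' code.

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar decomposition into approximation (LL) and
    # detail (LH, HL, HH) subbands; image dimensions must be even.
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return [ll, lh, hl, hh]

def harmonize_amplitude(img, target_norm=1.0):
    # Rescale each subband to a shared amplitude (L2 norm) so local
    # feature spaces align across institutions.
    bands = haar2d(img.astype(float))
    return [b * (target_norm / (np.linalg.norm(b) + 1e-12)) for b in bands]

rng = np.random.default_rng(1)
site_a = rng.normal(0, 1, (8, 8))   # toy patches from two "institutions"
site_b = rng.normal(0, 5, (8, 8))   # same anatomy, different scanner gain
bands_a = harmonize_amplitude(site_a)
bands_b = harmonize_amplitude(site_b)
```

After harmonization, both sites' subbands share the same amplitude regardless of the original intensity scale, which is the property the local regularization in the abstract builds on.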
KEYWORDS: Education and training, Image segmentation, Liver, Data modeling, Deep learning, Cross validation, Kidney, Visualization, Computed tomography, Visual process modeling
Deep learning (DL)-based medical imaging and image segmentation algorithms achieve impressive performance on many benchmarks. Yet the efficacy of deep learning methods for future clinical applications may be questionable due to their limited ability to reason with uncertainty and to interpret probable areas of failure in prediction decisions. It is therefore desirable that such a deep learning model for segmentation or classification be able to reliably predict its confidence and map it back to the original imaging cases to interpret the prediction decisions. In this work, uncertainty estimation for the multiorgan segmentation task is evaluated to interpret predictive modeling in DL solutions. We use the state-of-the-art nnU-Net to segment 15 abdominal organs (spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, pancreas, right adrenal gland, left adrenal gland, duodenum, bladder, prostate/uterus) using 200 patient cases from the Multimodality Abdominal Multi-Organ Segmentation Challenge 2022. The softmax probabilities from different variants of nnU-Net are then used to compute the knowledge uncertainty of the deep learning framework. Knowledge uncertainty from the ensemble of DL models is used to quantify and visualize class activation maps for two example segmented organs. The preliminary results of our model show that class activation maps may be used to interpret the prediction decisions made by the DL model used in this study.
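Knowledge (epistemic) uncertainty from ensemble softmax outputs, as described above, is commonly computed as the mutual information between the prediction and the model choice: the entropy of the ensemble-averaged posterior minus the average per-model entropy. The sketch below assumes this standard decomposition; the array shapes and the toy ensemble are illustrative, not taken from the paper.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    # Shannon entropy in nats along the class axis.
    return -(p * np.log(p + eps)).sum(axis=axis)

def knowledge_uncertainty(member_probs):
    # member_probs: (n_models, n_voxels, n_classes) softmax outputs from
    # an ensemble of segmentation models (e.g., nnU-Net variants).
    # Knowledge uncertainty = total uncertainty - expected data uncertainty,
    # i.e., the mutual information between prediction and model choice.
    mean_p = member_probs.mean(axis=0)                # ensemble posterior
    total = entropy(mean_p)                           # total, per voxel
    expected_data = entropy(member_probs).mean(axis=0)
    return total - expected_data                      # >= 0 by concavity

# Toy example: three ensemble members, four voxels, three classes.
rng = np.random.default_rng(2)
logits = rng.normal(size=(3, 4, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
ku = knowledge_uncertainty(probs)
```

Voxels where the members disagree get high values, which is what the per-voxel knowledge-uncertainty maps in the abstract visualize.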
Glioblastoma multiforme (GBM) is one of the most malignant of all high-grade brain tumors. Temozolomide (TMZ) is the first-line chemotherapeutic regimen for glioblastoma patients. The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) gene is a prognostic biomarker of tumor sensitivity to TMZ chemotherapy. However, the standard procedure for assessing MGMT methylation status is an invasive surgical biopsy, whose accuracy is susceptible to the resection sample and to tumor heterogeneity. Recently, radiogenomics, which associates radiological image phenotypes with genetic or molecular mutations, has shown promise in the non-invasive assessment of radiotherapeutic treatment. This study proposes a machine-learning framework for MGMT classification with uncertainty analysis utilizing imaging features extracted from multimodal magnetic resonance imaging (mMRI). The imaging features include conventional texture and volumetric features, as well as fractal and multi-resolution fractal texture features. The proposed method is evaluated with the publicly available BraTS-TCIA-GBM pre-operative scans and TCGA datasets with 114 patients. The experiment with 10-fold cross-validation suggests that the fractal and multi-resolution fractal texture features offer improved prediction of MGMT status. Uncertainty analysis using an ensemble of Stochastic Gradient Langevin Boosting models along with multi-resolution fractal features offers an accuracy of 71.74% and an area under the curve of 0.76. Finally, our analysis shows that the proposed method with uncertainty analysis offers improved predictive performance compared with different well-known methods in the literature.
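A representative fractal feature of the kind mentioned above is the box-counting fractal dimension of a tumor mask: cover the mask with boxes of decreasing size and fit the slope of log(box count) against log(1/box size). This is a generic textbook estimator, not the authors' feature pipeline; it assumes a square, power-of-two binary mask for simplicity.

```python
import numpy as np

def box_count_dimension(mask):
    # Box-counting estimate of the fractal dimension of a binary 2D mask.
    n = mask.shape[0]          # assumes a square, power-of-two mask
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Count boxes of side s containing at least one foreground pixel.
        view = mask.reshape(n // s, s, n // s, s)
        occupied = view.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(occupied, 1))
        s //= 2
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should have dimension close to 2.
filled = np.ones((64, 64), dtype=bool)
dim = box_count_dimension(filled)
```

Multi-resolution fractal texture features extend this idea by computing such estimates over wavelet subbands at several scales; the scalar returned here would be one entry in the feature vector fed to the classifier.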