Modern neuro-oncology workflows are driven by large collections of high-dimensional MRI data obtained under varying acquisition protocols. The concomitant heterogeneity of these data makes extensive manual curation and pre-processing imperative prior to algorithmic use. The limited efforts invested in automating this curation and processing are fragmented, do not encompass the entire workflow, or still require significant manual intervention. In this work, we propose an artificial intelligence-driven solution for transforming multi-modal raw neuro-oncology MRI Digital Imaging and Communications in Medicine (DICOM) data into quantitative tumor measurements. Our end-to-end framework classifies MRI scans into different structural sequence types, preprocesses the data, and uses convolutional neural networks to segment tumor tissue subtypes. Moreover, it adopts an expert-in-the-loop approach, in which segmentation results may be manually refined by radiologists. The framework was implemented as Docker containers (for command-line usage and within the eXtensible Neuroimaging Archive Toolkit [XNAT]) and validated on a retrospective glioma dataset (n = 155) collected from the Washington University School of Medicine, comprising preoperative MRI scans from patients with histopathologically confirmed gliomas. Segmentation results were refined by a neuroradiologist, and performance was quantified using the Dice Similarity Coefficient between predicted and expert-refined tumor masks. The scan-type classifier yielded 99.71% accuracy across all sequence types. The segmentation model achieved a mean Dice score of 0.894 (± 0.225) for whole tumor segmentation. The proposed framework can automate tumor segmentation and characterization, thus streamlining workflows in a clinical setting as well as expediting standardized curation of large-scale neuro-oncology datasets in a research setting.
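For reference, the Dice Similarity Coefficient used above measures the voxel-wise overlap between a predicted mask P and an expert-refined mask G as DSC = 2|P ∩ G| / (|P| + |G|). Below is a minimal NumPy sketch of this metric; the function and variable names are illustrative, not part of the described framework.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks of equal shape,
    e.g. a predicted whole-tumor mask and its expert-refined counterpart."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks overlap perfectly
    return 2.0 * intersection / denom
```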
Glioma is the most common form of brain tumor, with a high degree of heterogeneity in imaging characteristics, treatment response, and survival rate. An important factor driving this heterogeneity is mutation of the isocitrate dehydrogenase (IDH) enzyme. The current clinical gold standard for identifying IDH mutation status involves invasive procedures that carry risk, may fail to capture intra-tumoral spatial heterogeneity, or can be inaccessible in low-resource settings. In this study, we propose a deep learning-based method to non-invasively and preoperatively determine the IDH status of high- and low-grade gliomas by leveraging their phenotypical characteristics in volumetric MRI scans. For this purpose, we propose a 3D Mask R-CNN-based approach that simultaneously detects and segments glioma and classifies its IDH status, thus obviating the need for a separate tumor segmentation step. The network operates on routinely acquired MRI sequences and is agnostic to glioma grade. It was trained on patient cases from publicly available datasets (n = 223) and tested on two hold-out datasets acquired from The Cancer Genome Atlas (TCGA; n = 62) and Washington University School of Medicine (WUSM; n = 261). The model achieved areas under the receiver operating characteristic curve of 0.83 and 0.87, and areas under the precision-recall curve of 0.78 and 0.79, on the TCGA and WUSM sets, respectively. The model can be used to perform a pre-operative ‘virtual biopsy’ of gliomas, thus facilitating treatment planning and potentially leading to better overall survival.
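The reported evaluation metrics can be computed from per-case IDH-mutation probabilities with standard tooling. A hedged scikit-learn sketch follows; the toy labels and scores are illustrative, not data from the study, and average_precision_score is one common estimator of the area under the precision-recall curve.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Illustrative toy data: 1 = IDH-mutant, 0 = IDH-wildtype;
# y_score holds the model's predicted mutation probability per case.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.20, 0.65, 0.80, 0.35, 0.45, 0.55, 0.10])

auroc = roc_auc_score(y_true, y_score)            # area under the ROC curve
auprc = average_precision_score(y_true, y_score)  # area under the PR curve
print(f"AUROC = {auroc:.2f}, AUPRC = {auprc:.2f}")
```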
Detection and segmentation of primary and secondary brain tumors are crucial in radiation oncology. Significant efforts have been dedicated to devising deep learning models for this purpose. However, developing a unified model for the segmentation of multiple types of tumors is nontrivial due to high heterogeneity across pathologies. In this work, we propose BrainTumorNet, a multi-task learning (MTL) scheme for the joint segmentation of high-grade gliomas (HGG) and brain metastases (METS) from multimodal magnetic resonance imaging (MRI) scans. We augment the state-of-the-art DeepMedic architecture with this scheme and evaluate its performance on a highly imbalanced hybrid dataset comprising 259 HGG and 58 METS patient cases. For the HGG segmentation task, the network produces a Dice score of 86.74% for whole tumor segmentation, comparable to the 87.35% and 87.19% achieved by the task-specific and single-task joint training baselines, respectively. For the METS segmentation task, BrainTumorNet produces an average Dice score of 62.60%, outperforming the scores of 19.85%, 57.99%, 59.74%, and 44.17% produced by the two transfer-learned, the task-specific, and the single-task joint training baseline models, respectively. The trained network retains knowledge across segmentation tasks by exploiting the underlying correlation between pathologies, while remaining discriminative enough to produce competitive segmentations for each task. Hard parameter sharing in the network reduces the computational overhead compared to training task-specific models for multiple tumor types. To our knowledge, this is the first attempt at developing a single overarching model for the segmentation of different types of brain tumors.
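To illustrate the hard parameter sharing idea, the PyTorch sketch below uses a single shared convolutional trunk feeding two task-specific segmentation heads, one for HGG and one for METS. This is a structural toy only, assuming a 4-channel multimodal input; it is not the BrainTumorNet or DeepMedic architecture.

```python
import torch
import torch.nn as nn

class ToyMTLSegNet(nn.Module):
    """Toy hard-parameter-sharing network: shared trunk, two task heads."""

    def __init__(self, in_channels: int = 4):  # e.g. four MRI modalities
        super().__init__()
        # Shared trunk: its parameters receive gradients from both tasks.
        self.trunk = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Task-specific heads: tumor vs. background logits per pathology.
        self.hgg_head = nn.Conv3d(32, 2, kernel_size=1)
        self.mets_head = nn.Conv3d(32, 2, kernel_size=1)

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        features = self.trunk(x)
        head = self.hgg_head if task == "hgg" else self.mets_head
        return head(features)
```

Because the trunk is shared, one set of weights serves both pathologies, which is the source of the reduced computational overhead noted above.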
Clinically acquired, multimodal, multi-site MRI datasets are widely used in neuro-oncology research. However, manual preprocessing of such data is extremely tedious and error-prone due to its high intrinsic heterogeneity. Automatic standardization of such datasets is therefore important for data-hungry applications like deep learning. Despite rapid advances in MRI data acquisition and processing algorithms, only limited effort has been dedicated to automatic methodologies for standardizing such data. To address this challenge, we augment our previously developed Multimodal Glioma Analysis (MGA) pipeline with automation tools to achieve a processing scale suitable for big-data applications. The new pipeline implements a natural language processing (NLP)-based scan-type classifier whose features are constructed from DICOM metadata using a bag-of-words model. The classifier automatically assigns one of 18 pre-defined scan types to every scan in an MRI study. Using this data model, we trained three types of classifiers on the same dataset: logistic regression, a linear SVM, and a multi-layer artificial neural network (ANN). Their performance was validated on four datasets from multiple sources. The ANN implementation achieved the highest performance, yielding an average classification accuracy of over 99%. We also built a Jupyter notebook-based graphical user interface (GUI) for running MGA in semi-automatic mode, supporting progress tracking and quality control to ensure reproducibility of the resulting analyses. MGA has been implemented as a Docker container image to ensure portability and easy deployment. The application can run in single-study or batch mode, using either local DICOM data or XNAT cloud storage.
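The bag-of-words classification step can be sketched with scikit-learn: tokenize free-text DICOM metadata (e.g. the SeriesDescription tag) into token counts and feed them to a small ANN. The tag choice, token strings, and scan-type labels below are illustrative assumptions, not the pipeline's actual 18-class vocabulary.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy metadata strings assembled from DICOM tags such as SeriesDescription.
docs = [
    "ax t1 mprage post gad",
    "sag t2 flair",
    "ax dwi b1000",
    "ax t1 pre",
]
labels = ["T1-post", "FLAIR", "DWI", "T1-pre"]  # illustrative scan types

# Bag-of-words features followed by a small multi-layer ANN.
clf = make_pipeline(
    CountVectorizer(),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
clf.fit(docs, labels)
print(clf.predict(["ax flair t2"]))  # e.g. ['FLAIR']
```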