Detection and segmentation of primary and secondary brain tumors are crucial in Radiation Oncology, and significant effort has been dedicated to devising deep learning models for this purpose. However, developing a unified model for the segmentation of multiple tumor types is nontrivial due to the high heterogeneity across pathologies. In this work, we propose BrainTumorNet, a multi-task learning (MTL) scheme for the joint segmentation of high-grade gliomas (HGG) and brain metastases (METS) from multimodal magnetic resonance imaging (MRI) scans. We augment the state-of-the-art DeepMedic architecture using this scheme and evaluate its performance on a highly unbalanced hybrid dataset comprising 259 HGG and 58 METS patient cases. For the HGG segmentation task, the network achieves a Dice score of 86.74% for whole-tumor segmentation, comparable to the 87.35% and 87.19% achieved by the task-specific and single-task joint training baselines, respectively. For the METS segmentation task, BrainTumorNet achieves an average Dice score of 62.60%, outperforming the two transfer-learned baselines (19.85% and 57.99%), the task-specific baseline (59.74%), and the single-task joint training baseline (44.17%). The trained network retains knowledge across segmentation tasks by exploiting the underlying correlation between pathologies, while remaining discriminative enough to produce competitive segmentations for each task. Hard parameter sharing in the network reduces the computational overhead compared to training task-specific models for multiple tumor types. To our knowledge, this is the first attempt to develop a single overarching model for the segmentation of different types of brain tumors.
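The hard parameter sharing underlying this scheme can be illustrated with a minimal sketch: a shared convolutional trunk receives gradients from both tasks, while a lightweight segmentation head per pathology keeps the tasks discriminative. This is a simplified PyTorch illustration, not the actual BrainTumorNet, which extends the DeepMedic architecture; all layer shapes and sizes here are assumptions.

```python
# Minimal sketch of hard parameter sharing for joint tumor segmentation.
# BrainTumorNet itself extends DeepMedic; this simplified trunk is only
# meant to show how one backbone serves two task-specific heads.
import torch
import torch.nn as nn

class JointSegmentationNet(nn.Module):
    def __init__(self, in_channels=4, features=32):
        super().__init__()
        # Shared trunk: its parameters are updated by gradients from both
        # tasks, which is what saves compute versus two separate models.
        self.shared = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Task-specific heads: one voxel-wise classifier per pathology.
        self.hgg_head = nn.Conv3d(features, 2, kernel_size=1)   # HGG: tumor vs. background
        self.mets_head = nn.Conv3d(features, 2, kernel_size=1)  # METS: tumor vs. background

    def forward(self, x):
        h = self.shared(x)
        return self.hgg_head(h), self.mets_head(h)

# During training, each sample would carry a task label so that only the
# matching head contributes to the loss for that sample.
net = JointSegmentationNet()
hgg_logits, mets_logits = net(torch.randn(1, 4, 32, 32, 32))
```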
Retrospective neuro-oncology imaging research relies on the standardization of large, heterogeneous sets of clinical images. In particular, many tumor segmentation algorithms require pre- and post-Gadolinium (Gd) T1-weighted MRI scans. Since the presence of contrast agent cannot be reliably inferred from image metadata, we propose an automatic image-based classifier for this purpose. We first align each T1-weighted MR image to a standard atlas space by selecting one of eight affine transforms produced using different registration parameters and atlases. After resampling to the standard space, we normalize the intensity distribution and compute intensity characteristics inside a pre-built binary mask of likely enhancement. Using a labeled set of 1892 scans, we evaluated logistic regressions with the mean, standard deviation, and 95th percentile as candidate factors. A univariable logistic regression with the standard deviation as its factor was most accurate, at 98.9% on the test data. The slope coefficient was highly robust (p < 10⁻⁶), with a Cramér-Rao bound on its variance of 1%. The resulting classification script is completely unsupervised. Accuracy on two validation datasets from different sources (totaling 1328 scans) was over 99% on scans with isotropic sampling; accuracy was lower on highly anisotropic or otherwise lower-quality scans. To our knowledge, this is the first attempt to build an automated Gd enhancement classifier for big-data applications. We plan to integrate it into the XNAT platform for automatic labeling, enabling searches for Gd-enhanced images. The proposed detector performed well across a wide variety of acquisition parameters, although image anisotropy and acquisition artifacts may interfere with accurate detection.
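A rough sketch of the final classifier, assuming scans have already been registered to atlas space and intensity-normalized, follows: the only feature is the standard deviation of intensities inside the pre-built enhancement mask, fed to a univariable logistic regression. The file paths, labels, and helper names below are hypothetical, and this is not the authors' script.

```python
# Minimal sketch of the univariable Gd-enhancement classifier.
# Assumes inputs are already in atlas space and intensity-normalized;
# all paths and variables below are illustrative placeholders.
import numpy as np
import nibabel as nib
from sklearn.linear_model import LogisticRegression

def enhancement_std(scan_path, mask_path):
    """Standard deviation of normalized intensities inside the
    pre-built binary mask of likely Gd enhancement."""
    img = nib.load(scan_path).get_fdata()
    mask = nib.load(mask_path).get_fdata() > 0
    return img[mask].std()

# Hypothetical labeled training set: 1 = post-Gd, 0 = pre-Gd.
train_scan_paths = ["scan_001.nii.gz", "scan_002.nii.gz"]
train_labels = [1, 0]

X = np.array([[enhancement_std(p, "enhancement_mask.nii.gz")]
              for p in train_scan_paths])
clf = LogisticRegression().fit(X, np.array(train_labels))

# Unsupervised at inference time: metadata is never consulted.
is_post_gd = clf.predict([[enhancement_std("new_scan.nii.gz",
                                           "enhancement_mask.nii.gz")]])
```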
Clinically acquired, multimodal, multi-site MRI datasets are widely used for neuro-oncology research. However, manual preprocessing of such data is extremely tedious and error-prone due to its high intrinsic heterogeneity, so automatic standardization is important for data-hungry applications like deep learning. Despite rapid advances in MRI data acquisition and processing algorithms, only limited effort has been dedicated to automatic methodologies for standardizing such data. To address this challenge, we augment our previously developed Multimodal Glioma Analysis (MGA) pipeline with automation tools to achieve a processing scale suitable for big-data applications. The new pipeline implements a natural language processing (NLP) based scan-type classifier, with features constructed from DICOM metadata using a bag-of-words model. The classifier automatically assigns one of 18 pre-defined scan types to every scan in an MRI study. Using this data model, we trained three types of classifiers on the same dataset: logistic regression, linear SVM, and a multi-layer artificial neural network (ANN). Their performance was validated on four datasets from multiple sources; the ANN implementation achieved the highest performance, with an average classification accuracy of over 99%. We also built a Jupyter notebook-based graphical user interface (GUI) for running MGA in semi-automatic mode, providing progress tracking and quality control to ensure the reproducibility of downstream analyses. MGA is implemented as a Docker container image to ensure portability and easy deployment, and can run in single-study or batch mode using either local DICOM data or XNAT cloud storage.
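A minimal sketch of such a bag-of-words scan-type classifier is shown below, assuming that free-text DICOM fields such as SeriesDescription and ProtocolName carry the discriminative terms; the exact metadata fields, feature construction, and the 18 scan-type labels used by MGA are not reproduced here, and all paths and labels are placeholders.

```python
# Minimal sketch of a bag-of-words scan-type classifier over DICOM
# metadata. Field choices and labels are assumptions, not MGA's own.
import pydicom
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def metadata_text(dcm_path):
    """Concatenate free-text DICOM metadata fields into one document."""
    ds = pydicom.dcmread(dcm_path, stop_before_pixels=True)
    fields = ("SeriesDescription", "ProtocolName", "SequenceName")
    return " ".join(str(getattr(ds, f, "")) for f in fields)

# Hypothetical training set: one metadata string per scan, with one of
# the pre-defined scan-type labels per scan.
train_dicom_paths = ["mr_001.dcm", "mr_002.dcm"]
train_scan_types = ["T1_post", "FLAIR"]

docs = [metadata_text(p) for p in train_dicom_paths]
model = make_pipeline(CountVectorizer(), MLPClassifier(max_iter=500))
model.fit(docs, train_scan_types)

predicted_type = model.predict([metadata_text("mr_new.dcm")])[0]
```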
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) reduces motor symptoms in most patients with Parkinson disease (PD), yet may produce untoward effects. Investigating DBS effects requires accurate localization of the STN, which can be difficult to identify on magnetic resonance images collected with clinically available 3T scanners. The goal of this study is to develop a high-quality STN atlas that can be applied to standard 3T images. We created a high-definition STN atlas derived from seven older participants imaged at 7T. This atlas was nonlinearly registered to a standard template representing 56 patients with PD imaged at 3T, a process that required the development of a methodology for nonlinear multimodal image registration. We demonstrate mm-scale STN localization accuracy by comparing our 3T atlas with a publicly available 7T atlas; agreement with an earlier histological atlas was lower. STN localization error in the 56 patients imaged at 3T was less than 1 mm on average. Our methodology enables accurate STN localization in individuals imaged at 3T. The STN atlas and the underlying 3T average template in MNI space are freely available to the research community, and the image registration methodology developed in the course of this work may be generally applicable to other datasets.
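For illustration only, the general pattern of warping a 7T atlas onto a 3T template could be sketched with SimpleITK's mutual-information-driven B-spline registration, as below. The file names, metric, and optimizer settings are assumptions; this generic sketch does not reproduce the authors' registration methodology.

```python
# Generic nonlinear (B-spline) registration sketch in SimpleITK.
# All file names and parameter values are illustrative placeholders.
import SimpleITK as sitk

fixed = sitk.ReadImage("template_3T.nii.gz", sitk.sitkFloat32)   # 3T average template
moving = sitk.ReadImage("atlas_7T.nii.gz", sitk.sitkFloat32)     # 7T atlas image

# Mutual information tolerates the differing 3T/7T intensity profiles.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetInterpolator(sitk.sitkLinear)
bspline = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
reg.SetInitialTransform(bspline, inPlace=True)
final_tx = reg.Execute(fixed, moving)

# Warp the STN label map into 3T template space; nearest-neighbor
# interpolation preserves the discrete label values.
stn = sitk.ReadImage("stn_labels_7T.nii.gz")
warped = sitk.Resample(stn, fixed, final_tx,
                       sitk.sitkNearestNeighbor, 0.0, stn.GetPixelID())
sitk.WriteImage(warped, "stn_in_3T_space.nii.gz")
```

In practice a pipeline like this would initialize the B-spline stage with an affine alignment; that step is omitted here for brevity.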