KEYWORDS: Tumors, Image segmentation, Magnetic resonance imaging, Brain, 3D modeling, Convolution, Neural networks, Medical imaging, Neuroimaging, Cancer
Recent developments in deep learning have gained significant attention in medical image analysis, where such techniques have shown promising results for automating tasks such as organ segmentation, precise delineation of lesions, and disease diagnosis. We demonstrate the utility of deep learning models for finding associations between brain imaging phenotypes and molecular subtype. In this study, Magnetic Resonance (MR) images of brains with glioblastoma multiforme (GBM) were used. The Cancer Genome Atlas (TCGA) has grouped GBM into four distinct subtypes, namely Mesenchymal, Neural, Proneural, and Classical; the subtypes are defined by genomic characteristics, survival outcomes, patient age, and response to treatment. Identification of a molecular subtype and its associated imaging phenotype could aid in developing precision medicine and personalized treatments for patients. The MR imaging data and molecular subtype information for patients with high-grade gliomas were retrospectively obtained from The Cancer Imaging Archive (TCIA). From the TCIA, 123 patient cases were manually identified that had all four of the following MR sequences: a) T1-weighted (T1), b) post-contrast T1-weighted (T1c), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (FLAIR). The dataset was split into 92 cases for training and 31 for testing. Pre-processing of the MR images involved skull-stripping, co-registration of the MR sequences to T1c, re-sampling of the MR volumes to isotropic voxels, and segmentation of the brain lesion. Lesions in the MR volumes were automatically segmented using a convolutional neural network (CNN) trained on the BraTS 2017 segmentation challenge dataset. Guided by the segmentation maps, 64×64×64 cube patches centered on the tumor were extracted from all four MR sequences, and a 3D convolutional neural network was trained for molecular subtype classification. On the held-out test set, our approach achieved a classification accuracy of 90%. These results on the TCIA dataset highlight the emerging role of deep learning in inferring molecular markers from non-invasive imaging phenotypes.
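As an illustration of the classification stage, the sketch below shows a minimal 3D CNN in PyTorch that accepts 64×64×64 tumor-centered patches with the four MR sequences stacked as input channels and outputs scores for the four TCGA subtypes. The layer widths, normalization, and pooling choices are assumptions made for illustration; the abstract does not specify the exact architecture used.

```python
# Illustrative sketch (not the authors' exact architecture): a small 3D CNN
# that takes 64x64x64 patches with 4 channels (T1, T1c, T2, FLAIR) and
# predicts one of the four TCGA GBM subtypes. Layer sizes are assumptions.
import torch
import torch.nn as nn

class Subtype3DCNN(nn.Module):
    def __init__(self, in_channels=4, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                      # 64 -> 32 per spatial dim
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                      # 32 -> 16
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (batch, 4, 64, 64, 64) tumor-centered multi-sequence patches
        h = self.features(x).flatten(1)           # (batch, 64)
        return self.classifier(h)                 # (batch, 4) subtype logits

model = Subtype3DCNN()
dummy_patch = torch.randn(2, 4, 64, 64, 64)
logits = model(dummy_patch)                       # shape: (2, 4)
```

In practice such a network would be trained with a standard cross-entropy loss over the four subtype labels; the training schedule and augmentation strategy are not described in the abstract.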
Advances in medical imaging technologies have led to the generation of large databases of high-resolution image volumes. To retrieve images with pathology similar to the one under examination, we propose a content-based image retrieval (CBIR) framework for medical images using a deep Convolutional Neural Network (CNN). We present retrieval results for medical images using a pre-trained neural network, ResNet-18. A multi-modality dataset containing twenty-three classes and four modalities, namely Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Mammography (MG), and Positron Emission Tomography (PET), is used to demonstrate our method. We obtain an average classification accuracy of 92% and a mean average precision of 0.90 for retrieval. The proposed method can assist in clinical diagnosis and in training radiologists.
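A retrieval pipeline of this kind can be sketched as follows: a pre-trained ResNet-18 with its classification head removed serves as a feature extractor, database images are embedded once, and a query is answered by ranking database embeddings by cosine similarity. The preprocessing and similarity measure shown below are assumptions for illustration; the abstract does not detail the exact retrieval procedure.

```python
# Minimal CBIR sketch with a pre-trained ResNet-18 as feature extractor.
# Details such as preprocessing and the similarity metric are assumptions,
# not the paper's exact pipeline.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Pre-trained ResNet-18 with the final fully connected layer replaced by an
# identity, so the network outputs 512-dimensional embeddings.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Applied to each image (converted to 3-channel RGB) before embedding.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(images):
    # images: batch of preprocessed tensors, shape (N, 3, 224, 224)
    feats = backbone(images)                      # (N, 512)
    return F.normalize(feats, dim=1)              # unit-norm for cosine similarity

def retrieve(query_feat, db_feats, k=5):
    # Cosine similarity between the query embedding (1, 512) and every
    # database embedding (N, 512); returns indices of the k nearest images.
    sims = db_feats @ query_feat.T                # (N, 1)
    return sims.squeeze(1).topk(k).indices
```

Database embeddings can be computed offline and cached, so answering a query only requires one forward pass and a similarity ranking.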