Medulloblastoma (MB) is the most common embryonal tumour of the brain. Deciding on an optimal therapy requires laborious inspection of histopathological tissue slides by neuropathologists. Digital pathology with the support of deep learning methods can help to improve this clinical workflow. Due to the high resolution of histopathological images, previous work on MB classification involved manual selection of patches, making it a time-consuming task. In order to leverage only slide-level labels for histopathology image classification, weakly supervised approaches first encode small patches into feature vectors using an ImageNet-pretrained encoder based on convolutional neural networks. The patch representations are then used to train a data-efficient attention-based learning method. Due to the domain shift between natural images and histopathology images, such an encoder is not optimal for feature extraction for MB classification. In this study, we adapt weakly supervised learning for MB classification and examine different histopathology-specific encoder architectures and weights for the MB classification task. The results show that ResNet encoders pretrained on histopathology images lead to better MB classification results than encoders pretrained on ImageNet. The best performing method uses a ResNet50 architecture pretrained on histopathology images and achieves an area under the receiver operating characteristic curve (AUROC) of 71.89%, improving on the baseline model by 2%.
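A minimal sketch of the weakly supervised setup described above: pre-extracted patch embeddings (e.g. from a histopathology-pretrained ResNet50) are aggregated into a slide-level prediction with attention-based multiple-instance learning. This is not the authors' code; the feature dimension, hidden size, and layer choices are assumptions for illustration.

```python
# Attention-based MIL pooling over pre-extracted patch features (illustrative only).
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=256, n_classes=2):
        super().__init__()
        # Attention network assigns a scalar relevance score to each patch embedding.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):            # patch_feats: (n_patches, feat_dim)
        scores = self.attention(patch_feats)   # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)
        slide_feat = (weights * patch_feats).sum(dim=0)  # attention-weighted slide embedding
        return self.classifier(slide_feat), weights

# Usage: one "bag" of patch embeddings per slide, supervised only by the slide label.
model = AttentionMIL()
logits, attn = model(torch.randn(500, 1024))   # e.g. 500 patches from one slide
```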
The manual assessment of chest radiographs by radiologists is a time-consuming and error-prone process that relies on the availability of trained professionals. Deep learning methods have the potential to alleviate the workload of radiologists in pathology detection and diagnosis. However, one major drawback of deep learning methods is their lack of explainable decision-making, which is crucial in computer-aided diagnosis. To address this issue, activation maps of the underlying convolutional neural networks (CNNs) are frequently used to indicate the regions the network focuses on during prediction. However, an evaluation of these activation maps with respect to the actually predicted pathology is often missing. In this study, we quantitatively evaluate the use of activation maps for segmenting pulmonary nodules in chest radiographs. We compare transformer-based, CNN-based, and hybrid architectures using different visualization methods. Our results show that although high classification performance can be achieved across all models, the activation maps show little correlation with the actual position of the nodules.
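To make the evaluated mechanism concrete, the sketch below shows one common way of obtaining such activation maps, a Grad-CAM-style heatmap for a CNN classifier. The specific models and visualization methods compared in the paper are not reproduced here; a torchvision ResNet50 and a random input tensor stand in for the radiograph classifier.

```python
# Grad-CAM-style activation map via forward/backward hooks (illustrative stand-in).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
feats, grads = {}, {}

def fwd_hook(module, inputs, output): feats["a"] = output
def bwd_hook(module, grad_in, grad_out): grads["a"] = grad_out[0]

model.layer4.register_forward_hook(fwd_hook)          # last convolutional block
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                        # placeholder for a chest radiograph
logits = model(x)
logits[0, logits.argmax()].backward()                  # gradient of the predicted class

# Channel weights = global-average-pooled gradients; heatmap = weighted sum of feature maps.
w = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalized activation map
```

Such a map can then be thresholded into a mask and compared against the annotated nodule position, which is the kind of quantitative evaluation the abstract describes.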
Large-scale population studies have examined the detection of sinus opacities in cranial MRIs. Deep learning methods, specifically 3D convolutional neural networks (CNNs), have been used to classify these anomalies. However, CNNs have limitations in capturing long-range dependencies across low- and high-level features, which can reduce performance. To address this, we propose an end-to-end pipeline using a novel deep learning network called ConTra-Net. ConTra-Net combines the strengths of CNNs with the self-attention mechanism of transformers to classify paranasal anomalies in the maxillary sinuses. Our approach outperforms 3D CNNs and the 3D Vision Transformer (ViT), with relative improvements in F1 score of 11.68% and 53.5%, respectively. Our pipeline with ConTra-Net could serve as an alternative to reduce misdiagnosis rates in classifying paranasal anomalies.
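The abstract does not specify the ConTra-Net architecture, so the following is only a generic sketch of the CNN-plus-self-attention pattern it alludes to: a 3D convolutional stem extracts local features, whose spatial positions are flattened into tokens and passed through a transformer encoder for long-range context. All layer sizes and the input resolution are assumptions.

```python
# Generic 3D CNN + transformer hybrid for volume classification (not ConTra-Net itself).
import torch
import torch.nn as nn

class CNNTransformerHybrid(nn.Module):
    def __init__(self, n_classes=2, dim=128):
        super().__init__()
        self.stem = nn.Sequential(                     # 3D CNN: local, low-level features
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)  # long-range dependencies
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                              # x: (B, 1, D, H, W) sinus sub-volume
        f = self.stem(x)                               # (B, dim, D', H', W')
        tokens = f.flatten(2).transpose(1, 2)          # (B, D'*H'*W', dim) token sequence
        tokens = self.transformer(tokens)
        return self.head(tokens.mean(dim=1))           # mean-pool tokens, then classify

model = CNNTransformerHybrid()
logits = model(torch.randn(2, 1, 16, 32, 32))
```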
Deep learning (DL) algorithms can be used to automate paranasal anomaly detection from Magnetic Resonance Imaging (MRI). However, previous works relied on supervised learning techniques to distinguish between normal and abnormal samples. This limits the types of anomalies that can be classified, as the anomalies need to be present in the training data. Furthermore, many data points from both the normal and anomalous classes are needed for the model to achieve satisfactory classification performance. Experienced clinicians, in contrast, can distinguish between normal samples (healthy maxillary sinuses) and anomalous samples (anomalous maxillary sinuses) after looking at only a few normal samples. We mimic this ability by learning the distribution of healthy maxillary sinuses using a 3D convolutional autoencoder (cAE) and its variant, a 3D variational autoencoder (VAE), and evaluate both architectures for this task. Concretely, we pose paranasal anomaly detection as an unsupervised anomaly detection problem. This reduces the labelling effort for clinicians, as we only use healthy samples during training. Additionally, we can classify any type of anomaly that differs from the training distribution. We train our 3D cAE and VAE to learn a latent representation of healthy maxillary sinus volumes using an L1 reconstruction loss. During inference, we use the reconstruction error to classify between normal and anomalous maxillary sinuses. We extract sub-volumes from larger head and neck MRIs and analyse the effect of different fields of view on detection performance. Finally, we report which anomalies are easiest and hardest to classify using our approach. Our results demonstrate the feasibility of unsupervised detection of paranasal anomalies from MRIs with an AUPRC of 85% and 80% for the cAE and VAE, respectively.
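A minimal sketch of the cAE variant of this idea, assuming single-channel sinus sub-volumes of size 32³; it is not the authors' exact configuration. The autoencoder is trained with an L1 reconstruction loss on healthy samples only, and the per-sample reconstruction error serves as the anomaly score at inference time.

```python
# 3D convolutional autoencoder trained on healthy volumes; reconstruction error = anomaly score.
import torch
import torch.nn as nn

class ConvAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAE3D()
l1 = nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

healthy = torch.randn(4, 1, 32, 32, 32)       # placeholder healthy sinus sub-volumes
loss = l1(model(healthy), healthy)            # train only on healthy anatomy
loss.backward(); opt.step()

# Inference: mean absolute reconstruction error, thresholded to flag anomalous sinuses.
with torch.no_grad():
    test = torch.randn(1, 1, 32, 32, 32)
    score = (model(test) - test).abs().mean()
```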
Lesion detection in brain Magnetic Resonance Images (MRIs) remains a challenging task. MRIs are typically read and interpreted by domain experts, which is a tedious and time-consuming process. Recently, unsupervised anomaly detection (UAD) in brain MRI with deep learning has shown promising results for providing a quick initial assessment. So far, these methods rely only on the visual appearance of healthy brain anatomy for anomaly detection. Another biomarker for abnormal brain development is the deviation between brain age and chronological age, which has not been explored in combination with UAD. We propose deep learning for UAD in 3D brain MRI that considers additional age information. We analyze the value of age information during training and as an additional anomaly score, and systematically study several architectural concepts. Based on our analysis, we propose a novel deep learning approach for UAD with multi-task age prediction. We use clinical T1-weighted MRIs of 1735 healthy subjects and the publicly available BraTS 2019 data set for our study. Our novel approach significantly improves UAD performance with an AUC of 92.60%, compared to an AUC of 84.37% for previous approaches without age information.
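A hedged sketch of the multi-task idea: an autoencoder-style UAD model with an additional age-regression head, so that the deviation between predicted and chronological age can contribute to the anomaly score alongside the reconstruction error. The exact architecture, losses, and score weighting of the paper are not reproduced; the layer sizes, input resolution, and the weighting factor `lam` are assumptions.

```python
# UAD autoencoder with a multi-task age-prediction head (illustrative sketch).
import torch
import torch.nn as nn

class UADWithAge(nn.Module):
    def __init__(self, latent=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, 2, 1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, 2, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, latent),
        )
        self.dec = nn.Sequential(
            nn.Linear(latent, 32 * 8 * 8 * 8), nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, 2, 1),
        )
        self.age_head = nn.Linear(latent, 1)           # multi-task age regression

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.age_head(z).squeeze(1)

model = UADWithAge()
x = torch.randn(2, 1, 32, 32, 32)                      # placeholder healthy T1 sub-volumes
age = torch.tensor([34.0, 61.0])                       # chronological ages of the subjects
recon, age_pred = model(x)
loss = nn.functional.l1_loss(recon, x) + 0.1 * nn.functional.l1_loss(age_pred, age)

# Anomaly score: reconstruction error plus weighted age deviation (lam is a free choice).
lam = 0.1
score = (recon - x).abs().mean(dim=(1, 2, 3, 4)) + lam * (age_pred - age).abs()
```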