Depending on the application, multiple imaging modalities are available for diagnosis in the clinical routine. As a result, repositories of patient scans often contain mixed modalities. This poses a challenge for image analysis methods, which require special modifications to work with multiple modalities. This is especially critical for deep learning-based methods, which require large amounts of data. A typical example in this context is follow-up imaging in acute ischemic stroke patients, an important step in identifying potential complications from the evolution of a lesion. In this study, we addressed the mixed-modality issue by translating unpaired images between two of the most relevant follow-up stroke modalities, namely non-contrast computed tomography (NCCT) and fluid-attenuated inversion recovery (FLAIR) MRI. For the translation, we employ the widely used cycle-consistent generative adversarial network (CycleGAN). To preserve stroke lesions after translation, we implemented and tested two regularizing modifications: (1) manual segmentations of the stroke lesions serve as an attention channel when training the discriminator networks, and (2) an additional gradient-consistency loss preserves the structural morphology. For the evaluation of the proposed method, 238 NCCT and 244 FLAIR scans from acute ischemic stroke patients were available. Our method showed a considerable improvement over the original CycleGAN. More precisely, it can translate images between NCCT and FLAIR while preserving the stroke lesion’s shape, location, and modality-specific intensity (average Kullback-Leibler divergence improved from 2,365 to 396). Our proposed method has the potential to increase the amount of data available for existing and future applications while conserving original patient features and ground-truth labels.
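The Kullback-Leibler divergence reported above compares the intensity distributions of translated and real images. A minimal sketch of such a histogram-based KL computation is given below; the abstract does not specify the binning or normalization actually used, so the helper name, bin count, and smoothing term are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p_img, q_img, bins=64, eps=1e-10):
    """KL divergence between the intensity histograms of two images.

    Hypothetical helper: bins and eps-smoothing are assumptions,
    not the paper's exact evaluation setup.
    """
    lo = min(p_img.min(), q_img.min())
    hi = max(p_img.max(), q_img.max())
    # Shared binning so both histograms live on the same support
    p, _ = np.histogram(p_img, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_img, bins=bins, range=(lo, hi))
    # Normalize to probabilities; eps avoids log(0) and division by zero
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```

Identical intensity distributions yield a divergence near zero, while a modality mismatch (e.g. CT vs. MRI intensity ranges) inflates it, which is why a drop from 2,365 to 396 indicates better preservation of modality-specific intensity.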
The efficacy of stroke treatments is highly time-sensitive, and any computer-aided diagnosis support method that accelerates diagnosis and treatment initiation may improve patient outcomes. Within this context, lesion identification in MRI datasets can be time-consuming and challenging, even for trained clinicians. Automatic lesion localization can expedite diagnosis by flagging datasets and corresponding regions of interest for further assessment. In this work, we propose a deep reinforcement learning agent to localize acute ischemic stroke lesions in MRI images. To this end, we adapt novel techniques from the computer vision domain to medical image analysis, allowing the agent to sequentially localize multiple lesions in a single dataset. The proposed method was developed and evaluated using a database of fluid-attenuated inversion recovery (FLAIR) MRI datasets from 466 ischemic stroke patients acquired at multiple centers. 372 patients were used for training, while 94 patients (20% of the available data) were employed for testing. Furthermore, the model was tested on 58 datasets from an out-of-distribution test set to investigate the generalization error in more detail. The model achieved a Dice score of 0.45 on the hold-out test set and 0.43 on images from the out-of-distribution test set. In conclusion, we apply deep reinforcement learning to the clinically well-motivated task of localizing multiple ischemic stroke lesions in MRI images and achieve promising results validated on a large and heterogeneous collection of datasets.
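The Dice scores reported above measure the overlap between predicted and ground-truth lesion masks. A minimal sketch of the standard Dice coefficient follows; the function name and smoothing term are illustrative assumptions, not the study's exact implementation.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks.

    Illustrative sketch: eps guards against division by zero
    when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))
```

A Dice score of 1.0 indicates perfect overlap and 0.0 indicates none, so the reported 0.45 (hold-out) vs. 0.43 (out-of-distribution) suggests only a small generalization gap.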
Attention deficit/hyperactivity disorder (ADHD) is characterized by symptoms of inattention, hyperactivity, and impulsivity, and affects an estimated 10.2% of children and adolescents in the United States. However, correct diagnosis of the condition can be challenging, with failure rates of up to 20%. Machine learning models making use of magnetic resonance imaging (MRI) have the potential to serve as a clinical decision support system that aids the diagnosis of ADHD in youth and improves diagnostic validity. The purpose of this study was to develop and evaluate an explainable deep learning model for automatic ADHD classification. 254 T1-weighted brain MRI datasets of youth aged 9-11 were obtained from the Adolescent Brain Cognitive Development (ABCD) Study, and the Child Behavior Checklist DSM-Oriented ADHD Scale was used to partition subjects into ADHD and non-ADHD groups. A fully convolutional neural network (CNN) adapted from a state-of-the-art adult brain age regression model was trained to distinguish between neurologically normal children and children with ADHD. Saliency voxel attribution maps were generated to identify brain regions relevant to the classification task. The proposed model achieved an accuracy of 71.1%, a sensitivity of 68.4%, and a specificity of 73.7%. Saliency maps highlighted the orbitofrontal cortex, entorhinal cortex, and amygdala as important regions for the classification, which is consistent with previous literature linking these regions to significant structural differences in youth with ADHD. To the best of our knowledge, this is the first study to apply artificial intelligence explainability methods such as saliency maps to the classification of ADHD using a deep learning model. The proposed deep learning classification model has the potential to aid the clinical diagnosis of ADHD while providing interpretable results.
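The reported accuracy, sensitivity, and specificity follow directly from the confusion-matrix counts of a binary classifier. A minimal sketch is shown below; the example counts are purely illustrative and are not the study's actual confusion matrix.

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion counts.

    tp/fp/tn/fn: true/false positives and negatives.
    """
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total      # fraction of correct predictions
    sensitivity = tp / (tp + fn)      # true positive rate (recall)
    specificity = tn / (tn + fp)      # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts only (hypothetical test split, not the study's data)
acc, sens, spec = classification_metrics(tp=13, fp=5, tn=14, fn=6)
```

Sensitivity reflects how many ADHD subjects the model catches, while specificity reflects how many non-ADHD subjects it correctly clears; reporting both alongside accuracy guards against misleading results on imbalanced groups.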