KEYWORDS: Magnetic resonance imaging, Brain, Medical imaging, Photonics, Biomedical applications, Functional imaging, Functional magnetic resonance imaging, Biological research
In this research, we explored the use of Mutual Connectivity Analysis with local models for classifying Autism Spectrum Disorder (ASD) within the ABIDE II dataset. The focus was on understanding brain region differences between individuals with ASD and healthy controls. We conducted a Multi-Voxel Pattern Analysis (MVPA), using a data-driven method to model non-linear dependencies between pairs of time series. This resulted in high-dimensional feature vectors representing the connectivity measures of the subjects, which were then used for ASD classification. To reduce the dimensionality of the features, we used Kendall’s coefficient method, preparing the vectors for classification with a kernel-based SVM classifier. We compared our approach with methods based on cross-correlation and Pearson correlation. The results are consistent with the current literature, suggesting that our method could be a useful tool in ASD research. Further studies are required to refine our method.
This study utilizes a novel method for learning representations from chest X-rays using a memory-driven transformer-based approach. The model is trained on a low-quality version of the MIMIC-CXR dataset, utilizing 17,783 chest X-rays that contain at most 3 views. The model uses a relational memory to record crucial information during the generation process and a memory-driven conditional layer normalization technique to integrate this memory into the transformer's decoder. The dataset is divided into distinct sets for training, validation, and testing. We aim to establish an intuitively comprehensible quantitative metric through vectorization of the radiology report. This metric leverages the learned representations from our model to classify 14 unique lung pathologies. The F1-score measures classification accuracy, indicating the model's viability in diagnosing lung diseases. We have also introduced the use of Large Language Models (LLMs) to evaluate the accuracy of the generated reports. The model's potential applications extend to more robust performance in radiology report generation.
Devices enabled by artificial intelligence (AI) and machine learning (ML) are being introduced for clinical use at an accelerating pace. In a dynamic clinical environment, these devices may encounter conditions different from those they were developed for. The statistical data mismatch between training/initial testing and production is often referred to as data drift. Detecting and quantifying data drift is essential for ensuring that an AI model performs as expected in clinical environments. A drift detector signals when a corrective action is needed if the performance changes. In this study, we investigate how a change in the performance of an AI model due to data drift can be detected and quantified using a cumulative sum (CUSUM) control chart. To study the properties of CUSUM, we first simulate different scenarios that change the performance of an AI model. We simulate a sudden change in the mean of the performance metric at a change-point (change day) in time. The task is to quickly detect the change while producing few false alarms before the change-point, which may be caused by the statistical variation of the performance metric over time. Subsequently, we simulate data drift by denoising the Emory Breast Imaging Dataset (EMBED) after a pre-defined change-point. We detect the change-point by studying the pre- and post-change specificity of a mammographic CAD algorithm. Our results indicate that with the appropriate choice of parameters, CUSUM is able to quickly detect relatively small drifts with a small number of false-positive alarms.
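The one-sided CUSUM procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the study's implementation: the function name, the reference value `k` (allowance), and the decision threshold `h` are assumptions chosen purely for demonstration.

```python
import numpy as np

def cusum_detect(metric, target, k=0.5, h=5.0):
    """One-sided CUSUM detector for a downward shift in a daily
    performance metric (e.g. specificity of a CAD algorithm).

    metric : 1-D array of daily metric values
    target : in-control (pre-change) mean of the metric
    k      : reference value (allowance) subtracted at each step
    h      : decision threshold; an alarm fires when the cumulative
             sum exceeds it
    Returns the 0-based index (day) of the first alarm, or None.
    """
    s = 0.0
    for t, x in enumerate(metric):
        # Accumulate evidence that the metric dropped below target,
        # discounted by the allowance k; reset at zero.
        s = max(0.0, s + (target - x) - k)
        if s > h:
            return t
    return None

# Simulated scenario: in-control metric, then a sudden mean shift at day 50.
rng = np.random.default_rng(0)
metric = np.concatenate([rng.normal(0.95, 0.01, 50),   # pre-change
                         rng.normal(0.90, 0.01, 50)])  # post-change
alarm_day = cusum_detect(metric, target=0.95, k=0.01, h=0.05)
```

Smaller `k` and `h` detect smaller shifts faster but raise more false alarms before the change-point; tuning this trade-off is the parameter choice the abstract refers to.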
In this work, we utilize a Transformer-based network for precise anatomical landmark detection in chest X-ray images. By combining the strengths of Transformers and the UNet architecture, our proposed model achieves robust landmark localization by effectively capturing global context and spatial dependencies. Notably, our method surpasses current state-of-the-art approaches, exhibiting a significant reduction in Mean Radial Error and a notable improvement in the rate of accurate landmark detection. For training, each landmark point in the labels is represented as a Gaussian heatmap, and the network is optimized with a hybrid loss function combining binary cross-entropy and Dice loss, enabling pixel-wise classification of the heatmaps and segmentation-based training that accurately localizes the landmark heatmaps. The promising results highlight the underexplored potential of Transformers in this area and offer a compelling solution for accurate anatomical landmark detection in chest X-rays. Our work demonstrates the viability of Transformer-based models in addressing the challenges of landmark detection in medical imaging.
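The heatmap encoding and hybrid loss described above can be sketched in NumPy as follows. The Gaussian width `sigma` and the BCE/Dice weighting are illustrative assumptions, not values from the study, and a real training setup would implement the loss in a deep learning framework with automatic differentiation.

```python
import numpy as np

def gaussian_heatmap(h, w, cy, cx, sigma=3.0):
    """Render a single landmark (cy, cx) as a 2-D Gaussian heatmap."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def hybrid_loss(pred, target, eps=1e-7, dice_weight=0.5):
    """Hybrid of pixel-wise binary cross-entropy and soft Dice loss.

    pred, target : heatmaps in [0, 1] of identical shape.
    dice_weight  : relative weight of the Dice term (an assumption;
                   the study's weighting is not reproduced here).
    """
    p = np.clip(pred, eps, 1.0 - eps)
    bce = -np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    inter = np.sum(pred * target)
    dice = 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return (1.0 - dice_weight) * bce + dice_weight * dice

# A landmark at (row=20, col=30) encoded as a 64x64 training target.
hm = gaussian_heatmap(64, 64, cy=20, cx=30)
loss_at_target = hybrid_loss(hm, hm)  # near the loss minimum
```

At inference time the predicted landmark coordinate is typically recovered as the argmax of the predicted heatmap.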
The precise placement of catheter tubes and lines is crucial for providing optimal care to critically ill patients. However, the challenge of mispositioning these tubes persists. The timely detection and correction of such errors are extremely important, especially considering the increased demand for these interventions, as seen during the COVID-19 pandemic. Unfortunately, manual diagnosis is prone to error, particularly under stressful conditions, highlighting the necessity for automated solutions. This research addresses this challenge by utilizing deep learning techniques to automatically detect and classify the positions of endotracheal tubes (ETTs) in chest x-ray images. Our approach builds upon recent advancements in deep learning for medical image analysis, providing a sophisticated solution to a critical healthcare challenge. The proposed model achieves remarkable performance, with area under the ROC curve (AUC) scores ranging from 0.961 to 0.993 and accuracy values ranging from 0.961 to 0.999. These results emphasize the effectiveness of the model in accurately classifying ETT positions, highlighting its potential clinical utility. Through this study, we introduce an innovative application of AI in medical diagnostics, with the potential to advance healthcare practices.
Catheter tubes and lines are one of the most common abnormal findings on a chest x-ray. Misplaced catheters can cause serious complications, such as pneumothorax, cardiac perforation, or thrombosis, and for this reason, assessment of catheter position is of utmost importance. In order to prevent these problems, radiologists usually examine chest x-rays to evaluate their positions after insertion and throughout intensive care. However, this process is both time-consuming and prone to human error. Efficient and dependable automated interpretations have the potential to lower the expenses of surgical procedures, lessen the burden on radiologists, and enhance the level of care for patients. To address this challenge, we have investigated the task of accurate segmentation of catheter tubes and lines in chest x-rays using deep learning models. In this work, we applied transfer learning and transformer-based networks, using two different models: a U-Net++-based model with ImageNet pre-training and an EfficientNet encoder, which leverages diverse visual features in ImageNet to improve segmentation accuracy, and a transformer-based U-Net architecture chosen for its capability to handle long-range dependencies in complex medical image segmentation tasks. Our experiments reveal the effectiveness of the U-Net++-based model in handling noisy and artifact-laden images and TransUNet’s potential for capturing complex spatial features. We compare both models using the Dice coefficient as the evaluation metric and determine that U-Net++ outperforms TransUNet on this metric. Our aim is to achieve more robust and reliable catheter tube detection in chest x-rays, ultimately enhancing clinical decision-making and patient care in critical healthcare settings.
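The Dice coefficient used to compare the two segmentation models amounts to a few lines of NumPy. This sketch assumes binarized masks; the function name and interface are illustrative, not from the study.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two binary masks (True = catheter pixel).

    Ranges from 0 (no overlap) to 1 (perfect overlap); eps guards
    against division by zero when both masks are empty.
    """
    pred_mask = pred_mask.astype(bool)
    true_mask = true_mask.astype(bool)
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

# Toy example: a predicted mask compared against itself scores ~1.0.
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True
perfect_score = dice_coefficient(pred, pred)
```

In practice the score is averaged over the test images to produce the per-model figure used for the comparison.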
Landmark detection is critical in medical imaging for accurate diagnosis and treatment of diseases. While there are many automated methods for landmark detection, the potential of transformers in this area has not been fully explored. This work proposes a transformer-based network for accurate anatomical landmark detection in chest x-ray images. By leveraging the combined power of transformers and U-Net, our method effectively captures global context and spatial dependencies, leading to robust landmark localization. The proposed method outperforms state-of-the-art methods on chest x-ray datasets, reducing mean radial error from 5.57 to 4.68 pixels. The experiments show that the transformer-based method can effectively learn complex spatial patterns in medical images. These results show the potential of the method to improve the precision and efficiency of tasks such as surgical planning and detecting abnormalities in medical images.
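The two evaluation metrics reported for this task, mean radial error and the rate of accurately detected landmarks, can be sketched as follows. The success radius in the second function is an illustrative assumption; studies typically report the rate at several radii.

```python
import numpy as np

def mean_radial_error(pred, gt):
    """Mean radial error: average Euclidean distance (in pixels)
    between predicted and ground-truth landmark coordinates.

    pred, gt : arrays of shape (n_landmarks, 2) holding (row, col).
    """
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

def success_detection_rate(pred, gt, radius=4.0):
    """Fraction of landmarks whose radial error is within `radius`
    pixels (the 'accurate detection' rate; radius is an assumption)."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dists <= radius))

# Toy example: one exact prediction and one 5 px off.
pred = np.array([[0.0, 0.0], [3.0, 4.0]])
gt = np.zeros((2, 2))
mre = mean_radial_error(pred, gt)       # (0 + 5) / 2 = 2.5 px
sdr = success_detection_rate(pred, gt)  # 1 of 2 within 4 px = 0.5
```

The 5.57-to-4.68 pixel improvement quoted above is a reduction in exactly this mean radial error, averaged over all landmarks and test images.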
Schizophrenia is associated with alterations in brain network connectivity. We investigate whether large-scale Granger Causality (lsGC) can capture such alterations using resting-state fMRI data. Our method utilizes dimension reduction combined with the augmentation of source time-series in a predictive time-series model for estimating directed causal relationships among fMRI time-series. As a multivariate approach, lsGC identifies the relationship of the underlying dynamic system in the presence of all other time-series. Here, we examine the ability of lsGC to accurately identify schizophrenia patients from fMRI data using a subset of 31 subjects from the Centers of Biomedical Research Excellence (COBRE) data repository. We use brain connections estimated by lsGC as features for classification. After feature extraction, we perform feature selection by Kendall’s tau rank correlation coefficient followed by classification using a support vector machine. For reference, we compare our results with cross-correlation, typically used in the literature as a standard measure of functional connectivity, and several other standard methods. Using 100 different training/test data splits with 10-fold cross-validation we obtain mean/std f1-scores of 84.20% ± 20.42% and mean Area Under the receiver operating characteristic Curve (AUC) values of 94.50% ± 15.24% across all tested numbers of features for lsGC, which is significantly better than the results obtained with cross-correlation (AUC=64.50% ± 33.39%, f1-score=46.67% ± 34.01%), and multiple other competing methods, including partial correlation, tangent, precision, and covariance methods. Our results suggest the applicability of lsGC as a potential imaging biomarker for schizophrenia.
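The Kendall's-tau feature-selection step used before classification in this family of studies can be sketched in pure NumPy as follows. This is an illustrative implementation of the tau-a variant (no tie correction, an implementation choice of this sketch); in the studies, the top-ranked connectivity features are then passed to a support vector machine (e.g. scikit-learn's SVC), which is omitted here.

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau-a rank correlation between two 1-D arrays.

    Counts concordant vs discordant pairs over all O(n^2) index pairs;
    tied pairs (sign 0 in either array) contribute nothing.
    """
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
            concordant += s > 0
            discordant += s < 0
    return (concordant - discordant) / (n * (n - 1) / 2)

def select_top_features(X, y, k):
    """Rank each column of X by |tau| against the class labels y and
    return the indices of the k most label-associated features."""
    taus = np.array([abs(kendall_tau(X[:, j], y)) for j in range(X.shape[1])])
    return np.argsort(taus)[::-1][:k]
```

For large feature vectors one would use `scipy.stats.kendalltau` instead of this quadratic-time loop; the ranking logic is the same.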
The literature suggests that schizophrenia is associated with alterations in brain network connectivity. We investigate whether large-scale Augmented Granger Causality (lsAGC) can capture such alterations using resting-state fMRI data. Our method utilizes dimension reduction combined with the augmentation of source time-series in a predictive time-series model for estimating directed causal relationships among fMRI time-series. As a multivariate approach, lsAGC identifies the relationship of the underlying dynamic system in the presence of all other time-series. Here, we examine the ability of lsAGC to accurately identify schizophrenia patients from fMRI data using a subset of 31 subjects from the Centers of Biomedical Research Excellence (COBRE) data repository. We use brain connections estimated by lsAGC as features for classification. After feature extraction, we perform feature selection by Kendall’s tau rank correlation coefficient followed by classification using a support vector machine. For reference, we compare our results with cross-correlation, typically used in the literature as a standard measure of functional connectivity, and several other standard methods. Using 30 different training/test data splits with 10-fold cross-validation we obtain mean/std f1-scores of 82.89% ± 17.25% and mean Area Under the receiver operating characteristic Curve (AUC) values of 93.33% ± 12.81% across all tested numbers of features for lsAGC, which is significantly better than the results obtained with cross-correlation (AUC=78.33% ± 25.60%, f1-score=66.22% ± 24.73%), and multiple other competing methods, including partial correlation, tangent, precision, and covariance methods. Our results suggest the applicability of lsAGC as a potential imaging biomarker for schizophrenia.
Changes in brain network connectivity can be observed in schizophrenia and other psychiatric diseases. We investigate whether large-scale Extended Granger Causality (lsXGC) can capture such alterations using resting-state fMRI data. Our method utilizes dimension reduction combined with the augmentation of source time-series in a predictive time-series model for estimating directed causal relationships among fMRI time-series. As a multivariate approach, lsXGC identifies the relationship of the underlying dynamic system in the presence of all other time-series. Here, we examine the ability of lsXGC to accurately identify schizophrenia patients from fMRI data using a subset of 31 subjects from the Centers of Biomedical Research Excellence (COBRE) data repository. We use brain connections estimated by lsXGC as features for classification. After feature extraction, we perform feature selection by Kendall’s tau rank correlation coefficient followed by classification using a support vector machine. For reference, we compare our results with cross-correlation, typically used in the literature as a standard measure of functional connectivity, and several other standard methods. Using 100 different training/test data splits with 10-fold cross-validation we obtain mean/std f1-scores of 87.40% ± 19.73% and mean Area Under the receiver operating characteristic Curve (AUC) values of 95.00% ± 13.69% across all tested numbers of features for lsXGC, which is significantly better than the results obtained with cross-correlation (AUC=54.75% ± 30.96%, f1-score=51.10% ± 27.54%), and multiple other competing methods, including partial correlation, tangent, precision, and covariance methods. Our results suggest the applicability of lsXGC as a potential imaging biomarker for schizophrenia.
Deep learning models can be applied successfully to real-world problems; however, training most of these models requires massive data. Recent methods use language and vision, but unfortunately, they rely on datasets that are not usually publicly available. Here we pave the way for further research in the multimodal language-vision domain for radiology. In this paper, we train a representation learning method that uses local and global representations of the language and vision through an attention mechanism and based on the publicly available Indiana University Radiology Report (IU-RR) dataset. Furthermore, we use the learned representations to diagnose five lung pathologies: atelectasis, cardiomegaly, edema, pleural effusion, and consolidation. Finally, we use both supervised and zero-shot classifications to extensively analyze the performance of the representation learning on the IU-RR dataset. Average Area Under the Curve (AUC) is used to evaluate the accuracy of the classifiers for classifying the five lung pathologies. The average AUC for classifying the five lung pathologies on the IU-RR test set ranged from 0.85 to 0.87 using the different training datasets, namely CheXpert and CheXphoto. These results compare favorably to other studies using IU-RR. Extensive experiments confirm consistent results for classifying lung pathologies using the multimodal global-local representations of language and vision information.
Alterations in brain network connectivity play an important role in the pathogenesis of schizophrenia. We investigate whether large-scale Kernelized Granger Causality (lsKGC) can capture such alterations using resting-state fMRI data. Our method utilizes dimension reduction combined with the augmentation of source time-series in a predictive time-series model for estimating directed causal relationships among fMRI time-series. As a multivariate approach, lsKGC identifies the relationship of the underlying dynamic system in the presence of all other time-series. Here, we examine the ability of lsKGC to accurately identify schizophrenia patients from fMRI data using a subset of 31 subjects from the Centers of Biomedical Research Excellence (COBRE) data repository. We use brain connections estimated by lsKGC as features for classification. After feature extraction, we perform feature selection by Kendall’s tau rank correlation coefficient followed by classification using a support vector machine. For reference, we compare our results with cross-correlation, typically used in the literature as a standard measure of functional connectivity, and several other standard methods. Using 100 different training/test data splits with 10-fold cross-validation we obtain mean/std f1-scores of 84.87% ± 19.78% and mean Area Under the receiver operating characteristic Curve (AUC) values of 93.00% ± 16.61% across all tested numbers of features for lsKGC, which is significantly better than the results obtained with cross-correlation (AUC=53.25% ± 29.29%, f1-score=45.03% ± 30.82%), and multiple other competing methods, including partial correlation, tangent, precision, and covariance methods. Our results suggest the applicability of lsKGC as a potential imaging biomarker for schizophrenia.