The coronavirus disease 2019 (COVID-19) pandemic had a major impact on global health and was associated with millions of deaths worldwide. During the pandemic, imaging characteristics of chest X-ray (CXR) and chest computed tomography (CT) played an important role in screening, diagnosis, and monitoring of disease progression. Various studies have suggested that quantitative image analysis methods, including artificial intelligence and radiomics, can greatly boost the value of imaging in the management of COVID-19. However, few studies have explored the use of longitudinal multimodal medical images with varying visit intervals for outcome prediction in COVID-19 patients. This study aims to explore the potential of longitudinal multimodal radiomics in predicting the outcome of COVID-19 patients by integrating both CXR and CT images with variable visit intervals through deep learning. A total of 2274 patients who underwent CXR and/or CT scans during disease progression were selected for this study. Of these, 946 patients were treated at the University of Pennsylvania Health System (UPHS), and images of the remaining 1328 patients were acquired at Stony Brook University (SBU) and curated by the Medical Imaging and Data Resource Center (MIDRC). In total, 532 radiomic features were extracted with the Cancer Imaging Phenomics Toolkit (CaPTk) from the lung regions in CXR and CT images at all visits. We employed two commonly used deep learning algorithms to analyze the longitudinal multimodal features and evaluated the predictions using the area under the receiver operating characteristic curve (AUC). Our models achieved testing AUC scores of 0.816 and 0.836, respectively, for the prediction of mortality.
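The abstract above does not specify how variable visit intervals were encoded, so the following is only a minimal sketch of one common way to handle irregularly timed longitudinal radiomic features: summarize each patient's visit sequence into a fixed-length vector (first visit, last visit, per-feature rate of change over the observed time span) and evaluate a classifier by AUC. All data here are synthetic, and the summary scheme is an assumption for illustration, not the paper's actual deep learning pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def summarize_visits(visits, times):
    """Collapse a variable-length visit sequence into a fixed vector.

    visits: (n_visits, n_features) radiomic features per visit
    times:  (n_visits,) days since first visit (variable intervals)
    Returns first visit, last visit, and per-feature slope over time.
    """
    first, last = visits[0], visits[-1]
    span = max(times[-1] - times[0], 1.0)      # guard against zero span
    slope = (last - first) / span
    return np.concatenate([first, last, slope])

# Synthetic cohort: each patient has 2-5 visits of 8 radiomic features.
X, y = [], []
for _ in range(400):
    n_visits = rng.integers(2, 6)
    times = np.sort(rng.uniform(0, 30, size=n_visits))
    outcome = rng.integers(0, 2)
    # The outcome shifts the feature trajectory slightly (toy signal).
    visits = rng.normal(outcome * 0.5, 1.0, size=(n_visits, 8))
    X.append(summarize_visits(visits, times))
    y.append(outcome)
X, y = np.array(X), np.array(y)

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
auc = roc_auc_score(y[300:], clf.predict_proba(X[300:])[:, 1])
print(f"test AUC: {auc:.3f}")
```

In practice, sequence models (e.g., recurrent networks fed the inter-visit time delta as an extra input) replace the hand-crafted summary, but the fixed-vector reduction above keeps the irregular-interval issue visible in a few lines.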
Purpose: Rapid prognostication of COVID-19 patients is important for efficient resource allocation. We evaluated the relative prognostic value of baseline clinical variables (CVs), quantitative human-read chest CT (qCT), and AI-read chest radiograph (qCXR) airspace disease (AD) in predicting severe COVID-19.
Approach: We retrospectively selected 131 COVID-19 patients (SARS-CoV-2 positive, March to October 2020) at a tertiary hospital in the United States who underwent chest CT and CXR within 48 h of initial presentation. CVs included patient demographics and laboratory values; imaging variables included qCT volumetric percentage AD (POv) and qCXR area-based percentage AD (POa), assessed by a deep convolutional neural network. Our prognostic outcome was the need for ICU admission. We compared the performance of three logistic regression models: one using CVs known to be associated with prognosis (model I), one using a dimension-reduced set of the best predictor variables (model II), and one using only age and AD (model III).
Results: 60/131 patients required ICU admission, whereas 71/131 did not. Model I performed the poorest (AUC = 0.67 [0.58 to 0.76]; accuracy = 77%). Model II performed the best (AUC = 0.78 [0.71 to 0.86]; accuracy = 81%). Model III performed equivalently (AUC = 0.75 [0.67 to 0.84]; accuracy = 80%). Both models II and III outperformed model I (AUC difference = 0.11 [0.02 to 0.19], p = 0.01, and 0.08 [0.01 to 0.15], p = 0.04, respectively). Results for models II and III did not change significantly when POv was replaced by POa.
Conclusions: Severe COVID-19 can be predicted at initial diagnosis using only age and quantitative AD imaging metrics, which outperformed the full set of CVs. Moreover, AI-read qCXR can replace qCT metrics without loss of prognostic performance, promising more resource-efficient prognostication.
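The model comparison above hinges on confidence intervals for an AUC difference. The abstract does not state how those intervals were computed, so this sketch uses a plain bootstrap on a synthetic cohort to show the shape of the analysis: fit two logistic regressions (clinical variables vs. age + airspace disease), then resample the test set to get an interval for the AUC gap. The variable names and signal strengths are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy cohort: age, one lab value, and quantitative airspace disease (AD).
n = 300
age = rng.normal(60, 12, n)
lab = rng.normal(0, 1, n)
ad = rng.normal(0, 1, n)
# ICU admission driven mostly by age and AD (toy signal, not real data).
logit = 0.04 * (age - 60) + 1.2 * ad + 0.2 * lab
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

def heldout_probs(cols):
    """Fit on the first 200 patients, return probabilities for the rest."""
    X = np.column_stack(cols)
    clf = LogisticRegression(max_iter=1000).fit(X[:200], y[:200])
    return clf.predict_proba(X[200:])[:, 1]

p_cv = heldout_probs([age, lab])   # analogue of model I: clinical variables
p_ad = heldout_probs([age, ad])    # analogue of model III: age + AD only
y_test = y[200:]

def bootstrap_auc_diff(p1, p2, y_true, n_boot=500):
    """95% bootstrap interval for AUC(p1) - AUC(p2) on the same test set."""
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(set(y_true[idx])) < 2:   # need both classes for AUC
            continue
        diffs.append(roc_auc_score(y_true[idx], p1[idx])
                     - roc_auc_score(y_true[idx], p2[idx]))
    return np.percentile(diffs, [2.5, 97.5])

lo, hi = bootstrap_auc_diff(p_ad, p_cv, y_test)
print(f"AUC(age+AD) - AUC(CVs): 95% CI [{lo:.2f}, {hi:.2f}]")
```

If the interval excludes zero, the age + AD model is credibly better on this cohort; published comparisons often use the DeLong test for the same purpose, which the bootstrap approximates here.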
Background: Imaging biomarkers derived from quantitative computed tomography (QCT) enable quantification of lung disease and distinction of disease phenotypes. However, variability in radiomic features can affect their diagnostic and prognostic significance. We aimed to assess the effect of CT image reconstruction parameters on radiomic features in the PROSPR lung cancer screening cohort (1), thereby identifying imaging features that are more robust across heterogeneous CT images. Methods: CT feature extraction was performed using lattice-based texture estimation on data (n = 330) collected from a single CT scanner (Siemens Healthineers, Erlangen, Germany) with two different image reconstruction kernels: medium (I30f) and sharp (I50f). A total of 26 features from three major statistical approaches, gray-level histogram, co-occurrence, and run-length, were computed. Features were calculated and averaged within a range of window sizes (W) from 4 mm to 20 mm. Furthermore, unsupervised hierarchical clustering was applied to the features to identify distinct phenotypic patterns for the two kernels. Differences across phenotypes by age, sex, and Lung-RADS were assessed. Results: The results showed two distinct subtypes for the two kernels across different window sizes. The heat map generated from the radiomic features of the sharper kernel showed more distinct patterns than that of the medium kernel. The extracted features across the two kernels and their corresponding clusters were compared with respect to clinical features. Conclusions: Our results suggest that a set of radiomic features across different kernels can distinguish distinct phenotypes and can also help assess the sensitivity of texture analysis to CT acquisition variability, supporting better characterization of CT heterogeneity.
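To make the lattice-based texture pipeline concrete, here is a minimal sketch under stated assumptions: compute two co-occurrence (GLCM) features, contrast and homogeneity, in non-overlapping lattice windows, average them per image, and apply hierarchical clustering to see whether the two reconstruction "kernels" separate. The images are synthetic (the sharp kernel is simulated as added high-frequency noise), the window is in pixels rather than millimeters, and only 2 of the paper's 26 features are shown; none of this reproduces the study's actual implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

def glcm_features(patch, levels=8):
    """Contrast and homogeneity from a horizontal-offset GLCM."""
    q = np.clip((patch * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                      # count horizontal pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = (glcm * (i - j) ** 2).sum()
    homogeneity = (glcm / (1 + np.abs(i - j))).sum()
    return contrast, homogeneity

def lattice_features(image, window):
    """Average texture features over non-overlapping lattice windows."""
    feats = []
    for r in range(0, image.shape[0] - window + 1, window):
        for c in range(0, image.shape[1] - window + 1, window):
            feats.append(glcm_features(image[r:r + window, c:c + window]))
    return np.mean(feats, axis=0)

# Two synthetic "kernels": the sharp kernel adds high-frequency noise.
images, labels = [], []
for kernel, noise in [(0, 0.05), (1, 0.4)]:   # medium vs sharp (toy)
    for _ in range(20):
        base = rng.uniform(0.3, 0.7) * np.ones((32, 32))
        images.append(np.clip(base + rng.normal(0, noise, (32, 32)), 0, 1))
        labels.append(kernel)

F = np.array([lattice_features(im, window=8) for im in images])
clusters = fcluster(linkage(F, method="ward"), t=2, criterion="maxclust")
print("cluster assignments:", clusters)
```

Because contrast rises sharply with the simulated noise, Ward clustering on the two averaged features separates the kernels cleanly, which mirrors the abstract's finding that kernel choice alone can drive distinct radiomic subtypes.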