We developed a 3D-image-based unsupervised prediction model, called vox2pred, for predicting the progression of pulmonary diseases based on a conditional generative adversarial network (cGAN). The vox2pred model consists of a time generator, composed of an encoding convolutional network and a fully connected prediction network, and a discriminator network. The time generator is trained to generate the progression time from the chest 3D CT image volume of each patient. The discriminator is a patch-based 3D-convolutional network that is trained to distinguish "predicted pairs" of a chest CT image volume and a predicted progression time from "true pairs" of the chest CT image volume and the corresponding observed progression time of the patient. For a pilot evaluation, we retrospectively collected high-resolution chest CT images of 141 patients with coronavirus disease 2019 (COVID-19). The progression predictions of the vox2pred model for these patients were compared with those of existing clinical prognostic biomarkers by use of a two-sided t-test with bootstrapping. The concordance index (C-index) and the relative absolute error (RAE) were used as measures of prediction performance. The bootstrap evaluation yielded C-index and RAE values of 87.4% and 18.5%, respectively, for the vox2pred model, compared with 62.4% and 51.8% for visual assessment of the CT images in terms of a total severity score, and 64.7% and 51.3% for the total severity score for crazy paving and consolidation (CPC). The improvement in progression-prediction accuracy achieved by the vox2pred model was statistically significant (p < 0.0001), indicating the high effectiveness of vox2pred as a prediction model for pulmonary disease progression in chest CT.
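To make the described architecture concrete, the sketch below outlines, in PyTorch, a time generator (an encoding 3D-convolutional network followed by a fully connected prediction network) and a patch-based 3D-convolutional discriminator that scores (CT volume, progression time) pairs. This is a minimal illustration under our own assumptions: the layer counts, channel widths, volume size of 64^3, and the broadcasting of the scalar time as an extra conditioning channel are placeholders, not the authors' actual configuration or training code.

```python
# Minimal sketch of a vox2pred-style generator/discriminator pair.
# All hyperparameters and design details here are illustrative assumptions.
import torch
import torch.nn as nn


class TimeGenerator(nn.Module):
    """Encoding 3D-convolutional network followed by a fully connected
    prediction network that maps a chest CT volume to a progression time."""

    def __init__(self, in_channels: int = 1, base_channels: int = 16):
        super().__init__()

        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm3d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.encoder = nn.Sequential(
            block(in_channels, base_channels),
            block(base_channels, base_channels * 2),
            block(base_channels * 2, base_channels * 4),
            block(base_channels * 4, base_channels * 8),
            nn.AdaptiveAvgPool3d(1),       # global pooling -> one feature vector
        )
        self.predictor = nn.Sequential(    # fully connected prediction network
            nn.Flatten(),
            nn.Linear(base_channels * 8, 64),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(64, 1),
            nn.Softplus(),                 # keep the predicted time positive
        )

    def forward(self, ct_volume: torch.Tensor) -> torch.Tensor:
        return self.predictor(self.encoder(ct_volume))


class PatchDiscriminator3D(nn.Module):
    """Patch-based 3D-convolutional discriminator that scores pairs of a CT
    volume and a progression time (predicted or observed)."""

    def __init__(self, in_channels: int = 1, base_channels: int = 16):
        super().__init__()

        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.net = nn.Sequential(
            block(in_channels + 1, base_channels),   # +1 channel for the time map
            block(base_channels, base_channels * 2),
            block(base_channels * 2, base_channels * 4),
            nn.Conv3d(base_channels * 4, 1, kernel_size=3, padding=1),  # per-patch score map
        )

    def forward(self, ct_volume: torch.Tensor, time: torch.Tensor) -> torch.Tensor:
        # Broadcast the scalar progression time over the spatial grid and
        # concatenate it with the CT volume as a conditioning channel.
        time_map = time.view(-1, 1, 1, 1, 1).expand(-1, 1, *ct_volume.shape[2:])
        return self.net(torch.cat([ct_volume, time_map], dim=1))


if __name__ == "__main__":
    # Illustrative forward pass on random data (not real CT volumes).
    generator = TimeGenerator()
    discriminator = PatchDiscriminator3D()
    ct = torch.randn(2, 1, 64, 64, 64)          # batch of two 64^3 volumes
    observed_time = torch.tensor([12.0, 30.0])  # hypothetical observed times (e.g., days)

    predicted_time = generator(ct)                              # time of the "predicted pair"
    fake_scores = discriminator(ct, predicted_time.squeeze(1))  # scores for predicted pairs
    real_scores = discriminator(ct, observed_time)              # scores for true pairs
    print(predicted_time.shape, fake_scores.shape, real_scores.shape)
```

In a cGAN training loop under this sketch, the discriminator would be optimized to separate the true pairs from the predicted pairs, while the time generator would be optimized to make its predicted pairs indistinguishable from the true pairs, typically alongside a direct regression term on the observed progression time.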