Presentation + Paper
1 April 2024
One-shot estimation of epistemic uncertainty in deep learning image formation with application to high-quality cone-beam CT reconstruction
Abstract
Deep learning (DL) image synthesis has gained increasing popularity for the reconstruction of CT and cone-beam CT (CBCT) images, especially in combination with physically principled reconstruction algorithms. However, DL synthesis is challenged by the limited generalizability of training data and by noise in the trained model. Epistemic uncertainty has proven to be an effective way of quantifying erroneous synthesis in the presence of out-of-domain features, but its estimation with Monte Carlo (MC) dropout requires a large number of inference runs, which varies as a function of the particular uncertain feature. We propose a single-pass method, the Moment Propagation Model, which approximates MC dropout by analytically propagating statistical moments through the network layers, removing the need for multiple inferences and avoiding estimation errors from insufficient dropout realizations. The proposed approach jointly computes the change of the expectation and the variance of the input (the first two statistical moments) through each network layer, where each moment undergoes a different numerical transformation. The expectation is initialized as the network input; variance is introduced solely at dropout layers, modeled as a Bernoulli process. The method was evaluated using a 3D Bayesian conditional generative adversarial network (GAN) for synthesis of high-quality head MDCT from low-quality intraoperative CBCT reconstructions. Twenty pairs of measured MDCT volumes (120 kV, 400 to 550 mAs) depicting normal head anatomy and simulated CBCT volumes (100 to 120 kV, 32 to 200 mAs) were used for training. Scatter, beam hardening, detector lag, and glare were added to the simulated CBCT and were corrected (assumed unknown) prior to reconstruction. Epistemic uncertainty was estimated for 30 heads (outside the training set) containing simulated brain lesions using the proposed single-pass propagation model, and results were compared to the standard 200-pass MC dropout approach.
Image quality and quantitative accuracy of the estimated uncertainty were further evaluated for lesions and other anatomical sites. The proposed propagation model captured the >2 HU increase in epistemic uncertainty caused by various hyper- and hypodense lesions, with <0.31 HU error over the brain compared to the reference MC dropout result at 200 inference passes and <0.1 HU difference from a converged MC dropout estimate at 100 inference passes. These findings indicate a potential 100-fold increase in the computational efficiency of neural network uncertainty estimation. The proposed moment propagation model achieves accurate quantification of epistemic uncertainty in a single network pass and is an efficient alternative to conventional MC dropout.
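The core idea of the abstract, propagating the mean and variance analytically instead of sampling dropout masks, can be illustrated with a minimal sketch. This is not the authors' 3D GAN implementation; the helper functions `dropout_moments` and `linear_moments` are hypothetical, and the toy network (inverted dropout followed by a linear layer, with elementwise independence assumed) is chosen only so the single-pass moments can be checked against a Monte Carlo dropout reference:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_moments(mu, var, keep_p):
    # Inverted dropout: out = x * b / keep_p, b ~ Bernoulli(keep_p).
    # E[out] = mu;  Var[out] = (var + mu**2 * (1 - keep_p)) / keep_p
    return mu, (var + mu**2 * (1.0 - keep_p)) / keep_p

def linear_moments(mu, var, W, b):
    # Linear layer: the mean maps through W; the variance maps
    # through W**2 (assumes elementwise-independent inputs).
    return W @ mu + b, (W**2) @ var

# Toy network: dropout -> linear, with a deterministic input,
# so all variance enters at the dropout layer (Bernoulli process).
keep_p = 0.8
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
x = rng.normal(size=4)

# Single analytic pass (the moment-propagation estimate).
mu, var = dropout_moments(x, np.zeros(4), keep_p)
mu, var = linear_moments(mu, var, W, b)

# Monte Carlo dropout reference: many stochastic realizations.
n = 200_000
mask = rng.random((n, 4)) < keep_p
samples = (x * mask / keep_p) @ W.T + b

print(np.abs(mu - samples.mean(axis=0)).max())   # small
print(np.abs(var - samples.var(axis=0)).max())   # small
```

The single pass reproduces the MC dropout mean and variance without the 200 stochastic inferences; nonlinear layers (e.g., ReLU) would require their own moment transformations, which this sketch omits.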
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Stephen Z. Liu, Prasad Vagdargi, Craig K. Jones, Mark Luciano, William S. Anderson, Patrick A. Helm, Ali Uneri, Jeffrey H. Siewerdsen, Wojciech Zbijewski, and Alejandro Sisniega "One-shot estimation of epistemic uncertainty in deep learning image formation with application to high-quality cone-beam CT reconstruction", Proc. SPIE 12925, Medical Imaging 2024: Physics of Medical Imaging, 129251E (1 April 2024); https://doi.org/10.1117/12.3006934
KEYWORDS
Cone beam computed tomography
Education and training
Brain
Deep learning
Computed tomography
Data modeling
Error analysis