Denoising algorithms are sensitive to the noise level and noise power spectrum of the input image, and their performance depends on how well they adapt to both. In the worst case, image structures can be accidentally removed or even added. This holds for analytical image filters, but even more so for deep learning-based denoising algorithms because of their large parameter space and data-driven nature. We propose to use knowledge about the noise distribution of the image at hand to restrict the influence of denoising algorithms to a known and plausible range. Specifically, we exploit the physics of X-ray radiography by considering the Poisson noise distribution and the noise power spectrum of the detector. Through this approach, we limit the change the denoising algorithm can make to the acquired signal to the expected noise range, and thereby prevent the removal or hallucination of small relevant structures. The presented method allows denoising algorithms, and deep learning-based methods in particular, to be used in a controlled and safe fashion in medical X-ray imaging.
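As a minimal sketch of this idea (in NumPy, with illustrative names), the denoised value at each pixel can be clipped to a band around the measured value whose width is set by the expected Poisson standard deviation. The detector noise power spectrum mentioned above is not modeled here, and the 3-sigma bound is an assumed choice, not a value taken from the abstract.

```python
import numpy as np

def clamp_to_noise_band(noisy_counts, denoised_counts, k=3.0):
    """Limit the denoiser's per-pixel change to a plausible noise range.

    Assumes both images are given in detected photon counts, so the
    Poisson standard deviation can be approximated by sqrt(counts).
    The factor k (3 sigma here) is an illustrative choice.
    """
    sigma = np.sqrt(np.maximum(noisy_counts, 1.0))  # Poisson std per pixel
    lower = noisy_counts - k * sigma
    upper = noisy_counts + k * sigma
    # Corrections larger than the expected noise amplitude are clipped,
    # so structures outside the noise band cannot be removed or added.
    return np.clip(denoised_counts, lower, upper)
```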
KEYWORDS: Denoising, Breast, Education and training, Digital breast tomosynthesis, Tomosynthesis, Computer simulations, Deep learning, X-rays, Breast density, Photons
Purpose: High noise levels due to low X-ray dose are a challenge in digital breast tomosynthesis (DBT) reconstruction. Deep learning algorithms show promise in reducing this noise. However, these algorithms can be complex and biased toward certain patient groups if the training data are not representative. It is important to thoroughly evaluate deep learning-based denoising algorithms before they are applied in the medical field to ensure their effectiveness and fairness. In this work, we present a deep learning-based denoising algorithm and examine potential biases with respect to breast density, thickness, and noise level.
Approach: We use physics-driven data augmentation to generate low-dose images from full-field digital mammography and train an encoder-decoder network. The rectified linear unit (ReLU) loss, specifically designed for mammographic denoising, is used as the objective function. To evaluate our algorithm for potential biases, we tested it on both clinical data and simulated data generated with the Virtual Imaging Clinical Trial for Regulatory Evaluation (VICTRE) pipeline. The simulated data allowed us to generate X-ray dose distributions not present in clinical data, enabling us to separate the influence of breast type and X-ray dose on denoising performance.
Results: Our results show that the denoising performance is proportional to the noise level. We found a bias toward certain breast groups on simulated data; on clinical data, however, our algorithm denoises different breast types equally well with respect to the structural similarity index.
Conclusions: We propose a robust deep learning-based denoising algorithm that reduces noise levels in DBT projections and subject it to an extensive evaluation that reveals its strengths and weaknesses.
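The physics-driven augmentation can be sketched as follows, assuming the detector gain that maps pixel values to photon counts is known. Binomial thinning is one standard way to obtain counts with the correct reduced-dose Poisson statistics; the function name, gain handling, and dose factor are illustrative rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_low_dose(pixel_values, gain, dose_factor=0.5):
    """Sketch of a physics-driven dose-reduction augmentation.

    `gain` (assumed detector calibration) maps pixel values to photon
    counts; `dose_factor` is the simulated fraction of the original dose.
    Binomial thinning of Poisson counts yields counts that are again
    Poisson distributed with the reduced mean rate, so the result carries
    the noise statistics of a genuine low-dose acquisition.
    """
    photons = np.rint(np.maximum(pixel_values, 0.0) * gain).astype(np.int64)
    low_dose_photons = rng.binomial(photons, dose_factor)  # thin the counts
    return low_dose_photons / (gain * dose_factor)         # back to pixel scale
```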
KEYWORDS: Denoising, X-rays, Digital breast tomosynthesis, X-ray imaging, Photons, Mammography, Sensors, Physics, Signal to noise ratio, Interference (communication)
Digital breast tomosynthesis (DBT) is becoming increasingly popular for breast cancer screening because of its high depth resolution. It uses a set of low-dose X-ray images, called raw projections, to reconstruct an arbitrary number of planes. These are typically used in further processing steps, such as backprojection, to generate DBT slices or synthetic mammography images. Because of their low X-ray dose, the projections contain a high amount of noise. In this study, the possibility of using deep learning to remove noise from raw projections is investigated. The impact of the loss function on detail preservation is analyzed in particular. For this purpose, training data are augmented following the physics-driven approach of Eckert et al.1 In this method, an X-ray dose reduction is simulated. First, pixel intensities are converted to the number of photons at the detector. Second, Poisson noise is increased by simulating a decrease in the mean photon arrival rate. The Anscombe transformation2 is then applied to obtain signal-independent white Gaussian noise. The augmented data are then used to train a neural network to estimate the noise. Several loss functions are considered for training, including the mean squared error (MSE), the structural similarity index (SSIM),3 and the perceptual loss.4 Furthermore, the ReLU-Loss1 is investigated, which is specifically designed for mammogram denoising and prevents the network from overestimating the noise. The denoising performance is then compared with respect to the preservation of small microcalcifications. Based on our current measurements, we demonstrate that the ReLU-Loss in combination with SSIM improves the denoising results.
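To make the last steps concrete, the sketch below shows the Anscombe transform, which maps Poisson-distributed counts to data with approximately unit-variance Gaussian noise, together with one plausible reading of a ReLU-based penalty on noise overestimation added to an MSE term. This is not the exact ReLU-Loss of Eckert et al., and the SSIM term reported to improve results is omitted; the weighting and function names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def anscombe(counts):
    """Anscombe transform: Poisson counts -> approx. unit-variance Gaussian noise."""
    return 2.0 * torch.sqrt(counts + 3.0 / 8.0)

def relu_overestimation_loss(noise_pred, noise_true, weight=1.0):
    """MSE on the residual noise plus a ReLU penalty on overestimation.

    The penalty is nonzero only where the predicted noise magnitude exceeds
    the simulated noise magnitude, i.e. where signal (for example a small
    microcalcification) would be removed along with the noise.  `weight`
    is an illustrative hyperparameter.
    """
    mse = F.mse_loss(noise_pred, noise_true)
    overshoot = torch.relu(noise_pred.abs() - noise_true.abs())
    return mse + weight * overshoot.mean()
```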