Prostate cancer is the second leading cause of cancer death in men. Since early diagnosis and treatment can increase the 5-year survival rate to over 99%, magnetic resonance imaging (MRI) has been adopted for detection owing to its high sensitivity of 88%. However, because access to MRI is limited, transrectal b-mode ultrasound (TRUS)-guided systematic prostate biopsy remains the standard of care for 93% of patients. While ubiquitous, TRUS-guided prostate biopsy lacks lesion targeting, resulting in a sensitivity of only 48%. To address this gap, we conduct a preliminary study assessing the feasibility of localizing clinically significant cancer on b-mode ultrasound images of the prostate and propose a deep learning framework that learns to distinguish cancer at the pixel level. The proposed framework consists of a convolutional network with deep supervision at multiple scales and a clinical decision module that simultaneously learns to reduce false-positive lesion predictions. We evaluated the framework on b-mode TRUS data with pathology confirmation from 330 patients, including 123 patients with pathology-confirmed cancer. Our results demonstrate the feasibility of localizing prostate cancer lesions on b-mode ultrasound images, with a patient-level sensitivity of 68% and specificity of 91%, compared to the reported clinical standard of 48% and 99%. These outcomes show the promise of a deep learning framework for localizing prostate cancer lesions on the universally available b-mode ultrasound images, which could ultimately improve prostate biopsy procedures and enhance clinical outcomes for prostate cancer patients.
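The abstract does not publish implementation details, but the deep-supervision idea it names is well established. Below is a minimal PyTorch sketch of auxiliary prediction heads at several decoder scales whose upsampled outputs enter a weighted pixel-wise loss; the module names, weights, and loss choice are illustrative assumptions, not the authors' code.

```python
import torch.nn as nn
import torch.nn.functional as F

class DeepSupervisionHead(nn.Module):
    """Auxiliary 1x1 conv head producing a cancer-probability logit map at one decoder scale."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feature_map, out_size):
        logits = self.conv(feature_map)
        # Upsample the coarse prediction to full image size so all scales share one target.
        return F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)

def deep_supervision_loss(side_outputs, target, weights=(0.25, 0.5, 1.0)):
    """Weighted sum of pixel-wise BCE losses over decoder scales (coarse to fine).
    The weighting scheme is an assumption; papers commonly emphasize the finest scale."""
    return sum(w * F.binary_cross_entropy_with_logits(o, target)
               for w, o in zip(weights, side_outputs))
```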
Accurate alignment of MRI and ultrasound images of the prostate is crucial for detecting prostate cancer during biopsy and directly affects the accuracy of diagnosis. However, the low signal-to-noise ratio of ultrasound images and the different appearance of the prostate in MRI and ultrasound make efficient and accurate alignment challenging. This study presents an effective affine transformation method that automatically registers prostate MRI and ultrasound images. In real-world clinical practice, the proposed method may increase the effectiveness of prostate cancer biopsies and the accuracy of prostate cancer diagnosis.
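The abstract does not describe the registration algorithm itself. As an illustrative classical baseline for the same task, the sketch below performs intensity-based affine registration with SimpleITK, using a mutual-information metric because MRI and ultrasound intensities are not directly comparable; every parameter setting here is an assumption, not the paper's method.

```python
import SimpleITK as sitk

def affine_register(fixed_path, moving_path):
    # Fixed: prostate MRI volume; moving: TRUS volume (hypothetical file paths).
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    # Mutual information tolerates the very different MRI/US intensity characteristics.
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    # Initialize by aligning the geometric centers of the two volumes.
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    # Resample the ultrasound into the MRI frame for fused display or biopsy guidance.
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```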
KEYWORDS: Denoising, X-rays, Digital breast tomosynthesis, X-ray imaging, Photons, Mammography, Sensors, Physics, Signal to noise ratio, Interference (communication)
Digital breast tomosynthesis (DBT) is becoming increasingly popular for breast cancer screening because of its high depth resolution. It uses a set of low-dose x-ray images, called raw projections, to reconstruct an arbitrary number of planes; these are typically used in further processing steps, such as backprojection, to generate DBT slices or synthetic mammography images. Because of the low x-ray dose, the projections contain a high amount of noise. In this study, we investigate the use of deep learning to remove noise from raw projections, analyzing in particular the impact of the loss function on detail preservation. For this purpose, training data is augmented following the physics-driven approach of Eckert et al. [1], in which an x-ray dose reduction is simulated: first, pixel intensities are converted to the number of photons at the detector; second, Poisson noise is amplified by simulating a decrease in the mean photon arrival rate. The Anscombe transformation [2] is then applied to obtain signal-independent white Gaussian noise. The augmented data is used to train a neural network to estimate the noise. For training, several loss functions are considered, including the mean squared error (MSE), the structural similarity index (SSIM) [3], and the perceptual loss [4]. Furthermore, the ReLU-Loss [1], which is designed specifically for mammogram denoising and prevents the network from overestimating the noise, is investigated. The denoising performance is then compared with respect to the preservation of small microcalcifications. Based on our current measurements, we demonstrate that the ReLU-Loss in combination with SSIM improves the denoising results.
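The dose-reduction augmentation can be sketched directly from the steps named above. The NumPy snippet below converts intensities to photon counts, lowers the mean photon arrival rate, redraws Poisson-distributed counts, and applies the Anscombe transform; the detector gain model and the re-drawing of counts from the already-noisy measurement are simplifying assumptions, not Eckert et al.'s exact procedure.

```python
import numpy as np

def simulate_dose_reduction(projection, gain, dose_factor, rng=None):
    """Simulate a lower-dose raw projection (simplified sketch).

    projection  : raw projection pixel intensities
    gain        : assumed detector gain converting intensities to photon counts
    dose_factor : target dose fraction, e.g. 0.5 for a simulated 50% dose
    """
    rng = np.random.default_rng() if rng is None else rng
    photons = projection / gain                              # intensities -> photon counts
    # Decrease the mean photon arrival rate, then redraw Poisson-distributed counts.
    low_dose = rng.poisson(photons * dose_factor).astype(np.float64)
    return low_dose * gain / dose_factor                     # back to the intensity range

def anscombe(x):
    """Variance-stabilizing Anscombe transform: Poisson -> approx. unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse; an unbiased inverse is preferable in practice."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

After the Anscombe transform, the noise is approximately signal-independent white Gaussian noise, which is what allows a standard residual denoising network to estimate it.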
Mammographic breast density is an important risk marker in breast cancer screening. The ACR BI-RADS guidelines (5th ed.) define four breast density categories that can be dichotomized into the two super-classes "dense" and "not dense". Because the categories are described qualitatively, density assessment by radiologists exhibits high inter-observer variability. To quantify this variability, we compute the overall percentage agreement (OPA) and Cohen's kappa of 32 radiologists with respect to the panel majority vote (PMV) on the two super-classes. Further, we analyze the OPA between individual radiologists and compare the performances to automated assessment by a convolutional neural network (CNN). The evaluation data comprises 600 breast cancer screening examinations with four views each. The CNN takes all views of an examination as input and was trained on a dataset of 7186 cases to output one of the two super-classes. The highest agreement with the PMV achieved by a single radiologist is 99%, the lowest is 71%, and the mean is 89%. The OPA between pairs of individual radiologists ranges from a maximum of 97.5% to a minimum of 50.5%, with a mean of 83%. Cohen's kappa values of radiologists against the PMV range from 0.97 to 0.47, with a mean of 0.77. The presented algorithm reaches an OPA to all 32 radiologists of 88% and a kappa of 0.75. Our results show that inter-observer variability for breast density assessment is high even when the problem is reduced to two categories, and that our convolutional neural network can provide labelling comparable to an average radiologist. We also discuss how to deal with automated classification methods for subjective tasks.
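For reference, the two agreement measures used above are standard and easy to reproduce. The sketch below computes OPA and Cohen's kappa for two raters on the binary super-classes; the label vectors are hypothetical, purely for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def overall_percentage_agreement(labels_a, labels_b):
    """Fraction of cases on which two raters assign the same super-class."""
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    return np.mean(labels_a == labels_b)

# Hypothetical binary labels (1 = "dense", 0 = "not dense"):
panel_majority = np.array([1, 0, 0, 1, 1, 0])
radiologist    = np.array([1, 0, 1, 1, 1, 0])

print(overall_percentage_agreement(panel_majority, radiologist))  # 0.833...
print(cohen_kappa_score(panel_majority, radiologist))             # ~0.667
```

Unlike OPA, Cohen's kappa corrects for the agreement expected by chance, which is why a radiologist can have a high OPA but a noticeably lower kappa when one super-class dominates.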