Automated detection and aggressiveness classification of prostate cancer on Magnetic Resonance Imaging (MRI) can help standardize radiologist interpretations and guide MRI-ultrasound fusion biopsies. Existing automated methods rely on MRI features alone, disregarding histopathology image information, even though histopathology images contain definitive information about the presence, extent, and aggressiveness of cancer. We present ArtHiFy (Artificial Histopathology-style Features), a two-step radiology-pathology fusion model for improving MRI-based prostate cancer detection that leverages generative models in a multimodal co-learning strategy, enabling learning from resource-rich histopathology while predicting from resource-poor MRI alone. In the first step, ArtHiFy generates artificial low-resolution histopathology-style features from MRI using a modified Geometry-consistent Generative Adversarial Network (GcGAN). The generated features emphasize cancer regions as having less texture variation, mimicking the densely packed nuclei in real histopathology images. In the second step, ArtHiFy uses these artificial histopathology-style features alongside MR images in a convolutional neural network architecture to detect and localize aggressive and indolent prostate cancer on MRI. ArtHiFy does not require spatial alignment between MRI and histopathology images during training, and it does not require histopathology images at all during inference, making it clinically relevant for MRI-based prostate cancer diagnosis in new patients. We trained ArtHiFy on prostate cancer patients who underwent radical prostatectomy and evaluated it on patients with and without prostate cancer. Our experiments showed that ArtHiFy improved detection performance over existing top-performing prostate cancer detection models, with statistically significant differences.
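As a rough illustration of the two-step design described above, the following minimal PyTorch sketch pairs a toy MRI-to-histopathology-style generator with a fusion detector. All module names, layer sizes, the concatenation-based fusion, and the flip-based geometry-consistency term are illustrative assumptions, not the paper's actual GcGAN or detection architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HistoStyleGenerator(nn.Module):
    """Step 1: map an MRI slice to low-resolution histopathology-style features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),  # histopathology-style feature map
        )

    def forward(self, mri):
        return self.net(mri)

class FusionDetector(nn.Module):
    """Step 2: detect indolent vs. aggressive cancer from MRI + generated features."""
    def __init__(self, num_classes=3):  # e.g., background / indolent / aggressive
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, mri, histo_style):
        # Fuse by channel-wise concatenation; at inference only MRI is needed,
        # since histo_style is generated from the MRI itself.
        return self.net(torch.cat([mri, histo_style], dim=1))

generator, detector = HistoStyleGenerator(), FusionDetector()
mri = torch.randn(1, 1, 224, 224)      # one single-channel MRI slice
histo_style = generator(mri)           # artificial histopathology-style features
logits = detector(mri, histo_style)    # per-pixel detection map

# GcGAN-style geometry consistency: generation should commute with a simple
# geometric transform such as a horizontal flip.
flipped = torch.flip(mri, dims=[3])
geo_loss = F.l1_loss(generator(flipped), torch.flip(histo_style, dims=[3]))
```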
Prostate cancer is the second most lethal cancer in men. Since early diagnosis and treatment can drastically increase the 5-year survival rate of patients to >99%, magnetic resonance imaging (MRI) has been utilized for its high sensitivity of 88%. However, due to limited access to MRI, transrectal b-mode ultrasound (TRUS)-guided systematic prostate biopsy remains the standard of care for 93% of patients. While ubiquitous, TRUS-guided prostate biopsy lacks lesion targeting, resulting in a sensitivity of 48%. To address this gap, we perform a preliminary study assessing the feasibility of localizing clinically significant cancer on b-mode ultrasound images of the prostate, and propose a deep learning framework that learns to distinguish cancer at the pixel level. The framework consists of a convolutional network with deep supervision at multiple scales and a clinical decision module that simultaneously learns to reduce false-positive lesion predictions. We evaluated the framework using b-mode TRUS data with pathology confirmation from 330 patients, including 123 patients with pathology-confirmed cancer. Our results demonstrate the feasibility of using b-mode ultrasound images to localize prostate cancer lesions with a patient-level sensitivity and specificity of 68% and 91%, respectively, compared to the reported clinical standard of 48% and 99%. These outcomes show the promise of using a deep learning framework to localize prostate cancer lesions on universally available b-mode ultrasound images, eventually improving prostate biopsy procedures and enhancing clinical outcomes for prostate cancer patients.
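To make the two named components concrete, here is a hedged PyTorch sketch of a convolutional network with deep supervision at several scales plus a patient-level decision head intended to suppress false positives. All names, layer choices, and the exact loss combination are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.enc3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # One auxiliary pixel-level head per scale (deep supervision).
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (32, 64, 128)])
        # "Clinical decision module" stand-in: a patient-level cancer/no-cancer
        # classifier meant to suppress false-positive lesion maps.
        self.decision = nn.Linear(128, 1)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        f3 = self.enc3(f2)
        size = x.shape[2:]
        # Upsample each scale's prediction back to input resolution.
        maps = [F.interpolate(h(f), size=size, mode="bilinear", align_corners=False)
                for h, f in zip(self.heads, (f1, f2, f3))]
        patient_logit = self.decision(f3.mean(dim=(2, 3)))  # global average pool
        return maps, patient_logit

def loss_fn(maps, patient_logit, pixel_target, patient_target):
    # Deep supervision: supervise every scale's map against the same target,
    # and jointly train the patient-level decision head.
    seg = sum(F.binary_cross_entropy_with_logits(m, pixel_target) for m in maps)
    cls = F.binary_cross_entropy_with_logits(patient_logit, patient_target)
    return seg + cls
```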
The alignment of MRI and ultrasound images of the prostate is crucial for detecting prostate cancer during biopsies, directly affecting the accuracy of prostate cancer diagnosis. However, the low signal-to-noise ratio of ultrasound images and the differing appearance of the prostate on MRI and ultrasound make it challenging to align the two modalities efficiently and accurately. This study presents an effective affine transformation method that automatically registers prostate MRI and ultrasound images. In real-world clinical practice, such a method may increase the effectiveness of prostate cancer biopsies and the accuracy of prostate cancer diagnosis.
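As an illustration only (not the study's method), the sketch below shows one common way to learn automatic affine registration in PyTorch: a small network predicts a 2x3 affine matrix, and affine_grid/grid_sample warp the ultrasound image toward the MRI under a simple correlation-based similarity loss. The module name, layer sizes, and loss are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    """Predict a 2x3 affine matrix from a stacked (MRI, ultrasound) image pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32, 6)
        # Initialize to the identity transform so training starts from
        # "no deformation".
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, mri, us):
        theta = self.fc(self.features(torch.cat([mri, us], dim=1))).view(-1, 2, 3)
        grid = F.affine_grid(theta, us.shape, align_corners=False)
        warped_us = F.grid_sample(us, grid, align_corners=False)
        return warped_us, theta

# Training signal: an intensity-based similarity between MRI and the warped
# ultrasound; a crude negative normalized correlation serves as an example.
def similarity_loss(mri, warped_us):
    m, u = mri.flatten(1), warped_us.flatten(1)
    m = m - m.mean(dim=1, keepdim=True)
    u = u - u.mean(dim=1, keepdim=True)
    return -(m * u).sum(dim=1).div(m.norm(dim=1) * u.norm(dim=1) + 1e-8).mean()
```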