Automated detection and aggressiveness classification of prostate cancer on Magnetic Resonance Imaging (MRI) can help standardize radiologist interpretations and guide MRI-ultrasound fusion biopsies. Existing automated methods rely on MRI features alone, disregarding histopathology image information, even though histopathology images contain definitive information about the presence, extent, and aggressiveness of cancer. We present ArtHiFy (Artificial Histopathology-style Features), a two-step radiology-pathology fusion model for improving MRI-based prostate cancer detection that leverages generative models in a multimodal co-learning strategy, enabling learning from resource-rich histopathology but prediction from resource-poor MRI alone. In the first step, ArtHiFy generates artificial low-resolution histopathology-style features from MRI using a modified Geometry-consistent Generative Adversarial Network (GcGAN). The generated low-resolution histopathology-style features emphasize cancer regions as having less texture variation, mimicking the densely packed nuclei in real histopathology images. In the second step, ArtHiFy uses these artificial histopathology-style features alongside MR images in a convolutional neural network architecture to detect and localize aggressive and indolent prostate cancer on MRI. ArtHiFy does not require spatial alignment between MRI and histopathology images during training, and it does not require histopathology images at all during inference, making it clinically relevant for MRI-based prostate cancer diagnosis in new patients. We trained ArtHiFy on data from prostate cancer patients who underwent radical prostatectomy and evaluated it on patients with and without prostate cancer. Our experiments showed that ArtHiFy achieved statistically significant improvements in prostate cancer detection over existing top-performing detection models.
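As a minimal sketch of the two-step inference path described above, the snippet below wires a stand-in generator (MRI to histopathology-style features) into a stand-in detection network that consumes MRI concatenated with the generated features. The module names, channel sizes, and three-class output are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch of ArtHiFy-style inference: generate histopathology-style
# features from MRI, then detect cancer from the fused input. Toy networks.
import torch
import torch.nn as nn

class HistoStyleGenerator(nn.Module):
    """Stand-in for the GcGAN generator: MRI -> low-resolution
    histopathology-style feature map (single channel assumed here)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, mri):
        return self.net(mri)

class FusionDetector(nn.Module):
    """Stand-in detector: MRI + generated features -> per-pixel class scores
    (background / indolent / aggressive, assumed)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, mri, histo_style):
        return self.net(torch.cat([mri, histo_style], dim=1))

# At inference time only MRI is needed: the histopathology-style features
# are synthesized on the fly from the MRI itself.
mri = torch.randn(1, 1, 128, 128)            # toy T2-weighted slice
generator, detector = HistoStyleGenerator(), FusionDetector()
with torch.no_grad():
    fake_histo = generator(mri)              # step 1: artificial features
    logits = detector(mri, fake_histo)       # step 2: cancer detection
print(logits.shape)                          # torch.Size([1, 3, 128, 128])
```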
Prostate cancer is the second most lethal cancer in men. Since early diagnosis and treatment can drastically increase the 5-year survival rate of patients to >99%, magnetic resonance imaging (MRI) has been utilized because of its high sensitivity of 88%. However, due to limited access to MRI, transrectal b-mode ultrasound (TRUS)-guided systematic prostate biopsy remains the standard of care for 93% of patients. While ubiquitous, TRUS-guided prostate biopsy lacks lesion targeting, resulting in a sensitivity of 48%. To address this gap, we perform a preliminary study assessing the feasibility of localizing clinically significant cancer on b-mode ultrasound images of the prostate and propose a deep learning framework that learns to distinguish cancer at the pixel level. The proposed framework consists of a convolutional network with deep supervision at multiple scales and a clinical decision module that simultaneously learns to reduce false positive lesion predictions. We evaluated the framework using b-mode TRUS data with pathology confirmation from 330 patients, including 123 patients with pathology-confirmed cancer. Our results demonstrate the feasibility of using b-mode ultrasound images to localize prostate cancer lesions, with a patient-level sensitivity of 68% and specificity of 91%, compared to the reported clinical standard of 48% and 99%. These outcomes show the promise of using a deep learning framework to localize prostate cancer lesions on universally available b-mode ultrasound images, eventually improving prostate biopsy procedures and enhancing clinical outcomes for prostate cancer patients.
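The following is a hedged sketch of the deep supervision idea mentioned above: auxiliary segmentation losses are computed at several decoder scales and combined with the full-resolution loss. The number of scales, loss weights, and binary cross-entropy objective are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of deep supervision across scales (assumed weights/scales).
import torch
import torch.nn.functional as F

def deep_supervision_loss(outputs, target, weights=(1.0, 0.5, 0.25)):
    """outputs: list of logits at decreasing resolutions, each (N, 1, h, w);
    target: full-resolution binary cancer mask (N, 1, H, W)."""
    loss = 0.0
    for w, logits in zip(weights, outputs):
        # downsample the label map to match each auxiliary output
        t = F.interpolate(target, size=logits.shape[-2:], mode="nearest")
        loss = loss + w * F.binary_cross_entropy_with_logits(logits, t)
    return loss

# toy example with three output scales
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
outputs = [torch.randn(2, 1, s, s) for s in (64, 32, 16)]
print(deep_supervision_loss(outputs, target))
```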
The alignment of MRI and ultrasound images of the prostate is crucial for detecting prostate cancer during biopsies and directly affects the accuracy of prostate cancer diagnosis. However, because of the low signal-to-noise ratio of ultrasound images and the differing imaging characteristics of the prostate between MRI and ultrasound, it is challenging to align the two modalities efficiently and accurately. This study presents an effective affine transformation method that automatically registers prostate MRI and ultrasound images. In real-world clinical practice, it may increase the effectiveness of prostate cancer biopsies and the accuracy of prostate cancer diagnosis.
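As a rough illustration of affine registration in this setting, the sketch below optimizes a 2x3 affine matrix that warps a moving MRI slice onto a fixed ultrasound slice using PyTorch's spatial-transformer utilities. The direct optimization, identity initialization, and mean-squared-error similarity are placeholders; the paper's actual network and loss are not shown, and multimodal registration typically uses mutual information or a learned similarity instead of MSE.

```python
# Hedged sketch: fit an affine warp of an MRI slice to an ultrasound slice.
import torch
import torch.nn.functional as F

fixed_us = torch.randn(1, 1, 96, 96)    # toy ultrasound slice
moving_mri = torch.randn(1, 1, 96, 96)  # toy MRI slice

# 2x3 affine parameters, initialized to identity and optimized directly
theta = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], requires_grad=True)
optimizer = torch.optim.Adam([theta], lr=1e-2)

for _ in range(100):
    grid = F.affine_grid(theta, moving_mri.shape, align_corners=False)
    warped = F.grid_sample(moving_mri, grid, align_corners=False)
    loss = F.mse_loss(warped, fixed_us)  # stand-in similarity metric only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```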
Prostate cancer ranks as the second most prevalent cancer among men globally. Accurate segmentation of the prostate and the central gland plays a pivotal role in detecting abnormalities within the prostate, paving the way for early detection of prostate cancer, quantitative analysis, and subsequent treatment planning. Micro-ultrasound (MUS) is a novel ultrasound technique that operates at frequencies above 20 MHz and offers superior resolution compared to conventional ultrasound, making it particularly effective for visualizing fine anatomical structures and pathological changes. In this paper, we leverage deep learning (DL) techniques to segment the prostate and its central gland on micro-ultrasound images, investigating their potential in prostate cancer detection. We trained our DL model on MUS images from 80 patients using five-fold cross-validation, achieving Dice similarity coefficient (DSC) scores of 0.918 and 0.833 and average surface-to-surface distances (SSD) of 1.176 mm and 1.795 mm for the prostate and the central gland, respectively. We further evaluated our method on a publicly available MUS dataset, achieving a DSC score of 0.957 and a Hausdorff distance (HD) of 1.922 mm for prostate segmentation. These results outperform the current state-of-the-art (SOTA).
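For reference, the Dice similarity coefficient reported above can be computed on binary masks as in the short sketch below (numpy only; the toy masks are illustrative, not study data).

```python
# Dice similarity coefficient between a predicted and reference binary mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# toy example: two overlapping squares
a = np.zeros((64, 64)); a[10:40, 10:40] = 1
b = np.zeros((64, 64)); b[15:45, 15:45] = 1
print(round(dice(a, b), 3))
```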
Automated detection of aggressive prostate cancer on Magnetic Resonance Imaging (MRI) can help guide targeted biopsies and reduce unnecessary invasive biopsies. However, automated prostate cancer detection methods often exhibit a sensitivity-specificity trade-off (high sensitivity with low specificity, or vice versa), making them unsuitable for clinical use. Here, we study the utility of integrating prior information about the zonal distribution of prostate cancers with a radiology-pathology fusion model to reliably identify aggressive and indolent prostate cancers on MRI. Our approach has two steps: 1) training a radiology-pathology fusion model that learns pathomic MRI biomarkers (MRI features correlated with pathology features) and uses them to selectively identify aggressive and indolent cancers, and 2) post-processing the predictions using zonal priors in a novel optimized Bayes' decision framework. We compare this approach with other approaches that incorporate zonal priors during training. We use a cohort of 74 radical prostatectomy patients as our training set, and two cohorts of 30 radical prostatectomy patients and 53 biopsy patients as our test sets. Our rad-path-zonal fusion approach achieves cancer lesion-level sensitivities of 0.77±0.29 and 0.79±0.38 and specificities of 0.79±0.23 and 0.62±0.27 on the two test sets, respectively, compared to baseline sensitivities of 0.91±0.27 and 0.94±0.21 and specificities of 0.39±0.33 and 0.14±0.19, verifying its utility in balancing the sensitivity and specificity of lesion detection.
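The sketch below illustrates the general idea of re-weighting per-pixel class probabilities with zone-dependent priors via Bayes' rule and renormalizing. The prior values, two-zone layout, and three-class setup are placeholders for illustration; the paper's optimized decision framework is not reproduced here.

```python
# Hedged sketch: post-process model probabilities with zonal priors.
import numpy as np

def apply_zonal_prior(probs, zone_mask, priors):
    """probs: (C, H, W) softmax outputs; zone_mask: (H, W) ints indexing
    into priors; priors: (num_zones, C) class prior per prostate zone."""
    weighted = probs * priors[zone_mask].transpose(2, 0, 1)  # (C, H, W)
    return weighted / weighted.sum(axis=0, keepdims=True)    # renormalize

# toy inputs: random softmax maps and a random two-zone mask (0=PZ, 1=TZ)
probs = np.random.dirichlet(np.ones(3), size=(32, 32)).transpose(2, 0, 1)
zone_mask = np.random.randint(0, 2, size=(32, 32))
priors = np.array([[0.60, 0.20, 0.20],    # placeholder priors, not the paper's
                   [0.80, 0.15, 0.05]])
posterior = apply_zonal_prior(probs, zone_mask, priors)
print(posterior.sum(axis=0).round(3).min())   # each pixel still sums to ~1.0
```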
Magnetic Resonance Imaging (MRI) is increasingly used to localize prostate cancer, but the subtle differences between cancer and normal tissue render MRI interpretation challenging. Computational approaches have been proposed to detect prostate cancer, yet variation in intensity distribution across different scanners, and even on the same scanner, poses significant challenges to image analysis with computational tools such as deep learning. In this study, we developed a conditional generative adversarial network (GAN) to normalize intensity distributions on prostate MRI. We evaluated our GAN normalization in three ways. First, we qualitatively compared the intensities of GAN-normalized images to the intensity distributions of statistically normalized images. Second, we visually examined the GAN-normalized images to ensure that the appearance of the prostate and other structures was preserved. Finally, we quantitatively evaluated the performance of deep learning holistically nested edge detection (HED) networks in identifying prostate cancer on MRI when trained on raw, statistically normalized, and GAN-normalized images. We found that the detection network trained on GAN-normalized images achieved accuracy and area under the curve (AUC) scores similar to those of the networks trained on raw and statistically normalized images. Conditional GANs may hence be an effective tool for normalizing intensity distributions on MRI and can be used to train downstream deep learning tasks.
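Below is a hedged sketch of a conditional-GAN setup for intensity normalization: a generator maps a raw slice to a normalized version, and a discriminator judges (raw, normalized) pairs. The tiny networks, the L1 term toward a statistically normalized target, and the single generator update are illustrative assumptions rather than the paper's architecture or training schedule.

```python
# Sketch of one generator update in a conditional GAN for MRI normalization.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))            # normalizer
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, stride=2, padding=1))  # patch critic

raw = torch.randn(4, 1, 64, 64)      # raw-intensity slices (toy data)
target = torch.randn(4, 1, 64, 64)   # statistically normalized targets (toy)

fake = G(raw)
d_fake = D(torch.cat([raw, fake], dim=1))   # condition critic on the raw input
# generator objective: fool the critic + stay close to the normalized target
g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) \
         + F.l1_loss(fake, target)
g_loss.backward()
```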
Prostate MRI is increasingly used to help localize and target prostate cancer. Yet the subtle differences in the MRI appearance of cancer compared to normal tissue make MRI interpretation challenging. Deep learning methods hold promise for automating the detection of prostate cancer on MRI; however, such approaches require large, well-curated datasets. Although existing methods that employ fully convolutional neural networks have shown promising results, the lack of labeled data can reduce the generalization of these models. Self-supervised learning provides a promising avenue for learning semantic features from unlabeled data. In this study, we apply the self-supervised strategy of image context restoration to detect prostate cancer on MRI and show that it improves model performance for two different architectures (U-Net and Holistically Nested Edge Detector) compared to their purely supervised counterparts. We train our models on MRI exams from 381 men with biopsy-confirmed cancer. Our study showed that self-supervised models outperform randomly initialized models on an independent test set across a variety of training settings. We performed three experiments, training with 5%, 25%, and 100% of our labeled data, and observed that the U-Net-based pre-training and downstream detection outperformed the other models. The largest improvements appeared when training with 5% of the labeled data, where our self-supervised U-Nets improved the per-pixel Area Under the Curve (AUC) from 0.71 to 0.83 and the Dice similarity coefficient from 0.19 to 0.53. When training with 100% of the data, our U-Net-based pre-training and detection achieved an AUC of 0.85 and a Dice similarity coefficient of 0.57.
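The snippet below sketches the image-context-restoration pretext task in general terms: pairs of randomly chosen patches in a slice are swapped, and a network is trained to restore the original image. The patch size, number of swaps, tiny restorer network, and MSE objective are assumptions for illustration.

```python
# Hedged sketch of the context-restoration pretext task used for pre-training.
import random
import torch
import torch.nn.functional as F

def swap_random_patches(img, patch=16, n_swaps=5):
    """img: (C, H, W) tensor. Returns a copy with patch pairs swapped."""
    corrupted = img.clone()
    _, H, W = img.shape
    for _ in range(n_swaps):
        y1, x1 = random.randrange(H - patch), random.randrange(W - patch)
        y2, x2 = random.randrange(H - patch), random.randrange(W - patch)
        a = corrupted[:, y1:y1 + patch, x1:x1 + patch].clone()
        b = corrupted[:, y2:y2 + patch, x2:x2 + patch].clone()
        corrupted[:, y1:y1 + patch, x1:x1 + patch] = b
        corrupted[:, y2:y2 + patch, x2:x2 + patch] = a
    return corrupted

slice_ = torch.randn(1, 128, 128)             # toy MRI slice
corrupted = swap_random_patches(slice_)
restorer = torch.nn.Sequential(                # stand-in for the U-Net/HED encoder-decoder
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1))
# pretext objective: reconstruct the original slice from the corrupted one
loss = F.mse_loss(restorer(corrupted.unsqueeze(0)), slice_.unsqueeze(0))
```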
The use of magnetic resonance-ultrasound fusion targeted biopsy improves the diagnosis of aggressive prostate cancer. Fusion of ultrasound and magnetic resonance images (MRI) requires accurate prostate segmentations. In this paper, we developed a 2.5-dimensional deep learning model, ProGNet, to segment the prostate on T2-weighted MRI. ProGNet is an optimized U-Net model that weighs three adjacent slices in each MRI sequence to segment the prostate in a 2.5D context. We trained ProGNet on 529 cases in which experts annotated the whole gland (WG) on axial T2-weighted MRI prior to targeted prostate biopsy; in 132 of these cases, experts also annotated the central gland (CG). After five-fold cross-validation, we found that for WG segmentation, ProGNet had a mean Dice similarity coefficient (DSC) of 0.91±0.02, sensitivity of 0.89±0.03, specificity of 0.97±0.00, and accuracy of 0.95±0.01. For CG segmentation, ProGNet achieved a mean DSC of 0.86±0.01, sensitivity of 0.84±0.03, specificity of 0.99±0.01, and accuracy of 0.96±0.01. We then tested the generalizability of the model on the 60-case NCI-ISBI 2013 challenge dataset and on a local, independent 61-case test set. We achieved DSCs of 0.81±0.02 and 0.72±0.02 for WG and CG segmentation on the NCI-ISBI 2013 challenge dataset, and 0.83±0.01 and 0.75±0.01 for WG and CG segmentation on the local dataset. Model performance was excellent and outperformed state-of-the-art U-Net and holistically-nested edge detector (HED) networks on all three datasets.
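As a small illustration of the 2.5D idea described above, the sketch below stacks each slice with its two neighbors as a 3-channel input. Clamping at the volume edges is an assumption for illustration; the paper's exact sampling and weighting of the three slices may differ.

```python
# Hedged sketch: build a 3-channel 2.5D input from adjacent slices.
import torch

def make_25d_stack(volume: torch.Tensor, idx: int) -> torch.Tensor:
    """volume: (D, H, W) T2-weighted volume; returns (3, H, W) for slice idx."""
    D = volume.shape[0]
    neighbors = [max(idx - 1, 0), idx, min(idx + 1, D - 1)]  # clamp at edges
    return volume[neighbors]

volume = torch.randn(24, 256, 256)        # toy volume
x = make_25d_stack(volume, idx=0)
print(x.shape)                            # torch.Size([3, 256, 256])
```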
Prostate magnetic resonance imaging (MRI) allows the detection and treatment planning of clinically significant cancers. However, indolent cancers, e.g., those with Gleason scores of 3+3, are not readily distinguishable on MRI. Thus, an image-guided biopsy is still required before proceeding with radical treatment for aggressive tumors or considering active surveillance for indolent disease. The excision of the prostate as part of radical prostatectomy provides a unique opportunity to correlate whole-mount histology slices with MRI. Through careful spatial alignment of histology slices and MRI, the extent of aggressive and indolent disease can be mapped onto MRI, which allows one to investigate MRI-derived features that might distinguish aggressive from indolent cancers. Here, we introduce a framework for the 3D spatial integration of radiology and pathology images in the prostate. Our approach first uses groupwise registration methods to reconstruct the histology specimen prior to sectioning, incorporating the MRI as a spatial constraint, and then performs a multi-modal 3D affine and deformable alignment between the reconstructed histology specimen and the MRI. We tested our approach on 15 studies and found a Dice similarity coefficient of 0.94±0.02 and a urethra deviation of 1.11±0.34 mm between the histology reconstruction and the MRI. Our robust framework successfully mapped the extent of disease from histology slices onto MRI and created ground truth labels for characterizing aggressive and indolent disease on MRI.
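For orientation, the sketch below shows a generic multimodal 3D affine alignment with SimpleITK, of the kind that could precede a deformable stage. The file names, metric, and optimizer settings are placeholders and not the paper's pipeline; in practice multi-resolution settings, parameter scaling, and a deformable (e.g. B-spline) refinement would follow.

```python
# Hedged sketch: 3D affine registration of a histology reconstruction to MRI.
import SimpleITK as sitk

# Placeholder paths; assumed to be the T2-weighted MRI and the reconstructed
# histology volume resampled into a common space.
fixed = sitk.ReadImage("t2_mri.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("histology_reconstruction.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal metric
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
affine = reg.Execute(fixed, moving)

# Resample histology into MRI space with the estimated affine transform.
warped = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)
```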