The Gleason grade is the most common architectural and morphological assessment of prostate cancer severity and prognosis. Numerous algorithms have been developed to approximate and duplicate the Gleason scoring system, mostly in standard H&E brightfield microscopy. Immunofluorescence (IF) image analysis of tissue pathology has recently been proven to be robust in developing prognostic assessments of disease, particularly in prostate cancer. We leverage a method of segmenting gland rings in IF images for predicting the pathological Gleason grade, both the clinical and the image-specific grades, which may not necessarily be the same. We combine these measures with nuclear-specific characteristics. In 324 images from 324 patients, our individual features correlate well univariately with the Gleason grades, and in a multivariate setting have an accuracy of 85% in predicting the Gleason grade. Additionally, these features correlate strongly with clinical progression outcomes [concordance index (CI) of 0.89], significantly outperforming the clinical Gleason grades (CI of 0.78). Finally, in multivariate models for multiple prostate cancer progression endpoints, replacing the Gleason grade with these features yields equivalent or improved performance. This work presents the first assessment of morphological gland unit features from IF images for predicting the Gleason grade, and even replacing it in prostate cancer prognostics.
The Gleason score is the most common architectural and morphological assessment of prostate cancer severity and prognosis. Numerous quantitative techniques have been developed to approximate and duplicate the Gleason scoring system, most of them in standard H&E brightfield microscopy. Immunofluorescence (IF) image analysis of tissue pathology has recently been proven to be extremely valuable and robust in developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as characterization of glandular architecture in discrete gland rings. In this work we leverage a new method of segmenting gland rings in IF images for predicting the pathological Gleason grade, both the clinical and the image-specific grades, which may not necessarily be the same. We combine these measures with nuclear-specific characteristics as assessed by the minimum spanning tree (MST) algorithm. Our individual features correlate well univariately with the Gleason grades, and in a multivariate setting have an accuracy of 85% in predicting the Gleason grade. Additionally, these features correlate strongly with clinical progression outcomes [concordance index (CI) of 0.89], significantly outperforming the clinical Gleason grades (CI of 0.78). This work presents the first assessment of morphological gland unit features from IF images for predicting the Gleason grade.
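The concordance indices quoted above summarize how well a continuous feature ranks patients by time to progression. As an illustration only, the sketch below computes a CI for a hypothetical gland-ring feature against synthetic time-to-event data using the lifelines package; neither the library nor the data reflect the tooling used in these studies.

```python
# Minimal sketch of computing a concordance index (CI) for a continuous image
# feature against time-to-progression data. The lifelines package and the
# synthetic data are illustrative assumptions, not the original study's tooling.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 324                                    # number of patients, as in the abstract
feature = rng.normal(size=n)               # hypothetical gland-ring feature score
time_to_event = np.exp(2.0 - feature + rng.normal(scale=0.5, size=n))
observed = rng.random(n) < 0.6             # True = progression observed, False = censored

# Higher feature values are assumed to indicate higher risk (shorter times),
# so the feature is negated before being passed as the predicted score.
ci = concordance_index(time_to_event, -feature, observed)
print(f"concordance index: {ci:.2f}")
```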
Immunofluorescent (IF) image analysis of tissue pathology has proven to be extremely valuable and robust in developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as characterization of glandular architectures in discrete gland rings. However, while biomarker and glandular morphometric features have been combined as separate predictors in multivariate models, there is a lack of integrative features for biomarkers co-localized within specific morphological sub-types; for example, the evaluation of androgen receptor (AR) expression within Gleason 3 glands only. In this work we propose a novel framework employing multiple techniques to generate integrated metrics of morphology and biomarker expression. We demonstrate the utility of the approaches in predicting clinical disease progression in images from 326 prostate biopsies and 373 prostatectomies. Our proposed integrative approaches yield significant improvements over existing IF image feature metrics. This work presents some of the first algorithms for generating innovative characteristics in tissue diagnostics that integrate co-localized morphometry and protein biomarker expression.
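As a concrete illustration of the kind of integrative metric described, the sketch below summarizes biomarker expression restricted to a morphological sub-type mask (for example, AR intensity within Gleason 3 gland regions). The arrays, mask, and summary statistics are assumptions for illustration, not the framework's actual features.

```python
# Sketch of an integrative feature: biomarker expression co-localized within a
# morphological sub-type. The arrays are hypothetical stand-ins for an AR channel
# and a binary mask of Gleason 3 gland regions produced by a segmentation step.
import numpy as np

def colocalized_expression(biomarker: np.ndarray, subtype_mask: np.ndarray) -> dict:
    """Summarize biomarker intensity restricted to a morphological sub-type."""
    pixels = biomarker[subtype_mask]
    if pixels.size == 0:
        return {"mean": 0.0, "p90": 0.0, "area_fraction": 0.0}
    return {
        "mean": float(pixels.mean()),
        "p90": float(np.percentile(pixels, 90)),
        "area_fraction": float(subtype_mask.mean()),
    }

ar_channel = np.random.rand(512, 512)           # hypothetical AR IF intensity image
gleason3_mask = np.zeros((512, 512), bool)
gleason3_mask[100:300, 150:350] = True          # hypothetical Gleason 3 gland region
print(colocalized_expression(ar_channel, gleason3_mask))
```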
Surface electromyographic (SEMG) signals are commonly used as control signals in prosthetic and orthotic devices. Superficial electrodes are placed on the subject's skin to acquire muscular activity through this signal. The muscle contraction episode is then responsible for activating and deactivating these devices. Nevertheless, there is no "gold standard" for detecting muscle contraction, leading to delayed responses and false and missed detections. This fact motivated us to propose a new approach that compares a smoothed version of the SEMG signal with a fixed threshold in order to detect muscle contraction episodes. After preprocessing the SEMG signal, the smoothed version is obtained using a moving average filter, for which three different window lengths have been evaluated. The detector was tuned by maximizing sensitivity and specificity, and evaluated using SEMG signals obtained from the anterior tibial and gastrocnemius muscles, recorded during walking in five subjects. Compared with traditional detection methods, we obtain a reduction of 3 ms in the detection delay and an increase of 8% in sensitivity, but a decrease of 15% in specificity. Future work is directed to the inclusion of a temporal threshold (a double-threshold approach) to minimize false detections and reduce detection delays.
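A minimal sketch of the described single-threshold detector follows: the SEMG signal is rectified, smoothed with a moving-average window, and compared against a fixed threshold. The window length, threshold value, and simulated signal are illustrative assumptions, not the tuned parameters from the study.

```python
# Minimal sketch of the single-threshold detector: rectify the SEMG, smooth with
# a moving-average window, and flag contraction wherever the smoothed envelope
# exceeds a fixed threshold. Parameters here are assumptions, not tuned values.
import numpy as np

def detect_contractions(semg: np.ndarray, fs: float,
                        window_ms: float = 50.0, threshold: float = 0.1) -> np.ndarray:
    rectified = np.abs(semg - semg.mean())                   # remove offset, rectify
    win = max(1, int(fs * window_ms / 1000.0))
    kernel = np.ones(win) / win
    envelope = np.convolve(rectified, kernel, mode="same")   # moving-average smoothing
    return envelope > threshold                              # True during contraction

fs = 1000.0                                                  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
burst = (t > 0.5) & (t < 1.2)                                # simulated contraction episode
semg = 0.02 * np.random.randn(t.size) + burst * 0.3 * np.random.randn(t.size)
active = detect_contractions(semg, fs)
print("contraction samples:", int(active.sum()))
```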
Tissue segmentation is one of the key preliminary steps in the morphometric analysis of tissue architecture. In multi-channel
immunofluorescent biomarker images, the primary segmentation steps consist of segmenting the nuclei
(epithelial and stromal) and epithelial cytoplasm from 4',6-diamidino-2-phenylindole (DAPI) and cytokeratin 18 (CK18)
biomarker images respectively. The epithelial cytoplasm segmentation can be very challenging due to variability in
cytoplasm morphology and image staining. A robust and adaptive segmentation algorithm was developed both to delineate the
boundaries and to resolve the thin gaps that separate the epithelial unit structures. This
paper discusses novel methods that were developed for adaptive segmentation of epithelial cytoplasm and separation of
epithelial units. The adaptive segmentation was performed by computing the non-epithelial background texture of every
CK18 biomarker image. The epithelial unit separation was performed using two complementary techniques: a marker
based, center-initialized watershed transform and a boundary initialized fast marching-watershed segmentation. The
adaptive segmentation algorithm was tested on 926 CK18 biomarker biopsy images (326 patients) with limited
background noise and 1030 prostatectomy images (374 patients) with noisy to very noisy background. The segmentation
performance was measured using two different methods, namely stability and background textural metrics. It was
observed that the database of 1030 noisy prostatectomy images had lower mean scores (across the stability metric and three
background texture performance metrics) than the biopsy dataset of 926 images that had limited background
noise. The average of all four performance metrics yielded 94.32% accuracy for prostatectomy images compared to
99.40% accuracy for biopsy images.
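The sketch below illustrates the marker-based, center-initialized watershed idea used for separating touching epithelial units, applied to a placeholder CK18-like image with scikit-image. Otsu thresholding stands in for the adaptive background-texture thresholding described above, so this is a simplified approximation rather than the published algorithm.

```python
# Simplified sketch of marker-based, center-initialized watershed separation of
# touching epithelial units. Otsu thresholding replaces the paper's adaptive
# background-texture thresholding; the image is a placeholder for a CK18 channel.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

ck18 = ndi.gaussian_filter(np.random.rand(256, 256), 8)     # placeholder CK18 intensity image
mask = ck18 > threshold_otsu(ck18)                          # foreground epithelial cytoplasm

distance = ndi.distance_transform_edt(mask)                 # distance to background
peaks = peak_local_max(distance, min_distance=10, labels=mask.astype(int))  # unit centers
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = watershed(-distance, markers, mask=mask)           # split touching epithelial units
print("epithelial units found:", labels.max())
```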
Accurate segmentation of overlapping nuclei is essential in determining nuclei count and evaluating the sub-cellular
localization of protein biomarkers in image cytometry and histology. Current cellular segmentation algorithms generally
lack fast and reliable methods for disambiguating clumped nuclei. In immuno-fluorescence segmentation, solutions to
challenges including nuclei misclassification, irregular boundaries, and under-segmentation require reliable separation of
clumped nuclei. This paper presents a fast and accurate algorithm for joint segmentation of cellular cytoplasm and nuclei
incorporating procedures for reliably separating overlapping nuclei. The algorithm utilizes a combination of ideas and is
a significant improvement on state-of-the-art algorithms for this application. First, an adaptive process that includes top-hat
filtering, blob detection and distance transforms estimates the inverse illumination field and corrects for intensity
non-uniformity. Minimum-error-thresholding based binarization augmented by statistical stability estimation is applied
prior to seed-detection constrained by a distance-map-based scale-selection to identify candidate seeds for nuclei
segmentation. The nuclei clustering step also incorporates error estimation based on statistical stability. This enables the
algorithm to perform localized error correction. Final steps include artifact removal and reclassification of nuclei objects
near the cytoplasm boundary as epithelial or stroma. Evaluation using 48 realistic phantom images with known ground-truth
shows overall segmentation accuracy exceeding 96%. It significantly outperformed two state-of-the-art algorithms
in clumped nuclei separation. Tests on 926 prostate biopsy images (326 patients) show that the improved segmentation
increases the predictive power of nuclei architecture features based on the minimum spanning tree
algorithm. The algorithm has been deployed in a large scale pathology application.
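As an illustration of the intensity non-uniformity correction idea described above, the sketch below estimates a smooth illumination field from a placeholder nuclei channel using white top-hat filtering and heavy Gaussian smoothing, then divides it out. This is a simplified stand-in for the adaptive process in the paper, with all parameters assumed.

```python
# Simplified sketch of flat-field (intensity non-uniformity) correction: estimate
# a smooth illumination field from the nuclei channel and divide it out before
# thresholding. White top-hat filtering with a large footprint approximates the
# adaptive process described in the abstract; parameters are assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import white_tophat, disk

dapi = ndi.gaussian_filter(np.random.rand(256, 256), 3)     # placeholder DAPI channel

# Nuclei as small bright structures on a slowly varying background.
foreground = white_tophat(dapi, footprint=disk(15))

# Smooth background / illumination field: heavy Gaussian blur of the residual.
illumination = ndi.gaussian_filter(dapi - foreground, sigma=50)
illumination = np.clip(illumination, 1e-6, None)

corrected = dapi / illumination                              # flat-field corrected image
corrected /= corrected.max()                                 # rescale to [0, 1]
print("corrected intensity range:", corrected.min(), corrected.max())
```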
Performance assessment of segmentation algorithms typically compares segmentation outputs to a handful of manually obtained
ground-truth images. This assumes that the ground-truth images are accurate, reliable and representative of the entire image set.
In image cytometry, few ground-truth images are typically used because of the difficulty of manually segmenting images
with large numbers of small objects. This violates the aforementioned assumptions. Automated methods of segmentation
evaluation without ground-truth are needed. We describe a stable and reliable method for evaluating segmentation
performance without ground-truth. Segmentation errors are either statistical or structural. Statistical errors reflect failure
to account for random variations in pixel values while structural errors result from inadequate image description models.
As statistical errors predominate in image cytometry, our method focuses on statistical stability assessment. For any image-algorithm
pair, we obtain multiple perturbed variants of the image by applying slight linear blur. We segment the image
and its variants with the algorithm and determine the match between the output from the image and the output from its
variants. We utilized 48 realistic phantom images with known ground-truth and four segmentation algorithms with large
performance differences to assess the efficacy of the method. For each algorithm-image pair, we obtained a ground truth
match score and four different statistical validation scores. Analyses show that statistical validation and ground-truth
validation scores correlate in over 96% of cases. The statistical validation approach reduces segmentation review time
and effort by over 99% and enables assessment of segmentation quality long after an algorithm has been deployed.
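A minimal sketch of the stability assessment follows: the same algorithm segments the original image and slightly blurred variants, and the agreement between the outputs is scored. A Dice coefficient over binary masks is used here as the match score, which is an assumption; the paper's matching criterion may differ.

```python
# Minimal sketch of stability-based evaluation without ground truth: segment the
# original image and slightly blurred variants with the same algorithm, then
# score how well the outputs agree (Dice coefficient used here as an assumption).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def segment(img: np.ndarray) -> np.ndarray:
    """Stand-in segmentation algorithm: global Otsu threshold."""
    return img > threshold_otsu(img)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

image = ndi.gaussian_filter(np.random.rand(256, 256), 4)     # placeholder image
reference = segment(image)

# Perturb with slight linear (Gaussian) blur at a few strengths and re-segment.
scores = [dice(reference, segment(ndi.gaussian_filter(image, s)))
          for s in (0.5, 1.0, 1.5)]
stability = float(np.mean(scores))
print(f"stability score: {stability:.3f}")
```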
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial
progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In
applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and
stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei.
Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing
algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and
cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes
top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse
illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding
based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based
scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local
maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an
artifact removal process specifically targeted at lumens and other problematic structures, and a systematic decision process
to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated
using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The
algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90%
of cases. The algorithm has now been deployed in a high-volume histology analysis application.
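The sketch below illustrates the seed-detection step: Laplacian-of-Gaussian blob detection on a placeholder nuclei channel, with candidate seeds retained only where the detected scale is consistent with the distance transform of the binary mask. This simplifies the scale selection described above and uses assumed parameters.

```python
# Simplified sketch of nuclei seed detection: Laplacian-of-Gaussian blob
# detection, keeping only blobs whose scale roughly matches the local object
# radius given by the distance transform. Parameters are assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import blob_log
from skimage.filters import threshold_otsu

nuclei = ndi.gaussian_filter(np.random.rand(256, 256), 3)   # placeholder nuclei channel
mask = nuclei > threshold_otsu(nuclei)
dist = ndi.distance_transform_edt(mask)

# Candidate seeds from LoG blob detection: one (row, col, sigma) triple per blob.
blobs = blob_log(nuclei, min_sigma=2, max_sigma=10, threshold=0.02)

seeds = []
for y, x, sigma in blobs:
    r, c = int(y), int(x)
    # Keep seeds inside the mask whose detected scale is plausible locally.
    if mask[r, c] and sigma <= dist[r, c] * 1.5:
        seeds.append((r, c))
print("candidate nuclei seeds:", len(seeds))
```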
Morphological and architectural characteristics of primary prostate tissue compartments, such as epithelial nuclei (EN)
and cytoplasm, provide critical information for cancer diagnosis, prognosis and therapeutic response prediction. The
subjective and variable Gleason grade assessed by expert pathologists in Hematoxylin and Eosin (H&E) stained
specimens has been the standard for prostate cancer diagnosis and prognosis. We propose a novel morphometric,
glandular object-oriented image analysis approach for the robust quantification of H&E prostate biopsy images.
We demonstrate the utility of features extracted through the proposed method in predicting disease progression post
treatment in a multi-institution cohort of 1027 patients. The biopsy-based features were univariately predictive of
clinical response post therapy, with concordance indices (CI) ≤ 0.4 or ≥ 0.6. In multivariate analysis, a glandular object
feature quantifying tumor epithelial cells not directly associated with an intact tumor gland was selected in a model
incorporating preoperative clinical data, protein biomarker and morphological imaging features. The model achieved a
CI of 0.73 in validation, which was significantly higher than a CI of 0.69 for the standard multivariate model based
solely on clinical features currently used in clinical practice.
This work presents one of the first demonstrations of glandular object-based morphological features in H&E-stained
biopsy specimens to predict disease progression post primary treatment. Additionally, it is the largest scale study of the
efficacy and robustness of the proposed features in prostate cancer prognosis.
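To make the kind of glandular object feature mentioned above concrete, the sketch below computes the fraction of epithelial nuclei whose centroids fall outside intact gland objects. The label image, gland mask, and feature definition are hypothetical stand-ins for the actual pipeline's outputs.

```python
# Hedged sketch of a glandular object feature: the fraction of epithelial nuclei
# whose centroids do not fall within an intact gland object. The label image and
# gland mask are hypothetical segmentation outputs, not the actual pipeline's.
import numpy as np
from skimage.measure import regionprops

def fraction_outside_glands(nuclei_labels: np.ndarray, gland_mask: np.ndarray) -> float:
    total, outside = 0, 0
    for region in regionprops(nuclei_labels):
        r, c = map(int, region.centroid)
        total += 1
        if not gland_mask[r, c]:
            outside += 1
    return outside / max(total, 1)

nuclei_labels = np.zeros((200, 200), int)
nuclei_labels[20:30, 20:30] = 1                  # hypothetical epithelial nucleus
nuclei_labels[100:110, 100:110] = 2              # another nucleus
gland_mask = np.zeros((200, 200), bool)
gland_mask[90:150, 90:150] = True                # hypothetical intact gland region
print("fraction outside glands:", fraction_outside_glands(nuclei_labels, gland_mask))
```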
The Prostate Px prognostic assay offered by Aureon Biosciences is designed to predict progression post primary
treatment for prostate cancer patients based on their diagnostic biopsy specimen. The assay is driven by the automated
image analysis of a diagnostic prostate needle biopsy (PNB) and incorporates pathologist acquired and digitally masked
images which reflect the morphometric (Hematoxylin and Eosin, H&E) and protein expression (immunofluorescence,
IF) properties of the PNB. Up to 9 images (3 H&E and 6 IF) from each of 1027 patients, with varying amounts of
tumor content, were included in the study. We sought to determine the minimum tumor volume required to
maintain the assay's predictive robustness as a function of overall PNB tumor content, and to assess the impact of pathologist tumor
masking variability.
232 patients were selected who had a minimum of 80% tumor volume in a 20x magnification image. In each of the three
imaging domains (2 different multiplex (Mplex) IF images and one H&E), the tumor volume was artificially reduced in
increments from 80% to 2.5% of the original image area. This simulated decreasing amounts of tumor as well as
variations in digital tumor masking.
The univariate predictive power of individual imaging domains remained robust down to the 10% tumor level, whereas
the total assay was robust through the 20% to 10% tumor level. This work presents one of the first assessments of the
impact of varying tumor amounts on the predictive power of a commercially available prognostic assay that relies on multiple
bioimaging domains.
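One simple way the incremental tumor-area reduction could be simulated is shown below: a binary tumor mask is eroded until it covers a target fraction of the image. This mechanism is an assumption for illustration; the study specifies only the target area fractions, not the reduction procedure.

```python
# Assumed mechanism for simulating decreasing tumor content: erode a binary
# tumor mask until it covers a target fraction of the image area. Only the
# target fractions come from the abstract; the procedure itself is illustrative.
import numpy as np
from scipy import ndimage as ndi

def reduce_tumor_mask(mask: np.ndarray, target_fraction: float) -> np.ndarray:
    reduced = mask.copy()
    while reduced.mean() > target_fraction and reduced.any():
        reduced = ndi.binary_erosion(reduced)
    return reduced

mask = np.zeros((400, 400), bool)
mask[20:380, 20:380] = True                      # hypothetical ~80%-tumor mask
for frac in (0.80, 0.40, 0.20, 0.10, 0.05, 0.025):
    m = reduce_tumor_mask(mask, frac)
    print(f"target {frac:.1%} -> actual {m.mean():.1%}")
```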
The Prostate Px prognostic assay offered by Aureon Biosciences is designed to predict progression post primary
treatment for prostate cancer patients based on their diagnostic biopsy specimen. The assay is driven by the automated
image analysis of biological specimens. Three different histological sections are analyzed for morphometric as well as
immunofluorescence protein expression properties within areas of tumor digitally masked by expert pathologists.
The assay was developed on a multi-institution cohort of up to 9 images from each of 1027 patients. The variation in
histological sections, staining, pathologist tumor masking and the region of image acquisition all have the potential to
significantly impact imaging features and consequently the reproducibility of the assay's results for the same patient.
This study analyzed the reproducibility of the assay in 50 patients who were re-processed within 3 months in a blinded
fashion as de-novo patients.
The key assay results reported were in agreement in 94% of the cases. The two independent endpoints of risk
classification reproduced results in 90% and 92% of the predictions. This work presents one of the first assessments of
the reproducibility of a commercial assay's results given the inherent variations in images and quantitative imaging
characteristics in a commercial setting.
Morphological and architectural characteristics of primary tissue compartments, such as epithelial nuclei (EN) and
cytoplasm, provide important cues for cancer diagnosis, prognosis, and therapeutic response prediction. We propose two
feature sets for the robust quantification of these characteristics in multiplex immunofluorescence (IF) microscopy
images of prostate biopsy specimens. To enable feature extraction, EN and cytoplasm regions were first segmented from
the IF images. Then, feature sets consisting of the characteristics of the minimum spanning tree (MST) connecting the
EN and the fractal dimension (FD) of gland boundaries were obtained from the segmented compartments. We
demonstrated the utility of the proposed features in prostate cancer recurrence prediction on a multi-institution cohort of
1027 patients. Univariate analysis revealed that both FD and one of the MST features were highly effective for
predicting cancer recurrence (p ≤ 0.0001). In multivariate analysis, an MST feature was selected for a model
incorporating clinical and image features. The model achieved a concordance index (CI) of 0.73 on the validation set,
which was significantly higher than the CI of 0.69 for the standard multivariate model based solely on clinical features
currently used in clinical practice (p < 0.0001). The contributions of this work are twofold. First, it is the first
demonstration of the utility of the proposed features in morphometric analysis of IF images. Second, this is the largest
scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.
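For illustration, the sketch below computes examples of the two proposed feature types: summary statistics of the MST over hypothetical epithelial-nuclei centroids, and a box-counting estimate of the fractal dimension of a synthetic gland boundary. The specific statistics and the box-counting implementation are assumptions, not the exact features used in the study.

```python
# Hedged sketch of the two feature types: MST edge statistics over nuclei
# centroids and a box-counting fractal dimension of a gland boundary. Inputs
# are hypothetical; the exact feature definitions in the study may differ.
import numpy as np
from scipy import ndimage as ndi
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edge_stats(centroids: np.ndarray) -> dict:
    """Mean and std of MST edge lengths over nuclei centroids."""
    dist = squareform(pdist(centroids))
    mst = minimum_spanning_tree(dist).toarray()
    edges = mst[mst > 0]
    return {"mean_edge": float(edges.mean()), "std_edge": float(edges.std())}

def box_counting_fd(boundary: np.ndarray) -> float:
    """Box-counting fractal dimension estimate for a binary boundary image."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h, w = boundary.shape
        grid = boundary[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(grid.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

centroids = np.random.rand(50, 2) * 512          # hypothetical nuclei centroids
print(mst_edge_stats(centroids))

yy, xx = np.mgrid[0:256, 0:256]
gland = np.hypot(yy - 128, xx - 128) < 60        # synthetic gland region
boundary = gland ^ ndi.binary_erosion(gland)     # one-pixel gland boundary
print("fractal dimension:", round(box_counting_fd(boundary), 2))
```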