KEYWORDS: Visualization, Statistical modeling, Cancer, Visual process modeling, Solid state lighting, Data modeling, Tissues, Solids, Prostate cancer, Imaging systems
Gleason Score (GS) is the principal histological grading system used to quantify cancer aggressiveness. This analysis is carried out by expert pathologists, but only moderate inter-pathologist agreement has been reported (kappa values below 0.5). Such disagreement is prone to errors that directly affect diagnosis and subsequent treatment. Deep learning approaches have been proposed to support this visual pattern quantification, but they exhibit a marked dependency on expert annotations, which can overfit the learned representations. Besides, supervised representations are limited in modeling the high reported intra-grade visual variability. This work introduces a Semi-Supervised Learning (SSL) approach that initially uses a reduced set of annotated visual patterns to build several GS deep representations. The set of deep models then automatically propagates annotations to unlabeled patches, and the most confidently predicted samples are used to retrain the ensemble deep representation. Over a patch-based framework with a total of 26,259 samples, coded from 886 tissue microarrays, the proposed approach achieved remarkable results between grades three and four. Interestingly, with only 10% of the samples, the proposed SSL achieves a more general representation, with average scores of ~75.93% and ~71.88% with respect to two expert pathologists.
Semisupervised learning (SSL) techniques explore the progressive discovery of the hidden latent data structure by propagating supervised information to unlabeled data, which is thereafter used to reinforce learning. These schemes are beneficial in remote sensing, where thousands of new images are added every day and manual labeling is prohibitively expensive. Our work introduces an ensemble-based semisupervised deep learning approach that initially takes a subset of labeled data Dl, which represents the latent structure of the data, and progressively propagates labels automatically over an expanding set of unlabeled data Du. The ensemble is a set of classifiers whose predictions are collated to derive a consolidated prediction. Only samples with a high-confidence prediction are kept as newly generated labels. The proposed approach was exhaustively validated on four public datasets, achieving appreciable results compared to state-of-the-art methods in most of the evaluated configurations. For all datasets, the proposed approach achieved a classification F1-score and recall of up to 90%, on average. The SSL and recursive scheme also demonstrated an average gain of ~2% at the last training stage on such large datasets.
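The pseudo-labeling step described above (collate ensemble predictions, keep only high-confidence samples as new labels) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the confidence threshold are assumptions for the example.

```python
import numpy as np

def propagate_labels(member_probs, threshold=0.9):
    """Consolidate ensemble predictions and keep high-confidence pseudo-labels.

    member_probs: list of (n_samples, n_classes) probability arrays,
                  one per ensemble member, over the unlabeled set Du.
    Returns (indices, labels) of the samples whose averaged prediction
    confidence meets the threshold (threshold value is illustrative).
    """
    avg = np.mean(member_probs, axis=0)      # consolidated prediction
    conf = avg.max(axis=1)                   # confidence per sample
    keep = np.where(conf >= threshold)[0]    # high-confidence subset
    return keep, avg[keep].argmax(axis=1)

# toy example: two ensemble members, three unlabeled samples
p1 = np.array([[0.95, 0.05], [0.60, 0.40], [0.10, 0.90]])
p2 = np.array([[0.90, 0.10], [0.55, 0.45], [0.20, 0.80]])
idx, labels = propagate_labels([p1, p2], threshold=0.85)
# samples 0 and 2 pass the threshold; sample 1 stays unlabeled
```

In a recursive scheme, the returned `(idx, labels)` pairs would be appended to the labeled pool and the ensemble retrained before the next propagation round.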
Histopathological tissue analysis is the most effective and definitive method to establish a cancer prognosis and stratify the aggressiveness of the disease. The Gleason Score (GS) is the most powerful grading system based on architectural tumor pattern quantification. This score characterizes cancer tumor tissue, such as the level of cell differentiation, on histopathological images. The reported GS is the sum of the two principal grades present in a particular image and ranges from 6 (cancer grows slowly) to 10 (cancer cells spread more rapidly). A main drawback of GS is its dependency on the pathologist's stratification of histopathological regions, which strongly impacts the clinical procedure to treat the disease. The agreement among experts has been quantified with a kappa index of ~0.71; even worse, higher uncertainty is reported for intermediate grade stratification. This work presents an Inception-like deep architecture that is able to differentiate between intermediate and close GS grades. Each image herein evaluated was split up into regional patches that correspond to a single GS grade. The set of training patches was augmented according to the appearance variations of each grade. Then, a transfer learning scheme was implemented to adapt the prediction of bi-Gleason tumor patterns among close levels. The proposed approach was evaluated on a public set of 886 H&E-stained tissue images with different GS grades, achieving an average accuracy of 0.73 between grades three and four.
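The patch-based splitting mentioned above (each image divided into regional patches, each assigned a single GS grade) can be sketched as below. This is an illustrative sketch only; the patch size and function name are assumptions, and a real pipeline would also attach the grade annotation and augmentation to each patch.

```python
import numpy as np

def extract_patches(image, patch_size):
    """Split an image array into non-overlapping square patches.

    image: 2-D (or H x W x C) numpy array of the stained tissue image.
    Returns a list of patch arrays; in the described setup, each patch
    would correspond to a single GS grade region.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

# toy 4x4 "image" split into 2x2 patches -> 4 patches
img = np.arange(16).reshape(4, 4)
patches = extract_patches(img, 2)
```

The resulting per-grade patches are what feed the augmentation and transfer learning stages described in the abstract.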