This PDF file contains the front matter associated with SPIE Proceedings Volume 10956, including the Title Page, Copyright information, Table of Contents, and Author and Conference Committee lists.
Increased interest in medical imaging has resulted in the development of a variety of image analysis systems. Many of these systems follow the ‘computer-aided diagnosis’ paradigm, in which the main function of the image analysis system is to help medical professionals (e.g., radiologists, pathologists, dermatologists) in their decision-making, instead of making decisions on their behalf. If a system is designed to help medical professionals, its logic, development methodology and evaluation should be transparent to its users.
In this talk, we will describe how to develop an image analysis system: how to translate medical knowledge into algorithms, how to supplement this knowledge with pattern recognition methods, and how to evaluate such systems through carefully designed reader studies involving medical professionals with varying levels of experience.
Accurate cell counting in microscopic images is important for medical diagnoses and biological studies. However, manual cell counting is very time-consuming, tedious, and prone to subjective errors. We propose a new density regression-based method for automatic cell counting that reduces the need to manually annotate experimental images. A supervised learning-based density regression model (DRM) is trained with annotated synthetic images (the source domain) and their corresponding ground truth density maps. A domain adaptation model (DAM) is built to map experimental images (the target domain) to the feature space of the source domain. By use of the unsupervised learning-based DAM and supervised learning-based DRM, a cell density map of a given target image can be estimated, from which the number of cells can be counted. Results from experimental immunofluorescent microscopic images of human embryonic stem cells demonstrate the promising performance of the proposed counting method.
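As a hedged sketch of the counting principle only (the paper's DRM and DAM architectures are not reproduced here, and all layer sizes below are assumptions), a density regressor maps an image to a non-negative per-pixel density map whose integral approximates the cell count:

```python
# Minimal sketch (assumed layer sizes, not the paper's DRM): a small fully
# convolutional regressor predicts a non-negative per-pixel density map;
# summing the map approximates the number of cells in the image.
import torch
import torch.nn as nn

class TinyDensityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.ReLU(),  # ReLU keeps the density non-negative
        )

    def forward(self, x):
        return self.net(x)

model = TinyDensityRegressor()
image = torch.rand(1, 1, 256, 256)   # stand-in for one microscopy image
density = model(image)               # predicted density map, same spatial size
count = density.sum().item()         # integrating the density gives the count
```

Training such a regressor against ground-truth density maps is the supervised part; the paper's domain adaptation step additionally maps experimental images into the synthetic-image feature space before regression.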
Imaging serial sections in electron microscopy (EM) is an important volume EM approach for neuronal circuit reconstruction, offering a larger imaging volume and non-destructive handling of tissue sections. However, the continuity between sections is destroyed when the tissue block is physically cut into sections, and individual sections are stretched, folded and distorted during section preparation and imaging. As a result, recovering the continuity of neurites through image registration is a challenging task. Traditional methods use SIFT or block matching to extract landmarks between adjacent sections, which is unreliable when the neurite direction is not perpendicular to the section plane. To get around the difficulty of reliable landmark extraction, we propose a skeleton-based image registration method for serial EM sections of nerve tissue. Virtual skeletons are traced across the sections after an initial approximate rigid alignment. We then assume that the skeleton shape is adequately smooth in the z direction. Together with the constraints that the displacements of skeleton points in the same section are smooth and small, an energy function is proposed to calculate the new positions of the skeleton points for all of the sections. Finally, the sections are warped according to the adjusted positions of the skeleton points. The proposed method is highly automatic and can recover the 3D continuity of neurites. We demonstrate that our method outperforms the state-of-the-art methods on serial EM sections, including a synthetic test case.
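The abstract does not state the energy function explicitly; a minimal sketch consistent with the constraints it lists (z-smoothness of each skeleton, plus smooth and small in-section displacements) could read:

```latex
E(\{p_{i,z}\}) = \sum_{i,z} \big\lVert p_{i,z-1} - 2\,p_{i,z} + p_{i,z+1} \big\rVert^2
  + \lambda \sum_{z}\sum_{(i,j)} \big\lVert d_{i,z} - d_{j,z} \big\rVert^2
  + \mu \sum_{i,z} \big\lVert d_{i,z} \big\rVert^2
```

Here p_{i,z} is the adjusted position of skeleton point i in section z, d_{i,z} = p_{i,z} - p⁰_{i,z} its displacement from the initial rigid alignment, and (i,j) ranges over nearby points within a section; the three terms respectively penalize z-direction curvature, in-section displacement roughness, and displacement magnitude. The weights λ and μ are hypothetical trade-off parameters.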
In digital pathology, deep learning approaches have been increasingly applied and shown to be effective in analyzing digitized tissue specimen images. Such approaches have, in general, chosen an arbitrary scale or resolution at which the images are analyzed for several reasons, including computational cost and complexity. However, the tissue characteristics indicative of cancer tend to present at differing scales. Herein, we propose a framework that enables deep convolutional neural networks to perform multiscale histological analysis of tissue specimen images in an efficient and effective manner. A deep residual neural network is shared across multiple scales, extracting high-level features. The high-level features from multiple scales are aggregated and transformed in such a way that the scale information is embedded in the network. The transformed features are utilized to classify tissue images into cancer and benign. The proposed method is compared to other methodologies for combining features from different scales; these competing methods combine the multi-scale features via 1) concatenation, 2) addition, and 3) convolution. Tissue microarrays (TMAs) were employed to evaluate the proposed method and the competing methods. Three TMAs, including 225 benign and 377 cancer tissue samples, were used as the training dataset. Two TMAs with 151 benign and 252 cancer tissue samples were utilized as the testing dataset. The proposed method obtained an accuracy of 0.953 and an area under the receiver operating characteristic curve (AUC) of 0.971 (95% CI: 0.955-0.987), outperforming the other competing methods. This suggests that the proposed multiscale approach, via a shared neural network and scale embedding scheme, could aid in improving digital pathology analysis and cancer pathology.
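A minimal PyTorch sketch of the shared-backbone idea (the per-scale projection standing in for the paper's scale-embedding transform is an assumption):

```python
# Sketch of the shared-backbone multiscale idea (the per-scale projection
# standing in for the paper's scale-embedding transform is an assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=None)  # stand-in for the deep residual net
backbone.fc = nn.Identity()               # expose 512-d features

scales = [1.0, 0.5, 0.25]
patch = torch.rand(1, 3, 224, 224)        # one tissue patch

feats = []
for s in scales:
    low = F.interpolate(patch, scale_factor=s) if s < 1.0 else patch
    feats.append(backbone(F.interpolate(low, size=(224, 224))))  # shared weights

# embed scale information via per-scale projections before aggregation
proj = nn.ModuleList([nn.Linear(512, 128) for _ in scales])
agg = torch.stack([p(f) for p, f in zip(proj, feats)]).sum(dim=0)
logits = nn.Linear(128, 2)(agg)           # cancer vs. benign
```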
A number of papers have established that a high density of tumor-infiltrating lymphocytes (TILs) is highly correlated with a better prognosis for many different cancer types. More recently, some studies have shown that the spatial interplay between different subtypes of TILs (e.g. CD3, CD4, CD8) is more prognostic of disease outcome than metrics related to TIL density alone. A challenge with TIL subtyping is that it relies on quantitative immunofluorescence or immunohistochemistry, which are complex and tissue-destructive technologies. In this paper, we present a new approach called PhenoTIL to identify TIL sub-populations, quantify the interplay between these sub-populations, and show the association of these interplay features with recurrence in early stage lung cancer. The approach comprises a Dirichlet Process Gaussian Mixture Model that clusters lymphocytes on H&E images. The approach was evaluated on a cohort of N=178 early stage non-small cell lung cancer patients, N=100 being used for model training and N=78 for independent validation. A Linear Discriminant Analysis classifier was trained in conjunction with 186 PhenoTIL features to predict the likelihood of recurrence in the test set. The PhenoTIL features yielded an AUC=0.84 compared to an approach involving TIL density alone (AUC=0.58). In addition, a Kaplan-Meier analysis showed that the PhenoTIL features were able to statistically significantly distinguish early from late recurrence (p = 4 × 10^-5).
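The clustering step can be approximated with scikit-learn's truncated variational Dirichlet process mixture; the features below are illustrative stand-ins, not the exact PhenoTIL feature set:

```python
# Hedged sketch of the clustering step: scikit-learn's BayesianGaussianMixture
# with a Dirichlet-process prior approximates a DP Gaussian Mixture Model.
# The 4-d features are illustrative, not the exact PhenoTIL feature set.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
lymph_features = rng.normal(size=(500, 4))    # e.g. x, y, nuclear size, intensity

dpgmm = BayesianGaussianMixture(
    n_components=10,                          # truncation level, not the final K
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
)
labels = dpgmm.fit_predict(lymph_features)    # inferred TIL sub-populations
print(np.bincount(labels))                    # sizes of the discovered clusters
```

The Dirichlet-process prior lets the model effectively switch off unneeded components, so the number of sub-populations is inferred rather than fixed in advance.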
In diabetic nephropathy (DN), hyperglycemia drives a progressive thickening of glomerular filtration surfaces, increased cell proliferation, mesangial expansion, and a constriction of capillary lumens. This leads to progressive structural changes inside the glomeruli. In this work, we study structural glomerular changes in DN from a graph-theoretic standpoint, using features extracted from Minimal Spanning Trees (MSTs) constructed over intercellular distances in order to classify the “packing signatures” of different DN stages. We further investigate the significance of the competing effects of volume change (measured here as two-dimensional pixel span area) on one hand and increased cell proliferation on the other in determining the packing patterns. Towards that end, we formulate the problem as a Dynamic Bayesian Network (DBN). From our preliminary results, we postulate that volume expansion caused by internal pressure from capillary lumen constriction has perhaps a greater effect in the early stages.
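A minimal sketch of MST “packing signature” features over intercellular distances, assuming cell centroids have already been extracted (the specific summary statistics are illustrative):

```python
# Sketch of MST "packing signature" features over intercellular distances,
# assuming cell centroids were already extracted; the summary statistics
# chosen here are illustrative.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 100, size=(60, 2))   # cell centroids in a glomerulus

dist = squareform(pdist(centroids))             # pairwise intercellular distances
mst = minimum_spanning_tree(dist)               # sparse matrix of MST edges
edges = mst.data                                # MST edge lengths

features = {
    "mean_edge": edges.mean(),                  # denser packing -> shorter edges
    "std_edge": edges.std(),
    "max_edge": edges.max(),
    "total_length": edges.sum(),
}
print(features)
```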
A proper cancer diagnosis is imperative for determining the medical treatment for a patient. It necessitates good staging and classification of the tumor, alongside additional factors, to predict response to treatment. Mitotic count-based tumor proliferation grade provides the most reproducible and independent prognostic value. In practice, pathologists examine H&E-stained, giga-pixel-sized digital whole-slide images of a tissue specimen to determine the mitotic index. Considering the enormity of the images, analysis is focused on specific, so-called high-power fields (HPFs) on the periphery of the invasive parts of the tumor. Selection of the HPFs is very subjective. Additionally, tumor heterogeneity impacts both the region selection and the quality of the area analyzed. Several efforts have been made to automate tumor proliferation score estimation by counting the mitotic figures in certain regions-of-interest, but the region selection algorithms are opaque and do not guarantee coverage of the regions crucial for pathological analysis, thereby making the grading sub-optimal. In this work, we aim to address this problem by proposing to visualize a distance-weighted mitotic distribution over the entire invasive tumor region. Our approach provides a holistic view of the mitotic activity and localizes actively proliferating regions in the tumor with tissue architecture context, enabling pathologists to select the HPFs more objectively. We propose a deep learning-based framework to generate the mitotic activity heat-maps. Additionally, within the framework, we develop a number of significant tools for digital pathology: a semi-supervised tumor region delineation tool, a fast nuclei segmentation and detection tool, and a mitotic figure localization tool.
Prostate cancer is the most common cancer for men in Western countries, counting 1.1 million new diagnoses every year. The incidence is expected to increase further due to the growing elderly population, leading to a significantly increased workload for pathologists. The burden of this time-consuming and repetitive workload could be decreased by computational pathology, e.g., by automatically screening prostate biopsies. The current state-of-the-art in many computational pathology tasks uses patch-based convolutional neural networks. Developing such algorithms requires detailed annotations of the task-specific classes on whole-slide images, which are challenging to create due to the limited availability of pathologists. Therefore, it would be beneficial to be able to train using the labels a pathologist already provides in regular clinical practice in the form of a report. However, these reports correspond to whole-slide images of such high resolution that current accelerator cards cannot process them at once due to memory constraints. We developed a method, streaming stochastic gradient descent, to train a convolutional neural network end-to-end with entire high-resolution images and slide-level labels extracted from pathology reports. Here we trained a neural network on 2812 whole prostate biopsies, at an input size of 8000x8000 pixels, equivalent to 50x total magnification, for a binary classification, cancerous or benign. We achieved an accuracy of 84%. These results show that we may not need expensive annotations to train classification networks in this domain.
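Streaming SGD itself is the paper's contribution and is not reproduced here; as a runnable illustration of the same memory trade-off, gradient checkpointing in PyTorch recomputes activations during the backward pass so that larger inputs fit on the accelerator:

```python
# NOT the paper's streaming SGD, which is its own contribution; gradient
# checkpointing merely illustrates the same memory/compute trade-off:
# activations are recomputed in the backward pass instead of stored,
# so a much larger input fits on the accelerator card.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

layers = []
in_ch = 3
for _ in range(8):
    layers += [nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU()]
    in_ch = 16
net = nn.Sequential(*layers)

x = torch.rand(1, 3, 2048, 2048, requires_grad=True)  # one large image tile
out = checkpoint_sequential(net, 4, x)  # recompute activations in 4 segments
out.mean().backward()                   # gradients flow as usual
```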
Antibody development is crucial for immunohistochemistry (IHC) applications. To improve the efficiency of primary antibody screening processes, we developed a computer-aided detection scheme to automatically identify the non-negative tissue slides which indicate reactive antibodies. A dataset of 564 digital IHC whole slide images was used for algorithm training and testing, each of which was labeled by a pathologist as a negative (i.e., no staining) or non-negative (i.e., pure background or partial staining) slide. To avoid unnecessary computations, color deconvolution was first applied to low-resolution whole slide images and histogram-based image features were extracted from each unmixed single-stain image. Then, different classifiers were built using the low-resolution image features computed from the training dataset through ten-fold cross-validation. The trained model was tested on the testing dataset. Results indicated that the linear support vector machine (LSVM) method yielded the highest area under the ROC curve. To further improve the accuracy, our scheme utilized the LSVM classifier score to identify the slides for which additional analysis was needed. The additional analysis was performed by dividing the original whole slide image into non-overlapping tiles and extracting high-resolution image features from each tile. The tile-based features were then used to form a bag-of-words (BoW) representation of the corresponding whole slide image, based on which a second classifier was built to perform the predictions. The results showed that the proposed scheme can effectively perform negative versus non-negative classification with high accuracy and thus reduce pathologists’ manual reviewing time for antibody screening.
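A sketch of the low-resolution screening step, assuming skimage's standard hematoxylin/eosin/DAB stain basis for the color deconvolution (the histogram feature set is illustrative):

```python
# Sketch of the low-resolution screening step, assuming skimage's standard
# hematoxylin/eosin/DAB basis for color deconvolution; the feature set is
# illustrative.
import numpy as np
from skimage.color import rgb2hed

thumbnail = np.random.rand(256, 256, 3)   # stand-in low-resolution WSI image
hed = rgb2hed(thumbnail)                  # unmixed H, E, and DAB channels
dab = hed[..., 2]                         # DAB channel marks antibody staining

hist, _ = np.histogram(dab, bins=32, density=True)
features = np.concatenate([hist, [dab.mean(), dab.std()]])
# `features` would then feed the linear SVM screening classifier.
```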
Automatic cancer grading and high-grade cancer detection for radical prostatectomy (RP) specimens can benefit pathological assessment for prognosis and post-surgery treatment decision making. We developed and validated an automatic system which grades cancerous tissue as high-grade (Gleason grade 4 and higher) vs. low-grade (Gleason grade 3) on digital histopathology whole-slide images (WSIs). We combined this grading system with our previously reported cancer detection system to build a high-grade cancer detection system which automatically finds high-grade cancerous foci on WSIs. The system was tuned on a 3-patient data set and cross-validated against expert-drawn contours on a separate 68-patient data set comprising 286 mid-gland whole-slide images of RP specimens. The system uses machine learning techniques to classify each region of interest (ROI) on the slide as cancer or non-cancer and each cancerous ROI as high-grade or low-grade cancer. We used leave-one-patient-out cross-validation to measure the performance of cancer grading for classified ROIs with three different classifiers and the performance of the high-grade cancer detection system on a per-tumor-focus basis. The best performing (Fisher) classifier yielded an area under the receiver-operating characteristic curve of 0.87 for cancer grading. The system yielded error rates of 19.5% and 23.4% for pure high-grade (Gleason 4+4, 5+5) and high-grade (Gleason Score ≥ 7) cancer detection, respectively. The system demonstrated potential for practical computation speeds. Upon successful multi-centre validation, this system has the potential to help the pathologist find high-grade cancer more efficiently, which benefits the selection and guidance of adjuvant therapy and prognosis post RP.
There are several different approaches used to treat prostate cancer, depending on the age and general health of the patient but also on how severe the cancer is. To determine the latter, Gleason grading is used. The grade is determined by a pathologist, based on structures in histology samples from prostate biopsies. To determine the diagnosis, both the most common and the highest occurring Gleason grade are used. Since tumours typically become more fragmented the more malignant they are, single cells of Gleason grade 5, the highest and most malignant grade, can occur intermingled with benign tissue. Therefore, it is of great importance to find even very small areas of the highest grade. This is what we aim to do automatically in this work. We have trained a convolutional neural network, with a ResNet design, to classify small areas of tissue at high magnification as either Gleason 5 or non-Gleason 5. The dataset, generated from whole slide images from Skåne University Hospital, consists of 19,680 images of size 128×128 pixels at 40X magnification. We try to make the algorithm more robust to stain variations, a common issue for this type of data, by using colour augmentation. The best accuracy we achieve for classification of Gleason 5 versus non-Gleason 5 images is 92%.
Tissue classification on histological images is a useful alternative to manual histology analysis and has been well-studied with a variety of machine learning approaches. However, classification of whole slide images at high resolution is a difficult and computationally-intensive task. In addition, many tissue analysis tasks target rare or small regions of tissue. In colon cancer, small groups of tumor cells (tumor buds) exist on the front edge of the invasive tumor region and are an important indicator of cancer aggressiveness. These small objects are difficult or impossible to detect when examining an image at lower resolution, while running the classifier at an appropriately high resolution can be time consuming. In this work, a two-tier convolutional neural network classification approach is explored to identify small but important tissue regions on whole-slide tissue scans. The first tier is a coarse-level classifier trained with patches extracted from the image at a low-power field (4x optical magnification), designed to identify two main tissue types: tumor and non-tumor areas. Regions that are likely to contain tumor buds (non-tumor regions) are passed to a fine-level classifier that classifies the patches into 9 additional tissue types at a high-power field (40x). The system achieves a 43% reduction in processing time (from 3 hours to 1.7 hours for a 19,200×19,200-pixel image). The two-tier classifier provides efficient whole-slide tissue classification by narrowing down the regions of interest, increasing the chances of tumor buds being identified.
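A runnable toy sketch of the cascade logic (the dummy classifiers, grid size, and threshold are assumptions standing in for the trained 4x and 40x CNNs):

```python
# Runnable toy version of the cascade; the dummy predictors, grid size,
# and threshold stand in for the trained 4x and 40x CNNs.
import numpy as np

def coarse_predict(patch_4x):              # stand-in for the 4x tumor/non-tumor CNN
    return "tumor" if patch_4x.mean() > 0.6 else "non-tumor"

def fine_predict(patch_40x):               # stand-in for the 40x 9-class CNN
    return int(patch_40x.mean() * 9)       # one of 9 tissue types

rng = np.random.default_rng(0)
slide_4x = rng.random((10, 10, 64, 64))    # 10x10 grid of low-power patches

results = {}
for i in range(10):
    for j in range(10):
        label = coarse_predict(slide_4x[i, j])
        if label == "non-tumor":           # only candidates reach the slow tier,
            patch_40x = rng.random((640, 640))  # where tumor buds are sought
            results[(i, j)] = fine_predict(patch_40x)
        else:
            results[(i, j)] = label
```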
In this study, we present an automated approach to classify prostate cancer (PCa) whole slide images (WSIs) as high or low cancer aggressiveness using features derived from persistent homology, a tool of topological data analysis (TDA). This extends previous work on the use of these features for representing the characteristics of prostate cancer architecture in region of interest (ROI) images, and demonstrates the value of features derived from persistent homology to predict cancer aggressiveness of WSIs on an ROI basis. We compute persistence on ROI images and summarize persistence as a persistence image. Using this summary we construct a random forest classifier to predict cancer aggressiveness. We demonstrate the potential of persistent homology to capture the architectural differences between low and high grade prostate cancers in a feature representation that lends itself well to machine learning approaches.
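A hedged sketch of such a pipeline using the ripser package for persistence and a crude histogram-style persistence image (the paper's exact persistence-image parameters are not reproduced):

```python
# Hedged sketch: ripser for persistence diagrams, a crude histogram-style
# persistence image, and a random forest; the paper's exact persistence-image
# parameters are not reproduced here.
import numpy as np
from ripser import ripser                      # pip install ripser
from sklearn.ensemble import RandomForestClassifier

def persistence_image(points, bins=10):
    """2D histogram over (birth, persistence) of the H1 diagram."""
    dgm = ripser(points)["dgms"][1]            # H1 captures loops/gland lumens
    dgm = dgm[np.isfinite(dgm[:, 1])]
    birth, persist = dgm[:, 0], dgm[:, 1] - dgm[:, 0]
    img, _, _ = np.histogram2d(birth, persist, bins=bins, range=[[0, 2], [0, 2]])
    return img.ravel()

rng = np.random.default_rng(0)
X = [persistence_image(rng.random((100, 2))) for _ in range(20)]  # ROI point clouds
y = rng.integers(0, 2, size=20)                # low vs. high aggressiveness
clf = RandomForestClassifier(random_state=0).fit(X, y)
```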
We propose a framework for learning feature representations for variable-sized regions of interest (ROIs) in breast histopathology images from the convolutional network properties at patch-level. The proposed method involves fine-tuning a pre-trained convolutional neural network (CNN) by using small fixed-sized patches sampled from the ROIs. The CNN is then used to extract a convolutional feature vector for each patch. The softmax probabilities of a patch, also obtained from the CNN, are used as weights that are separately applied to the feature vector of the patch. The final feature representation of a patch is the concatenation of the class-probability weighted convolutional feature vectors. Finally, the feature representation of the ROI is computed by average pooling of the feature representations of its associated patches. The feature representation of the ROI contains local information from the feature representations of its patches while encoding cues from the class distribution of the patch classification outputs. The experiments show the discriminative power of this representation in a 4-class ROI-level classification task on breast histopathology slides where our method achieved an accuracy of 66.8% on a data set containing 437 ROIs with different sizes.
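The aggregation itself is straightforward to express; a NumPy sketch with illustrative dimensions:

```python
# NumPy sketch of the described aggregation with illustrative dimensions:
# patch features are weighted by the patch's softmax class probabilities,
# concatenated per class, then average-pooled over the ROI's patches.
import numpy as np

rng = np.random.default_rng(0)
n_patches, n_classes, feat_dim = 12, 4, 64
feats = rng.random((n_patches, feat_dim))                   # CNN features per patch
probs = rng.dirichlet(np.ones(n_classes), size=n_patches)   # softmax outputs

# weight the feature vector separately by each class probability, then
# concatenate: each patch becomes an (n_classes * feat_dim)-d vector
patch_repr = (probs[:, :, None] * feats[:, None, :]).reshape(n_patches, -1)

roi_repr = patch_repr.mean(axis=0)          # average pooling over the ROI's patches
assert roi_repr.shape == (n_classes * feat_dim,)
```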
Imaging mass spectrometry (IMS) is a novel molecular imaging technique to investigate how molecules are distributed between tumors and within tumor regions, in order to shed light on tumor biology or find potential biomarkers. Convolutional neural networks (CNNs) have proven to be very potent classifiers, often outperforming other machine learning algorithms, especially in computational pathology. To overcome the challenge of the complexity and high dimensionality of IMS data, proposed CNNs are either very deep or use large kernels, which results in a large number of parameters and therefore high computational complexity. An alternative is down-sampling the data, which inherently leads to a loss of information. In this paper, we propose dilated CNNs as a possible solution to this challenge, since dilation increases the receptive field size without increasing the network parameters or decreasing the input signal resolution. Since the mass signatures of cancer biomarkers are distributed over the whole mass spectrum, both locally- and globally-distributed patterns need to be captured to correctly classify a spectrum. By experiment, we show that employing dilated convolutions in the architecture of a CNN leads to higher performance in tumor classification. Our proposed model outperforms the state-of-the-art for tumor classification on both clinical lung and bladder datasets by 1-3%.
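A short PyTorch sketch of why dilation helps on long 1D spectra: stacking dilated convolutions grows the receptive field with the sum of dilations while parameters grow only with depth (channel counts are illustrative):

```python
# Sketch of the receptive-field argument for 1D spectra: stacking dilated
# convolutions grows the receptive field with the sum of dilations while
# parameters grow only with depth. Channel counts are illustrative.
import torch
import torch.nn as nn

k = 3
dilations = [1, 2, 4, 8, 16]
layers, in_ch = [], 1
for d in dilations:
    layers += [nn.Conv1d(in_ch, 8, kernel_size=k, dilation=d, padding=d), nn.ReLU()]
    in_ch = 8
net = nn.Sequential(*layers)

# receptive field of the stack: 1 + (k - 1) * sum(dilations) input samples
print(1 + (k - 1) * sum(dilations))       # 63 samples from only 5 layers

spectrum = torch.rand(1, 1, 2000)         # one mass spectrum
out = net(spectrum)                       # length preserved by the padding
```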
Manual annotation of Hematoxylin and Eosin (H&E) stained tissue images for deep learning classification is difficult, time-consuming, and error-prone, particularly for multi-class and rare-class problems. Chemical probes in immunohistochemistry (IHC) or immunofluorescence (IF) can automatically tag cellular structures; however, chemical labeling is difficult to use in training a deep classifier for H&E images (e.g., through serial sectioning and registration). In this work, we leverage the novel Multiplexed Immunofluorescence (MxIF) microscopy method developed by the General Electric Global Research Center (GE GRC), which allows sequential, stain-image-bleach (SSB) application of protein markers on formalin-fixed, paraffin-embedded (FFPE) samples followed by traditional H&E staining, to build chemically-annotated tissue maps of nuclei, cytoplasm, and cell membranes. This allows us to automate the creation of ground-truth class-label maps for training an H&E-based tissue classifier. In this study, a tissue microarray consisting of 149 breast cancer and normal tissue cores was stained using MxIF for our three analytes, followed by traditional H&E staining. The MxIF stains for each TMA core were combined to create a “Virtual H&E” image, which is registered with the corresponding real H&E image. Each MxIF-stained spot was segmented to obtain a class-label map for each analyte, which was then applied to the real H&E image to build a dataset covering the three analytes. A convolutional neural network (CNN) was then trained to classify this dataset. This system achieved an overall accuracy of 70%, suggesting that the MxIF system can provide useful labels for identifying hard-to-distinguish structures. A U-net trained to generate pseudo-IF stains from H&E yielded similar results.
Primary management for head and neck squamous cell carcinoma (SCC) involves surgical resection with negative cancer margins. Pathologists guide surgeons during these operations by detecting SCC in histology slides made from the excised tissue. In this study, 192 digitized histological images from 84 head and neck SCC patients were used to train, validate, and test an inception-v4 convolutional neural network. The proposed method performs with an AUC of 0.91 and 0.92 for the validation and testing groups, respectively. The careful experimental design yields a robust method with the potential to help create a tool to increase the efficiency and accuracy of pathologists in detecting SCC in histological images.
Accurately counting cells in microscopic images is important for medical diagnoses and biological studies, but manual cell counting is very tedious, time-consuming, and prone to subjective errors, and automatic counting can be less accurate than desired. To improve the accuracy of automatic cell counting, we propose here a novel method that employs deeply-supervised density regression. A fully convolutional neural network (FCNN) serves as the primary FCNN for density map regression. Innovatively, a set of auxiliary FCNNs is employed to provide additional supervision for learning the intermediate layers of the primary FCNN to improve network performance. In addition, the primary FCNN is designed as a concatenating framework to integrate multi-scale features through shortcut connections in the network, which improves the granularity of the features extracted from the intermediate layers and further supports the final density map estimation. The experimental results on immunofluorescent images of human embryonic stem cells demonstrate the superior performance of the proposed method over other state-of-the-art methods.
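A minimal sketch of the deeply-supervised loss (the auxiliary head placement and the loss weight below are assumptions, not the paper's configuration):

```python
# Minimal sketch of the deeply-supervised loss; the auxiliary head placement
# and the 0.3 weight are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedFCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)   # primary density output
        self.aux = nn.Conv2d(16, 1, 1)    # auxiliary head on intermediate features

    def forward(self, x):
        f1 = self.block1(x)
        return self.head(self.block2(f1)), self.aux(f1)

model = DeeplySupervisedFCNN()
x, gt = torch.rand(2, 1, 128, 128), torch.rand(2, 1, 128, 128)
main, aux = model(x)
loss = F.mse_loss(main, gt) + 0.3 * F.mse_loss(aux, gt)  # extra gradient signal
loss.backward()
```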
Histologic assessment of stromal tumor infiltrating lymphocytes (sTIL) as a surrogate of the host immune response has been shown to be prognostic and potentially chemo-predictive in triple-negative and HER2-positive breast cancers. The current practice of manual assessment is prone to intra- and inter-observer variability. Furthermore, the interplay of sTILs, tumor cells, other microenvironment mediators, their spatial relationships, quantity, and other image-based features have yet to be determined exhaustively and systematically. Towards analysis of these aspects, we developed a deep learning based method for joint region-level and nucleus-level segmentation and classification of breast cancer H&E tissue whole slide images. Our proposed method simultaneously identifies tumor, fibroblast, and lymphocyte nuclei, along with key histologic region compartments including tumor and stroma. We also show how the resultant segmentation masks can be combined with seeding approaches to yield accurate nucleus classifications. Furthermore, we outline a simple workflow for calibrating computational scores to human scores for consistency. The pipeline identifies key compartments with high accuracy (Dice: overall 0.78, tumor 0.83, fibroblasts 0.77). ROC AUC for nucleus classification is high at 0.89 (micro-average), 0.89 (lymphocytes), 0.90 (tumor), and 0.78 (fibroblasts). Spearman correlation between computational sTIL and pathologist consensus is high (R=0.73, p<0.001) and is higher than inter-pathologist correlation (R=0.66, p<0.001). Both manual and computational sTIL scores successfully stratify patients by clinical progression outcomes.
Automated segmentation of tissue and cellular structure in H&E images is an important first step towards automated histopathology slide analysis. For example, nuclei segmentation can aid with detecting pleomorphism, and epithelium segmentation can aid in the identification of tumor infiltrating lymphocytes. Existing deep learning-based approaches are often trained organ-wise and lack diverse training data for multi-organ segmentation networks. In this work, we propose to augment existing nuclei segmentation datasets using cycleGANs. We learn an unpaired mapping from perturbed randomized polygon masks to pseudo-H&E images, and generate synthetic H&E patches from several different organs for nuclei segmentation. We then use an adversarial U-Net with spectral normalization for increased training stability for segmentation. This paired image-to-image translation style network not only learns the mapping from H&E patches to segmentation masks but also learns an optimal loss function. Such an approach eliminates the need for a hand-crafted loss, which has been explored significantly for nuclei segmentation. We demonstrate that the average accuracy for multi-organ nuclei segmentation increases to 94.43% using the proposed synthetic data generation and adversarial U-Net-based segmentation pipeline, compared to 79.81% when no synthetic data and adversarial loss were used.
Alzheimer’s disease (AD), one of the most common causes of dementia, is a complex neurodegenerative disease marked by amyloid-β (Aβ) plaques and hyperphosphorylated tau tangles. Genome-wide association studies have identified rare variants of genes that implicate novel biological underpinnings of AD, unearthing untapped insights into the modulation of innate immune pathways. Recent studies have implicated crucial functions of microglia (the brain’s resident immune cells) clustering around Aβ plaques, such as plaque compaction and containment, suggesting a beneficial impact on limiting the extent of neuronal damage. To test this hypothesis, extraction of neuronal damage characteristics in correlation with microglia coverage is required at the single-plaque level. We utilized immunohistochemistry and confocal microscopy to collect 3D image data sets from an AD mouse model. For the quantitative correlative assessment of the heterogeneity of microglia clustering and plaque-associated neuronal damage, we developed a multi-step image analysis pipeline consisting of (a) a U-Net based automated region of interest (ROI) detection algorithm (96% true positive rate), (b) a FIJI-based custom-built image profiling tool that creates biologically meaningful image features from ROIs (plaques), and (c) a Spotfire-based data visualization dashboard. Our proof-of-concept data set shows that plaque-associated microglia clustering correlates with lower neuronal damage in a disease stage- and plaque size-dependent manner. This novel platform has validated our working hypothesis on the protective functions of microglia during AD pathology. Future applications of the plaque profiling pipeline will enable unbiased quantitative assessment of potential neuroprotective effects of pharmacological or genetic interventions in preclinical AD models with amyloid pathology.
Hirschsprung’s disease is a motility disorder whose diagnosis requires the assessment of the Auerbach’s (myenteric) plexus located in the muscularis propria layer. In this paper, we describe a fully automated method for segmenting muscularis propria (MP) from histopathology images of intestinal specimens using a convolutional neural network (CNN). Such a network has the potential to learn intensity, textural, and shape features from manually segmented images to distinguish MP from non-MP tissue in histopathology images. We used a dataset consisting of 15 images and trained our model using approximately 3,400,000 image patches extracted from six images. The trained CNN was employed to determine the boundary of MP on 9 test images (comprising 75,000,000 image patches). The resultant segmentation maps were compared with manual segmentations to investigate the performance of our proposed method for MP delineation. Our technique yielded an average Dice similarity coefficient (DSC) and absolute surface difference (ASD) of 92.36 ± 2.91% and 1.78 ± 1.57 mm² respectively, demonstrating that the proposed CNN-based method is capable of accurately segmenting MP tissue from histopathology images.
Terminal duct lobular units (TDLUs) are structures in the breast which involute with the completion of childbearing and physiological ageing. Women with less TDLU involution are more likely to develop breast cancer than those with more involution, so TDLU involution may be utilized as a biomarker to predict invasive cancer risk. Manual assessment of TDLU involution is a cumbersome and subjective process, which makes it amenable to automated assessment by image analysis. In this study, we developed and evaluated an acini detection method as a first step towards automated assessment of TDLU involution, using a dataset of histopathological whole-slide images (WSIs) from the Nurses’ Health Study (NHS) and NHSII, which are among the world's largest investigations of epidemiological risk factors for major chronic diseases in women. We compared three different approaches to detect acini in WSIs using the U-Net convolutional neural network architecture. The approaches differ in the target that is predicted by the network: circular mask labels, soft labels and distance maps. Our results showed that soft label targets lead to better detection performance than the other methods: F1 scores of 0.65, 0.73 and 0.66 were obtained with circular mask labels, soft labels and distance maps, respectively. Our acini detection method was further validated by applying it to measure acini count per mm² of tissue area on an independent set of WSIs. This measure was found to be significantly negatively correlated with age.
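A sketch of how the three target types can be generated from point annotations of acini centers (the disk radius and Gaussian sigma are illustrative assumptions):

```python
# Sketch of the three prediction targets built from point annotations of
# acini centers; the disk radius and Gaussian sigma are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, distance_transform_edt

H, W = 128, 128
points = [(40, 40), (80, 100)]                 # annotated acini centers

# 1) circular mask labels: hard disks around each center
yy, xx = np.mgrid[:H, :W]
circular = np.zeros((H, W))
for r, c in points:
    circular[(yy - r) ** 2 + (xx - c) ** 2 <= 5 ** 2] = 1.0

# 2) soft labels: impulses blurred so targets decay smoothly from centers
impulses = np.zeros((H, W))
for r, c in points:
    impulses[r, c] = 1.0
soft = gaussian_filter(impulses, sigma=4)
soft /= soft.max()

# 3) distance maps: distance from every pixel to the nearest center
distance = distance_transform_edt(impulses == 0)
```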
Follicular Lymphoma (FL) is the second most common subtype of lymphoma in the Western world. It is a low-grade lymphoma arising from Germinal Centre (GC) B cells. The neoplasm predominantly consists of a back-to-back arrangement of nodules or follicles of transformed GC B cells, with replacement of the lymph node architecture and loss of the normal cortex and medullary differentiation that is preserved in non-neoplastic or reactive lymph nodes. There is growing interest in studying different cell subsets inside and on the periphery of the follicles to direct curative therapies and minimize treatment-related complications. To facilitate this analysis, we develop an automated method for follicle detection from images of CD8-stained histopathological slides. The proposed method, inspired by U-net, segments follicles from whole slide images and is trained on eight whole digital slides. Evaluation on an independent dataset yielded an average Dice similarity coefficient of 85.6% when compared to an expert pathologist’s annotations. We expect that the method will play a considerable role in comparing the ratios of different subsets of cells inside and at the periphery of the follicles.
Hyperspectral imaging (HSI) is an emerging modality with great potential in disease diagnosis and surgical cancer resection. Herein, we evaluate the feasibility of HSI for discriminating and diagnosing colon cancer metastasis in the liver from five hematoxylin and eosin stained histopathological specimens, collected from the same patient during intraoperative frozen section analysis. Cancer and non-cancer spectra, along with corresponding spatial maps, were estimated from the hyperspectral images by means of spectral unmixing. It was found that the maximal angle between cancer spectra is 1.02 degrees less than the minimal angle between cancer and non-cancer spectra. Thus, the spectral angle mapper was used for pixel-based diagnosis of cancer, yielding sensitivity between 81.23% and 97.12%, specificity between 85.85% and 97.3%, and accuracy between 86.85% and 96.92%.
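The spectral angle mapper reduces to the angle between a pixel spectrum and a reference spectrum; a NumPy sketch (the decision threshold below is illustrative, since the paper reports a 1.02-degree separation between angle ranges rather than a specific cutoff):

```python
# NumPy sketch of the spectral angle mapper; the 2-degree threshold is
# illustrative (the paper reports a 1.02-degree gap separating cancer
# from non-cancer spectral angles, not a specific threshold).
import numpy as np

def spectral_angle(x, ref):
    cos = np.dot(x, ref) / (np.linalg.norm(x) * np.linalg.norm(ref))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

rng = np.random.default_rng(0)
cancer_ref = rng.random(100)        # endmember estimated by spectral unmixing
pixel = rng.random(100)             # one pixel's spectrum

angle = spectral_angle(pixel, cancer_ref)
is_cancer = angle < 2.0             # illustrative decision rule
```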
Breast cancer is the second largest cause of cancer death among women after skin cancer. Mitotic count is an important biomarker for predicting breast cancer prognosis according to the Nottingham Grading System. Pathologists look for tumour areas, select 10 high-power field (HPF) images, and assign a grade based on the number of mitotic counts. Mitosis detection is a tedious task because the pathologist has to inspect a large area, and judgments about mitotic cells are also subjective. Because of these problems, an assistive tool for the pathologist would reduce subjectivity and the time for diagnosis. Due to recent advancements in whole slide imaging, CAD (computer-aided diagnosis) systems are becoming popular. Mitosis detection in scanner images is difficult because of variability in shape, color, and texture, and the similar appearance of mitoses to apoptotic nuclei and darkly stained nuclear structures. In this paper, the mitosis detection task is carried out with a state-of-the-art object detector (Faster R-CNN) and classifiers (Resnet152, Densenet169, and Densenet201) on the ICPR 2012 dataset. The Faster R-CNN is used in two ways. In the first, it was treated as an object detector, which gave an F1-score of 0.79, while in the second, it was treated as a Region Proposal Network followed by an ensemble of classifiers, giving an F1-score of 0.75.
Complex ‘Big Data’ questions that involve machine learning require large datasets for training. This is particularly problematic for Deep Learning methods in the biomedical imaging domain, and specifically Digital Pathology. Transfer Learning has been shown to be a promising method for training classifiers on smaller datasets. In this work we investigate the effectiveness of aggregated Transfer Learning using VGG19 trained on ImageNet, fine-tuning its parameters with histopathological patches from metastatic breast cancer tissue, and then classifying soft tissue sarcoma patches. We compare results with and without transfer learning, and with fine-tuning applied to different layers. From the results, it is apparent that fine-tuning earlier VGG19 convolutional blocks with breast cancer patches and applying bottleneck feature extraction to soft tissue sarcoma can have an adverse effect on accuracy and other performance measures. Nevertheless, the aggregated approach is a promising method for digital pathology and requires much more investigation.
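A hedged Keras sketch of this setup; which convolutional blocks to unfreeze is precisely the experimental variable the paper investigates, so the choice below is illustrative:

```python
# Hedged Keras sketch; which convolutional blocks to unfreeze is exactly
# the experimental variable the paper varies, so "block5" is illustrative.
import tensorflow as tf
from tensorflow.keras.applications import VGG19

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")  # fine-tune last block only

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),    # sarcoma vs. non-sarcoma
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```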
The scarcity of large histopathological datasets can be problematic for Deep Learning in medical imaging and digital pathology. However, Transfer Learning has been shown to be promising for the effective training of classifiers on smaller datasets. ImageNet is a popular dataset that is commonly used for transfer learning in various domains; the features extracted from it are generalizable and can be applied to alternative tasks and datasets. Deep Learning typically requires a vast amount of data for training. In our study, however, we interrogated two datasets with patches extracted from only 30 and 60 whole slide images (WSIs), respectively. As a consequence, we decided to extract features and feed them into separate classifier models such as a fully connected softmax layer, Support Vector Machines (SVM) and Logistic Regression. This study demonstrated that for the small dataset, the best pretrained feature extractor was DenseNet201, whereas the best model for training was a fully connected softmax layer, with a reported accuracy of 88.20% and an average f1-score of 0.881. For the larger dataset, the best feature extractor was InceptionResNetV2, where the highest accuracy of 90.60% and f1-score of 0.908 were produced when classifying with a fully connected softmax layer. All models apart from ResNet50 demonstrated an improvement in performance when pretrained on ImageNet for bottleneck feature extraction.
Morphological patterns of tissues are an important index by which pathologists tell the difference between cancer and non-cancer cells. However, diagnosis by human eyes and experience has limitations. For example, ovarian cancers are categorized into four morphological types, but this classification does not correspond directly to malignancy. Even worse, there are cases in which medicines are not effective even though patients have the same type of ovarian cancer. New methods to diagnose cancer cells are therefore in demand. In this paper, we measured and analyzed hyperspectral data of colon cancer nuclei and ovarian cancer nuclei and showed that a hyperspectral camera has the potential to distinguish cancer in the early stage and to find a novel classification that corresponds to cancer malignancy. Machine learning methods enabled us to distinguish four stages of colon canceration with 98.9% accuracy. In addition, two groups of ovarian cancer specimens created based on the hyperspectral data showed a significant difference in their cumulative survival curves.
The morphological features that pathologists use to differentiate neoplasms from normal tissue are nonspecific to tissue type. For example, given a Ki67-stained biopsy of a neuroendocrine or breast tumor, a pathologist would be able to correctly identify morphologically abnormal cells in both samples but may struggle to identify the origin of each sample. This is also true for other pathological malignancies such as carcinomas, sarcomas, and leukemia. This implies that computer algorithms trained to recognize tumor from one site should be able to identify tumor from other sites with similar tumor subtypes. Here, we present the results of an experiment that supports this hypothesis. We train a deep learning system to distinguish tumor from non-tumor regions in Ki67-stained neuroendocrine tumor digital slides. Then, we test the same, unmodified deep learning model to distinguish breast cancer from non-cancer regions. When applied to a sample of 96 high-power fields, our system achieved a cumulative pixel-wise accuracy of 86%. To our knowledge, our results are the first to formally demonstrate generalized segmentation of tumors from different sites of origin through image analysis. This paradigm has the potential to help with the design of tumor identification algorithms as well as the composition of the datasets they draw from.
Convolutional neural networks (CNNs) have been popularly used to solve the problem of cell/nuclei classification and segmentation in histopathology images. Despite their pervasiveness, CNNs are typically fine-tuned on specific, large, labeled datasets; since such datasets are hard to collect and annotate, this is not a scalable approach. In this work, we aim to gain deeper insights into the nature of the problem. We used a cervical cancer dataset with cells labeled into four classes by an expert pathologist. By employing pre-training on this dataset, we propose a one-shot learning model for cervical cell classification in histopathology tissue images. We extract regional maximum activation of convolutions (R-MAC) global descriptors and train a one-shot learning memory module, with the goal of using it for various cancer types and eliminating the need for expensive, difficult-to-collect, large, labeled whole slide image (WSI) datasets. Our model achieved 94.6% accuracy in detecting the four cell classes on the test dataset. Further, we present our analysis of the dataset and features to better understand and visualize the problem in general.
Recently in the field of digital pathology, there have been promising advances with regard to deep learning for pathological images. These methods are often considered “black boxes”, where tracing inputs to outputs and diagnosing errors is a difficult task. This matters because neural networks are fragile, and dataset variation, which in digital pathology is attributed to biological variance, can cause low accuracy. In deep learning, this is typically addressed by adding data to the training set. However, training data is costly and time-consuming to create and may not cover all the variation seen in these images. Digitized histology carries a great deal of variation across many dimensions (color/stain variation, lighting intensity, presentation of a disease, etc.), and some of these “low-level” image variations may cause a deep network to break due to its fragility. In this work, we use a unique dataset – cases of serially-registered H&E tissue samples from oral cavity cancer (OCC) patients – to explore the errors of a classifier trained to identify and segment different tissue types. Registered serial sections allow us to eliminate variability due to biological structure, focus on image variability including staining and lighting, and identify sources of error that may cause deep learning to fail. We find that perceptually-insignificant changes in an image (minor lighting and color shifts) can result in extremely poor classification performance, even when the training process tries to prevent overfitting. This suggests that great care must be taken to augment and normalize datasets to prevent such errors.
Tumor budding is a recently recognized, independent prognostic factor in colorectal cancer, but it lacks a standardized assessment methodology. Although staining with pan-cytokeratin has been shown to mitigate the lack of reproducible intra-observer agreement, this antibody remains expensive and its usage is limited in clinical practice. We propose an automated image analysis framework that takes advantage of the visual superiority of pan-cytokeratin and the routine use of H&E to detect and quantify tumor budding. Our framework has demonstrated promising ability to identify tumor regions of colorectal slides – 92.0% accuracy, 94.5% sensitivity, and 85% specificity – across four independent datasets.
Traditional cell nucleus detection relies on pathologists with microscopes, which is a tedious, costly and time-consuming process. We developed a deep learning and stochastic processing method, named Quick-in-process (Qip)-Net, to automatically segment such microscopy images. Qip-Net is proposed as an automated method to detect cell nuclei under various conditions, such as randomized cell types, different magnifications, and varying image backgrounds. The network is built on regions with convolutional neural network features (R-CNN) and is trained on 663 original images and their corresponding masks from the Kaggle website. The results showed that Qip-Net could rapidly segment cell nuclei from a testing dataset with complex and disruptive surroundings, with an S-2 score about 3% better than U-Net's.
Neurodegenerative diseases including Alzheimer’s affect millions around the world, and this number is projected to increase over the years unless a breakthrough is made. There are several theories on the pathogenesis of neurodegenerative diseases, with the amyloid cascade and tau theories being the most prominent. The formation of amyloid plaques and tau tangles collapses capillaries in the brain, thereby inducing hypoxia and destruction of neurons from loss of nourishment. While we do understand some of the changes that occur in the brain’s vasculature from the pathogenesis of these diseases, they have not yet been mathematically characterized with precision. A computational pipeline is presented here to analyze optically sectioned mouse brain sections imaged via two-photon microscopy and to characterize various vasculature parameters which are known to deteriorate in neurodegenerative diseases. Our proposed pipeline aims to quantify various brain vasculature parameters, such as vessel tortuosity, diameter, volume and length, as well as the degree of difference, to understand disease pathogenesis, with the eventual hope of providing drug intervention to regress or minimize these changes.
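As one concrete example of the named parameters, arc-chord tortuosity of a vessel centerline (centerline extraction from the two-photon volume is assumed to have been done upstream):

```python
# One concrete vasculature parameter: arc-chord tortuosity of a vessel
# centerline (centerline extraction from the two-photon volume is assumed
# to have been done upstream).
import numpy as np

def tortuosity(centerline):
    """Arc length divided by chord length; 1.0 means perfectly straight."""
    seg = np.diff(centerline, axis=0)
    arc = np.linalg.norm(seg, axis=1).sum()
    chord = np.linalg.norm(centerline[-1] - centerline[0])
    return arc / chord

t = np.linspace(0, np.pi, 50)
vessel = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)  # a curved 3D vessel
print(tortuosity(vessel))   # > 1, reflecting the curvature
```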