This PDF file contains the front matter associated with SPIE Proceedings Volume 12933, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Histopathology involves the analysis of tissue samples to diagnose several diseases, such as cancer. The analysis of tissue samples is a time-consuming procedure performed manually by medical experts, namely pathologists. Computational pathology aims to develop automatic methods to analyze Whole Slide Images (WSI), which are digitized histopathology images, and has shown accurate performance in image analysis. Although the amount of available WSIs is increasing, the capacity of medical experts to manually analyze samples is not expanding proportionally. This paper presents a fully automatic pipeline to classify lung cancer WSIs, considering four classes: Small Cell Lung Cancer (SCLC), non-small cell lung cancer divided into LUng ADenocarcinoma (LUAD) and LUng Squamous cell Carcinoma (LUSC), and normal tissue. The pipeline includes a self-supervised algorithm for pre-training the model and Multiple Instance Learning (MIL) for WSI classification. The model is trained on 2,226 WSIs and obtains an AUC of 0.8558 ± 0.0051 and a weighted F1-score of 0.6537 ± 0.0237 for the 4-class classification on the test set. The capability of the model to generalize was evaluated by testing it on LUAD versus LUSC classification using the public The Cancer Genome Atlas (TCGA) dataset. In this task, the model obtained an AUC of 0.9433 ± 0.0198 and a weighted F1-score of 0.7726 ± 0.0438.
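To illustrate the MIL aggregation step such a pipeline relies on, the following is a minimal sketch of attention-based MIL pooling over pre-extracted patch features; the feature dimensions, two-layer attention network, and four-class head are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal attention-based MIL pooling for WSI classification (illustrative sketch).
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=4):
        super().__init__()
        # Small attention network over patch embeddings (dimensions are assumptions).
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):           # patch_feats: (n_patches, feat_dim)
        scores = self.attention(patch_feats)  # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted slide-level embedding
        return self.classifier(slide_feat), weights

model = AttentionMIL()
logits, attn = model(torch.randn(1000, 512))  # e.g. 1000 pre-extracted patch features
```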
Convolutional neural networks (CNNs) are known to fail if a difference exists in the data they are trained and tested on, known as domain shifts. This sensitivity is particularly problematic in computational pathology, where various factors, such as different staining protocols and stain providers, introduce domain shifts. Many solutions have been proposed in the literature to address this issue, with data augmentation being one of the most popular approaches. While data augmentation can significantly enhance the performance of a CNN in the presence of domain shifts, it does not guarantee robustness. Therefore, it would be advantageous to integrate generalization to specific sources of domain shift directly into the network’s capabilities when known to be present in the real world. In this study, we draw inspiration from roto-translation equivariant CNNs and propose a customized layer to enhance domain generalization and the CNN’s ability to handle variations in staining. To evaluate our approach, we conduct experiments on two publicly available, multi-institutional datasets: CAMELYON17 and MIDOG.
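The abstract does not specify the customized layer itself; the sketch below only illustrates the roto-translation-equivariant idea it draws inspiration from, namely applying one learned kernel at several rotations and pooling over orientations, under assumed kernel sizes and a simple max pooling.

```python
# Sketch of a p4-style rotation-pooled convolution: one learned kernel applied at four
# 90-degree rotations, with a max over orientations (illustration only, not the paper's layer).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RotPoolConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):
        responses = []
        for r in range(4):
            w = torch.rot90(self.weight, r, dims=(2, 3))   # rotate the kernel
            responses.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        return torch.stack(responses, dim=0).max(dim=0).values  # pool over orientations

y = RotPoolConv2d(3, 16)(torch.randn(2, 3, 64, 64))  # responses shared across 90-degree rotations
```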
The field of histopathology, which involves visual examination of tissue samples at a microscopic scale, is very important for the diagnosis of cancer. Although this task is currently performed by human experts, the design of computer vision-based systems to assist them is an active research area. The problem is well suited to computer-based image analysis, especially given the great success of convolutional neural networks (CNNs) in image segmentation and classification over the last decade. However, applying CNNs to this problem is challenging for a number of reasons, such as excessively high resolution (and the associated computational burden), variations in sample processing, and insufficient annotation. In this work, we propose a CNN-based approach to the problem of prostate cancer grading from Whole Slide Images (WSIs). We use a patch-based, multi-step training algorithm to address the challenges of large image size, tissue sample variations, and partial annotation. We then propose two novel classification strategies using an ensemble of CNN models to classify tissue slide images into ISUP grades (1-5). We demonstrate the efficacy of our method on the publicly available large-scale Prostate cANcer graDe Assessment (PANDA) Challenge dataset. The effectiveness of the technique is measured using Cohen's quadratic kappa score. The results are highly accurate (kappa score of 0.88) and better than other leading state-of-the-art methods.
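Cohen's quadratic kappa, the metric used for the PANDA evaluation, can be computed with scikit-learn; a short sketch with made-up grade labels:

```python
# Quadratic-weighted Cohen's kappa for ISUP grade predictions (toy example).
from sklearn.metrics import cohen_kappa_score

y_true = [1, 3, 5, 2, 4, 1, 3]   # reference ISUP grades (illustrative)
y_pred = [1, 3, 4, 2, 4, 2, 3]   # model predictions
kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"quadratic kappa = {kappa:.3f}")
```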
Computer-Aided Diagnosis, Prognosis, and Predictive Analysis I
Whole Slide Image (WSI) analysis plays a pivotal role in computer-aided diagnosis and disease prognosis in digital pathology. While the emergence of deep learning and self-supervised learning (SSL) techniques helps capture relevant information in WSIs, directly relying on deep features overlooks essential domain-specific information captured by traditional handcrafted features. To address this issue, we propose fusing handcrafted and deep features in the multiple instance learning (MIL) framework for WSI classification. Inspired by advancements in transformers, we propose a novel cross-attention fusion mechanism “CA-Fuse-MIL,” to learn complementary information from handcrafted and deep features. We demonstrate that Cross-Attention fusion outperforms WSI classification using either just handcrafted or deep features. On the TCGA Lung Cancer dataset, our proposed fusion technique boosts the accuracy by upto 5.21% and 1.56% over two different set of deep features baseline. We also explore a variant of CA-Fuse-MIL which utilizes multiple cross-attention layers.
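The following is a minimal sketch of cross-attention between handcrafted and deep patch features; the projection dimensions, use of handcrafted features as queries, and mean pooling are assumptions for illustration and not the CA-Fuse-MIL definition.

```python
# Sketch of cross-attention fusion between handcrafted and deep patch features.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, hand_dim=128, deep_dim=512, embed_dim=256, n_heads=4, n_classes=2):
        super().__init__()
        self.q_proj = nn.Linear(hand_dim, embed_dim)    # handcrafted features as queries
        self.kv_proj = nn.Linear(deep_dim, embed_dim)   # deep features as keys/values
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, hand_feats, deep_feats):  # (n_patches, hand_dim), (n_patches, deep_dim)
        q = self.q_proj(hand_feats).unsqueeze(0)
        kv = self.kv_proj(deep_feats).unsqueeze(0)
        fused, _ = self.attn(q, kv, kv)          # each handcrafted query attends over deep features
        return self.classifier(fused.mean(dim=1).squeeze(0))  # simple mean pooling to a bag logit

logits = CrossAttentionFusion()(torch.randn(800, 128), torch.randn(800, 512))
```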
Breast cancer is the most common cancer diagnosed in women and causes over 40,000 deaths annually in the United States. In early-stage, HR+, HER2- invasive breast cancer, the Oncotype DX (ODX) Breast Cancer Recurrence Score Test predicts the risk of recurrence and the benefit of chemotherapy. However, this gene assay is costly and time-consuming, making it inaccessible to many patients. This study proposes a novel deep-learning approach, Deep-ODX, which performs ODX recurrence risk prediction based on routine H&E histopathology images. Deep-ODX is a multiple-instance learning model that leverages a cross-attention neural network for instance aggregation. We train and evaluate Deep-ODX on a whole slide image dataset collected from 151 breast cancer patients. Deep-ODX achieves an AUC of 0.862 on our dataset, outperforming existing deep learning models. This study indicates that deep learning methods can predict ODX results from histopathology images, offering a potentially cost-effective prognostic solution with broader accessibility.
Pancreatic ductal adenocarcinoma (PDAC) is an aggressive disease with a dismal prognosis. Despite efforts to improve therapy outcomes in PDAC, overall survival remains at 2 to 5 years following initial diagnosis. To date, there are no established predictive or prognostic biomarkers for PDAC tumors. The availability of digitized H&E-stained whole slide images (WSI) has led to an uptake in deep learning-based approaches toward comprehensive, automatic interrogation of tumor-specific attributes for disease diagnosis and prognosis. However, a significant challenge with the interrogation of large WSIs (gigabytes in size) is that only a small portion of the tissue (i.e., ROIs) contains information pertinent to diagnosis or prognosis. In this work, we investigated whether "high-attention" ROIs (i.e., patch regions) identified by an attention-driven model to differentiate tumor from benign regions may also be associated with survival outcomes in PDAC patients. The attention model was developed using a total of n = 461 WSI of H&E-stained pancreatic tumors from two public repositories. Our approach first identifies attention maps (i.e., ROIs) using clustering-constrained-attention multiple-instance learning (CLAM) on WSI labeled as PDAC versus benign pancreas. Subsequently, the learned attention maps are employed within a LASSO-regularized Cox proportional hazards model to distinguish between high and low survival-risk groups of PDAC patients. Results were evaluated via a log-rank test and compared with established demographic variables (age, sex, race) to predict survival risk. While individual demographic variables did not demonstrate significant differences in survival risk, the attention-driven WSI features yielded significant stratification of low- and high-risk groups in both the training set (p = 0.0014, Hazard Ratio (HR) = 2.0 (95% Confidence Interval (CI) 1.3-3.1)) and the test set (p = 0.0012, HR = 2.0 (95% CI 1.3-2.6)). Following a large, multi-institutional validation, our deep-learning approach may allow for designing more precise prognostic and predictive histopathological biomarkers for PDAC tumors.
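A minimal sketch of the survival-modeling step described here, an L1-penalized Cox model on slide-level features followed by a log-rank test between predicted risk groups, using the lifelines package; the feature columns, penalty strength, and median risk split are placeholders, not the study's settings.

```python
# LASSO-type Cox model on attention-derived features, then a log-rank test between risk groups.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((120, 5)), columns=[f"attn_feat_{i}" for i in range(5)])
df["survival_months"] = rng.exponential(24, 120)   # placeholder follow-up times
df["death_observed"] = rng.integers(0, 2, 120)     # 1 = event observed, 0 = censored

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)     # l1_ratio=1.0 gives a LASSO-type penalty
cph.fit(df, duration_col="survival_months", event_col="death_observed")

risk = cph.predict_partial_hazard(df)
high = risk > risk.median()                        # median split into high/low risk groups
res = logrank_test(df.loc[high, "survival_months"], df.loc[~high, "survival_months"],
                   event_observed_A=df.loc[high, "death_observed"],
                   event_observed_B=df.loc[~high, "death_observed"])
print(res.p_value)
```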
Cancer prognosis and survival outcome predictions are crucial for therapeutic response estimation and for stratifying patients into various treatment groups. Medical domains concerned with cancer prognosis are abundant with multiple modalities, including pathological image data and non-image data such as genomic information. To date, multimodal learning has shown potential to enhance clinical prediction model performance by extracting and aggregating information from different modalities of the same subject. This approach could outperform single-modality learning, thus improving computer-aided diagnosis and prognosis in numerous medical applications. In this work, we propose a cross-modality attention-based multimodal fusion pipeline designed to integrate modality-specific knowledge for patient survival prediction in non-small cell lung cancer (NSCLC). Instead of merely concatenating or summing the features from different modalities, our method gauges the importance of each modality for feature fusion using cross-modality relationships when fusing the multimodal features. Compared with single-modality models, which achieved c-indices of 0.5772 and 0.5885 using solely tissue image data or RNA-seq data, respectively, the proposed fusion approach achieved a c-index of 0.6587 in our experiment, showcasing its capability to assimilate modality-specific knowledge from varied modalities.
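The concordance index (c-index) used to compare the fusion and single-modality models can be computed with lifelines; the numbers below are toy values, not the study's data.

```python
# Computing the c-index from predicted risk scores and (possibly censored) survival times.
from lifelines.utils import concordance_index

event_times = [12, 30, 7, 55, 21]        # observed survival times
risk_scores = [0.9, 0.3, 1.2, 0.1, 0.7]  # higher = higher predicted risk
event_observed = [1, 0, 1, 1, 0]         # 1 = death observed, 0 = censored

# concordance_index expects predictions that are higher for longer survival,
# so negate risk scores before passing them in.
print(concordance_index(event_times, [-r for r in risk_scores], event_observed))
```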
Current deep learning methods in histopathology are limited by the small amount of available data and time consumption in labeling the data. Colorectal cancer (CRC) tumor budding quantification performed using H&E-stained slides is crucial for cancer staging and prognosis but is subject to labor-intensive annotation and human bias. Thus, acquiring a large-scale, fully annotated dataset for training a tumor budding (TB) segmentation/detection system is difficult. Here, we present a DatasetGAN-based approach that can generate essentially an unlimited number of images with TB masks from a moderate number of unlabeled images and a few annotated images. The images generated by our model closely resemble the real colon tissue on H&E-stained slides. We test the performance of this model by training a downstream segmentation model, UNet++, on the generated images and masks. Our results show that the trained UNet++ model can achieve reasonable TB segmentation performance, especially at the instance level. This study demonstrates the potential of developing an annotation-efficient segmentation model for automatic TB detection and quantification.
Tissue segmentation is a routine preprocessing step to reduce the computational cost of whole slide image (WSI) analysis by excluding background regions. Traditional image processing techniques are commonly used for tissue segmentation, but often require manual adjustments to parameter values for atypical cases, fail to exclude all slide and scanning artifacts from the background, and are unable to segment adipose tissue. Pen marking artifacts in particular can be a potential source of bias for subsequent analyses if not removed. In addition, several applications require the separation of individual cross-sections, which can be challenging due to tissue fragmentation and adjacent positioning. To address these problems, we developed a convolutional neural network for tissue and pen marking segmentation using a dataset of 200 H&E-stained WSIs. For separating tissue cross-sections, we propose a novel post-processing method based on clustering predicted centroid locations of the cross-sections in a 2D histogram. On an independent test set, the model achieved a mean Dice score of 0.981 ± 0.033 for tissue segmentation and a mean Dice score of 0.912 ± 0.090 for pen marking segmentation. The mean absolute difference between the number of annotated and separated cross-sections was 0.075 ± 0.350. Our results demonstrate that the proposed model can accurately segment H&E-stained tissue cross-sections and pen markings in WSIs while being robust to many common slide and scanning artifacts. The model, its trained parameters, and the post-processing method are made publicly available as a Python package called SlideSegmenter.
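The sketch below illustrates one way to realize the described post-processing idea, accumulating predicted centroid locations in a 2D histogram and labeling connected peaks as separate cross-sections; it is not the SlideSegmenter implementation, and the bin count and vote threshold are arbitrary assumptions.

```python
# Illustrative post-processing: cluster per-pixel predicted centroid locations in a
# 2D histogram and label connected peaks as separate tissue cross-sections.
import numpy as np
from scipy import ndimage

def separate_cross_sections(centroid_xy, bins=64, min_votes=50):
    """centroid_xy: (N, 2) predicted centroid coordinates, one per tissue pixel."""
    hist, xe, ye = np.histogram2d(centroid_xy[:, 0], centroid_xy[:, 1], bins=bins)
    labels, n = ndimage.label(hist >= min_votes)        # each blob of votes = one cross-section
    # Assign every tissue pixel to the cross-section whose histogram bin it voted into.
    xi = np.clip(np.digitize(centroid_xy[:, 0], xe) - 1, 0, bins - 1)
    yi = np.clip(np.digitize(centroid_xy[:, 1], ye) - 1, 0, bins - 1)
    return labels[xi, yi], n

votes = np.random.rand(10000, 2) * 1000                  # fake centroid predictions
pixel_labels, n_sections = separate_cross_sections(votes)
```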
Colorectal cancer (CRC) is the third most common cancer in the United States. Tumor Budding (TB) detection and quantification are crucial yet labor-intensive steps in determining the CRC stage through the analysis of histopathology images. To help with this process, we adapt the Segment Anything Model (SAM) on the CRC histopathology images to segment TBs using SAM-Adapter. In this approach, we automatically take task-specific prompts from CRC images and train the SAM model in a parameter-efficient way. We compare the predictions of our model with the predictions from a trained-from-scratch model using the annotations from a pathologist. As a result, our model achieves an intersection over union (IoU) of 0.65 and an instance-level Dice score of 0.75, which are promising in matching the pathologist’s TB annotation. We believe our study offers a novel solution to identify TBs on H&E-stained histopathology images. Our study also demonstrates the value of adapting the foundation model for pathology image segmentation tasks.
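For reference, the two reported metrics can be computed from binary masks as follows; note that the paper's instance-level Dice may be defined per matched object, whereas this sketch shows the plain pixel-wise versions on toy masks.

```python
# Pixel-wise IoU and Dice between a predicted and a reference tumor-bud mask (toy example).
import numpy as np

def iou_dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
    return iou, dice

print(iou_dice(np.random.rand(256, 256) > 0.5, np.random.rand(256, 256) > 0.5))
```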
The mouse brain offers unique features for the study of genes involved in human brain development. Specifically, genetic manipulation such as gene inactivation, so easily achieved in the mouse, allows us to explore the effects of genes on brain morphogenesis. Using a high-throughput neuroanatomical screen on coronal and parasagittal brain sections involving 1,566 mutant lines developed by the International Mouse Phenotyping Consortium, a list of 198 genes whose inactivation leads to neuroanatomical phenotypes was published. To achieve this milestone, tens of thousands of hours of segmentation were necessary, since manual segmentation of a single brain takes approximately 1 hour. Our work consisted of applying deep learning methods to produce automated segmentations of the 24 anatomical regions used in the aforementioned screen. The dataset comprises about 2,000 annotated images, each 1 GB in size, which required compression. Training was performed for each region of interest and at two image resolutions (512x256 and 2048x1024) using a U-Net and an Attention U-Net architecture. At 2048x1024, an overall DSC (Dice Similarity Coefficient) of 0.90 ± 0.01 was achieved across all 24 regions, with the best performance for the total brain (DSC 0.99 ± 0.01) and the worst for the fibers of the pons (DSC 0.71 ± 0.18). Using a single command line, the end user is now able to pre-analyze images automatically and then run the existing analytical pipeline of ImageJ macros to validate the automatically generated regions of interest. We estimate the time saved at a factor of 6 to 10.
High-throughput imaging techniques have catalyzed significant strides in regenerative medicine, predominantly through advancements in stem cell research. Despite this, the analysis of these images often overlooks important biological implications due to the persistent challenge posed by artifacts during segmentation. In addressing this challenge, this study introduces a new deep learning architecture: a cross-structure, artifact-free U-Net (AFU-Net) model designed to optimize in vitro virtual nuclei staining of stem cells. This innovative framework, inspired by U-Net-based models, incorporates a cross-structure noise removal pre-processing layer. This layer has shown proficiency in handling artifacts frequently found on the peripheries of bright-field images used in stem cell manufacturing processes. In our extensive analysis using a gradient-density dataset of mesenchymal stem cell images, our model consistently outperformed established models in the domain. Specifically, when assessed using critical segmentation evaluation metrics— Segmentation Covering (SC) and Variation of Information (VI)—the proposed model yielded impressive results. It achieved a mean SC of 0.979 and a mean VI of 0.194, standing out from other standard configurations. Further optimization was evident in scenarios involving overlapping tiling, where the model was tasked with countering artifacts from segmented cells. Here, within a cell media setting, the model reached an elevated mean SC of 0.980 and a reduced mean VI of 0.187. The outcomes from our investigations signify a marked enhancement in the standardization and efficiency of stem cell image analysis. This facilitates a more nuanced understanding of cellular analytics derived from label-free images, bridging crucial gaps in both research and clinical applications of stem cell methodologies. While the primary focus has been on stem cells, the potential applicability of our architecture holds promise for broader realms, encompassing various biological and medical imaging contexts.
Understanding the way cells communicate, co-locate, and interrelate is essential to understanding human physiology. Hematoxylin and eosin (H&E) staining is ubiquitously available both for clinical studies and research. The Colon Nucleus Identification and Classification (CoNIC) Challenge has recently innovated on robust artificial intelligence labeling of six cell types on H&E stains of the colon. However, this is a very small fraction of the number of potential cell classification types. Specifically, the CoNIC Challenge is unable to classify epithelial subtypes (progenitor, endocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), or connective subtypes (fibroblasts, stromal). In this paper, we propose to use inter-modality learning to label previously un-labelable cell types on virtual H&E. We leveraged multiplexed immunofluorescence (MxIF) histology imaging to identify 14 subclasses of cell types. We performed style transfer to synthesize virtual H&E from MxIF and transferred the higher density labels from MxIF to these virtual H&E images. We then evaluated the efficacy of learning in this approach. We identified helper T and progenitor nuclei with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02) respectively on virtual H&E. This approach represents a promising step towards automating annotation in digital pathology.
The use of artificial intelligence in healthcare is a current hot topic, generating tons of excitement and pushing multiple academic medical centers, startups, and large established IT companies to dive into clinical AI model development. However, amongst that excitement, one topic that has lacked direction is how healthcare institutions, from small clinical practices to large health systems, should approach AI model deployment. Unlike typical healthcare IT implementations, AI models have special considerations that must be addressed prior to moving them into clinical practice. This talk will review the major issues surrounding clinical AI implementations and present a scalable, standardized, and responsible framework for AI deployment that can be adopted by many different healthcare organizations, departments, and functional areas.
Generating realistic tissue images with annotations is a challenging task that is important in many computational histopathology applications. Synthetically generated images and annotations are valuable for training and evaluating algorithms in this domain. To address this, we propose an interactive framework generating pairs of realistic colorectal cancer histology images with corresponding glandular masks from glandular structure layouts. The framework accurately captures vital features like stroma, goblet cells, and glandular lumen. Users can control gland appearance by adjusting parameters such as the number of glands, their locations, and sizes. The generated images exhibit good Frechet Inception Distance (FID) scores compared to the state-of-the-art image-to-image translation model. Additionally, we demonstrate the utility of our synthetic annotations for evaluating gland segmentation algorithms. Furthermore, we present a methodology for constructing glandular masks using advanced deep generative models, such as latent diffusion models. These masks enable tissue image generation through a residual encoder-decoder network.
Eosinophilic esophagitis (EoE) is a chronic and relapsing disease characterized by esophageal inflammation. Symptoms of EoE include difficulty swallowing, food impaction, and chest pain, which significantly impact the quality of life, resulting in nutritional impairments, social limitations, and psychological distress. The diagnosis of EoE is typically based on a threshold (15 to 20) of eosinophils (Eos) per high-power field (HPF). Since the current counting process of Eos is resource-intensive for human pathologists, automatic methods are desired. Circle representation has been shown to be a more precise, yet less complicated, representation for automatic instance cell segmentation, as in the CircleSnake approach. However, CircleSnake was designed as a single-label model and cannot handle multi-label scenarios. In this paper, we propose the multi-label CircleSnake model for instance segmentation of Eos. It extends the original CircleSnake model from a single-label design to a multi-label model, allowing segmentation of multiple object types. Experimental results illustrate the model's superiority over the traditional Mask R-CNN model and the DeepSnake model in terms of average precision (AP) in identifying and segmenting eosinophils, thereby enabling enhanced characterization of EoE. This automated approach holds promise for streamlining the assessment process and improving diagnostic accuracy in EoE analysis. The source code has been made publicly available at https://github.com/yilinliu610730/EoE.
Precise identification of multiple cell classes in high-resolution Giga-pixel whole slide imaging (WSI) is critical for various clinical scenarios. Building an AI model for this purpose typically requires pixel-level annotations, which are often unscalable and must be done by skilled domain experts (e.g., pathologists). However, these annotations can be prone to errors, especially when distinguishing between intricate cell types (e.g., podocytes and mesangial cells) using only visual inspection. Interestingly, a recent study showed that lay annotators, when using extra immunofluorescence (IF) images for reference (referred to as molecular-empowered learning), can sometimes outperform domain experts in labeling. Despite this, the resource-intensive task of manual delineation remains a necessity during the annotation process. In this paper, we explore the potential of bypassing pixel-level delineation by employing the recent segment anything model (SAM) on weak box annotation in a zero-shot learning approach. Specifically, we harness SAM’s ability to produce pixel-level annotations from box annotations and utilize these SAM-generated labels to train a segmentation model. Our findings show that the proposed SAM-assisted molecular-empowered learning (SAM-L) can diminish the labeling efforts for lay annotators by only requiring weak box annotations. This is achieved without compromising annotation accuracy or the performance of the deep learning-based segmentation. This research represents a significant advancement in democratizing the annotation process for training pathological image segmentation, relying solely on non-expert annotators.
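A minimal sketch of the box-to-mask step such an approach relies on, using the public segment-anything package to turn a weak box annotation into a pixel-level mask; the checkpoint file, image, and box coordinates are placeholders, and this is not the paper's training code.

```python
# Producing a pixel-level mask from a weak box annotation with SAM (illustrative sketch).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # placeholder checkpoint path
predictor = SamPredictor(sam)

image_rgb = np.zeros((256, 256, 3), dtype=np.uint8)   # stand-in for an H&E or IF patch
predictor.set_image(image_rgb)

box = np.array([120, 80, 180, 140])                   # weak annotation: x0, y0, x1, y1
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
cell_mask = masks[0]                                   # pixel-level label to train a segmentation model
```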
The instance segmentation of whole cells and of the respective sub-cellular compartments (nuclei, cytosol, and membrane) is key to enabling the quantification of biomarker signals (e.g., HER2, PDL1, PD1) at a single-cell level in digital histopathology images. Instance segmentation of whole-cell objects is typically obtained using deep learning models trained on large-scale datasets of manual, pixel-precise annotations. Aiming for a segmentation model in the immunofluorescence (IF) domain and starting from an available manually labeled dataset in the immunohistochemistry (IHC) stain domain, we translate this dataset of whole-cell instances to the target domain using known CycleGAN-based stain translation methods. To further increase the size of the training data while limiting the associated annotation burden, we propose to additionally leverage, through the introduction of two target-adaptive losses, two additional datasets that are weakly labeled for nucleus centers and nucleus masks, respectively. The introduced losses map the model's five class-probability maps (nucleus center, cell center, nucleus body, cytosol, membrane) to the binary class configuration expected by the nucleus center and nucleus mask datasets. We show quantitatively, on a test set of manually labeled IF FOVs, that the approach yields increased accuracy of the detected and segmented cell instances compared to a baseline model trained solely on the translated dataset of whole-cell instances. The results also indicate the ability of the approach to fill the residual domain gap between the source and target domains.
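The exact channel grouping used by the target-adaptive losses is not given in the abstract; the sketch below shows one plausible mapping from the five class-probability maps to a binary nucleus-mask target, with the grouping and loss choice as assumptions.

```python
# Sketch of a target-adaptive loss: collapse five class-probability maps to the binary
# configuration expected by a weakly labeled nucleus-mask dataset (channel grouping assumed).
import torch
import torch.nn.functional as F

CH = {"nucleus_center": 0, "cell_center": 1, "nucleus_body": 2, "cytosol": 3, "membrane": 4}

def nucleus_mask_loss(prob_maps, weak_nucleus_mask):
    """prob_maps: (B, 5, H, W) softmax output; weak_nucleus_mask: (B, 1, H, W) binary."""
    nucleus_prob = prob_maps[:, [CH["nucleus_center"], CH["nucleus_body"]]].sum(dim=1, keepdim=True)
    return F.binary_cross_entropy(nucleus_prob.clamp(0, 1), weak_nucleus_mask.float())

loss = nucleus_mask_loss(torch.softmax(torch.randn(2, 5, 64, 64), dim=1),
                         (torch.rand(2, 1, 64, 64) > 0.5))
```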
In digital and computational pathology, semantic segmentation can be considered the first step toward assessing tissue specimens, providing essential information for various downstream tasks. Numerous semantic segmentation methods exist, and these often face challenges when applied to whole slide images, which are high-resolution and gigapixel-sized and thus require a large amount of computation. In this study, we investigate the feasibility of an efficient semantic segmentation approach for whole slide images that processes only low-resolution pathology images to obtain segmentation results equivalent to those attainable with high-resolution images. We employ five advanced semantic segmentation models and conduct three types of experiments to quantitatively and qualitatively test the feasibility of this efficient approach. The quantitative experimental results demonstrate that, provided with low-resolution images, the semantic segmentation methods are inferior to those using high-resolution images. However, using low-resolution images substantially reduces the computational cost. Furthermore, the qualitative analysis shows that the results obtained from low-resolution images are comparable to those from high-resolution images, suggesting the feasibility of low-to-high semantic segmentation in computational pathology.
In this study, we have developed a method to detect anomalies in histology slides containing tissues sourced from multiple organs of rats. In the nonclinical phase of drug development, candidate drugs are typically tested on animals such as rats, and a postmortem assessment is conducted based on human evaluation of histology slides. Findings in those histology slides manifest as anomalous departures from expectation on Whole Slide Images (WSIs). Our proposed method makes use of a StyleGAN2- and ResNet-based encoder to identify anomalies in WSIs. Using these models, we train an image reconstruction pipeline only on an anomaly-free ('normal') dataset. We then use this pipeline to identify anomalies using the reconstruction quality measured by the Structural Similarity Index (SSIM). Our experiments were carried out on 54 WSIs across 40 different organ types and achieved a patch-level classification accuracy of 88%.
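The scoring idea, flagging a patch as anomalous when its reconstruction deviates from the original under SSIM, can be sketched as follows; the 0.5 decision threshold and the synthetic images are arbitrary illustrations, not the paper's values.

```python
# Scoring a patch as anomalous from its reconstruction quality measured by SSIM.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def anomaly_score(original, reconstruction):
    """Both inputs: H x W x 3 float images in [0, 1]; lower SSIM = more anomalous."""
    return 1.0 - ssim(original, reconstruction, channel_axis=-1, data_range=1.0)

patch = np.random.rand(128, 128, 3)
recon = np.clip(patch + 0.05 * np.random.randn(128, 128, 3), 0, 1)
print("anomalous" if anomaly_score(patch, recon) > 0.5 else "normal")
```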
Cell-based in vitro skin models are an effective method for testing new medical compounds without harming animals in the process. Histology serves as a cornerstone for evaluating in vitro models, providing critical insights into their structural integrity and functionality. The recently published BSGC score is a method to assess the quality of in vitro epidermal models based on visual examination of histopathological images. However, this is very time-consuming and requires a high level of expertise. Therefore, this paper presents a method for automatic evaluation of three-dimensional in vitro epidermal models that involves segmentation and classification of epidermal layers in cross-sectional histopathological images. The input images are first pre-processed, and in an initial classification step low-quality skin models are filtered out. Subsequently, the individual epidermal strata are segmented and a masked image is generated for each stratum. The strata are scored individually using the masked images, with one classification network per stratum. Finally, the individual scores are merged into an overall weighted score per image. With an accuracy of 81% for the overall scoring, the method provides promising results and allows for significant time savings and less subjectivity compared to the manual scoring process.
Head and neck squamous cell carcinoma (HNSCC) has a high mortality rate. In this study, we developed a Stokes-vector-derived polarized hyperspectral imaging (PHSI) system for H&E-stained pathological slides with HNSCC and built a dataset to develop a deep learning classification method based on convolutional neural networks (CNN). We used our polarized hyperspectral microscope to collect the four Stokes parameter hypercubes (S0, S1, S2, and S3) from 56 patients and synthesized pseudo-RGB images using a transformation function that approximates the human eye's spectral response to visual stimuli. Each image is divided into patches, and data augmentation is applied using rotations and flipping. We create a four-branch model architecture in which each branch is trained on one Stokes parameter individually; we then freeze the branches and fine-tune the top layers of our model to generate final predictions. Our results show high accuracy, sensitivity, and specificity, indicating that our model performed well on our dataset. Future work can improve upon these results by training on more varied data, classifying tumors based on their grade, and introducing more recent architectural techniques.
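A minimal sketch of the four-branch idea, one backbone per Stokes parameter with the branches frozen and a small head fine-tuned on their concatenated embeddings; the ResNet-18 backbone, head size, and binary output are assumptions for illustration, not the paper's architecture.

```python
# Four frozen per-Stokes branches (S0-S3) with a fine-tuned classification head.
import torch
import torch.nn as nn
from torchvision import models

def make_branch():
    m = models.resnet18(weights=None)
    m.fc = nn.Identity()              # expose the 512-d embedding
    return m

branches = nn.ModuleList([make_branch() for _ in range(4)])
for b in branches:                    # freeze the per-Stokes branches
    for p in b.parameters():
        p.requires_grad = False

head = nn.Linear(4 * 512, 2)          # fine-tuned top layer (tumor vs. normal, assumed)

def forward(stokes_patches):          # list of four (B, 3, H, W) pseudo-RGB tensors
    feats = [b(x) for b, x in zip(branches, stokes_patches)]
    return head(torch.cat(feats, dim=1))

logits = forward([torch.randn(2, 3, 224, 224) for _ in range(4)])
```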
Rapid on-site cytologic evaluation (ROSE) of samples taken during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is effective for improving the diagnostic yield and avoiding repeated puncture procedures. However, ROSE is not widely used due to limited human resources, especially the limited availability of staff who can perform benign versus malignant classification. Thus, we developed an artificial intelligence system to assist in ROSE diagnoses. We used an in-house dataset consisting of about 750 cells cropped from 26 pathological slides, labeled by a pulmonologist and a cytotechnologist, for training and testing. Since false positives should be minimized as much as possible to prevent a repeat bronchoscopy, specificity should be maximized while maintaining acceptable sensitivity. We therefore introduced a critical index, Spec@sens0.9, defined as the specificity when sensitivity is 0.9. We compared the following three methods: (1) conventional learning without contrastive learning-based pretraining; (2) general contrastive learning with positive and negative samples, as in SimCLR; and (3) our proposed method, contrastive learning with positive, hard-negative, and easy-negative samples based on a distance metric in the embedding space. The Spec@sens0.9 values of the three methods were 0.879, 0.935, and 0.944, respectively. Since the dataset in this study contains a limited number of labels, pretraining by contrastive learning, which acts as a form of self-supervised data augmentation, was effective in improving the model's performance. Our proposed method further enhanced performance compared with general contrastive learning.
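The Spec@sens0.9 index can be read off an ROC curve as the specificity at the first operating point whose sensitivity reaches 0.9; a short sketch with toy labels and scores:

```python
# Specificity at the threshold where sensitivity first reaches 0.9 (Spec@sens0.9).
import numpy as np
from sklearn.metrics import roc_curve

def spec_at_sens(y_true, y_score, target_sens=0.9):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    idx = np.argmax(tpr >= target_sens)   # first operating point reaching the target sensitivity
    return 1.0 - fpr[idx]                 # specificity = 1 - FPR at that point

y_true = np.random.randint(0, 2, 200)
y_score = y_true * 0.6 + np.random.rand(200) * 0.5
print(spec_at_sens(y_true, y_score))
```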
Human epidermal growth factor receptor 2 (HER2) serves as a prognostic and predictive biomarker for breast cancer. Recently, there has been an increasing number of studies evaluating the feasibility of utilizing H&E WSIs for determining HER2 status through innovative data-driven deep learning methods, taking advantage of the ubiquitous availability of H&E WSIs. One of the main challenges with these data-driven methods is the need for large-scale datasets with high-quality annotations, which can be expensive to curate. Therefore, in this study, we explored both region-of-interest (ROI)-based supervised and attention-based multiple-instance-learning (MIL) weakly supervised methods for predicting HER2 status on H&E WSIs, to evaluate whether avoiding labor-intensive tumor annotation compromises the final prediction performance. The ROI-based method used an Inception-v3 network along with an aggregation step to combine the patch-level predictions into a WSI-level prediction. The attention-based MIL methods explored an ImageNet-pretrained ResNet, an H&E image-pretrained ResNet, and an H&E image-pretrained vision transformer (ViT) as encoders for WSI-level HER2 prediction. Experiments were carried out on N = 355 WSIs available in the public domain with HER2 status determined by IHC and ISH and annotations of breast invasive carcinoma. The dataset was split into training/validation/test sets with an 80/10/10 ratio. Our results demonstrate that the attention-based ViT MIL method reaches accuracy similar to the ROI-based method on the independent test set (AUC of 0.79 (95% CI: 0.63-0.95) versus 0.88 (95% CI: 0.63-0.9), respectively), and thus reduces the burden of labor-intensive annotations. Furthermore, the attention mechanism enhances the interpretability of the results and offers insights into the reliability of the predictions.
Computer-Aided Diagnosis, Prognosis, and Predictive Analysis II
Endometrial cancer (EC) is the most common gynecologic malignancy in the United States. Hormone therapies and hysterectomy are viable treatments for early-stage EC and atypical endometrial hyperplasia (AEH), a high-risk precursor to EC. Prediction of patient response to hormonal treatment is useful for patients making treatment decisions. We have previously developed a mix-supervised model: a weakly supervised deep learning model for hormonal treatment response prediction based on pathologist-annotated AEH and EC regions on whole slide images of H&E-stained slides. The reliance on pathologist annotation when applying the model to new cases is cumbersome and subject to inter-observer variability. In this study, we automate the task of ROI detection by developing a supervised deep learning model to detect AEH and EC regions. This model achieved a patch-wise AUROC of 0.974 (approximate 95% CI [0.972, 0.976]). The mix-supervised model yielded a patient-level AUROC of 0.76 (95% CI [0.59, 0.92]) with ROIs detected by our new model on a hold-out test set in the task of classifying patients into responders and non-responders. As a comparison, the original model tested on pathologist-annotated ROIs achieved an AUROC of 0.80 (95% CI [0.63, 0.95]). Our results demonstrate the potential of combining a weakly supervised deep learning model with a supervised ROI detection model for predicting hormonal treatment response in endometrial cancer patients.
Tumor budding refers to a cluster of one to four tumor cells located at the tumor-invasive front. While tumor budding is a prognostic factor for colorectal cancer, counting and grading tumor budding are time-consuming and not highly reproducible, and there can be high inter- and intra-reader disagreement on H&E evaluation. This leads to noisy training (imperfect ground truth) of deep learning algorithms, resulting in high variability and a loss of the ability to generalize on unseen datasets. Pan-cytokeratin staining is one potential solution to enhance agreement, but it is not routinely used to identify tumor buds and can lead to false positives. Therefore, we aim to develop a weakly supervised deep learning method for tumor bud detection from routine H&E-stained images that does not require strict tissue-level annotations. We also propose Bayesian Multiple Instance Learning (BMIL), which combines multiple annotated regions during the training process to further enhance generalizability and stability in tumor bud detection. Our dataset consists of 29 colorectal cancer H&E-stained images that contain 115 tumor buds per slide on average. In six-fold cross-validation, our method demonstrated an average precision and recall of 0.94 and 0.86, respectively. These results provide preliminary evidence of the feasibility of our approach in improving generalizability in tumor budding detection using H&E images while avoiding the need for non-routine immunohistochemical staining methods.
This paper presents an exploration of Federated Learning (FL) in medical imaging, focusing on Computational Pathology (CP) with Whole Slide Images (WSIs) for head and neck cancer. While previous FL approaches in healthcare targeted radiology, genetics, and Electronic Health Records (EHRs), our research addresses the understudied area of CP datasets. Our aim is to develop robust AI models for CP datasets without sacrificing data privacy and security. To this end, we demonstrate the use of FL on a CP dataset consisting of papillary thyroid carcinoma, specifically focusing on the rare and aggressive variant called Tall Cell Morphology (TCM). Patients with TCM require more aggressive treatment and rigorous follow-up due to its aggressiveness and increased recurrence rates. In this work, we perform a simulated FL training experiment by dividing a dataset into three virtual "clients". We locally train a Convolutional Neural Network (CNN) to classify patches of tissue labelled from the local WSI dataset as "tall" (expressing TCM) or "non-tall". Models are then aggregated and convergence is ensured through the Federated Averaging (FedAvg) algorithm. The decentralized approach of FL creates a secure and privacy-preserving collaborative training environment, keeping individual client data local through horizontal data partitioning. This enables collective training of deep learning models on distributed data, benefiting from a diverse and rich dataset while safeguarding patient privacy. We compare the efficacy of the FL-trained model to a centralized model (trained using all "client" data together) using accuracy, sensitivity, specificity, and F1 score. Our findings indicate that the simulated FL models exhibit performance on par with or superior to centralized learning, achieving accuracy scores between 75% and 87%, while centralized learning attains an accuracy of 82%. This novel approach holds promise for revolutionizing computational pathology and contributing to more effective medical decision-making.
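The aggregation step can be sketched as a weighted average of the client models' weights; client models, sample counts, and the plain size weighting below are placeholders illustrating FedAvg, not the paper's training code.

```python
# Minimal FedAvg aggregation: average client model weights, weighted by local sample counts.
import copy
import torch

def fed_avg(client_state_dicts, client_sizes):
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_state_dicts[0])
    for key in global_state:
        avg = sum(sd[key].float() * (n / total) for sd, n in zip(client_state_dicts, client_sizes))
        global_state[key] = avg.to(global_state[key].dtype)  # keep original dtype (e.g. int buffers)
    return global_state

# usage (hypothetical): global_model.load_state_dict(
#     fed_avg([m.state_dict() for m in client_models], [420, 310, 255]))
```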
Early-stage non-small cell lung cancer (NSCLC) patients have a relatively high recurrence rate within the first five years after surgery, reflecting a need to predict post-surgical recurrence and offer personalized adjuvant therapies. Quantitative features extracted from radiology and pathology images can provide valuable information for the NSCLC recurrence prediction task, with radiomic features capturing global tumor phenotypes and pathomic features capturing local cellular and tumor microenvironment information. In this study, we propose to combine radiomic and pathomic features to predict progression-free survival within five years of curative resection in early-stage lung adenocarcinoma (LUAD), the most common subtype of NSCLC. Using 106 cases from the National Lung Screening Trial dataset, we extracted radiomic features from lung nodules on pre-surgery computed tomography (CT) scans guided by a radiologist's segmentation, and pathomic features from hematoxylin and eosin (H&E)-stained whole slide images (WSIs) of the resected tissue. We leveraged both hand-crafted and deep features in each modality and used a Cox proportional hazards model. Models were trained with 5-fold cross-validation with ten repetitions, and metrics such as the concordance index (C-index) were calculated as the mean performance on the test set. The fused model using combined radiomic and pathomic features has a C-index of 0.634. Our study shows that combining radiomic and pathomic features results in a more accurate progression-free survival prediction model compared to using only radiomic features (C-index = 0.612), pathomic features (C-index = 0.584), or clinical features (C-index = 0.477).
Emerging spatially resolved molecular imaging techniques, such as co-detection by indexing (CODEX), have enabled researchers to uncover distinct cellular structures in histological kidney sections. Spatial proteomics can provide users with the intensity level of proteins synthesized in the tissue in the same histology tissue section. However, the mapping of cell type proportions and molecular signatures can be challenging, which might have contributed to the limited use of these technologies in clinical practice. Developing a computational model that handles such high-dimensional whole-slide imaging (WSI) data from CODEX requires applying advanced machine learning techniques to address common challenges such as interpretability, efficiency, and usability. In this study, we propose a computational pipeline for CODEX mapping on biopsy images that features an automated registration module utilizing nuclei segmentation in both modalities. Our pipeline provides an explainable prediction and mapping of cell type clusters on histology and analyzes the heterogeneity of molecular features in the predicted clusters. For mapping, we used an unsupervised clustering analysis of uniform manifold approximation and projection (UMAP)-reduced features to enable visualizing the predicted clusters on the histological tissue image. To test our proposed pipeline, we used a high-dimensional CODEX panel comprising 44 markers and visualized the intensities and the predicted clusters on whole slide images (WSI) in a set of renal histology samples collected at Indiana University. Our results delineated 14 distinct cell clusters, which demonstrated high fidelity between labeled objects and specific markers. Notably, 88% of cells in the "podocyte"-dominant UMAP cluster were found to have a high level of podocalyxin, although it is adjacent to two other clusters dominated by renal vasculature cells. Out of 626 features examined, 44 were central to the "podocyte" cluster, accounting for approximately 50% of its variance (p < 0.05). This study can improve the understanding of cell type proportions and the kidney functions of tissue structures, which can contribute to the human biomolecular kidney atlas, a step towards substantial advancements in kidney cell biology research.
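A minimal sketch of the clustering step described here, UMAP reduction of per-cell marker intensities followed by unsupervised clustering; the marker matrix is random placeholder data, the scaling, k-means choice, and cluster count of 14 (taken only from the reported result) are assumptions rather than the pipeline's exact settings.

```python
# UMAP-reduce per-cell CODEX marker intensities, then cluster and map labels back to cells.
import numpy as np
import umap
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

cell_by_marker = np.random.rand(5000, 44)             # placeholder: cells x CODEX markers
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(
    StandardScaler().fit_transform(cell_by_marker)
)
cluster_labels = KMeans(n_clusters=14, n_init=10, random_state=0).fit_predict(embedding)
# cluster_labels can now be painted back onto the registered histology WSI per segmented cell.
```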
Eosinophilic Esophagitis (EoE) is a chronic, immune/antigen-mediated esophageal disease characterized by symptoms related to esophageal dysfunction and histological evidence of eosinophil-dominant inflammation. Owing to the intricate microscopic presentation of EoE in imaging, current methodologies that depend on manual identification are not only labor-intensive but also prone to inaccuracies. In this study, we develop an open-source toolkit, named Open-EoE, to perform end-to-end whole slide image (WSI)-level eosinophil (Eos) detection using one line of command via Docker. Specifically, the toolkit supports three state-of-the-art deep learning-based object detection models. Open-EoE further optimizes performance by implementing an ensemble learning strategy, enhancing the precision and reliability of the results. The experimental results demonstrate that the Open-EoE toolkit can efficiently detect Eos on a testing set of 289 WSIs. At the widely accepted threshold of ≥ 15 Eos per high-power field (HPF) for diagnosing EoE, Open-EoE achieved an accuracy of 91%, showing decent consistency with pathologist evaluations. This suggests a promising avenue for integrating machine learning methodologies into the diagnostic process for EoE. The Docker image and source code have been made publicly available at https://github.com/hrlblab/Open-EoE.
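The diagnostic rule referenced here can be expressed as a simple threshold on per-field eosinophil counts; the sketch below assumes the peak count across fields is the relevant statistic, which is an illustrative assumption, and the toolkit's own aggregation may differ.

```python
# Applying the >= 15 eosinophils-per-HPF rule to per-field detection counts (toy values).
def eoe_positive(eos_counts_per_hpf, threshold=15):
    """eos_counts_per_hpf: detected eosinophil counts for each high-power field of a WSI."""
    peak = max(eos_counts_per_hpf) if eos_counts_per_hpf else 0
    return peak >= threshold, peak

print(eoe_positive([3, 7, 22, 11]))   # (True, 22): at least one field reaches the threshold
```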
When dealing with gigapixel digital pathology in whole-slide imaging, a notable proportion of the data records holds relevance during each analysis operation. For instance, when deploying an image analysis algorithm on whole-slide images (WSI), the computational bottleneck often lies in the input-output (I/O) system. This is particularly notable as patch-level processing introduces a considerable I/O load onto the computer system. However, this data management process can be further parallelized, given the typical independence of patch-level image processes across different patches. This paper details our efforts to tackle this data access challenge by implementing the Adaptable IO System version 2 (ADIOS2). Our focus has been constructing and releasing a digital pathology-centric pipeline using ADIOS2, which facilitates streamlined data management across WSIs. Additionally, we have developed strategies aimed at curtailing data retrieval times. The performance evaluation encompasses two key scenarios: (1) a pure CPU-based image analysis scenario ("CPU scenario"), and (2) a GPU-based deep learning framework scenario ("GPU scenario"). Our findings reveal noteworthy outcomes. Under the CPU scenario, ADIOS2 showcases an impressive two-fold speed-up compared to the brute-force approach. In the GPU scenario, its performance stands on par with the cutting-edge GPU I/O acceleration framework, NVIDIA Magnum IO GPUDirect Storage (GDS). To our knowledge, this is among the first uses of ADIOS2 in the field of digital pathology. The source code has been made publicly available at https://github.com/hrlblab/adios.
Artificial intelligence (AI) has extensive applications in a wide range of disciplines, including healthcare and clinical practice. Advances in high-resolution whole-slide brightfield microscopy allow for the digitization of histologically stained tissue sections, producing gigapixel-scale whole-slide images (WSI). The significant improvement in computing and the revolution of deep neural network (DNN)-based AI technologies over the last decade allow us to integrate massively parallelized computational power, cutting-edge AI algorithms, and big data storage, management, and processing. Applied to WSIs, AI has created opportunities for improved disease diagnostics and prognostics, with the ultimate goal of enhancing precision medicine and the resulting patient care. The National Institutes of Health (NIH) has recognized the importance of developing standardized principles for data management and discovery for the advancement of science and proposed the Findable, Accessible, Interoperable, Reusable (FAIR) Data Principles, with the goal of building a modernized biomedical data resource ecosystem to establish collaborative research communities. In line with this mission and to democratize AI-based image analysis in digital pathology, we propose ComPRePS: an end-to-end automated Computational Renal Pathology Suite which combines massive scalability, on-demand cloud computing, and an easy-to-use web-based user interface for data upload, storage, management, slide-level visualization, and domain expert interaction. Moreover, our platform is equipped with sophisticated AI algorithms, developed in-house and by collaborators, in the back-end server for image analysis to identify clinically relevant micro-anatomic functional tissue units (FTU) and to extract image features.
Podocytes, specialized epithelial cells that envelop the glomerular capillaries, play a pivotal role in maintaining renal health. The current description and quantification of features on pathology slides are limited, prompting the need for innovative solutions to comprehensively assess diverse phenotypic attributes within Whole Slide Images (WSIs). In particular, understanding the morphological characteristics of podocytes, terminally differentiated glomerular epithelial cells, is crucial for studying glomerular injury. This paper introduces the Spatial Pathomics Toolkit (SPT) and applies it to podocyte pathomics. The SPT consists of three main components: (1) instance object segmentation, enabling precise identification of podocyte nuclei; (2) pathomics feature generation, extracting a comprehensive array of quantitative features from the identified nuclei; and (3) robust statistical analyses, facilitating a comprehensive exploration of spatial relationships between morphological and spatial transcriptomics features. The SPT successfully extracted and analyzed morphological and textural features from podocyte nuclei, revealing a multitude of podocyte morphomic features through statistical analysis. Additionally, we demonstrated the SPT’s ability to unravel spatial information inherent to podocyte distribution, shedding light on spatial patterns associated with glomerular injury. By disseminating the SPT, our goal is to provide the research community with a powerful and user-friendly resource that advances cellular spatial pathomics in renal pathology. The toolkit’s implementation and its complete source code are made openly accessible at the GitHub repository: https://github.com/hrlblab/spatial_pathomics.
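As a minimal sketch of the pathomics feature generation step, nucleus-level morphological and intensity features can be extracted with scikit-image regionprops; the inputs below are toy data, and the SPT computes a much richer feature set than shown here:

    # Minimal sketch of nucleus-level pathomics feature extraction with scikit-image.
    import numpy as np
    from skimage import measure

    label_mask = np.zeros((256, 256), dtype=int)   # instance segmentation of podocyte nuclei
    label_mask[50:80, 60:95] = 1                    # toy nucleus
    gray = np.random.rand(256, 256)                 # matching grayscale intensity image

    features = []
    for region in measure.regionprops(label_mask, intensity_image=gray):
        features.append({
            "label": region.label,
            "area": region.area,
            "eccentricity": region.eccentricity,
            "solidity": region.solidity,
            "mean_intensity": region.mean_intensity,
            "centroid": region.centroid,            # reused later for spatial analyses
        })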
Quantitative oblique back-illumination microscopy (qOBM) is a novel technology for label-free imaging of thick (unsectioned) tissue specimens, offering high spatial resolution and 3-D capabilities. The grayscale contrast, however, is unfamiliar to pathologists and histotechnicians, limiting its adoption. We used deep learning techniques to convert qOBM images into virtual H&E, observing successful conversion for both healthy and tumor thick (unsectioned) specimens. Transfer learning was demonstrated on a second collection of qOBM and H&E images of human astrocytoma specimens. With some improvement in robustness and generalizability, we anticipate that this approach can find clinical application.
The adoption of artificial intelligence and digital pathology shows immense promise for transforming healthcare through enhanced efficiency, cost-effectiveness, and patient outcomes. However, real-world clinical deployment of deep learning systems faces major obstacles, including the significant staining variability inherent to histopathology workflows. Differences in protocols, reagents, and scanners cause considerable distribution shifts that undermine model generalization.
The goal of this work is to reduce the complexity of cell and cell neighborhood annotations in studies of spatial immunity. Specifically, we use a method inspired by spectral angle mapping to collapse multichannel images into class-level representations. We demonstrate that these class maps assist in characterizing immune cell infiltration in renal pathologies.
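For illustration, a minimal numpy sketch of spectral-angle-based class mapping is given below; the reference spectra and channel count are hypothetical, and the authors' method is only inspired by spectral angle mapping, so the details may differ:

    # Minimal sketch of spectral-angle-style class mapping for a multichannel image.
    import numpy as np

    def spectral_angle_map(image, references):
        """image: (H, W, C) multiplex image; references: (K, C) class spectra.
        Returns an (H, W) map of the class with the smallest spectral angle."""
        pixels = image.reshape(-1, image.shape[-1]).astype(float)
        pixels /= np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-8
        refs = references / (np.linalg.norm(references, axis=1, keepdims=True) + 1e-8)
        angles = np.arccos(np.clip(pixels @ refs.T, -1.0, 1.0))   # (N, K)
        return angles.argmin(axis=1).reshape(image.shape[:2])

    img = np.random.rand(64, 64, 8)                # 8-channel toy image
    refs = np.random.rand(3, 8)                    # 3 hypothetical class spectra
    class_map = spectral_angle_map(img, refs)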
Crohn’s disease (CD) is a chronic, relapsing inflammatory condition that affects segments of the gastrointestinal tract. CD activity is determined by histological findings, particularly the density of neutrophils observed in Hematoxylin and Eosin (H&E) imaging. However, understanding the broader morphometry and local cell arrangement beyond cell counting and tissue morphology remains challenging. To address this, we characterize six distinct cell types from H&E images and develop a novel approach to capture the local spatial signature of each cell. Specifically, we create a 10-cell neighborhood matrix representing the neighboring cell arrangement for each individual cell. Utilizing t-SNE for non-linear spatial projection in scatter-plot and Kernel Density Estimation contour-plot formats, our study examines patterns of differences in the cellular environment, as captured by the odds ratio of spatial patterns between active CD and control groups. This analysis is based on data collected at two research institutes. The findings reveal heterogeneous nearest-neighbor patterns, signifying distinct tendencies of cell clustering, with a particular focus on the rectum region. These variations underscore the impact of data heterogeneity on cell spatial arrangements in CD patients. Moreover, the spatial distribution disparities between the two research sites highlight the significance of collaborative efforts among healthcare organizations. All research analysis pipeline tools are available at https://github.com/MASILab/cellNN.
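A minimal sketch of the 10-cell neighborhood signature follows, assuming cell centroids and type labels are already available; the data here are toy values, and https://github.com/MASILab/cellNN remains the authoritative pipeline:

    # Minimal sketch of a 10-cell neighborhood signature per cell.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    coords = np.random.rand(500, 2)                        # cell centroids
    cell_types = np.random.randint(0, 6, size=500)         # six characterized cell types

    nbrs = NearestNeighbors(n_neighbors=11).fit(coords)    # the cell itself + 10 neighbors
    _, idx = nbrs.kneighbors(coords)

    # For each cell, the fraction of each type among its 10 nearest neighbors;
    # these signatures can then be projected with t-SNE and compared via odds ratios.
    signatures = np.zeros((len(coords), 6))
    for i, neighbors in enumerate(idx[:, 1:]):             # drop the cell itself
        counts = np.bincount(cell_types[neighbors], minlength=6)
        signatures[i] = counts / counts.sum()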
Machine learning (ML)-based whole slide imaging biomarkers have great potential to improve the efficiency and consistency of biomarker quantification, thereby facilitating the development of prognostic models for personalized medicine. Assessment methods in this area are still under-developed. Using the public TiGER (Tumor InfiltratinG lymphocytes in breast cancER) challenge data, we developed a deep neural network-based algorithm for automated tumor-infiltrating lymphocyte (TILs) scoring from whole slide images (WSIs) of biopsies and surgical resections of human epidermal growth factor receptor-2 positive (HER2+) and triple-negative breast cancer (TNBC) patients. The purpose of this study is to assess our model’s performance on a new independent dataset. Seven pathologists independently assessed 320 pre-selected regions of interest (ROIs) across 32 WSIs for TILs scoring. Our results show substantial variability among pathologists in scoring TILs density. We also observed a systematic discrepancy between the ML-based TILs scoring and the pathologists’ manual scoring, which led us to develop a calibration between the two. The calibration reduced the discrepancy, increasing the intra-class correlation coefficient (ICC) from 0.35 (95% CI [-0.062, 0.625]) for uncalibrated scores to 0.67 (95% CI [0.6, 0.736]) after calibration.
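As an illustrative sketch only, a simple linear calibration between ML-based and pathologist TILs scores could look as follows; the values are toy data and the abstract does not specify the calibration model actually used:

    # Illustrative sketch of a linear calibration between ML and pathologist TILs scores.
    import numpy as np

    ml_scores = np.array([5.0, 12.0, 30.0, 55.0, 70.0])      # per-ROI ML TILs scores
    path_scores = np.array([10.0, 20.0, 35.0, 60.0, 80.0])   # mean pathologist scores

    slope, intercept = np.polyfit(ml_scores, path_scores, deg=1)
    calibrated = slope * ml_scores + intercept                # calibrated ML scores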
Discovery of explainable biomarkers is a complex process that is typically driven by a priori hypotheses and expert annotations. This contribution introduces an almost entirely annotation- and hypothesis-free workflow to discover predictive biomarkers derived from cell phenotypes. It relies on self-supervised learning, clustering, and survival analysis of cell-centric image patches. The workflow is successfully evaluated on mIF images of 2L+ mNSCLC samples from a clinical study (NCT01693562). Two potential biomarkers are identified that closely align with the known relevant biology.
Eosinophilic Esophagitis (EoE) is a chronic immune disease most commonly diagnosed through examination of biopsy tissue taken from the esophagus. Currently, trained pathologists spend hours manually examining biopsy slides to identify and count eosinophils, the indicators of EoE. Given the success of deep learning models in automating other areas of medical image analysis, we asked: can deep learning networks be trained to accurately segment and count eosinophils in esophageal biopsy slides? Existing efforts have not sufficiently evaluated different deep learning models, hyperparameters, or metrics. Additionally, many are built on hundreds of annotated training images, which pathology labs often do not have the resources to generate. To address this, we present a comprehensive evaluation of five deep learning architectures and fine-tune their hyperparameters. We rely primarily on location-based metrics to count true positives (TP), false positives (FP), and false negatives (FN), and conduct a limited-data analysis to see how models respond to varying amounts of training data. We find that UNet++ performs best among the evaluated models. Even though Dice and IoU values remained similar across models, TPs and FPs varied greatly, highlighting the importance of including counting-based metrics when comparing cell segmentation methods. Furthermore, we conduct sliding-window experiments to study the effect of the patch size and stride size used to generate training data on model performance. We find that TP counts do not vary greatly, while FP counts can differ significantly when different sliding-window settings are used. Our limited-data analysis revealed that eight training images are sufficient for most models to produce reliable results, allowing deep learning to be used as an efficient aid to pathologists. Our work provides helpful comparative information for future cell segmentation applications.
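For context, a minimal sketch of location-based TP/FP/FN counting by greedy centroid matching is given below; the 15-pixel threshold and the greedy rule are assumptions, not necessarily the matching criteria used in the paper:

    # Minimal sketch of location-based TP/FP/FN counting by greedy centroid matching.
    import numpy as np
    from scipy.spatial.distance import cdist

    def match_detections(pred_centroids, gt_centroids, max_dist=15.0):
        pred = np.atleast_2d(np.asarray(pred_centroids, dtype=float))
        gt = np.atleast_2d(np.asarray(gt_centroids, dtype=float))
        if pred.size == 0 or gt.size == 0:
            return 0, len(pred_centroids), len(gt_centroids)
        d = cdist(pred, gt)
        tp = 0
        while d.size and d.min() <= max_dist:              # match the closest pair first
            i, j = np.unravel_index(d.argmin(), d.shape)
            tp += 1
            d = np.delete(np.delete(d, i, axis=0), j, axis=1)
        return tp, len(pred) - tp, len(gt) - tp            # TP, FP, FN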
Multiplex brightfield imaging offers the significant advantage of simultaneously analyzing multiple biomarkers on a single slide, as opposed to single-biomarker labeling on multiple consecutive slides. This potentially enables interactions between biomarkers to be analyzed, yielding a better understanding of the tumor microenvironment as well as improved predictive and prognostic abilities.
Segmenting microvascular structures, such as arterioles, venules, and capillaries, from human kidney whole slide images (WSI) in renal pathology has garnered significant interest. The current manual segmentation approach is laborious and impractical for large-scale digital pathology images. To address this, deep learning-based methods have emerged for automatic segmentation. However, a gap exists in current deep learning segmentation methods, which are typically designed for, and limited by, single-site, single-scale training data. In this paper, we introduce a novel single dynamic network method (Omni-Seg) that harnesses multi-site, multi-scale training data, utilizing partially labeled images in which only one tissue type is labeled per training image, for microvascular structure segmentation. We train a single deep network using images from two datasets, HuBMAP and NEPTUNE, at different scales (40×, 20×, 10×, 5×). Our experimental results demonstrate that our approach achieves higher Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) scores. The proposed method empowers renal pathologists with a computational tool for quantitatively assessing renal microvascular structures.
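For reference, the reported DSC and IoU metrics for a binary mask can be computed as in the following minimal sketch; the arrays are toy data, and in practice per-class scores are averaged across tissue types:

    # Minimal sketch of Dice Similarity Coefficient and IoU for a binary mask.
    import numpy as np

    def dice_iou(pred, gt, eps=1e-8):
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        dice = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
        iou = (inter + eps) / (union + eps)
        return dice, iou

    pred = np.random.rand(256, 256) > 0.5
    gt = np.random.rand(256, 256) > 0.5
    print(dice_iou(pred, gt))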
This study explores the efficacy of diffusion probabilistic models for generating synthetic histopathological images, specifically canine Perivascular Wall Tumours (cPWT), to supplement limited datasets for deep learning applications in digital pathology. We evaluate an open-source, medical domain-focused diffusion model called Medfusion, trained on a small (1,000 patches) and a large (17,000 patches) dataset of cPWT images to compare performance across dataset sizes. A Receiver Operating Characteristic (ROC) study was conducted to investigate the ability of six veterinary medical professionals and pathologists to discern between generated and real cPWT patch images. The participants engaged in two separate rounds, each corresponding to a model trained on one of the two dataset sizes. The ROC study revealed mean Area Under the Curve (AUC) values close to 0.5 for both rounds. These results suggest that diffusion models can create histopathological patch images that are convincingly realistic, with participants often struggling to reliably differentiate between generated and real images. This underscores the potential of these models as a valuable tool for augmenting digital pathology datasets.
Computational pathology, integrating computational methods and digital imaging, has been shown to be effective in advancing disease diagnosis and prognosis. In recent years, the development of machine learning and deep learning has greatly bolstered the power of computational pathology. However, the issues of data scarcity and data imbalance remain, and both can have an adverse effect on any computational method. In this paper, we introduce an efficient and effective data augmentation strategy that generates new pathology images from existing ones, enriching datasets without additional data collection or annotation costs. To evaluate the proposed method, we employed two colorectal cancer datasets and obtained improved classification results, suggesting that this simple approach holds potential for alleviating data scarcity and imbalance in computational pathology.
Medulloblastoma (MB) is the most common embryonal tumour of the brain. To decide on an optimal therapy, laborious inspection of histopathological tissue slides by neuropathologists is necessary. Digital pathology, supported by deep learning methods, can help to improve the clinical workflow. Due to the high resolution of histopathological images, previous work on MB classification involved manual selection of patches, making it a time-consuming task. To leverage only slide labels for histopathology image classification, weakly supervised approaches first encode small patches into feature vectors using an ImageNet-pretrained convolutional neural network encoder; the patch representations are then used to train a data-efficient attention-based learning method. Due to the domain shift between natural images and histopathology images, such an encoder is not optimal for feature extraction for MB classification. In this study, we adapt weakly supervised learning for MB classification and examine different histopathology-specific encoder architectures and weights for the task. The results show that ResNet encoders pretrained on histopathology images lead to better MB classification results than encoders pretrained on ImageNet. The best performing method uses a ResNet50 architecture pretrained on histopathology images and achieves an area under the receiver operating characteristic curve (AUROC) of 71.89%, improving on the baseline model by 2%.
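A minimal PyTorch sketch of attention-based MIL pooling over pre-extracted patch features follows; the dimensions and module layout are assumptions for illustration, while in the study the features come from a ResNet50 pretrained on histopathology images:

    # Minimal sketch of attention-based MIL pooling over patch features.
    import torch
    import torch.nn as nn

    class AttentionMIL(nn.Module):
        def __init__(self, feat_dim=2048, hidden=128, n_classes=2):
            super().__init__()
            self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                      nn.Linear(hidden, 1))
            self.head = nn.Linear(feat_dim, n_classes)

        def forward(self, patch_feats):                       # (n_patches, feat_dim)
            a = torch.softmax(self.attn(patch_feats), dim=0)  # attention weight per patch
            slide_feat = (a * patch_feats).sum(dim=0)         # weighted slide embedding
            return self.head(slide_feat), a

    feats = torch.randn(500, 2048)                            # one slide's patch features
    logits, attention = AttentionMIL()(feats)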
The segmentation of kidney layer structures, including cortex, outer stripe, inner stripe, and inner medulla, within human kidney whole slide images (WSI) plays an essential role in automated image analysis in renal pathology. However, the current manual segmentation process is labor-intensive and infeasible for the extensive digital pathology images encountered at large scale. In response, digital renal pathology has seen the emergence of deep learning-based methodologies; however, very few, if any, have been applied to kidney layer structure segmentation. Addressing this gap, this paper assesses the feasibility of deep learning-based approaches for kidney layer structure segmentation. The study employs representative convolutional neural network (CNN) and Transformer segmentation approaches, including Swin-Unet, Medical-Transformer, TransUNet, U-Net, PSPNet, and DeepLabv3+. We quantitatively evaluated these six prevalent deep learning models on renal cortex layer segmentation using mouse kidney WSIs. The empirical results exhibit compelling advancements, as evidenced by a decent Mean Intersection over Union (mIoU) index, and demonstrate that Transformer models generally outperform CNN-based models. By enabling a quantitative evaluation of renal cortical structures, deep learning approaches are promising to empower medical professionals to perform more informed kidney layer segmentation.
Z-disks are complex structures that delineate repeating sarcomeres in striated muscle. They play significant roles in cardiomyocytes, including providing mechanical stability for the contracting sarcomere, cell signalling, and autophagy. Changes in Z-disk architecture have been associated with impaired cardiac function. Hence, there is a strong need for tools to segment Z-disks from microscopy images that overcome traditional limitations such as variability in image brightness and staining technique. In this study, we apply deep learning-based segmentation models to extract Z-disks in images of striated muscle tissue. We leverage a novel Airyscan confocal dataset comprising high-resolution images of Z-disks in healthy heart tissue, stained with Affimers for specific Z-disk proteins. We employed an interactive labelling tool, Ilastik, to obtain ground truth segmentation masks and used the resulting dataset to train and evaluate several state-of-the-art segmentation networks. On the test set, UNet++ achieves the best segmentation performance for Z-disks in cardiomyocytes, with an average Dice score of 0.91, outperforming other established segmentation methods including UNet, FPN, DeepLabv3+, and pix2pix. However, pix2pix demonstrates improved generalisation when tested on an additional dataset of cardiomyocytes with a titin mutation. This is the first study to demonstrate that automated machine learning-based segmentation approaches may be used effectively to segment Z-disks in confocal microscopy images. Automated segmentation approaches and predicted segmentation masks could be used to derive morphological features of Z-disks (e.g. width and orientation) and, subsequently, to quantify disease-related changes to cardiac microstructure.
Cell types present in a biopsy provide information on disease processes and organ health, and are useful in a research setting. Multiplex imaging technologies like CODEX can provide spatial context for protein expression and detect cell types on a whole slide basis. The CODEX workflow also allows for hematoxylin and eosin (H&E) staining on the same sections used in molecular imaging. Deep learning can automate the process of histological analysis, reducing time and effort required. We seek to automatically segment and classify cells from histologically stained renal tissue sections using deep learning, with CODEX generated cell labels as a ground truth. Image data consisted of brightfield H&E whole slide images (WSIs) from a single institution, collected from human reference kidneys. Nuclei were segmented using deep learning, and CODEX markers were measured for each nucleus. Cells and their markers were clustered in an unsupervised manner, and assigned labels according to upregulated markers and spatial biological priors. Classified cell types included: proximal tubules, distal tubules, vessels, interstitial cells, and general glomerular cells. Cell maps were used to train a Deeplab V3+ semantic segmentation network. Cell maps were successfully created in all sections, with ~65% used for training and ~35% used for testing. The trained network achieved a balanced accuracy of 0.75 across all cell types. We were able to automatically segment and classify nuclei from various cell types directly from H&E stained WSIs. In future work, we intend to expand the dataset to include more CODEX markers (and therefore more granular cell types), and more samples with more variability, to test the robustness of the model to new data.
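For reference, the reported balanced accuracy averages per-class recall, as in this minimal scikit-learn sketch; the labels are toy values, not the study's data:

    # Minimal sketch of balanced accuracy over predicted cell-type labels.
    import numpy as np
    from sklearn.metrics import balanced_accuracy_score

    y_true = np.array([0, 0, 1, 1, 2, 2, 3, 4])     # ground-truth cell classes
    y_pred = np.array([0, 1, 1, 1, 2, 0, 3, 4])     # network predictions
    print(balanced_accuracy_score(y_true, y_pred))   # mean of per-class recalls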
Diabetic nephropathy (DN), a common complication of diabetes mellitus, remains a leading cause of end-stage renal disease. Histopathological assessment of renal biopsy remains the gold standard for diagnosis, and accurate diagnosis is crucial for timely intervention and personalized management plans. Machine learning (ML) models can analyze digital pathology slides, learn DN biomarkers, and aid in DN staging. Developing ML models can be challenging owing to the limited availability of annotated images, subjectivity in histopathology interpretation, and histology artifacts. Molecular profiling such as single-cell RNA sequencing (SC) and spatial transcriptomics (ST) can contribute to a better understanding of cellular heterogeneity and molecular pathways, but clinical use of molecular tests is limited by the absence of well-established protocols specific to DN diagnosis. In this study, we propose a framework for correlating glomerular histomorphometry with spatially resolved transcriptomics to better understand the histologic spectrum of DN. The framework uses manual tissue labels from experienced users and hybrid labels that combine user input with unsupervised clustering of molecular data. Clustering is performed on the gene expression levels of disease biomarkers and on the cell type decomposition of tissue obtained by integration with SC reference data from KPMP. We used a dataset of 6 DN and 3 normal cases, with frozen section histology, ST, and SC collected at Seoul National University Hospital, Seoul, South Korea. Our initial experiments identified a correlation between imaging histomorphometry features and disease labels. Our cloud-based prototype visualizes both gene markers and cell type decomposition as heatmaps on histology, enables molecular-informed validation of structures, supports adding manual labels, and visualizes the clusters on histology. In conclusion, our framework can analyze the correlation between histomorphometry and tissue labels generated in a molecular-informed environment, and our cloud-based prototype can aid the diagnostic process by visualizing these correlations overlaid on digital slides.
Single-cell sequencing and proteomics have been critical for the study of human disease. However, highly multiplexed microscopy has revolutionized spatial biology by measuring cell expression from ~50 proteins while maintaining spatial locations of cells. This presents unique computational challenges; acquiring manual annotations across so many image channels is challenging, therefore supervised learning methods for classification are undesirable. To overcome this limitation we have developed a decision-tree classifier for the multiclass annotation of renal cells that is analogous to well-established flow cytometry-based cell analyses. We demonstrate this method of cell annotation in a dataset of 54 kidney biopsies from patients with three different pathologies: 25 with lupus nephritis, 23 with renal allograft rejection, and six with non-autoimmune conditions. Biopsies were iteratively stained and imaged using the PhenoCycler protocol to acquire high-resolution, full-section images with a 43-marker panel. Nucleus segmentation was performed using Cellpose2.0 and whole cell segmentation was approximated by dilating the nucleus masks. In our decision tree, cells are sequentially sorted into marker-negative and marker-positive populations using their mean fluorescence intensity (MFI). A multi-Otsu threshold, in conjunction with manual spot checking, is used for determining the optimal MFI threshold for each branching of the decision tree. Marker order is based upon well-established, hierarchical expression of immunological cell markers created in consultation with expert immunologists. We have further developed another algorithm to probe microtubule organizing center polarization, an important immunologic behavior. Ultimately, we were able to assign biologically-defined cell classes to 1.59 million of 2.19 million cells captured in tissue.
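A minimal sketch of the marker gating step with a multi-Otsu threshold on mean fluorescence intensity is shown below; the intensities are toy values, and in the actual pipeline the thresholds are additionally spot-checked manually:

    # Minimal sketch of MFI gating for one marker with a multi-Otsu threshold.
    import numpy as np
    from skimage.filters import threshold_multiotsu

    mfi = np.concatenate([np.random.normal(5, 1, 800),      # marker-negative cells
                          np.random.normal(20, 3, 200)])     # marker-positive cells

    thresholds = threshold_multiotsu(mfi, classes=2)          # one cut point for 2 classes
    is_positive = mfi > thresholds[0]                         # next branch of the decision tree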
Lupus nephritis (LN) is a severe manifestation of systemic lupus erythematosus, with up to 30% of LN patients progressing to end-stage kidney disease within ten years of diagnosis. Spatial relationships between specific types of immune cells and kidney structures hold valuable information clinically and biologically. Thus, we develop a modular computational pipeline to analyze the spatially resolved molecular features from high-plex immunofluorescence imaging data. Here, we present three modules of the pipeline, with the goal of achieving multiclass segmentation of renal cells and structures.
In computational pathology, random sampling of patches during training of Multiple Instance Learning (MIL) methods is computationally efficient and serves as a regularization strategy. Despite these promising benefits, questions remain concerning performance trends for varying sample sizes and the influence of sampling on model interpretability. Addressing these, we reach an optimal performance enhancement of 1.7% using thirty percent of patches on the CAMELYON16 dataset, and 3.7% with only eight samples on the TUPAC16 dataset. We also find that interpretability effects are strongly dataset-dependent, with interpretability affected on CAMELYON16 while remaining unaffected on TUPAC16. This reinforces that the relationships of both performance and interpretability with sampling are closely task-specific. End-to-end training with 1024 samples reveals improvements across both datasets compared to pre-extracted features, further highlighting the potential of this efficient approach.
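As a minimal sketch, random patch sampling per training iteration can be implemented as follows; the bag of pre-extracted features is hypothetical, and the sampled fraction or count is a tunable setting (such as thirty percent of patches, or a fixed 8 or 1024 instances in the study):

    # Minimal sketch of random patch sampling for one MIL training iteration.
    import torch

    def sample_bag(patch_feats, sample_frac=0.3):
        """Randomly subsample a bag of patch features as a regularization step."""
        n = patch_feats.shape[0]
        k = max(1, int(n * sample_frac))
        idx = torch.randperm(n)[:k]
        return patch_feats[idx]

    bag = torch.randn(2000, 512)          # one slide's pre-extracted patch features
    mini_bag = sample_bag(bag)            # redrawn each iteration during training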