Some tumor resection procedures, such as Mohs surgery, rely on intraoperative histology for tumor margin assessment. Gold-standard histology methods are too time-consuming for patients under anesthesia, and rapid freezing techniques are prone to artifacts. The recent development of microscopy with ultraviolet surface excitation (MUSE) introduces a new possibility for rapid imaging of the cut tissue surface using fluorescent dyes. The high attenuation of ultraviolet light limits MUSE signals to thicknesses close to those of typical histology sections. To generate MUSE images with familiar H&E-like contrast, recent work has explored transforming MUSE images into "virtual" H&E-like images using unsupervised deep learning models trained on unpaired images of separate tissues treated with each stain. Here, we present a method for acquiring registered images of the same tissue with MUSE and real H&E imaging using sequential staining and dye removal. Tissue blocks are flash frozen and sectioned for mounting onto slides and staining with MUSE fluorescent dyes. After MUSE imaging, sequential immersion of the slides in increasing concentrations of ethyl alcohol followed by rehydration, similar to steps in paraffin-based histology processing, is sufficient to remove all fluorescent dyes. Rinsed tissue slides are then subjected to traditional H&E staining and brightfield imaging. Registered image fields of skin and pancreas are presented, along with initial machine learning-based transformations from MUSE to H&E contrast. This protocol will be useful for obtaining paired images for training, testing, and quantitative validation of virtual H&E reconstructions from MUSE images.
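The registration step described above can be illustrated with a minimal sketch. The abstract does not specify the registration algorithm, so the following is an assumption for illustration only: phase correlation, a standard FFT-based method for estimating the rigid translation between two grayscale image fields of the same tissue section.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) translation mapping ref onto mov
    from the peak of the normalized cross-power spectrum."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(mov)
    R = np.conj(F) * G
    R /= np.abs(R) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks in the upper half of each axis to negative shifts
    return tuple(int(p - n) if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))
```

In practice the MUSE and H&E fields would first be converted to grayscale and intensity-normalized; sub-pixel refinement and rotation/scale handling would require a more complete registration pipeline.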
Automated segmentation of tissue and cellular structure in H&E images is an important first step toward automated histopathology slide analysis. For example, nuclei segmentation can aid in detecting pleomorphism, and epithelium segmentation can aid in identifying tumor-infiltrating lymphocytes. Existing deep learning-based approaches are often trained organ-wise and lack the diversity of training data needed for multi-organ segmentation networks. In this work, we propose to augment existing nuclei segmentation datasets using cycleGANs. We learn an unpaired mapping from perturbed randomized polygon masks to pseudo-H&E images, and generate synthetic H&E patches from several different organs for nuclei segmentation. We then use an adversarial U-Net with spectral normalization for increased training stability. This paired image-to-image translation style network not only learns the mapping from H&E patches to segmentation masks but also learns an optimal loss function. Such an approach eliminates the need for a hand-crafted loss, which has received significant attention in prior nuclei segmentation work. We demonstrate that the average accuracy for multi-organ nuclei segmentation increases to 94.43% using the proposed synthetic data generation and adversarial U-Net-based segmentation pipeline, compared to 79.81% when no synthetic data or adversarial loss was used.
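The spectral normalization named above constrains each weight matrix of the discriminator to have unit spectral norm, which bounds its Lipschitz constant and stabilizes adversarial training. A minimal numpy sketch of the core operation follows; this is an illustrative assumption, not the paper's implementation, and uses the standard power-iteration estimate of the largest singular value.

```python
import numpy as np

def spectral_normalize(W, n_iters=100, seed=0):
    """Divide a dense weight matrix by its largest singular value,
    estimated by power iteration (as in spectral normalization)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v                 # Rayleigh estimate of the top singular value
    return W / sigma
```

In a deep-learning framework the per-layer power-iteration vectors are cached across training steps, so a single iteration per step suffices.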