Identifying major blood vessels during laparoscopic surgery is important to prevent vessel injuries that could complicate the procedure. Current mitigation strategies rely on fluorescence or contrast dyes, but these present challenges such as patient preparation time, potential adverse reactions, and the need for specialized imaging modalities. In this study, we explore the potential of Near-Infrared (NIR) bands for dye-free major blood vessel identification, the generation of a False-RGB image from NIR bands that closely resembles the RGB image of the tissue, and the enhancement of this image using a proposed contrast enhancement technique. Ten multispectral images in the NIR spectrum were captured, and a False-RGB image was generated using the 702 nm, 821 nm, and 833 nm bands as the red, green, and blue channels, respectively. The contrast enhancement algorithm increased vein contrast with an average gain of 1.5, as measured by the Michelson contrast ratio.
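The proposed contrast enhancement algorithm is not detailed in the abstract, but the false-RGB composition and the Michelson contrast metric it reports can be sketched. The Python snippet below is a minimal illustration, assuming each NIR band is available as a 2-D NumPy array; the function names and per-band normalization are ours, not the authors'.

```python
import numpy as np

def false_rgb(band_702, band_821, band_833):
    """Stack three NIR bands (2-D arrays of equal shape) into a False-RGB image,
    normalising each band to [0, 1] independently before stacking."""
    channels = []
    for band in (band_702, band_821, band_833):
        band = band.astype(np.float64)
        band = (band - band.min()) / (band.max() - band.min() + 1e-12)
        channels.append(band)
    return np.stack(channels, axis=-1)  # H x W x 3, channel order R, G, B

def michelson_contrast(vessel_region, background_region):
    """Michelson contrast between mean vessel and background intensities:
    C = (I_max - I_min) / (I_max + I_min)."""
    i_vessel = float(np.mean(vessel_region))
    i_background = float(np.mean(background_region))
    i_max, i_min = max(i_vessel, i_background), min(i_vessel, i_background)
    return (i_max - i_min) / (i_max + i_min + 1e-12)
```

The reported gain of 1.5 corresponds to the ratio of this contrast measured on the enhanced image to that of the unenhanced False-RGB image.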
Gastrointestinal (GI) tract endoscopy plays a pivotal role in the detection of a spectrum of malignancies, including superficial lesions and vascular irregularities. While conventional White Light Imaging (WLI) delivers clear GI tract imagery, it often cannot adequately enhance the visualization of vascular structures, which is essential for precise disease diagnosis. Narrow Band Imaging (NBI) enhances the visualization of superficial vessels, but it is not universally available across endoscopy systems. In settings where advanced imaging techniques such as NBI are absent, enhancing visualization under white light illumination holds promise for improving diagnostic accuracy. This paper proposes an approximate spectral color estimation approach that uses the relative proportions of the red, green, and blue (RGB) components of a spectral color to infer the corresponding spectral component from an RGB image. By mapping a composite of three spectral estimates onto the RGB channels, we generate pseudo-colored images that accentuate structural details. Enhanced images with diverse spectral estimate combinations were produced from two patients imaged under both WLI and NBI and analyzed for visualizing various GI tract structures. The enhanced images show clear improvement over the original image of the same region. Comparing enhancements under the two light sources, the improvement is relatively higher under WLI than under NBI. The findings underscore that our proposed method generates a spectrum of distinctly colored images, in contrast to the predominantly monochromatic images yielded by NBI. This allows clinicians to choose their preferred color combinations, in turn simplifying the diagnostic process.
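As a rough sketch of the idea, a narrow-band (spectral) image can be approximated as a weighted combination of the R, G, and B channels, with weights taken from the relative RGB proportions of that spectral color, and three such estimates can then be stacked as a pseudo-colored output. The wavelengths and weights below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

# Illustrative RGB mixing weights for three spectral colors (not the paper's values):
# each spectral estimate is a weighted sum of the image's R, G, B channels.
SPECTRAL_WEIGHTS = {
    "415nm": (0.10, 0.20, 0.70),   # violet-blue: dominated by the blue channel
    "540nm": (0.15, 0.70, 0.15),   # green
    "600nm": (0.70, 0.20, 0.10),   # orange-red
}

def spectral_estimate(rgb, weights):
    """Approximate a narrow-band image as a weighted combination of R, G, B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

def pseudo_color(rgb, bands=("600nm", "540nm", "415nm")):
    """Map three spectral estimates onto the output R, G, B channels."""
    rgb = rgb.astype(np.float64) / 255.0
    channels = [spectral_estimate(rgb, SPECTRAL_WEIGHTS[b]) for b in bands]
    out = np.stack(channels, axis=-1)
    return np.clip(out / (out.max() + 1e-12), 0.0, 1.0)
```

Different choices of the three spectral estimates and their assignment to output channels yield the differently colored enhancements described in the paper.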
Structural Magnetic Resonance Imaging (MRI) is an effective tool for understanding brain tissue and can differentiate between a neurotypical and a pathology-affected brain. Segmenting the brain into its different regions is often the first step in quantifying the extent of pathology. This important step, however, is difficult in developing fetal brains because of the relatively small brain volume and incomplete development. Manual segmentation is time-consuming, cumbersome, and prone to human error. Hence, there is a crucial need to automate the segmentation process for the diagnosis of pathology and for potential intervention and treatment. In this paper, we study a state-of-the-art learning-based framework for multi-label atlas-based segmentation, VoxelMorph, on the FeTa 2022 dataset. Essentially, our work addresses the lack of standard brain volumes for pathologies by training a segmentation model only on neurotypical brains. We learn generalizable deformation parameters using the VoxelMorph architecture. Learning-based atlas registration achieves an average Dice score of 0.62 on the pathological FeTa 2022 MRI dataset, an improvement of 0.07 over symmetric normalization-based iterative atlas registration.
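The reported Dice score averages the per-label overlap between the warped atlas labels and the ground-truth segmentation. A minimal NumPy sketch of that evaluation follows; the exact label set and averaging protocol of the paper may differ.

```python
import numpy as np

def mean_dice(pred_labels, true_labels, label_ids):
    """Average Dice overlap across tissue labels for one segmented volume.
    pred_labels and true_labels are integer label maps of identical shape."""
    scores = []
    for label in label_ids:
        pred = pred_labels == label
        true = true_labels == label
        intersection = np.logical_and(pred, true).sum()
        denom = pred.sum() + true.sum()
        if denom == 0:           # label absent from both volumes: skip it
            continue
        scores.append(2.0 * intersection / denom)
    return float(np.mean(scores))
```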
Breast cancer is the second most common cancer among women after skin cancer and a leading cause of cancer death. Mitotic count is an important biomarker for predicting breast cancer prognosis according to the Nottingham Grading System. Pathologists look for tumour areas, select 10 high-power field (HPF) images, and assign a grade based on the mitotic count. Mitosis detection is a tedious task because the pathologist has to inspect a large area, and assessments of mitotic cells are also subjective. Because of these problems, an assistive tool for the pathologist can standardize grading and reduce the time needed for diagnosis. With recent advances in whole slide imaging, computer-aided diagnosis (CAD) systems are becoming popular. Mitosis detection in scanner images is difficult because of variability in shape, color, and texture, and because mitotic figures resemble apoptotic nuclei and other darkly stained nuclear structures. In this paper, the mitosis detection task is carried out with a state-of-the-art object detector (Faster R-CNN) and classifiers (ResNet152, DenseNet169, and DenseNet201) on the ICPR 2012 dataset. The Faster R-CNN is used in two ways: first as an end-to-end object detector, which gave an F1-score of 0.79, and second as a Region Proposal Network followed by an ensemble of classifiers, giving an F1-score of 0.75.
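The reported F1-scores are the harmonic mean of detection precision and recall, computed after matching predicted mitoses to ground-truth annotations. A minimal sketch, assuming the matched counts are already available (the matching criterion, e.g. a centroid distance threshold, is not specified here, and the example numbers are purely illustrative):

```python
def detection_f1(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """F1-score for mitosis detection: harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: 65 matched detections, 20 spurious, 15 missed
# gives precision ~0.76, recall ~0.81 and F1 ~0.79.
print(detection_f1(65, 20, 15))
```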
Retinopathy of Prematurity (ROP) is a fibrovascular proliferative disorder that affects the developing peripheral retinal vasculature of premature infants. Early detection of ROP is possible in stage 1 and stage 2, characterized by a demarcation line and a ridge with width, respectively, which separate the vascularized retina from the avascular peripheral retina. Detecting the demarcation line/ridge in neonatal retinal images is a complex task because of their low contrast. In this paper, we focus on detection of the ridge, an important landmark in ROP diagnosis, using a Convolutional Neural Network (CNN). Our contribution is the use of a CNN-based model, Mask R-CNN, for demarcation line/ridge detection, allowing clinicians to detect stage 2 ROP more reliably. The proposed system applies a pre-processing step of image enhancement to overcome poor image quality. In this study, we use labelled neonatal images and explore the use of a CNN to localize the ridge in these images. We used a dataset of 220 images of 45 babies from the KIDROP project. The system was trained on 175 retinal images with ground-truth segmentation of the ridge region and tested on 45 images, reaching a detection accuracy of 0.88. This shows that deep learning detection with pre-processing by image normalization allows robust detection of ROP in its early stages.
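The abstract does not specify the enhancement method; one common choice for low-contrast fundus images is CLAHE on the lightness channel, sketched below with OpenCV as a stand-in for the paper's pre-processing step.

```python
import cv2

def enhance_retinal_image(bgr_image):
    """Contrast enhancement for a low-contrast neonatal fundus image:
    CLAHE applied to the L channel in LAB color space.
    (CLAHE is an assumption standing in for the unspecified enhancement step.)"""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_enhanced = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_enhanced, a, b)), cv2.COLOR_LAB2BGR)
```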