This study investigates the efficacy of the red, green, and blue channels of color fundus photography for deep learning classification of retinopathy of prematurity (ROP). We used a total of 200 color fundus images from four ROP stages and applied transfer learning for deep learning classification. To enhance visibility, contrast limited adaptive histogram equalization (CLAHE) was utilized. A multi-color-channel fusion approach was also tested to determine its effect on ROP classification. Among individual channels, the green channel demonstrated the best results, with an accuracy of 80.5%, sensitivity of 61%, and specificity of 87%. Multi-color-channel fusion provided slightly better performance than the green channel, with an accuracy of 81%, sensitivity of 62%, and specificity of 87.33%. After CLAHE, the red-only, green-only, and RGB-fusion inputs showed comparable performance, with accuracies of 83.5%, 84%, and 84.25%, sensitivities of 67%, 68%, and 68.5%, and specificities of 89%, 89.33%, and 89.50%, respectively. This observation suggests that the red channel, after contrast enhancement, can provide sufficient information for ROP stage classification.
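The channel-wise enhancement step described above can be sketched as follows. This is a minimal illustration using global histogram equalization as a simplified stand-in for CLAHE (which additionally tiles the image and clips each tile's histogram; in practice OpenCV's `cv2.createCLAHE` would be used); the function names are hypothetical.

```python
import numpy as np

def equalize_channel(channel):
    """Global histogram equalization of one uint8 channel.

    Simplified stand-in for CLAHE, which additionally tiles the image
    and clips each tile's histogram before equalizing.
    """
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    span = cdf.max() - cdf.min()
    if span == 0:                                      # flat image: nothing to equalize
        return channel.copy()
    cdf = (cdf - cdf.min()) / span                     # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # lookup table
    return lut[channel]

def prepare_channel_inputs(rgb):
    """Split an RGB fundus image into enhanced single-channel inputs."""
    return {name: equalize_channel(rgb[..., i])
            for i, name in enumerate(("red", "green", "blue"))}
```

Each enhanced channel (or their fused stack) can then be fed to the transfer-learning classifier.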
Accurate differentiation of uveal melanoma (UM) and choroidal nevi is critical for optimal patient care, preventing unnecessary procedures for benign lesions while ensuring timely intervention for potentially malignant cases. This study aimed to validate deep learning classification of these lesions and to evaluate the impact of different color fusion options on classification performance. Early, intermediate, and late fusion strategies were tested using ultra-widefield retinal images. Specificity, sensitivity, F1-score, accuracy, and the area under the receiver operating characteristic (ROC) curve (AUC) were used to assess the performance of the deep learning model. The results show that the color fusion option significantly impacted classification performance, with intermediate fusion emerging as the best strategy, outperforming both single-color learning and the other fusion strategies. The intermediate fusion strategy achieved an accuracy of 89.72%, sensitivity of 85.05%, specificity of 91.64%, F1-score of 0.8492, and AUC of 0.9335. These results underscore the potential of deep learning to improve the accuracy of diagnosis and classification of UM and choroidal nevi, leading to improved patient outcomes and optimized treatment strategies. By combining deep learning with color fusion strategies, this study provides valuable insights into the application of these approaches in ophthalmology and highlights their significance for automating the classification of UM and choroidal nevi.
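The three fusion points compared above differ only in where the color channels are merged. The sketch below illustrates that structure with toy stand-ins for the feature extractor and classifier (both hypothetical; the study uses CNN backbones): early fusion stacks channels at the input, intermediate fusion concatenates per-channel features before the classification head, and late fusion averages per-channel predictions.

```python
import numpy as np

def extract_features(x):
    """Stand-in feature extractor: per-channel global average pooling.
    (The study uses CNN backbones; this is purely structural.)"""
    return x.mean(axis=(0, 1))

def classify(features):
    """Stand-in binary classifier head: logistic over summed features."""
    p = 1.0 / (1.0 + np.exp(-features.sum()))
    return np.array([1.0 - p, p])

def early_fusion(r, g, b):
    """Fuse at the input: stack channels, run one shared model."""
    return classify(extract_features(np.dstack([r, g, b])))

def intermediate_fusion(r, g, b):
    """Fuse mid-network: per-channel features, concatenated before the head."""
    feats = np.concatenate([extract_features(c[..., None]) for c in (r, g, b)])
    return classify(feats)

def late_fusion(r, g, b):
    """Fuse at the output: average per-channel predictions."""
    preds = [classify(extract_features(c[..., None])) for c in (r, g, b)]
    return np.mean(preds, axis=0)
```

In a real model, intermediate fusion lets each channel keep its own encoder while still learning a joint decision boundary, which is consistent with it outperforming the other two options here.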
The wall-to-lumen ratio (WLR) of retinal blood vessels is a promising, sensitive marker for functional assessment of eye conditions. However, in vivo measurement of vessel wall thickness and lumen diameter remains technically challenging, hindering the wide application of WLR in research and clinical settings. In this study, we demonstrate the feasibility of using optical coherence tomography (OCT) as a practical method for in vivo quantification of WLR in the retina. Based on three-dimensional vessel tracing, lateral en face and axial B-scan profiles of individual vessels were constructed. By employing adaptive depth segmentation that traces each blood vessel for en face OCT projection, vessel wall thickness and lumen diameter could be reliably quantified. A comparative study of control and 5xFAD mice confirmed WLR as a sensitive marker of eye condition.
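As an illustration of the quantification step, the sketch below estimates a diameter from a 1-D intensity profile by full width at half maximum and computes the WLR under one common definition (total wall thickness divided by lumen diameter). The paper's actual segmentation-based measurement is more involved, and the function names here are hypothetical.

```python
import numpy as np

def fwhm_um(profile, um_per_px):
    """Full width at half maximum of a 1-D intensity profile, a simple
    proxy for a vessel diameter on an en face OCT projection."""
    p = np.asarray(profile, dtype=float)
    half = (p.max() + p.min()) / 2.0
    above = np.flatnonzero(p >= half)
    return (above[-1] - above[0]) * um_per_px

def wall_to_lumen_ratio(outer_diameter_um, lumen_diameter_um):
    """WLR under one common definition: total wall thickness
    (outer minus lumen diameter) divided by lumen diameter."""
    if lumen_diameter_um <= 0 or outer_diameter_um < lumen_diameter_um:
        raise ValueError("outer diameter must exceed a positive lumen diameter")
    return (outer_diameter_um - lumen_diameter_um) / lumen_diameter_um
```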
A convolutional neural network (CNN) with multimodal fusion options was developed for artery-vein (AV) segmentation in OCT angiography (OCTA). We quantitatively evaluated multimodal architectures with early and late OCT-OCTA fusion, compared to unimodal architectures with OCT-only and OCTA-only inputs. The OCT-only architecture was limited to segmentation of large AV branches. The OCTA-only, early fusion, and late fusion architectures provided competitive AV segmentation with finer vessel detail. Compared to the OCTA-only architecture, the late fusion architecture performed slightly better, while the early fusion architecture performed slightly worse.
As a modality extension of optical coherence tomography (OCT), OCT angiography (OCTA) provides unrivaled capability for depth-resolved visualization of retinal vasculature at microcapillary-level resolution. For OCTA image construction, repeated OCT scans at each location are required to identify blood vessels with active blood flow. This requirement for multi-scan-volumetric OCT reduces imaging speed, which increases susceptibility to eye movement artifacts and limits the image field of view. In principle, blood flow should also affect the reflectance brightness profile along the vessel direction in a single-scan-volumetric OCT. In this article, we report a retinal vascular connectivity network (RVC-Net) for deep learning OCTA construction from single-scan-volumetric OCT. We compared RVC-Net performance with single B-scan and three adjacent B-scan inputs. The structural similarity index measure (SSIM) loss function was selected to optimize deep learning contrast enhancement of microstructures, i.e., microcapillaries, in OCT; this choice was confirmed by comparing RVC-Net performance with SSIM and mean squared error (MSE) loss functions. Together, retinal vascular connectivity and the SSIM loss function enabled microcapillary-resolution OCTA construction from single-scan-volumetric OCT.
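The SSIM loss compared against MSE above follows the standard SSIM formula. A minimal sketch (global, single-window SSIM with the conventional stabilizing constants; training frameworks typically average SSIM over local Gaussian windows):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM between two images in [0, data_range]."""
    c1 = (0.01 * data_range) ** 2   # conventional stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def ssim_loss(pred, target):
    """Loss minimized during training: 1 - SSIM."""
    return 1.0 - ssim_global(pred, target)
```

Unlike MSE, which penalizes per-pixel differences independently, SSIM rewards matching local structure, which is why it favors the faint, thin microcapillary patterns targeted here.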
Early detection of diabetic retinopathy (DR) is an essential step to prevent vision loss. This study conducts comparative optical coherence tomography (OCT) and OCT angiography (OCTA) analysis to identify quantitative features for robust detection of early DR. Five quantitative OCT features were derived to analyze the outer retinal band intensity in the central fovea, parafovea, and perifovea regions. Similarly, eight quantitative OCTA features were established to analyze the superficial and deep vascular plexuses. OCT and OCTA images of 21 eyes from healthy control subjects, 20 eyes from diabetic patients without retinopathy (NoDR), and 21 eyes from patients with mild DR were used for this study. Comparative analysis revealed that the quantitative OCT features related to the inner segment ellipsoid (ISe) band had the best sensitivity for objective differentiation of all cohorts.
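The central fovea, parafovea, and perifovea regions used above can be defined as concentric rings around the foveal center. A minimal sketch (the pixel radii are user-supplied assumptions; the ETDRS grid commonly uses 0.5, 1.5, and 3.0 mm radii converted to pixels):

```python
import numpy as np

def annular_masks(shape, center, fovea_r, parafovea_r, perifovea_r):
    """Boolean masks for the central fovea, parafovea, and perifovea as
    concentric rings around the foveal center (all radii in pixels)."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    fovea = r <= fovea_r
    parafovea = (r > fovea_r) & (r <= parafovea_r)
    perifovea = (r > parafovea_r) & (r <= perifovea_r)
    return fovea, parafovea, perifovea
```

Region-wise OCT band intensities can then be computed by averaging the image under each mask.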
The purpose of this study is to use optical coherence tomography (OCT) to characterize the reflectance profiles of retinal blood vessels and to use these features for artery-vein classification in OCT angiography (OCTA). Retinal arteries and veins show distinct features in depth-resolved OCT. Both the upper and lower boundaries of retinal arteries are hyper-reflective, whereas retinal veins reveal a hyper-reflective boundary only at the upper side. In both small and large arteries, relatively uniform lumen intensity was observed. In contrast, vein lumen intensity depended on vessel size: the bottom half of the lumen of small veins showed a hyper-reflective zone, while the bottom half of the lumen of large veins showed a hypo-reflective zone.
KEYWORDS: Optical coherence tomography, Angiography, Veins, Arteries, RGB color model, Network architectures, Near infrared, Image segmentation, Eye, Control systems
Early disease diagnosis and effective treatment assessment are crucial to prevent vision loss. Retinal arteries and veins can be affected differently by different eye diseases, e.g., arterial narrowing and venous beading in diabetic retinopathy (DR). Therefore, differential artery-vein (AV) analysis can provide valuable information for early disease detection and better stage classification. However, manual or semi-automated methods for AV identification are inefficient in a clinical setting. This study demonstrates the use of deep learning for automated AV classification in optical coherence tomography angiography (OCTA). We present AV-Net, a fully convolutional network based on a modified U-shaped architecture. The input to AV-Net is a 2-channel system that combines grayscale en face OCT and OCTA. The en face OCT is a near-infrared image, equivalent to a fundus image, which provides vessel intensity profiles. In contrast, the OCTA contains information about blood flow strength and vessel geometric features. The output of AV-Net is an RGB (red-green-blue) image, with R and B corresponding to arteries and veins, respectively, and the G channel representing the background. The dataset in this study comprises images from 50 individuals (20 controls and 30 DR patients). Transfer learning and regularization techniques, such as data augmentation and cross validation, were employed during training to prevent overfitting. The results reveal robust vessel segmentation and AV classification. A fully automated platform is essential for fostering efficient clinical deployment of AI-based screening, diagnosis, and treatment evaluation.
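The input/output convention described above, a 2-channel en face OCT plus OCTA input and an RGB map with R = artery, B = vein, G = background, can be sketched as follows (function names hypothetical; the network itself is omitted):

```python
import numpy as np

def stack_av_net_input(en_face_oct, octa):
    """Build the 2-channel AV-Net input from same-sized grayscale
    en face OCT and OCTA images."""
    return np.stack([en_face_oct, octa], axis=-1)

def decode_av_map(rgb):
    """Decode an RGB AV map (R = artery, B = vein, G = background)
    into binary artery and vein masks via per-pixel argmax."""
    labels = rgb.argmax(axis=-1)        # 0 = R, 1 = G, 2 = B
    return labels == 0, labels == 2
```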
Diabetic retinopathy (DR) is a leading cause of preventable blindness. Early detection and reliable stage classification are essential to ensure prompt medical intervention. Recent studies suggest that the outer retina, i.e., the photoreceptors, can be affected by early DR. We demonstrate here the potential of using quantitative OCT features of the outer retina for objective detection and stage classification of DR. OCT intensity change was observed to be the most sensitive feature to DR stage, compared to retinal thickness and bandwidth. It was also confirmed that the relative intensity changes of the photoreceptor outer segment are more sensitive than those of the inner segment for DR classification.
High-resolution ophthalmic imaging is imperative for detecting subtle photoreceptor abnormalities at the early stage of retinal diseases. However, optical resolution in retinal imaging is inherently limited by the low numerical aperture of the ocular optics. Virtually structured detection (VSD) has been demonstrated to break the diffraction limit of imaging systems by shifting high-frequency components into the passing bandwidth of the imaging system. However, its implementation for human subjects remains a challenge due to the uncertain cut-off frequency of the modulation transfer function (MTF) required for VSD processing. This study demonstrates an objective method to derive the MTF from spectral profiles, enabling quantitative estimation of the optimal cut-off frequency. A custom-built line-scan scanning laser ophthalmoscope was developed, and two-dimensional line-profile patterns were acquired at a 25 kHz frame rate. We found that the MTF profiles exhibited significant differences across subjects as well as fields of view. VSD-based super-resolution images exhibited improved resolution and contrast for differentiating individual photoreceptors compared to the equivalent wide-field imaging. In addition, motility processing further improved image quality: photoreceptors showed clearer boundaries and more integrated shapes than in the VSD image alone. We anticipate that VSD-based imaging will provide a simple, low-cost, and phase-artifact-free strategy to achieve super-resolution retinal ophthalmoscopy.
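The cut-off estimation described above amounts to finding where the normalized MTF falls below a chosen floor. A minimal sketch (the 5% threshold and function name are assumptions, not the paper's values):

```python
import numpy as np

def mtf_cutoff(spectrum, freqs, threshold=0.05):
    """Cut-off frequency: first frequency where the normalized MTF
    magnitude falls below `threshold` (5% here is an assumption)."""
    mtf = np.abs(np.asarray(spectrum, dtype=float))
    mtf = mtf / mtf.max()
    below = np.flatnonzero(mtf < threshold)
    return freqs[below[0]] if below.size else freqs[-1]
```

The estimated cut-off then sets the spatial-frequency band used in the VSD reconstruction.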
Early detection of diabetic retinopathy (DR) is an essential step to prevent vision loss. This study is the first effort to explore convolutional neural networks (CNNs) for transfer-learning-based optical coherence tomography angiography (OCTA) detection and classification of DR. We employed transfer learning using a CNN, VGG16, pre-trained on the ImageNet dataset for classification of OCTA images. To prevent overfitting, data augmentation, e.g., rotations, flips, and zooming, and 5-fold cross-validation were implemented. A dataset comprising 131 OCTA images from 20 control subjects, 17 diabetic patients without DR (NoDR), and 60 nonproliferative DR (NPDR) patients was used for preliminary validation. The best classification performance was achieved by fine-tuning nine layers of the sixteen-layer CNN model.
Diabetic retinopathy (DR) is a major ocular manifestation of diabetes. DR can cause irreversible damage to the retina without timely intervention. Therefore, early detection and reliable classification are essential for effective management of DR. As DR progresses into the proliferative stage (PDR), localized neovascularization and complex capillary meshes are observed in the retina. These complex vascular structures can be quantified as biomarkers of the transition from nonproliferative DR (NPDR) to PDR. This study investigates four optical coherence tomography angiography (OCTA) features, i.e., vessel complexity index (VCI), fractal dimension (FD), four-point crossover (FCO), and blood vessel tortuosity (BVT), to quantify vascular complexity and distinguish NPDR from PDR eyes. OCTA images from 20 control subjects, 60 NPDR patients, and 56 PDR patients were analyzed. Univariate analysis showed that all four complexity features increased significantly with DR progression (ANOVA, P < 0.05). A post-hoc study showed that only VCI and BVT were able to distinguish between NPDR and PDR. A multivariate logistic regression identified VCI and BVT as the most significant feature combination for NPDR vs. PDR classification.
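Two of the four complexity features have compact definitions that can be sketched directly: BVT as the arc-to-chord ratio of a vessel centerline, and FD via box counting on a binary vessel map. The exact formulations used in the study may differ; this is an illustrative sketch.

```python
import numpy as np

def tortuosity(centerline):
    """Blood vessel tortuosity as the arc-to-chord ratio of an (N, 2)
    centerline: 1.0 for a straight vessel, larger when tortuous."""
    pts = np.asarray(centerline, dtype=float)
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    return arc / np.linalg.norm(pts[-1] - pts[0])

def fractal_dimension(mask, box_sizes=(1, 2, 4, 8)):
    """Box-counting fractal dimension of a binary vessel map."""
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]
        # collapse each s-by-s box to one bit: does it contain any vessel?
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope  # FD = negative slope of log(count) vs. log(box size)
```

A straight vessel gives tortuosity 1.0 and FD near 1; denser, more convoluted capillary meshes push both measures upward, consistent with the increase observed from NPDR to PDR.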