This study investigates the efficacy of the red, green, and blue channels of color fundus photography for deep learning classification of retinopathy of prematurity (ROP). We used a total of 200 color fundus images spanning four ROP stages and applied transfer learning for deep learning classification. To enhance visibility, contrast limited adaptive histogram equalization (CLAHE) was utilized. A multi-color-channel fusion approach was also tested to determine its effect on ROP classification. Among individual channels, the green channel demonstrated the best results, with an accuracy of 80.5%, sensitivity of 61%, and specificity of 87%. Multi-color-channel fusion performed slightly better than the green channel alone, with an accuracy of 81%, sensitivity of 62%, and specificity of 87.33%. After CLAHE, the red-only, green-only, and RGB-fusion inputs showed comparable performance, with accuracies of 83.5%, 84%, and 84.25%, sensitivities of 67%, 68%, and 68.5%, and specificities of 89%, 89.33%, and 89.50%, respectively. This observation suggests that the red channel, after contrast enhancement, can provide sufficient information for ROP stage classification.
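The channel-selection and contrast-enhancement preprocessing described above can be sketched as follows. This is a minimal stand-in, not the study's pipeline: it applies global histogram equalization to a single channel, whereas real CLAHE additionally clips the histogram and equalizes per tile with bilinear interpolation. The image here is a synthetic placeholder.

```python
import numpy as np

def equalize_channel(channel, n_bins=256):
    """Global histogram equalization of one uint8 channel.

    Simplified stand-in for CLAHE: real CLAHE also clips the histogram
    (contrast limiting) and operates on local tiles.
    """
    hist, _ = np.histogram(channel.ravel(), bins=n_bins, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Build a lookup table mapping input intensities to equalized ones.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[channel]

# Hypothetical usage: enhance the green channel of an RGB fundus image.
rgb = (np.random.rand(64, 64, 3) * 128).astype(np.uint8)  # synthetic stand-in
green_eq = equalize_channel(rgb[:, :, 1])
```

In practice, library implementations such as OpenCV's `cv2.createCLAHE` or scikit-image's `exposure.equalize_adapthist` would be used for the full contrast-limited, tile-based method.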
Accurate differentiation of uveal melanoma (UM) and choroidal nevi is critical for optimal patient care, preventing unnecessary procedures for benign lesions while ensuring timely intervention for potentially malignant cases. This study aimed to validate deep learning classification of these lesions and to evaluate the impact of different color fusion options on classification performance. We tested early fusion, intermediate fusion, and late fusion using ultra-widefield retinal images. Specificity, sensitivity, F1 score, accuracy, and the area under the receiver operating characteristic (ROC) curve (AUC) were used to assess model performance. The results show that the color fusion option significantly impacted classification performance, with intermediate fusion emerging as the best strategy, outperforming both single-color learning and the other fusion strategies: it achieved an accuracy of 89.72%, sensitivity of 85.05%, specificity of 91.64%, F1 score of 0.8492, and AUC of 0.9335. These results emphasize the potential of deep learning to enhance the accuracy of diagnosis and classification of UM and choroidal nevi, leading to improved patient outcomes and optimized treatment strategies. By combining deep learning with color fusion strategies, this study provides valuable insights into the application of these approaches in ophthalmology and highlights their significance in automating the classification of UM and choroidal nevi.
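The three fusion strategies compared above differ only in where color channels are combined. A toy sketch of the distinction, with simple numpy stand-ins for the learned backbone and classifier head (the actual study uses CNNs, not these hypothetical functions):

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x):
    """Toy 'backbone': summary statistics standing in for CNN features."""
    return np.array([x.mean(), x.std()])

def classify(features, w):
    """Toy linear head returning a sigmoid score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-features @ w))

red, green, blue = (rng.random((32, 32)) for _ in range(3))

# Early fusion: stack raw channels before any feature extraction.
early_input = np.stack([red, green, blue], axis=-1)
early_score = classify(extract_features(early_input), w=np.array([1.0, -1.0]))

# Intermediate fusion: per-channel features extracted first, then concatenated.
inter_feats = np.concatenate([extract_features(c) for c in (red, green, blue)])
inter_score = classify(inter_feats, w=rng.standard_normal(6))

# Late fusion: average fully independent per-channel predictions.
late_score = np.mean([classify(extract_features(c), w=np.array([1.0, -1.0]))
                      for c in (red, green, blue)])
```

Intermediate fusion, the best-performing option in the study, lets each channel keep its own feature extractor while still allowing the classifier to weigh cross-channel feature interactions.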
A convolutional neural network (CNN) with multimodal fusion options was developed for artery-vein (AV) segmentation in OCT angiography (OCTA). We quantitatively evaluated multimodal architectures with early and late OCT-OCTA fusion, compared to unimodal architectures with OCT-only and OCTA-only inputs. The OCT-only architecture is limited to segmenting large AV branches. The OCTA-only, early OCT-OCTA fusion, and late OCT-OCTA fusion architectures provide competitive AV segmentation with finer detail. Compared to the OCTA-only architecture, the late fusion architecture performs slightly better, while the early fusion architecture performs slightly worse.
As one modality extension of optical coherence tomography (OCT), OCT angiography (OCTA) provides unrivaled capability for depth-resolved visualization of retinal vasculature at microcapillary-level resolution. OCTA image construction requires repeated OCT scans of one location to identify blood vessels with active blood flow. This multi-scan-volumetric requirement reduces OCTA imaging speed, which induces eye-movement artifacts and limits the image field of view. In principle, blood flow should also affect the reflectance brightness profile along the vessel direction in a single-scan-volumetric OCT. In this article, we report a retinal vascular connectivity network (RVC-Net) for deep learning OCTA construction from single-scan-volumetric OCT. We compare RVC-Net input models that use three adjacent B-scans versus a single B-scan. The structural similarity index measure (SSIM) loss function was selected to optimize deep learning contrast enhancement of microstructures, i.e., microcapillaries, in OCT; this choice was confirmed by comparing RVC-Net performance under SSIM and mean-squared-error (MSE) loss functions. Together, the RVC input and SSIM loss function enabled microcapillary-resolution OCTA construction from single-scan-volumetric OCT.