Purpose: This work aims to automatically identify the fovea on 2-dimensional (2D) fundus autofluorescence (FAF) images in patients with age-related macular degeneration (AMD) using definitions from 3-dimensional spectral-domain optical coherence tomography (SD-OCT) imaging. Segmenting the fovea, a highly specialized area of the retina, in the vicinity of hypo-autofluorescence in FAF images will aid in objective evaluation of AMD-based structural disease features with respect to distance from the fovea. Methods: Semi-automated software was used to create foveal annotations in volumetric SD-OCT images. FAF images acquired at the same visits as the SD-OCT scans were registered to the en-face SD-OCT projections to create a pixel-to-pixel overlap between the registered FAF and SD-OCT images. A U-Net-based segmentation network, trained on OCT-registered FAF images and the corresponding foveal annotations from SD-OCT, was used to automatically segment the fovea in the registered 2D FAF images. Results: The dataset consisted of multimodal images from AMD patients, with 900 (80%) images used for training and 222 (20%) images in the test set. The mean Euclidean distance error on the test set relative to the OCT-determined ground truth was 103.5±81.4 µm, which improved to 83.4±57.9 µm with data-augmentation-based training. Foveal identification in FAF images from the advanced-AMD test subset with geographic atrophy (GA) was compared against the OCT-determined ground truth for three sources: (1) the U-Net algorithm (111.7±46.7 µm), (2) readers at the Wisconsin reading center (165±77.5 µm), and (3) a retina physician (169.9±109.4 µm). Conclusion: Our work demonstrates the potential of using 2D FAF images to predict foveal location, especially in visually challenging scenarios where a hypo-autofluorescent fovea is surrounded by advanced disease that alters the normal autofluorescence patterns.
The results demonstrate that the developed algorithm has clinically useful performance in segmenting the fovea in FAF images, which will enable critical correlation with visual acuity and provide a basis for referencing standardized measures of features relative to the fovea, such as monitoring and tracking the growth of GA and other retinal-disease-related changes.
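The evaluation metric reported above, the Euclidean distance error between the predicted and OCT-determined foveal centers, can be sketched as follows. This is an illustrative implementation, not the authors' code; the pixel-to-micrometer scale factor is a hypothetical parameter standing in for the actual image resolution.

```python
import numpy as np

def fovea_distance_error_um(pred_xy, gt_xy, um_per_pixel):
    """Euclidean distance (in micrometers) between a predicted and a
    ground-truth foveal center, both given in pixel coordinates of the
    registered FAF image. `um_per_pixel` is the isotropic pixel spacing."""
    pred = np.asarray(pred_xy, dtype=float)
    gt = np.asarray(gt_xy, dtype=float)
    return float(np.linalg.norm(pred - gt) * um_per_pixel)

# Example: centers 3 px apart horizontally and 4 px vertically,
# with an assumed scale of 10 um/pixel -> 5 px * 10 um/px = 50 um.
print(fovea_distance_error_um((100, 120), (103, 124), 10.0))  # 50.0
```

Averaging this error over the test set yields the reported mean±std figures.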
Purpose: This work aims to identify areas with sub-retinal pigment epithelium (sub-RPE) accumulations on 2-dimensional (2D) color fundus photographs (CFPs) in patients with age-related macular degeneration (AMD) using the definitions in spectral-domain optical coherence tomography (SD-OCT) imaging. Detecting and quantifying areas of RPE elevation (most notably drusen) in CFPs will aid in objective evaluation of AMD severity scores as well as patient selection and monitoring in clinical trials. Methods: A retinal-layer segmentation algorithm for SD-OCT was used to automatically identify areas with RPE elevation and build the ground-truth 2D binary maps for training. CFPs were registered to the en-face projection images of SD-OCT to overlay the OCT-defined drusen areas on the CFP images. A 2D U-Net segmentation network was trained using bilateral stereo CFP pairs in a Siamese architecture sharing the OCT-defined drusen areas as ground truth. Results: The dataset consisted of AMD patients with 127 training and 23 test eyes. The Dice similarity coefficient for the predictions on CFPs was 0.70±0.13 (mean±std), and overall accuracy was 0.73. 89% of test eyes exhibited a drusen area prediction error <1 mm² compared with reading-center measures. Conclusion: Our work demonstrates the potential of using 2D CFP images to predict areas of sub-RPE elevation as defined in 3D SD-OCT imaging. Qualitative evaluation of the mismatch between the two imaging modalities shows regions with complementary features in a subset of cases, making it challenging to achieve optimal segmentation. However, the results show clinically useful performance in CFPs that can be used to quantify accumulations in the sub-RPE space, which are key pathologic biomarkers of AMD relevant to patient selection and trial outcome measure design.
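The Dice similarity coefficient used to score the drusen predictions above is a standard overlap measure between binary masks. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |pred AND gt| / (|pred| + |gt|). Returns 1.0 when both masks
    are empty, by convention."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total else 1.0

# Example: one of two predicted pixels overlaps the single ground-truth
# pixel -> 2 * 1 / (2 + 1) = 0.666...
print(dice_coefficient([[1, 1], [0, 0]], [[1, 0], [0, 0]]))
```

A Dice value of 1.0 indicates perfect overlap; the reported 0.70±0.13 is the mean over test eyes.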
Purpose: This work investigates a semi-supervised approach for automatic detection of hyperreflective foci (HRF) in spectral-domain optical coherence tomography (SD-OCT) imaging. Starting with a limited annotated data set containing HRFs, we aim to build a larger data set and thereby a more robust detection model. Methods: A Faster R-CNN object detection model was trained in a semi-supervised manner whereby high-confidence detections from the current iteration are added to the training set in subsequent iterations after manual verification. With each iteration, the size of the training set is increased by including additional model-detected cases. We expect the model to become more accurate and robust as the number of training iterations increases. We performed experiments on a data set consisting of over 170,000 SD-OCT B-scans. The models were tested on a data set of 30 patients (3,630 B-scans). Results: Model performance improved across iterations, with the final model yielding precision=0.56, recall=0.99, and F1-score=0.71. As the number of training examples increases, the model detects cases with greater confidence. The high false positive rate is associated with additional detections that capture instances of elevated reflectivity which, upon review, were found to represent questionable cases rather than definitive HRFs due to confounding factors. Conclusion: We demonstrate that by starting with a small data set of HRFs, we are able to search for other HRF occurrences in the data set in a semi-supervised fashion. This method provides an objective, time- and cost-effective alternative to laborious manual inspection of B-scans for HRF occurrences.
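The iterative scheme described in the Methods above can be sketched as a simple loop: score the unlabeled pool with the current detector, keep high-confidence detections, fold the manually verified ones into the training set, and repeat. This is a schematic outline under assumed interfaces (`detect` returns a confidence score, `verify` stands in for manual review); it omits the actual Faster R-CNN retraining step, which is marked with a comment.

```python
def semi_supervised_rounds(unlabeled, train_set, detect, verify,
                           threshold=0.9, rounds=3):
    """Grow `train_set` over several rounds by promoting high-confidence,
    manually verified detections from the `unlabeled` pool.

    detect(x)  -> confidence score in [0, 1] from the current model.
    verify(x)  -> True if a human reviewer confirms the detection.
    """
    train_set = list(train_set)
    unlabeled = list(unlabeled)
    for _ in range(rounds):
        confident = [x for x in unlabeled if detect(x) >= threshold]
        accepted = [x for x in confident if verify(x)]
        train_set.extend(accepted)
        unlabeled = [x for x in unlabeled if x not in accepted]
        # (re-train the detector on the enlarged train_set here)
    return train_set, unlabeled
```

In the toy call below, items are their own confidence scores and every confident detection passes review, so the two high-scoring items migrate into the training set after one round.

```python
train, pool = semi_supervised_rounds([0.95, 0.5, 0.99], [1.0],
                                     detect=lambda x: x,
                                     verify=lambda x: True,
                                     threshold=0.9, rounds=1)
# train now contains 1.0, 0.95, 0.99; pool keeps 0.5
```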