Purpose: This work aims to identify areas with sub-retinal-pigment-epithelium (sub-RPE) accumulations on 2-dimensional (2D) color fundus photographs (CFPs) in patients with age-related macular degeneration (AMD), using the definitions established in spectral-domain optical coherence tomography (SD-OCT) imaging. Detecting and quantifying areas of RPE elevations (most notably drusen) in CFPs will aid the objective evaluation of AMD severity scores as well as patient selection and monitoring in clinical trials. Methods: A retinal-layer segmentation algorithm for SD-OCT was used to automatically identify areas with RPE elevations and build the ground-truth 2D binary maps for training. CFPs were registered to the en face projection images of SD-OCT to overlay OCT-defined drusen areas on the CFP images. A 2D-UNet segmentation network was trained on bilateral stereo CFP pairs in a Siamese architecture that shares OCT-defined drusen areas as ground truth. Results: The dataset consists of AMD patients, with 127 training eyes and 23 test eyes. The Dice similarity coefficient for the predictions on CFPs was 0.70±0.13 (mean±std), and the overall accuracy was 0.73. 89% of test eyes exhibited a drusen area prediction error of <1 mm² compared to reading-center measures. Conclusion: Our work demonstrates the potential of using 2D CFP images to predict areas of sub-RPE elevations as defined in 3D SD-OCT imaging. Qualitative evaluation of the mismatch between the two imaging modalities reveals regions with complementary features in a subset of cases, making it challenging to achieve optimal segmentation. However, the results show clinically useful performance on CFPs that can be used to quantify accumulations in the sub-RPE space, which are key pathologic biomarkers of AMD relevant to patient selection and trial outcome-measure design.
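The sketch below illustrates, under assumptions not taken from the paper, how such a Siamese training step could look in PyTorch: both stereo CFP views pass through the same weight-shared 2D-UNet and are supervised by a single OCT-derived binary drusen mask registered to CFP space. The tiny two-level U-Net, the BCE + Dice loss combination, and all sizes are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): Siamese 2D-UNet training on stereo
# CFP pairs that share one OCT-defined drusen mask as ground truth.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """A small 2-level U-Net producing a 1-channel drusen probability map."""
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bott = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bott(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # logits

def dice_loss(logits, target, eps=1e-6):
    p = torch.sigmoid(logits)
    inter = (p * target).sum(dim=(1, 2, 3))
    union = p.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def siamese_step(model, left_cfp, right_cfp, oct_mask):
    """One training step: both stereo views go through the SAME network
    (weight sharing = Siamese) and share the OCT-defined drusen mask."""
    loss = 0.0
    for view in (left_cfp, right_cfp):
        logits = model(view)
        loss = loss + F.binary_cross_entropy_with_logits(logits, oct_mask) \
                    + dice_loss(logits, oct_mask)
    return loss / 2

# Toy usage with random tensors standing in for registered CFP pairs and masks.
model = TinyUNet()
left, right = torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256)
mask = (torch.rand(2, 1, 256, 256) > 0.9).float()
siamese_step(model, left, right, mask).backward()
```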
Purpose: Spectral Domain Optical Coherence Tomography (SD-OCT) images are a series of B-scans that capture the volume of the retina and reveal structural information. Diseases of the outer retina cause changes to the retinal layers that are evident on SD-OCT images, revealing disease etiology and risk factors for disease progression. Quantitative thickness information for the retinal layers provides disease-relevant data that reveals important aspects of disease pathogenesis. Manually labeling these layers is extremely laborious, time consuming, and costly. Recently, deep learning algorithms have been used to automate the segmentation process. While retinal volumes are inherently three-dimensional, state-of-the-art segmentation approaches have made limited use of the 3D nature of the structural information. Methods: In this work, we train a 3D-UNet using 150 retinal volumes and test on 191 retinal volumes from a held-out test set (with AMD severity grades ranging from no disease through the intermediate stages to advanced disease, including the presence of geographic atrophy). The 3D deep features learned by the model capture spatial information from all three volumetric dimensions simultaneously. Since, unlike the ground truth, the output of the 3D-UNet is not a single pixel wide, we perform a column-wise probabilistic maximum operation to obtain single-pixel-wide layers for quantitative evaluation. Results: We compare our results to the publicly available OCT Explorer and to a deep-learning-based 2D-UNet algorithm and observe a low error, within 3.11 pixels of the ground-truth locations, for some of the most challenging, advanced-stage AMD eyes (AMD severity scores 9 and 10). Conclusion: Our results show, both qualitatively and quantitatively, a significant advantage of extracting and utilizing 3D features over the traditionally used OCT Explorer or 2D-UNet.
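As a rough illustration of the column-wise probabilistic maximum described above, the NumPy sketch below collapses per-layer probability maps to single-pixel-wide boundaries by taking, in every A-scan column, the depth of maximum probability. The tensor layout, the number of layers, and the rasterization helper are assumptions for illustration only.

```python
# Minimal sketch (assumed shapes, not the authors' code): column-wise
# probabilistic maximum to obtain single-pixel-wide layer boundaries.
import numpy as np

def columnwise_probabilistic_max(probs):
    """probs: (n_layers, depth, n_bscans, n_ascans) per-layer probabilities
    from the 3D-UNet. Returns boundary depth indices of shape
    (n_layers, n_bscans, n_ascans): one pixel per A-scan column per layer."""
    return np.argmax(probs, axis=1)

def boundaries_to_mask(boundaries, depth):
    """Rasterize the single-pixel-wide boundaries back into a binary volume
    for overlay on the B-scans."""
    n_layers, n_bscans, n_ascans = boundaries.shape
    mask = np.zeros((n_layers, depth, n_bscans, n_ascans), dtype=np.uint8)
    l, b, a = np.meshgrid(np.arange(n_layers), np.arange(n_bscans),
                          np.arange(n_ascans), indexing="ij")
    mask[l, boundaries, b, a] = 1
    return mask

# Toy usage with random values standing in for network output.
probs = np.random.rand(3, 128, 49, 512)         # 3 boundaries, 49 B-scans
probs /= probs.sum(axis=1, keepdims=True)       # normalize along depth
surfaces = columnwise_probabilistic_max(probs)  # (3, 49, 512)
overlay = boundaries_to_mask(surfaces, depth=128)
```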
KEYWORDS: Image segmentation, 3D modeling, Retina, Image processing algorithms and systems, Detection and tracking algorithms, Signal to noise ratio, 3D image processing, Image contrast enhancement, Medical image reconstruction, Medical image processing
Purpose: Spectral Domain Optical Coherence Tomography (SD-OCT) is a widely used imaging modality in retina clinics for inspecting the integrity of retinal layers in patients with age-related macular degeneration. Spectralis and Cirrus are two of the most widely used SD-OCT vendors. Due to the stark difference in intensities and signal-to-noise ratios between the images captured by the two instruments, a model trained on images from one instrument performs poorly on images from the other. Methods: In this work, we explore the performance of an algorithm trained on images obtained from the Heidelberg Spectralis device when applied to Cirrus images. Using a dataset containing Heidelberg images and Cirrus images, we address the problem of accurately segmenting images from one domain with an algorithm developed on another domain. In our approach, we use an unpaired CycleGAN-based domain adaptation network to transform the Cirrus volumes into Spectralis-like volumes before applying our trained segmentation network. Results: We show that the intensity distribution shifts towards the Spectralis domain when we domain-adapt Cirrus images to Spectralis images. Our results show that the segmentation model performs significantly better on the domain-translated volumes (Total Retinal Volume Error: 0.17±0.27 mm³, RPEDC Volume Error: 0.047±0.05 mm³) than on the raw volumes (Total Retinal Volume Error: 0.26±0.36 mm³, RPEDC Volume Error: 0.13±0.15 mm³) from the Cirrus domain, and that such domain adaptation approaches are feasible solutions. Conclusions: Both our qualitative and quantitative results show that a CycleGAN domain adaptation network can be used as an efficient technique for unpaired domain adaptation between SD-OCT images generated by different devices. We show that a 3D segmentation model trained on Spectralis volumes performs better on domain-adapted Cirrus volumes than on raw Cirrus volumes.
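The inference pipeline implied by this approach can be sketched as below, under stated assumptions: a trained Cirrus-to-Spectralis CycleGAN generator is applied B-scan by B-scan, and the resulting Spectralis-like volume is fed to the Spectralis-trained 3D segmentation model. The stand-in networks, the [-1, 1] scaling, and the tensor shapes are hypothetical placeholders, not details from the paper.

```python
# Minimal inference sketch (assumptions throughout): domain-adapt a Cirrus
# volume with a CycleGAN generator, then segment it with a model trained on
# Spectralis volumes.
import torch

@torch.no_grad()
def segment_cirrus_volume(cirrus_volume, generator, segmenter):
    """cirrus_volume: (n_bscans, depth, width) float tensor in [0, 1].
    Returns per-voxel layer logits from the Spectralis-trained segmenter."""
    # 1) Domain-adapt slice by slice: Cirrus -> Spectralis-like intensities.
    adapted = []
    for bscan in cirrus_volume:                     # (depth, width)
        x = bscan[None, None] * 2 - 1               # scale to [-1, 1] (assumed)
        fake_spectralis = generator(x)              # G: Cirrus -> Spectralis
        adapted.append((fake_spectralis[0, 0] + 1) / 2)
    adapted_volume = torch.stack(adapted)           # (n_bscans, depth, width)

    # 2) Segment the domain-adapted volume with the Spectralis-trained 3D model.
    logits = segmenter(adapted_volume[None, None])  # (1, n_layers, D, H, W)
    return logits[0]

# Stand-in modules so the sketch runs; in practice these would be the trained
# CycleGAN generator and the 3D segmentation network.
generator = torch.nn.Conv2d(1, 1, 3, padding=1).eval()
segmenter = torch.nn.Conv3d(1, 4, 3, padding=1).eval()
volume = torch.rand(49, 496, 512)                   # fake Cirrus volume
layer_logits = segment_cirrus_volume(volume, generator, segmenter)
```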