Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the
diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma
progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more
recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been
reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a
multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs
and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the
retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from
both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or
background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion
(by subject). A significant improvement in classification accuracy is obtained using the multimodal approach
over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
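As a rough illustration of the classification step, the sketch below concatenates per-pixel feature vectors from the registered fundus and SD-OCT data and applies a k-nearest-neighbor classifier; the array layout, label encoding, and neighborhood size are assumptions for illustration, not the exact configuration used in the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_pixels(fundus_train, oct_train, labels, fundus_test, oct_test, k=15):
    """Classify each pixel as background (0), rim (1), or cup (2).

    fundus_* and oct_* are (n_pixels, n_features) arrays of per-pixel features
    extracted from the registered fundus photograph and SD-OCT data.
    """
    # Multimodal approach: concatenate the feature vectors of both modalities.
    X_train = np.hstack([fundus_train, oct_train])
    X_test = np.hstack([fundus_test, oct_test])

    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, labels)
    return knn.predict(X_test)
```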
While efficient graph-theoretic approaches exist for the optimal (with respect to a cost function) and simultaneous
segmentation of multiple surfaces within volumetric medical images, the appropriate design of cost functions
remains an important challenge. Previously proposed methods have used simple cost functions or optimized combinations thereof, but little has been done to design cost functions, in a less biased fashion, from features learned from a training set. Here, we present a method to design cost functions for the simultaneous segmentation
of multiple surfaces using the graph-theoretic approach. Classified texture features were used to create probability
maps, which were incorporated into the graph-search approach. The efficacy of such an approach was tested on 10 optic nerve head-centered optical coherence tomography (OCT) volumes obtained from 10 subjects who presented with glaucoma. The mean unsigned border position error was computed with respect to the average of manual tracings from two independent observers and compared to our previously reported results. A significant improvement was noted in the overall mean, which decreased from 9.25 ± 4.03 μm to 6.73 ± 2.45 μm (p < 0.01) and is comparable to the inter-observer variability of 8.85 ± 3.85 μm.
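The sketch below illustrates one simple way to turn a learned probability map into graph-search costs, assuming a negative-log transform; the actual cost formulation and graph construction used in the study may differ.

```python
import numpy as np

def probability_to_cost(prob_map, eps=1e-6):
    """Convert a per-voxel surface-probability map (values in [0, 1]) into an
    on-surface cost volume: voxels the classifier considers likely to lie on
    the surface receive low cost."""
    prob = np.clip(prob_map, eps, 1.0)
    return -np.log(prob)

# Toy example: a 3-D probability volume (z, x, y) for one surface, whose costs
# would be assigned to the corresponding graph nodes before computing the
# minimum-cost closed set in the multi-surface graph search.
prob = np.random.rand(64, 32, 32)
cost = probability_to_cost(prob)
```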
The shape of the optic nerve head (ONH) is reconstructed automatically from stereo color fundus images by a robust stereo matching algorithm; such a reconstruction is needed for a quantitative estimate of the amount of nerve fiber loss in patients with glaucoma. Compared to natural-scene stereo, fundus images are noisy because of the limits on illumination conditions and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multiscale pixel feature vectors that are robust to noise are formulated using a combination of pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity-based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale
space. Optical coherence tomography (OCT) data were collected at the same time, and depth information from 3-D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates of the shape of the ONH that are close to the OCT-based shape, and it shows great potential to aid computer-aided diagnosis of glaucoma and other related retinal diseases.
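The following sketch illustrates the general idea of multiscale intensity-plus-gradient feature vectors and a disparity-based matching score; the scales, gradient operator, and sum-of-squared-differences score are illustrative assumptions rather than the exact formulation of the algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def pixel_features(img, scales=(1.0, 2.0, 4.0)):
    """Stack intensity and gradient-magnitude channels across scales,
    giving an (H, W, 2 * n_scales) feature image."""
    channels = []
    for s in scales:
        smoothed = gaussian_filter(img.astype(float), sigma=s)
        grad = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
        channels.extend([smoothed, grad])
    return np.stack(channels, axis=-1)

def matching_cost(feat_left, feat_right, row, col, disparity):
    """Sum-of-squared-differences between the feature vectors of a
    candidate left/right correspondence."""
    diff = feat_left[row, col] - feat_right[row, col - disparity]
    return float(np.sum(diff ** 2))
```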
Glaucoma is a group of diseases that can cause vision loss and blindness due to gradual damage to the optic nerve. The ratio of the optic disc cup to the optic disc is an important structural indicator for assessing the presence of glaucoma. The purpose of this study is to develop and evaluate a method that can segment the optic disc cup and neuroretinal rim in spectral-domain OCT scans centered on the optic nerve head. Our method
starts by segmenting 3 intraretinal surfaces using a fast multiscale 3-D graph search method. Based on one of
the segmented surfaces, the retina of the OCT volume is flattened to have a consistent shape across scans and
patients. Selected features derived from OCT voxel intensities and intraretinal surfaces were used to train a
k-NN classifier that can determine which A-scans in the OCT volume belong to the background, optic disc cup
and neuroretinal rim. Through 3-fold cross-validation with a training set of 20 optic nerve head-centered OCT
scans (10 right eye scans and 10 left eye scans from 10 glaucoma patients) and a testing set of 10 OCT scans (5
right eye scans and 5 left eye scans from 5 different glaucoma patients), segmentation results of the optic disc
cup and rim for all 30 OCT scans were obtained. The average unsigned errors of the optic disc cup and rim were
1.155 ± 1.391 pixels (0.035 ± 0.042 mm) and 1.295 ± 0.816 pixels (0.039 ± 0.024 mm), respectively.
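As a simplified illustration of the flattening step, the sketch below shifts each A-scan so that a segmented reference surface becomes flat; the reference depth and array layout are assumptions for illustration only.

```python
import numpy as np

def flatten_volume(volume, ref_surface, target_depth=None):
    """volume      : (Z, X, Y) OCT intensity volume
       ref_surface : (X, Y) z-position of the segmented reference surface
       Returns a volume in which the reference surface lies at target_depth."""
    z, x, y = volume.shape
    if target_depth is None:
        target_depth = int(np.median(ref_surface))
    flat = np.zeros_like(volume)
    for ix in range(x):
        for iy in range(y):
            # Shift the A-scan so the reference surface lands on target_depth.
            shift = target_depth - int(ref_surface[ix, iy])
            flat[:, ix, iy] = np.roll(volume[:, ix, iy], shift)
    return flat
```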
KEYWORDS: Independent component analysis, Retina, Signal detection, Video, Visualization, Reflectivity, Computer simulations, Detection and tracking algorithms, Functional magnetic resonance imaging, Signal to noise ratio
To overcome the difficulty of detecting loss of retinal activity, a functional-Retinal Imaging Device (f-RID) was developed. The device, which is based on a modified fundus camera, seeks to detect changes in optical signals that
reflect functional changes in the retina. Measured changes in reflectance in response to the visual stimulus are on the
order of 0.1% to 1% of the total reflected intensity level, which makes the functional signal difficult to detect by
standard methods because it is masked by other physiological signals and by noise.
In this paper, we present a new Independent Component Analysis (ICA) algorithm used to analyze video sequences from a set of experiments with different patterned stimuli in cats and humans. The ICA algorithm with priors (ICA-P) uses information about the stimulation paradigms to improve signal detection compared with traditional ICA algorithms. The results of the analysis show that we can detect signal levels as low as 0.01% of the total reflected intensity. An improvement of up to 30 dB in signal detection over traditional ICA algorithms is also achieved. The study found that in more than 80% of the in-vivo experiments the effects of the patterned stimuli on the retina could be detected and extracted.
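The ICA-P algorithm itself is not reproduced here; as a hedged surrogate, the sketch below runs standard FastICA on the video and then uses the known stimulation paradigm to select the component that best matches it, which captures the spirit of using the paradigm as a prior without claiming to be the published algorithm.

```python
import numpy as np
from sklearn.decomposition import FastICA

def select_stimulus_component(frames, stimulus, n_components=10):
    """frames   : (n_frames, H, W) video of the retina
       stimulus : (n_frames,) 0/1 stimulation paradigm used as the prior"""
    n_frames = frames.shape[0]
    X = frames.reshape(n_frames, -1)       # each frame is one observation
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(X)         # temporal sources, (n_frames, n_components)
    # Prior knowledge of the paradigm: keep the source best correlated with it.
    corr = [abs(np.corrcoef(sources[:, i], stimulus)[0, 1]) for i in range(n_components)]
    best = int(np.argmax(corr))
    spatial_map = ica.mixing_[:, best].reshape(frames.shape[1:])
    return sources[:, best], spatial_map
```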
In the early stages of some retinal diseases, such as glaucoma, loss of retinal activity may be difficult to detect with today's clinical instruments. Many of today's instruments focus on detecting changes in anatomical structures, such as the nerve fiber layer. Our device, which is based on a modified fundus camera, seeks to detect changes in optical signals that reflect functional changes in the retina. The functional imager uses a patterned stimulus at a wavelength of 535 nm. An intrinsic functional signal is collected at a near-infrared wavelength. Measured changes in reflectance in response to the visual stimulus are on the order of 0.1% to 1% of the total reflected intensity level, which makes the functional signal difficult to detect by standard methods because it is masked by other physiological signals and by imaging system noise. In this paper, we analyze the video sequences from a set of 60 experiments with different patterned stimuli in cats. Using a set of statistical techniques known as Independent Component Analysis (ICA), we estimate the signals present in the videos. Through controlled simulation experiments, we quantify the signal-strength limits for detecting the physiological signal of interest. The results of the analysis show that, in principle, signal levels of 0.1% (-30 dB) can be detected. The study found that in 86% of the animal experiments the effects of the patterned stimuli on the retina could be detected and extracted. The analysis of the different responses extracted from the videos can give insight into the functional processes present during the stimulation of the retina.
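A toy version of such a controlled simulation might embed a synthetic response of known relative amplitude into baseline frames, as sketched below; the 10·log10 dB convention follows the 0.1% ≈ -30 dB figure quoted above, and all array shapes and names are illustrative.

```python
import numpy as np

def embed_signal(baseline_frames, pattern, stimulus, relative_level=0.001):
    """baseline_frames : (n_frames, H, W) reflectance video without stimulation
       pattern         : (H, W) 0/1 layout of the stimulated retinal region
       stimulus        : (n_frames,) 0/1 stimulation time course"""
    mean_level = baseline_frames.mean()
    # Synthetic response: a fixed fraction of the mean reflectance, switched
    # on and off by the stimulus time course over the patterned region.
    signal = relative_level * mean_level * stimulus[:, None, None] * pattern[None, :, :]
    level_db = 10 * np.log10(relative_level)   # 0.001 -> -30 dB, as quoted above
    print(f"embedded signal at {relative_level:.2%} of mean reflectance ({level_db:.0f} dB)")
    return baseline_frames + signal
```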
Feature extraction is a critical preprocessing step that influences the outcome of the entire process of developing meaningful metrics for medical image evaluation. The purpose of this paper is first to compare an optimized statistical feature-extraction methodology with a well-designed combination of point operations for feature extraction at the preprocessing stage of retinal images, with the goal of developing useful diagnostic metrics for retinal diseases such as glaucoma and diabetic retinopathy. Segmentation of the extracted features allows us to investigate the effect of occlusion induced by these features on stereo disparity mapping and 3-D visualization of the optic cup/disc. Segmentation of retinal blood vessels also has significant application in generating precise vessel-diameter metrics for monitoring the progression of vascular diseases such as hypertension and diabetic retinopathy.
KEYWORDS: Principal component analysis, Retina, Reflectivity, Video, Independent component analysis, Signal detection, Signal processing, Visualization, Optical imaging, Algorithm development
An optical imaging device of retina function (OID-RF) has been developed to measure changes in blood oxygen saturation due to neural activity resulting from visual stimulation of the photoreceptors in the human retina. The video data that are collected represent a mixture of the functional signal in response to the retinal activation and other signals from undetermined physiological activity. Measured changes in reflectance in response to the visual stimulus are on the order of 0.1% to 1.0% of the total reflected intensity level, which makes the functional signal difficult to detect by standard methods since it is masked by the other signals that are present. In this paper, we apply principal component analysis (PCA), blind source separation (BSS) using Extended Spatial Decorrelation (ESD), and independent component analysis (ICA) using the Fast-ICA algorithm to extract the functional signal from the retinal videos. The results revealed that the functional signal in a stimulated retina can be detected through the application of some of these techniques.
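As an illustration of the PCA step, the sketch below treats each frame as an observation and each pixel as a variable, returning temporal weights and spatial component maps; the number of components is an arbitrary choice (it must not exceed the number of frames), and the ESD and Fast-ICA decompositions are analogous but not repeated here.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_decompose(frames, n_components=10):
    """frames : (n_frames, H, W) video from the imaging device.
       Returns temporal weights, spatial component maps, and the fraction of
       variance explained by each component."""
    n_frames, h, w = frames.shape
    X = frames.reshape(n_frames, h * w)   # frames as observations, pixels as variables
    pca = PCA(n_components=n_components)
    temporal = pca.fit_transform(X)       # (n_frames, n_components)
    spatial = pca.components_.reshape(n_components, h, w)
    return temporal, spatial, pca.explained_variance_ratio_
```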
An optical imaging device of retina function (OID-RF) has been constructed to record changes in reflected 700-nm light from the fundus caused by retinal activation in response to a visual 535-nm stimulus. The resulting images reveal areas of the retina activated by visual stimulation. This device is a modified fundus camera designed to provide a patterned, moving visual stimulus over a 45-degree field of view to the subject in the green wavelength portion of the visual spectrum while simultaneously imaging the fundus in another, longer wavelength range. Data were collected from 3 normal subjects and recorded for 13 seconds at 4 Hz; 3 seconds were recorded during pre-stimulus baseline, 5 seconds during the stimulus, and 5 seconds post-stimulus. This procedure was repeated several times and, after image registration, the images were averaged to improve the signal-to-noise ratio. The change in reflected intensity from the retina due to the stimulus was then calculated by comparison to the pre-stimulus state. Reflected intensity from areas of stimulated retina began to increase steadily within 1 second after stimulus onset and decayed after stimulus offset. These results indicated that a functional optical signal can be recorded from the human eye.
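A minimal sketch of the reflectance-change computation described above, assuming registered frames at 4 Hz with a 3-second pre-stimulus baseline; variable names and shapes are illustrative.

```python
import numpy as np

def reflectance_change(frames, frame_rate=4.0, baseline_s=3.0):
    """frames : (n_frames, H, W) registered, averaged fundus frames.
       Returns the fractional change (R - R0) / R0 for every frame, where R0
       is the mean pre-stimulus image."""
    n_baseline = int(round(frame_rate * baseline_s))
    baseline = frames[:n_baseline].mean(axis=0)   # mean pre-stimulus image R0
    return (frames - baseline[None, :, :]) / baseline[None, :, :]
```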