Automated detection and quantification of spatio-temporal retinal changes is an important step in objectively assessing disease progression and treatment effects for dynamic retinal diseases such as diabetic retinopathy (DR). However, detecting retinal changes caused by early DR lesions such as microaneurysms and dot hemorrhages from longitudinal pairs of fundus images is challenging due to intra- and inter-image illumination variation. This paper explores a method for automated detection of retinal changes from illumination-normalized fundus images using a deep convolutional neural network (CNN), and compares its performance with two other CNNs trained separately on color and green-channel fundus images. Illumination variation was addressed by correcting for the variability in luminosity and contrast estimated from large-scale retinal regions. The CNN models were trained and evaluated on image patches extracted from a registered fundus image set collected from 51 diabetic eyes that were screened at two different time-points. The results show that using normalized images yields better performance than color and green-channel images, suggesting that illumination normalization greatly helps CNNs to quickly and correctly learn distinctive local image features of DR-related retinal changes.
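As a rough illustration of the kind of luminosity and contrast correction described above, the following minimal sketch normalizes the green channel of a fundus image by its local mean and standard deviation estimated over a large Gaussian window; the function name, the window scale sigma, and the epsilon are illustrative assumptions, not values taken from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_illumination(green, sigma=51.0, eps=1e-6):
    """Remove slowly varying luminosity and contrast from a fundus green channel.

    green : 2-D float array in [0, 1].
    sigma : scale of the Gaussian window acting as the "large-scale retinal region".
    """
    g = green.astype(np.float64)
    luminosity = gaussian_filter(g, sigma)                   # local mean (luminosity)
    variance = gaussian_filter(g ** 2, sigma) - luminosity ** 2
    contrast = np.sqrt(np.clip(variance, 0.0, None))         # local std (contrast)
    return (g - luminosity) / (contrast + eps)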
Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and appear as round, dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low-cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique based on Singular Value Decomposition (SVD) of the fundus image. Then, a Hessian-based candidate selection algorithm is applied to extract image regions that are more likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using a support vector machine (SVM) trained on ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection technique against state-of-the-art methods, as well as the promise of the proposed descriptors for the localization of MAs in fundus images.
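To make the Hessian-based candidate selection step concrete, the sketch below uses scikit-image's determinant-of-Hessian blob detector as a stand-in for the authors' algorithm; it is not the paper's implementation, and the inversion of the green channel, the sigma range, and the threshold are all assumed values.

import numpy as np
from skimage.feature import blob_doh

def ma_candidates(green, max_sigma=8, threshold=0.002):
    """Return (row, col, scale) triples for microaneurysm-like blob candidates."""
    # Invert so the dark, round MAs give a consistent bright-blob response.
    inverted = 1.0 - green.astype(np.float64)
    blobs = blob_doh(inverted, min_sigma=1, max_sigma=max_sigma,
                     threshold=threshold)
    return blobs   # each row: y, x, estimated blob scale (sigma)

Each candidate returned this way would then be described with SURF and Radon-transform features and passed to the SVM classifier, as outlined in the abstract.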
Diabetic macular edema (DME), characterized by discrete white-yellow lipid deposits caused by vascular leakage, is one of the most severe complications seen in diabetic patients and causes vision loss in the affected areas. Such vascular leakage can be treated by laser surgery. Regular follow-up and laser photocoagulation can reduce the risk of blindness by 90%. In an automated retina screening system, it is thus crucial to segment such hard exudates accurately and to register images taken over time to a reference co-ordinate system so that the necessary follow-ups are more precise. We introduce a novel ethnicity-based statistical atlas for exudate segmentation and follow-up. Ethnic background plays a significant role in the pigmentation of the retinal pigment epithelium, the visibility of the choroidal vasculature, and the overall retinal luminance observed in patients and their retinal images. Such a statistical atlas can thus help provide a solution, simplify the image processing steps, and increase the detection rate. In this paper, bright lesion segmentation is investigated and experimentally verified against a gold standard built from African American fundus images.
Forty automatically generated landmark points on the major vessel arches, together with the macula and optic disc centers, are used to warp the retinal images. PCA is used to obtain a mean shape of the major retinal arches (both lower and upper). The mean co-ordinates of the macula and optic disc center are then added, resulting in 42 landmark points that together provide a reference co-ordinate frame (the atlas co-ordinate frame) for the images. The retinal fundus images of an ethnic group without any artifact or lesion are warped to this reference co-ordinate frame, from which we obtain a mean image representing the statistical measure of the chromatic distribution of the pigments in the eyes of that particular ethnic group.
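The atlas-building step described above could be sketched roughly as follows, assuming 42 (x, y) landmarks per image; the PCA step is simplified to a plain mean of the landmark co-ordinates, and the piecewise-affine warp stands in for whichever transform the authors actually used, so all names and choices here are illustrative.

import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def build_atlas(images, landmarks):
    """images: list of HxWx3 float arrays; landmarks: list of (42, 2) (x, y) arrays."""
    mean_shape = np.mean(np.stack(landmarks), axis=0)   # reference (atlas) landmarks
    warped = []
    for img, pts in zip(images, landmarks):
        tform = PiecewiseAffineTransform()
        tform.estimate(mean_shape, pts)    # maps atlas co-ordinates -> image co-ordinates
        warped.append(warp(img, tform))    # resample the image in the atlas frame
    # Pixel-wise mean of the warped images gives the chromatic atlas.
    return np.mean(np.stack(warped), axis=0)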
400 images of African American eyes have been used to build such a gold standard for this ethnic group. Any test image of a patient of that ethnic group is first warped to the reference frame, and a distance map is then computed with respect to the mean image. Finally, post-processing schemes are applied to the distance-map image to enhance the edges of the exudates. Multi-scale, multi-directional steerable filters combined with the Kirsch edge detector were found to be promising. Experiments with the publicly available HEI-MED dataset showed the good performance of the proposed method: we achieved a lesion localization fraction (LLF) of 82.5% at a non-lesion localization fraction (NLF) of 35% on the FROC curve.
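For the edge-enhancement stage, a minimal sketch of the Kirsch compass operator is given below: eight 3x3 directional kernels are convolved with the distance-map image and the maximum response over all directions is kept as the edge magnitude. Only the Kirsch part is shown; the steerable-filter stage and any thresholds are omitted, and the function name is an assumption.

import numpy as np
from scipy.ndimage import convolve

# Border positions of a 3x3 kernel, listed clockwise from the top-left cell.
_CLOCKWISE = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def kirsch_edges(distance_map):
    """Maximum response over the 8 Kirsch compass kernels."""
    border = np.array([5, 5, 5, -3, -3, -3, -3, -3], dtype=np.float64)
    responses = []
    for shift in range(8):                      # rotate the 5/-3 pattern around the border
        kernel = np.zeros((3, 3))
        for value, (r, c) in zip(np.roll(border, shift), _CLOCKWISE):
            kernel[r, c] = value
        responses.append(convolve(distance_map.astype(np.float64), kernel))
    return np.max(np.stack(responses), axis=0)  # strongest directional edge response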