Color retinal images are used, manually or automatically, for the diagnosis and monitoring of retinal diseases. They exhibit large luminosity and contrast variability within and across images due to natural variations in retinal pigmentation and complex imaging setups. Since image quality may affect the performance of automatic screening tools, various normalization methods have been developed to make the data uniform before any further analysis or processing. In this paper we propose a new, reliable method that removes non-uniform illumination in retinal images and improves their contrast based on the contrast of a reference image. Non-uniform illumination is removed by normalizing the luminance image using its local mean and standard deviation. Contrast is then enhanced by shifting the histograms of the uniformly illuminated retinal image toward the histograms of the reference image so that their peaks align. This process improves contrast without changing the inter-correlation of pixels across color channels. In keeping with the way humans perceive color, normalization is performed in the perceptually uniform LUV color space. The proposed method was tested extensively on a large dataset of retinal images exhibiting various pathologies, such as exudates, lesions, hemorrhages, and cotton-wool spots, under different illumination conditions and imaging setups. Results show that the proposed method successfully equalizes illumination and enhances the contrast of retinal images without introducing artifacts.
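The core normalization step described above, subtracting the local mean and dividing by the local standard deviation of the luminance channel, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size, padding mode, and epsilon are illustrative assumptions, and the histogram-shifting and LUV conversion steps are omitted.

```python
import numpy as np

def box_mean(img, win):
    """Local mean over a win x win window via a padded integral image
    (win is assumed odd so the window centers on each pixel)."""
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    h, w = img.shape
    s = (c[win:win + h, win:win + w] - c[:h, win:win + w]
         - c[win:win + h, :w] + c[:h, :w])
    return s / (win * win)

def normalize_luminance(L, win=31, eps=1e-6):
    """Remove slowly varying illumination from a luminance image by
    subtracting the local mean and dividing by the local standard
    deviation, as in the normalization step described above."""
    L = L.astype(float)
    mean = box_mean(L, win)
    var = np.clip(box_mean(L * L, win) - mean * mean, 0.0, None)
    return (L - mean) / (np.sqrt(var) + eps)
```

A constant (perfectly uniform) image maps to all zeros, since every pixel equals its local mean; a smooth illumination gradient is likewise flattened, while local structure (vessels, lesions) is preserved relative to its neighborhood.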
This paper presents an optimization method to reduce blocking artifacts in JPEG images by utilizing image gradient information. A closed-form solution is derived for the optimization problem. To keep the large matrices involved in the closed-form solution computationally tractable, a sliding-window approach is devised. The performance of the developed method is compared with several blocking-artifact reduction methods in the literature, and also with the deblocking filter deployed in High Efficiency Video Coding, using three measures: peak signal-to-noise ratio (PSNR), the generalized block-edge impairment metric (MGBIM), and structural similarity (SSIM). The comparison results indicate the effectiveness of the introduced method, particularly for low bit-rate JPEG images.
Glaucoma is a potentially blinding optic neuropathy that results in decreased visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM), in which EM estimates the model parameters, to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into three clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2, and 3, the optimal number of PCA-identified axes was 2, 2, and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucomatous eyes and identified familiar glaucomatous patterns of loss.
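The GEM-then-PCA pipeline can be sketched with a minimal EM fit followed by an SVD-based PCA. This is a simplified illustration only: it uses spherical (isotropic) Gaussian components and synthetic 52-dimensional stand-ins for FDT sensitivity vectors, since neither the study data nor the paper's exact covariance structure is reproduced here.

```python
import numpy as np

def fit_gmm_em(X, k, n_iter=50, seed=0):
    """Minimal EM for a spherical Gaussian mixture: E-step computes
    responsibilities, M-step updates weights, means, and variances."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Farthest-point initialization so the starting means are spread out.
    mu = [X[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([((X - m) ** 2).sum(-1) for m in mu], axis=0)
        mu.append(X[np.argmax(d2)])
    mu = np.array(mu)
    var = np.full(k, X.var())          # per-component spherical variance
    pi = np.full(k, 1.0 / k)           # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities from log Gaussian densities.
        logp = (np.log(pi)
                - 0.5 * d * np.log(2 * np.pi * var)
                - ((X[:, None, :] - mu[None]) ** 2).sum(-1) / (2 * var))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood parameter updates.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = np.array([(r[:, j] * ((X - mu[j]) ** 2).sum(-1)).sum()
                        / (d * nk[j]) for j in range(k)])
    return pi, mu, var, r.argmax(axis=1)

def principal_axes(X, n_axes):
    """Leading PCA axes ("patterns") of one cluster via SVD of the
    centered cluster data, as in the per-cluster decomposition above."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:n_axes]

# Synthetic stand-ins: "normal" fields near 30 dB sensitivity,
# "abnormal" fields with depressed sensitivity (illustrative values).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(30.0, 1.5, (200, 52)),
               rng.normal(22.0, 3.0, (150, 52))])
pi, mu, var, labels = fit_gmm_em(X, k=2)
axes = principal_axes(X[labels == labels[0]], n_axes=2)
```

In practice one would choose the number of components (the study found three) with a model-selection criterion such as BIC, and the number of PCA axes per cluster by explained variance; both are fixed here for brevity.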