Presentation + Paper
No-reference image quality assessment based on residual neural networks (ResNets)
22 April 2020
Abstract
Image quality assessment (IQA) is a challenging field, as it attempts to measure the quality of an image with reference to the complex human visual system. IQA research follows three dominant strands: full-reference, reduced-reference, and no-reference image quality assessment. No-reference IQA is the hardest to achieve, since the reference images normally required to determine the quality of the given images are not available. In one of our previous papers, we quantified no-reference IQA using state-of-the-art multitasking neural networks, particularly VGG-16 and shallow neural networks, and achieved good classification accuracy for most distortions. However, one drawback of those networks was poor classification accuracy for JPEG2000-compressed images, which were incorrectly classified as blurry or noisy. In this paper, we classify compressed images more accurately using residual neural networks (ResNets). These deep learning models are built from micro-architecture modules and act as task-focused entities, each one determining the distortion type and distortion level of an artifact present in the image. The test images were obtained from the LIVE II, CSIQ, and TID2013 databases for comparison with previous work. In contrast to our previous approach, where training was limited to one specific distortion at a time, we train the collection of ResNets on all the distortion types present in the test databases. The images are preprocessed using local contrast normalization and global contrast normalization. All the hyper-parameters in the ResNets collection, such as activation functions, dropout regularization, and optimizers, are tuned to produce optimal classification accuracy. The results are evaluated with metrics such as PLCC, SROCC, and MSE; the ResNets collection achieves high linear correlation, which is compared against our previous results.
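The abstract does not spell out the preprocessing or evaluation computations. As a rough illustration only, the Python sketch below shows one common way to implement global and local contrast normalization and to compute PLCC, SROCC, and MSE against subjective quality scores using NumPy and SciPy; the function names, the Gaussian window parameter sigma, and the epsilon constant are illustrative assumptions rather than details taken from the paper.

import numpy as np
from scipy import ndimage
from scipy.stats import pearsonr, spearmanr

def global_contrast_normalize(img, eps=1e-8):
    # Subtract the global mean and divide by the global standard deviation.
    img = img.astype(np.float64)
    img = img - img.mean()
    return img / (img.std() + eps)

def local_contrast_normalize(img, sigma=3.0, eps=1e-8):
    # Divisive normalization: each pixel is centered and scaled by the mean and
    # standard deviation of its Gaussian-weighted neighborhood.
    img = img.astype(np.float64)
    local_mean = ndimage.gaussian_filter(img, sigma)
    local_var = ndimage.gaussian_filter((img - local_mean) ** 2, sigma)
    return (img - local_mean) / (np.sqrt(local_var) + eps)

def evaluate_scores(predicted, subjective):
    # PLCC, SROCC, and MSE between predicted and subjective (e.g. DMOS) scores.
    predicted = np.asarray(predicted, dtype=np.float64)
    subjective = np.asarray(subjective, dtype=np.float64)
    plcc, _ = pearsonr(predicted, subjective)
    srocc, _ = spearmanr(predicted, subjective)
    mse = float(np.mean((predicted - subjective) ** 2))
    return {"PLCC": plcc, "SROCC": srocc, "MSE": mse}

In a pipeline of this kind, the normalized images would be fed to the ResNets, and the predicted quality scores would be compared against subjective ratings (e.g., DMOS values from LIVE II) using the three metrics above.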
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ravi Ravela, Mukul Shirvaikar, and Christos Grecos "No-reference image quality assessment based on residual neural networks (ResNets)", Proc. SPIE 11401, Real-Time Image Processing and Deep Learning 2020, 114010C (22 April 2020); https://doi.org/10.1117/12.2556347
KEYWORDS
Image quality, Neural networks, Data modeling, JPEG2000, Image fusion, Databases, RGB color model