No-reference image quality assessment (NR-IQA) aims to predict image quality consistently with subjective scores, without any prior knowledge of reference images. However, contrast distortion, a less common distortion type, has been largely overlooked. To address this issue, we develop an NR-IQA metric that predicts the quality of contrast-altered images using deep-learning techniques. We adopt a two-stage training strategy to bridge the gap between deep learning's demand for training samples and the limited size of datasets in the IQA domain. A deep convolutional neural network (CNN) is first designed and pretrained on a classification task with the help of an additional synthetic contrast-distorted dataset. The pretrained CNN is then fine-tuned on the target IQA dataset in an end-to-end manner. During fine-tuning, an effective pooling method maps the image representation to a subjective quality score. Experimental results on five public IQA databases containing contrast-altered images show that the proposed method achieves competitive accuracy and good generalization ability compared with other NR-IQA methods.
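The abstract does not specify the exact pooling method used in the fine-tuning stage, but a common choice in CNN-based IQA is to predict a quality score per image patch and pool the patch scores into one image-level score. The sketch below illustrates that idea with simple (optionally weighted) average pooling; the weighting scheme is an assumption, not the paper's method.

```python
import numpy as np

def pool_patch_scores(patch_scores, weights=None):
    """Aggregate patch-level quality predictions into one image score.

    Plain or weighted average pooling -- an illustrative assumption;
    the paper's exact pooling strategy is not given in the abstract.
    """
    patch_scores = np.asarray(patch_scores, dtype=float)
    if weights is None:
        # Unweighted mean over all patches.
        return float(patch_scores.mean())
    weights = np.asarray(weights, dtype=float)
    # Normalized weighted average (weights could reflect patch saliency).
    return float(np.dot(weights, patch_scores) / weights.sum())
```

A weighted variant lets more salient patches contribute more to the final score, which matters for spatially non-uniform distortions such as local contrast changes.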
Classical image perceptual quality assessment models usually resort to natural scene statistics, which rest on the assumption that certain reliable statistical regularities hold for undistorted images and are corrupted by introduced distortions. However, these models often fail to accurately predict the degradation severity of images captured in realistic scenarios, because such images typically contain complex, multiple, and interacting authentic distortions. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from the filter banks of multiple convolutional layers are aggregated into an image representation. An easy-to-implement and effective feature selection strategy further refines this representation, and a linear support vector regression model is then trained to map the image representation to subjective perceptual quality scores. Experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.
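The pipeline described above can be sketched end to end: pool feature maps from several convolutional layers into one vector, select a subset of informative dimensions, and fit a linear regressor to subjective scores. The specifics below are assumptions for illustration: global average pooling as the aggregation, variance ranking as the feature selection strategy, and ordinary least squares as a stand-in for the paper's linear SVR.

```python
import numpy as np

def aggregate_layer_features(layer_maps):
    """Concatenate global-average-pooled channels from multiple conv layers.

    layer_maps: list of (C, H, W) arrays, one per convolutional layer.
    Global average pooling is an assumed aggregation choice.
    """
    return np.concatenate(
        [m.reshape(m.shape[0], -1).mean(axis=1) for m in layer_maps]
    )

def select_by_variance(features, k):
    """Keep the indices of the k highest-variance feature dimensions.

    features: (N, D) matrix of image representations.
    Variance ranking is one simple, easy-to-implement selection strategy;
    the paper's actual criterion is not stated in the abstract.
    """
    idx = np.argsort(features.var(axis=0))[::-1][:k]
    return np.sort(idx)

def fit_linear_regressor(X, y):
    """Least-squares fit of scores y to representations X (with bias).

    A stand-in for the linear support vector regression in the abstract.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    """Map image representations to predicted quality scores."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w
```

In practice the selected column indices from `select_by_variance` would be applied to both training and test representations before fitting and predicting.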