Video quality assessment (VQA) has become a hot topic due to the rapidly increasing demand from related video applications. The existing state-of-the-art full reference (FR) VQA metric ViS3 first adapts the Most Apparent Distortion (MAD) algorithm to capture spatial distortion, and then quantifies spatiotemporal distortion using spatiotemporal correlation and an HVS-based model built from spatiotemporal slice (STS) images. In this paper we argue that STS images alone provide enough information for measuring video distortion. Taking advantage of GMSD, an effective and easily applied FR image quality model, we propose to measure video quality by analysing the structural changes between the STS images of reference videos and their distorted counterparts. This new VQA model is denoted STS-GMSD. To further investigate the influence of spatial dissimilarity, we also combine a frame-by-frame spatial GMSD factor with STS-GMSD and propose another VQA model, named SSTS-GMSD. Extensive experimental evaluations on two benchmark video quality databases demonstrate that the proposed STS-GMSD outperforms existing state-of-the-art FR-VQA methods, while performing on par with SSTS-GMSD, which validates that STS images contain enough information for FR-VQA model design.
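GMSD compares the gradient magnitudes of the two images pixel by pixel and pools the similarity map by its standard deviation. A minimal sketch of that index, which STS-GMSD applies to STS images rather than to frames (the Prewitt filters and the constant c = 170 follow the published GMSD formulation for 8-bit images):

```python
import numpy as np
from scipy.ndimage import convolve

def gmsd(ref, dist, c=170.0):
    """Gradient Magnitude Similarity Deviation between two grayscale
    images in [0, 255]. Lower is better; 0 means identical gradients."""
    # Prewitt gradient filters, as in the original GMSD formulation
    hx = np.array([[1/3, 0, -1/3]] * 3)
    hy = hx.T
    gr = np.hypot(convolve(ref, hx), convolve(ref, hy))
    gd = np.hypot(convolve(dist, hx), convolve(dist, hy))
    # pixel-wise gradient magnitude similarity, then std-dev pooling
    gms = (2.0 * gr * gd + c) / (gr**2 + gd**2 + c)
    return gms.std()
```

For STS-GMSD, the same function would be evaluated on horizontal and vertical spatiotemporal slices of the video volume instead of on individual frames.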
In previous work, the LoG (Laplacian of Gaussian) signal, the earliest-stage output of the human visual neural
system, was suggested to be useful in image quality assessment (IQA) model design. That work argued that the LoG
signal carries crucial structural information for IQA in the positions of its zero-crossings and proposed a Non-shift Edge
(NSE) based IQA model. In this study, we focus on another property of the LoG signal, namely that LoG whitens
the power spectrum of natural images. Our question is: when exposed to unnatural images, specifically distorted
images, how does the HVS whiten such signals? In this paper, we first investigate the whitening filters for
natural and distorted images respectively, and then show that LoG also acts, to some extent, as a whitening filter for distorted
images. Based on this fact, we deploy the LoG signal in IQA model design by applying two
very simple distance metrics, i.e., the MSE (mean squared error) and the correlation. The proposed models are analyzed
in terms of their evaluation performance on three subjective databases. The experimental results validate the usability of
the LoG signal in IQA model design and show that the proposed models are on par with state-of-the-art IQA models.
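The two distance metrics named above are simple enough to sketch directly on LoG responses. This is a minimal illustration, not the paper's exact pipeline; the filter scale sigma = 1.5 is an assumed parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_mse(ref, dist, sigma=1.5):
    """MSE between the LoG responses of reference and distorted images."""
    lr = gaussian_laplace(ref, sigma)
    ld = gaussian_laplace(dist, sigma)
    return np.mean((lr - ld) ** 2)

def log_corr(ref, dist, sigma=1.5):
    """Pearson correlation between the two LoG responses (1 = identical
    structure, lower values indicate structural distortion)."""
    lr = gaussian_laplace(ref, sigma).ravel()
    ld = gaussian_laplace(dist, sigma).ravel()
    return np.corrcoef(lr, ld)[0, 1]
```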
Measurement of visual quality is of fundamental importance for numerous image and video processing applications. This
paper presents a novel and concise reduced reference (RR) image quality assessment (IQA) method. For the first time,
statistics of local binary patterns (LBP) are introduced as a similarity measure to form an RR IQA
method. With this method, first, the test image is decomposed with a multi-scale transform. Second, LBP encoding
maps are extracted for each of the subband images. Third, histograms are extracted from the LBP encoding maps to form
the RR features. In this way, the image structure primitive information needed for RR feature extraction can be reduced greatly,
so the resulting RR IQA method requires at most 56 RR features. The experimental results on two large-scale
IQA databases show that LBP statistics are fairly robust and reliable for the RR IQA task. The proposed methods show
strong correlations with subjective quality evaluations.
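The LBP encoding step can be sketched as follows. This is a single-scale plain LBP as an illustration only; the paper applies the encoding to multi-scale subband images and keeps at most 56 features in total:

```python
import numpy as np

def lbp_histogram(img):
    """Normalised histogram of basic 8-neighbour LBP codes for one image
    (or one subband). Each pixel is encoded by comparing it with its 8
    neighbours, giving a code in [0, 255]."""
    H, W = img.shape
    c = img[1:-1, 1:-1]                      # interior pixels only
    codes = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

The RR features are then the histograms themselves (or a compressed version of them), compared between reference and test images.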
Reduced reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its
fidelity to human perception and its flexibility in practice. A promising RR metric should predict the perceptual
quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented,
whose novelty lies in two aspects. Firstly, it measures the image redundancy by calculating the so-called Sub-image
Similarity (SIS), and the image quality is measured by comparing the SIS between the reference image and the test
image. Secondly, the SIS is computed by the ratios of NSE (Non-shift Edge) between pairs of sub-images. Experiments
on two IQA databases (i.e., the LIVE and CSIQ databases) show that, using only 6 features, the proposed metric can work
very well with high correlations between the subjective and objective scores. In particular, it works consistently well
across all the distortion types.
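The sub-image decomposition and the 6-feature count can be illustrated with a small sketch. Here the four sub-images come from an assumed 2x2 polyphase split, and plain correlation stands in for the paper's NSE-ratio similarity; both substitutions are assumptions for illustration:

```python
import numpy as np

def sub_images(img):
    """Four sub-images from the 2x2 downsampling offsets
    (an assumed decomposition; the paper's split may differ)."""
    return [img[i::2, j::2] for i in (0, 1) for j in (0, 1)]

def sis(img):
    """Sub-image Similarity vector: pairwise similarity between the four
    sub-images. Correlation is used here as a stand-in for the NSE ratio;
    4 sub-images give C(4,2) = 6 pairwise features, matching the
    abstract's 6-feature budget."""
    subs = sub_images(img)
    h = min(s.shape[0] for s in subs)
    w = min(s.shape[1] for s in subs)
    subs = [s[:h, :w].ravel() for s in subs]
    return np.array([np.corrcoef(subs[i], subs[j])[0, 1]
                     for i in range(4) for j in range(i + 1, 4)])
```

Quality would then be predicted by comparing the 6-element SIS vectors of the reference and test images.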
Recently, research on objective image quality assessment (IQA) has gained much attention due to its wide application
prospects. Among IQA approaches, reduced-reference (RR) methods estimate the perceptual quality of distorted images using
partial information from the reference images. This paper proposes a novel universal RR-IQA metric based on the
statistics of edge patterns. Firstly, binary edge maps of the reference and distorted images are created by the LoG
operator and zero-crossing detection. Based on these maps, 15 groups of typical edge patterns are extracted, and their
statistical distributions are calculated for the reference and distorted images respectively. The proposed RR-IQA metric
is obtained by computing the L1 Minkowski distance between those two distributions. We have evaluated this metric on
six publicly accessible subjective IQA databases. Experiments show that the proposed metric, featured with typical edge
patterns, outperforms other methods in terms of data volume, accuracy, and consistency with human perception. In a way,
our work provides a new view on IQA metric design.
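The pipeline — LoG zero-crossing edge map, pattern histogram, L1 distance — can be sketched as below. This sketch uses 2x2 patterns (16 raw bins) rather than the paper's 15 grouped pattern classes, and sigma = 1.5 is an assumed filter scale:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def edge_map(img, sigma=1.5):
    """Binary edge map from LoG zero-crossings: a pixel is an edge if
    the LoG sign flips toward its right or lower neighbour."""
    s = np.sign(gaussian_laplace(img, sigma))
    zc = (s[:-1, :-1] != s[1:, :-1]) | (s[:-1, :-1] != s[:-1, 1:])
    return zc.astype(np.uint8)

def pattern_hist(edges):
    """Distribution over 2x2 binary edge patterns (16 bins)."""
    p = (edges[:-1, :-1] + 2 * edges[:-1, 1:] +
         4 * edges[1:, :-1] + 8 * edges[1:, 1:])
    h, _ = np.histogram(p, bins=16, range=(0, 16))
    return h / h.sum()

def rr_distance(ref, dist):
    """L1 (order-1 Minkowski) distance between the two distributions."""
    return np.abs(pattern_hist(edge_map(ref)) -
                  pattern_hist(edge_map(dist))).sum()
```

Only the reference image's 16-bin histogram needs to be transmitted, which is what makes the metric reduced-reference.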
Research on image quality assessment (IQA) has become a hot topic in most areas concerning image processing.
An efficient IQA model with neurophysiological support is a natural goal to pursue. In this paper, we argue that
comparing the edge positions of the reference and distorted images can measure
image structural distortion well and yield an efficient IQA metric, where the edges are detected from the primitive structures
obtained by convolving the image with LoG filters. The proposed metric, called NSER, is designed following a simple
logic based on the cosine distance between primitive structures, plus two simple improvements. It is validated by
comparison against the well-known state-of-the-art IQA metrics VIF, MS-SSIM, and VSNR over six IQA databases: LIVE,
TID2008, MICT, IVC, A57, and CSIQ. Experiments show that NSER works stably across all six databases and
achieves good performance.
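The core logic — cosine distance between LoG-derived edge structures — can be sketched as follows. This is a single-scale illustration with an assumed sigma; the published NSER additionally uses multiple LoG scales and the two improvements the abstract mentions, which are omitted here:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def nser_score(ref, dist, sigma=1.5):
    """Cosine similarity between binary zero-crossing edge maps of the
    reference and distorted images. Close to 1 when edge positions are
    preserved (non-shifted); lower when distortion moves or erases edges."""
    def edges(img):
        s = np.sign(gaussian_laplace(img, sigma))
        zc = (s[:-1, :-1] != s[1:, :-1]) | (s[:-1, :-1] != s[:-1, 1:])
        return zc.ravel().astype(float)
    er, ed = edges(ref), edges(dist)
    return er @ ed / (np.linalg.norm(er) * np.linalg.norm(ed) + 1e-12)
```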
Objective image quality assessment (IQA) model investigation is a hot topic in recent times. This paper proposes a
novel and efficient universal reduced reference (RR) IQA method based on the statistics of edge
discrimination. Firstly, binary edge maps created from multi-scale wavelet transform modulus maxima are used as
the low-level feature to discriminate between the reference and distorted images for IQA purposes. Then a
gradient operator is applied to the binary map to produce a so-called edge pattern map. The histogram of the edge
pattern map is used to characterize the edge patterns of the reference and distorted images, respectively. The RR features
extracted from the histograms are used to discriminate the difference between the edge pattern maps, forming a new RR IQA
model. Compared with the typical RR model of Zhou Wang et al. (2005), only 12 features (96 bits) are needed instead
of 18 features (162 bits), with better overall performance.
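The feature-extraction step after edge detection can be sketched as below. The wavelet modulus-maxima edge detection is assumed done upstream; the choice of gradient operator and the 8-bit quantisation per bin are assumptions made to match the abstract's 12-feature, 96-bit budget:

```python
import numpy as np

def edge_pattern_features(binary_edges, n_features=12):
    """RR feature sketch: apply a gradient operator to a binary edge
    map, histogram the resulting edge-pattern values, and quantise the
    n_features normalised bins to 8 bits each (12 x 8 = 96 bits)."""
    gy, gx = np.gradient(binary_edges.astype(float))
    pattern = np.hypot(gx, gy)                 # edge pattern map
    hist, _ = np.histogram(pattern, bins=n_features)
    hist = hist / hist.sum()
    return np.round(hist * 255).astype(np.uint8)
```

Quality would then be predicted from the distance between the reference-side and distorted-side feature vectors.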