KEYWORDS: Video, Video surveillance, Databases, Video compression, Visualization, Statistical modeling, Statistical analysis, Visual process modeling, Feature extraction, Data modeling
We propose a no-reference (NR) video quality assessment (VQA) model. Recently, 'completely blind' still-picture quality analyzers have been proposed that do not require any prior training on, or exposure to, distorted images or human opinions of them. We attempt to bridge an important but difficult gap by creating a 'completely blind' VQA model. The new approach is founded on intrinsic statistical regularities that are observed in natural videos. This results in a video 'quality analyzer' that can predict the quality of distorted videos without any external knowledge about the pristine source, anticipated distortions, or human judgments; hence, the model is zero-shot. Experimental results show that, even with such a paucity of information, the new VQA algorithm performs better than the full-reference (FR) quality measure PSNR on the LIVE VQA database. It is also fast and efficient. We envision the proposed method as an important step towards making real-time, 'completely blind' video quality monitoring feasible.
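To make the idea concrete, here is a minimal sketch of how intrinsic statistical regularities of natural videos could be quantified: frame differences are locally mean-subtracted and divisively normalized, a generalized Gaussian (GGD) shape parameter is fit by moment matching, and quality is scored as the deviation of that shape from a nominal 'natural' value. The helper names and the `natural_shape` constant are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(frame, sigma=7/6):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients of a
    2-D float (grayscale) frame."""
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame**2, sigma) - mu**2
    return (frame - mu) / (np.sqrt(np.maximum(var, 0)) + 1.0)

def ggd_shape(x):
    """Moment-matching estimate of the GGD shape parameter alpha,
    using the ratio r(alpha) = Gamma(2/a)^2 / (Gamma(1/a) Gamma(3/a))."""
    rho = np.mean(np.abs(x))**2 / np.mean(x**2)
    alphas = np.arange(0.2, 10.0, 0.001)
    r = gamma(2/alphas)**2 / (gamma(1/alphas) * gamma(3/alphas))
    return alphas[np.argmin(np.abs(r - rho))]

def blind_video_score(frames, natural_shape=1.0):
    """Score a video (list of 2-D float frames) as the mean deviation of
    frame-difference NSS from a nominal 'natural' shape.  The value of
    `natural_shape` is a hypothetical regularity constant, not one
    reported in the paper."""
    shapes = [ggd_shape(mscn(f2 - f1).ravel())
              for f1, f2 in zip(frames[:-1], frames[1:])]
    return float(np.mean(np.abs(np.array(shapes) - natural_shape)))
```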
We propose a family of image quality assessment (IQA) models based on natural scene statistics (NSS) that can predict the subjective quality of a distorted image without reference to a corresponding distortionless image, and without any training on human opinion scores of distorted images. These 'completely blind' models compete well with standard non-blind image quality indices in terms of subjective predictive performance when tested on the large, publicly available LIVE Image Quality database.
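The shared recipe behind such 'completely blind' indices can be sketched as follows, under assumptions: fit a multivariate Gaussian (MVG) to NSS feature vectors pooled from pristine natural image patches, fit another to the features of the test image, and report a distance between the two Gaussians. The names `fit_mvg` and `niqe_like_distance` are hypothetical, and the feature vectors are assumed to be NSS parameters such as the GGD fits computed in the sketch above.

```python
import numpy as np

def fit_mvg(features):
    """Fit a multivariate Gaussian (mean, covariance) to an
    (n_patches, n_features) array of NSS feature vectors."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

def niqe_like_distance(mu_nat, cov_nat, mu_test, cov_test):
    """Distance between the 'natural' model Gaussian and the test-image
    Gaussian, in the spirit of NSS-based blind quality indices; larger
    values indicate a stronger departure from natural statistics."""
    d = mu_nat - mu_test
    pooled = (cov_nat + cov_test) / 2.0
    return float(np.sqrt(d @ np.linalg.pinv(pooled) @ d))
```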
A natural scene statistics (NSS) based blind image denoising approach is proposed, in which denoising is performed without knowledge of the noise variance present in the image. We show how the noise variance can be estimated blindly, and how this blind parameter estimate can be combined with a state-of-the-art denoising algorithm to perform blind denoising. Our experiments show that, for all noise variances simulated on varied image content, our approach is almost always statistically superior to the reference BM3D implementation in terms of perceived visual quality at the 95% confidence level.
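As a self-contained illustration of the pipeline (not the paper's NSS-based estimator), the sketch below substitutes a classical blind noise estimate, Donoho's median-absolute-deviation rule on the finest diagonal wavelet subband, and hands it to an off-the-shelf wavelet denoiser in place of BM3D; a BM3D implementation would receive the same sigma estimate.

```python
import numpy as np
import pywt
from skimage.restoration import denoise_wavelet  # stand-in for BM3D

def estimate_noise_sigma(image):
    """Blind estimate of the Gaussian noise std from the finest diagonal
    wavelet subband (median absolute deviation rule).  This classical
    estimator is a stand-in for the paper's NSS-based one."""
    _, (_, _, hh) = pywt.dwt2(image, 'db2')
    return np.median(np.abs(hh)) / 0.6745

def blind_denoise(image):
    """Blind denoising: estimate sigma, then pass it to a denoiser.
    `image` is a 2-D float array with values in [0, 1]."""
    sigma = estimate_noise_sigma(image)
    return denoise_wavelet(image, sigma=sigma, rescale_sigma=True)
```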
We tracked the points-of-gaze of human observers as they viewed videos drawn from foreign films while engaged in two different tasks: (1) quality assessment and (2) summarization. Each video was subjected to three possible distortion severities using the H.264 compression standard: no compression (pristine), low compression, and high compression. We analyzed these eye-movement locations in detail, extracting local statistical features around points-of-gaze and using them to answer the following questions: (1) Are there statistical differences in the variances of points-of-gaze across videos between the two tasks? (2) Does the variance in eye movements indicate a change in viewing strategy with change in distortion severity? (3) Are statistics at points-of-gaze different from those at random locations? (4) How do local low-level statistics vary across tasks? (5) How do point-of-gaze statistics vary across distortion severities within each task?
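As one hypothetical way to operationalize question (3): compare a simple local statistic (here, patch luminance variance) at points-of-gaze against the same statistic at uniformly random locations, using a two-sample Kolmogorov-Smirnov test. The function names and the choice of statistic are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def patch_variance(frame, points, half=16):
    """Luminance variance of square patches centred on (row, col) points
    in a 2-D grayscale frame."""
    out = []
    for r, c in points:
        patch = frame[max(r - half, 0):r + half, max(c - half, 0):c + half]
        if patch.size:
            out.append(patch.var())
    return np.array(out)

def compare_gaze_vs_random(frame, gaze_points, n_random=200, seed=0):
    """Test whether local statistics at points-of-gaze differ from those
    at random locations (two-sample Kolmogorov-Smirnov test)."""
    rng = np.random.default_rng(seed)
    h, w = frame.shape
    random_points = np.column_stack([rng.integers(0, h, n_random),
                                     rng.integers(0, w, n_random)])
    return ks_2samp(patch_variance(frame, gaze_points),
                    patch_variance(frame, random_points))
```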