Paper
No-reference video quality assessment of H.264 video streams based on semantic saliency maps
24 January 2012
H. Boujut, J. Benois-Pineau, T. Ahmed, O. Hadar, P. Bonnet
Author Affiliations +
Proceedings Volume 8293, Image Quality and System Performance IX; 82930T (2012) https://doi.org/10.1117/12.905379
Event: IS&T/SPIE Electronic Imaging, 2012, Burlingame, California, United States
Abstract
This paper contributes to no-reference (NR) video quality assessment of HD video broadcast over IP networks and DVB. In this work we enhance our bottom-up spatio-temporal saliency map model by taking the semantics of the visual scene into account. We thus propose a new saliency map model based on face detection, which we call the semantic saliency map. A new fusion method is proposed to merge the bottom-up saliency maps with the semantic saliency map. We show that our NR metric WMBER, weighted by the spatio-temporal-semantic saliency map, yields better results than the WMBER weighted by the bottom-up spatio-temporal saliency map alone. Tests are performed on two H.264/AVC video databases for video quality assessment over lossy networks.
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
H. Boujut, J. Benois-Pineau, T. Ahmed, O. Hadar, and P. Bonnet "No-reference video quality assessment of H.264 video streams based on semantic saliency maps", Proc. SPIE 8293, Image Quality and System Performance IX, 82930T (24 January 2012); https://doi.org/10.1117/12.905379
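The abstract describes building a face-based semantic saliency map, fusing it with a bottom-up spatio-temporal map, and using the fused map to weight a macroblock-level error measure (WMBER). The following is a minimal illustrative sketch of that idea, not the authors' implementation: the function names (build_semantic_saliency, fuse_saliency_maps, weighted_mb_error_rate), the Gaussian weighting of face regions, the convex-combination fusion rule, and the exact form of the weighted error rate are all assumptions made for illustration; face boxes are taken as input from any detector.

```python
import numpy as np

def build_semantic_saliency(shape, face_boxes, sigma_scale=0.5):
    """Illustrative semantic saliency map: one Gaussian blob per detected face.

    `shape` is (height, width); `face_boxes` are (x, y, w, h) rectangles from
    any face detector. The Gaussian spread and normalization are assumptions,
    not the paper's exact formulation.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros(shape, dtype=np.float64)
    for (x, y, bw, bh) in face_boxes:
        cx, cy = x + bw / 2.0, y + bh / 2.0
        sx, sy = sigma_scale * bw, sigma_scale * bh
        sal += np.exp(-(((xs - cx) / sx) ** 2 + ((ys - cy) / sy) ** 2) / 2.0)
    return sal / sal.max() if sal.max() > 0 else sal

def fuse_saliency_maps(bottom_up, semantic, alpha=0.5):
    """Hypothetical fusion rule: convex combination of the two normalized maps."""
    bu = bottom_up / bottom_up.max() if bottom_up.max() > 0 else bottom_up
    return alpha * bu + (1.0 - alpha) * semantic

def weighted_mb_error_rate(error_mask, saliency, mb_size=16):
    """Sketch of a saliency-weighted macroblock error rate.

    `error_mask` is a boolean per-macroblock grid marking lost or erroneous
    macroblocks; each macroblock is weighted by the mean fused saliency over
    its pixels. This is only an assumed WMBER-style form.
    """
    rows, cols = error_mask.shape
    weights = np.zeros((rows, cols), dtype=np.float64)
    for i in range(rows):
        for j in range(cols):
            block = saliency[i * mb_size:(i + 1) * mb_size,
                             j * mb_size:(j + 1) * mb_size]
            weights[i, j] = block.mean() if block.size else 0.0
    total = weights.sum()
    return float((weights * error_mask).sum() / total) if total > 0 else 0.0
```

In practice the bottom-up map would come from the spatio-temporal saliency model and the face boxes from a standard detector; with this weighting, macroblock losses that fall outside salient regions (e.g., away from faces) contribute less to the quality score than losses inside them.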
CITATIONS
Cited by 6 scholarly publications and 1 patent.
KEYWORDS
Video
Visualization
Databases
Facial recognition systems
Molybdenum
Semantic video
Visual process modeling