Paper
15 February 2012
Video attention deviation estimation using inter-frame visual saliency map analysis
Yunlong Feng, Gene Cheung, Patrick Le Callet, Yusheng Ji
Proceedings Volume 8305, Visual Information Processing and Communication III; 83050H (2012) https://doi.org/10.1117/12.907384
Event: IS&T/SPIE Electronic Imaging, 2012, Burlingame, California, United States
Abstract
A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., following a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that often causes a viewer to shift his attention is a "busy" video. Determining which video content is busy is an important practical problem; a busy video is difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation on, and hard for a content provider to insert additional overlays such as advertisements into, since they make the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive the transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady-state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the steady-state probability for saccade computed using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence between consecutive motion-compensated saliency maps.
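The abstract names two computational ingredients: the steady-state probability of a saccade state in a Markov model of eye gaze, and the Kullback-Leibler divergence between consecutive saliency maps. The sketch below (Python/NumPy) is a minimal illustration under stated assumptions, not the authors' actual method: the abstract does not specify how transition probabilities are derived from saliency maps, so the divergence-thresholding heuristic, the complementary-probability assumption, and the function names (kl_divergence, saccade_steady_state, estimate_vad) are hypothetical, and motion compensation of the saliency maps is omitted.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two saliency maps,
    each normalized to a probability distribution over pixels."""
    p = p.ravel() / (p.sum() + eps)
    q = q.ravel() / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def saccade_steady_state(p_track_to_saccade, p_saccade_to_track):
    """Steady-state probability of the saccade state in a two-state
    Markov chain (state 0 = track, state 1 = saccade)."""
    a, b = p_track_to_saccade, p_saccade_to_track
    return a / (a + b)  # classical closed form for a two-state chain

def estimate_vad(saliency_maps, threshold=0.5):
    """Illustrative VAD estimate: treat a large inter-frame saliency
    divergence as a likely saccade trigger (heuristic placeholder for
    the paper's transition-probability derivation)."""
    divs = [kl_divergence(saliency_maps[t], saliency_maps[t + 1])
            for t in range(len(saliency_maps) - 1)]
    # Fraction of frame pairs whose saliency changes strongly -> P(track -> saccade)
    p_ts = float(np.mean([d > threshold for d in divs]))
    # Assumption: return-to-track probability taken as the complement
    p_st = 1.0 - p_ts
    return saccade_steady_state(max(p_ts, 1e-6), max(p_st, 1e-6))
```

The same kl_divergence values, computed over a sliding window, could also serve the segmentation use mentioned in the abstract: cutting the video where the divergence between consecutive saliency maps stays consistently high or low.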
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yunlong Feng, Gene Cheung, Patrick Le Callet, and Yusheng Ji "Video attention deviation estimation using inter-frame visual saliency map analysis", Proc. SPIE 8305, Visual Information Processing and Communication III, 83050H (15 February 2012); https://doi.org/10.1117/12.907384
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Video
Visualization
Eye
Eye models
Visual analytics
Visual process modeling
Motion models