Paper
31 January 2020
Performance of bottom-up visual attention models when compared in contextless and context awareness scenarios
Proceedings Volume 11433, Twelfth International Conference on Machine Vision (ICMV 2019); 114332E (2020) https://doi.org/10.1117/12.2557135
Event: Twelfth International Conference on Machine Vision, 2019, Amsterdam, Netherlands
Abstract
Visual attention models are usually tested on collections of natural images that contain intentionally salient objects and obvious context information. In contrast, few algorithms in the literature have considered datasets without context information for modeling attention, and visual attention models have not been thoroughly evaluated in both contextless and context-aware environments. In this paper, we compare the performance of several well-known bottom-up visual attention models on contextless and context-aware datasets, using the Pearson correlation coefficient to assess each model's accuracy in predicting eye fixations. The best algorithm outperforms the others, reaching 59.1% and 43.8% correlation with the ground truth on the contextless and context-aware datasets, respectively.
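The metric named above, the Pearson correlation coefficient (CC), is a standard way to score a predicted saliency map against a ground-truth fixation density map. Below is a minimal sketch of how it can be computed, assuming both maps are available as NumPy arrays of the same shape; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def pearson_cc(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Pearson correlation between two same-shape maps, in [-1, 1]."""
    s = saliency_map.astype(np.float64).ravel()
    f = fixation_map.astype(np.float64).ravel()
    # Standardize each map to zero mean and unit variance, then take the
    # mean of the elementwise products, which equals the Pearson r.
    s = (s - s.mean()) / (s.std() + 1e-12)
    f = (f - f.mean()) / (f.std() + 1e-12)
    return float(np.mean(s * f))
```

As a sanity check, the result can be compared against scipy.stats.pearsonr applied to the two flattened maps; both should agree up to floating-point precision.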
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Juan Anaya-Jaimes, Angie García-Castro, and R. E. Gutiérrez-Carvajal "Performance of bottom-up visual attention models when compared in contextless and context awareness scenarios", Proc. SPIE 11433, Twelfth International Conference on Machine Vision (ICMV 2019), 114332E (31 January 2020); https://doi.org/10.1117/12.2557135
KEYWORDS
Visual process modeling
Eye models
Performance modeling
Image processing
Image resolution