3 February 2014 An evaluation of attention models for use in SLAM
Proceedings Volume 9025, Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques; 90250M (2014)
Event: IS&T/SPIE Electronic Imaging, 2014, San Francisco, California, United States
In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it remains unclear which attention model is best suited for this purpose. To this end, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with that of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
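The evaluation described above rests on keypoint repeatability: the fraction of points detected in one image that reappear, after a known geometric transformation, near a point detected in the transformed image. The sketch below illustrates this metric with a minimal pure-NumPy Harris detector. It is an assumption-laden illustration, not the paper's implementation; parameter values (window size, `k`, tolerance) are hypothetical defaults, and a real evaluation would use tuned detectors.

```python
import numpy as np

def _box_blur(a, w):
    """Separable box filter used to aggregate gradient products over a window."""
    kern = np.ones(w) / w
    a = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 0, a)
    return np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, a)

def harris_corners(img, k=0.04, window=5, n_best=50):
    """Minimal Harris corner detector returning the n_best strongest (x, y)
    responses. Illustrative only; no non-maximum suppression is performed."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)
    Sxx = _box_blur(Ix * Ix, window)
    Syy = _box_blur(Iy * Iy, window)
    Sxy = _box_blur(Ix * Iy, window)
    # Harris response: det(M) - k * trace(M)^2 for the structure tensor M.
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    idx = np.argsort(R, axis=None)[::-1][:n_best]
    ys, xs = np.unravel_index(idx, R.shape)
    return np.stack([xs, ys], axis=1).astype(np.float64)

def repeatability(pts_a, pts_b, H, tol=3.0):
    """Fraction of keypoints from image A whose projection under the
    homography H lands within `tol` pixels of some keypoint in image B."""
    pts_a, pts_b = np.asarray(pts_a), np.asarray(pts_b)
    if len(pts_a) == 0 or len(pts_b) == 0:
        return 0.0
    hom = np.hstack([pts_a, np.ones((len(pts_a), 1))]) @ H.T
    proj = hom[:, :2] / hom[:, 2:3]  # perspective divide
    d = np.linalg.norm(proj[:, None, :] - pts_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= tol))
```

Any detector (saliency-based or classical) can be plugged into the same metric: detect keypoints in the original and transformed images, supply the known homography, and average the repeatability score over many image pairs and transformation types.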
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Samuel Dodge and Lina Karam "An evaluation of attention models for use in SLAM", Proc. SPIE 9025, Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques, 90250M (3 February 2014);
