Saliency detection models have been widely used in many fields of computer vision. However, most existing models are not applicable to foggy scenes, because they depend heavily on high-level features extracted by deep learning and on handcrafted features, neither of which can effectively highlight salient targets in foggy images. In this paper, we present a saliency detection model for foggy images. The model extracts non-local features that are jointly learned with local features under a unified deep learning framework. The key idea is to hierarchically combine non-local modules with local contrast processing blocks, aiming to provide a robust representation of saliency information for foggy images, which have a low signal-to-noise ratio. Experiments were conducted on three challenging datasets and on our own foggy image dataset of dynamic objects. Compared with state-of-the-art models, our model achieves better performance.
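The abstract does not detail the non-local module's internals. As a point of reference, a minimal NumPy sketch of the standard embedded-Gaussian non-local operation (in the style of Wang et al.'s non-local neural networks) is shown below; the function name, projection matrices, and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nonlocal_block(x, w_theta, w_phi, w_g):
    """Illustrative non-local operation: every output position
    aggregates features from all positions, weighted by similarity.

    x        : (N, C) array - N flattened spatial positions, C channels
    w_theta  : (C, D) query projection   (assumed learnable in practice)
    w_phi    : (C, D) key projection
    w_g      : (C, D) value projection
    returns  : (N, D) array of non-locally aggregated features
    """
    theta = x @ w_theta                        # queries
    phi = x @ w_phi                            # keys
    g = x @ w_g                                # values
    f = theta @ phi.T                          # pairwise affinities (N, N)
    f = np.exp(f - f.max(axis=1, keepdims=True))
    attn = f / f.sum(axis=1, keepdims=True)    # row-wise softmax
    return attn @ g                            # global aggregation

# Toy usage on random features
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))               # 16 positions, 8 channels
w_theta, w_phi, w_g = (rng.standard_normal((8, 4)) for _ in range(3))
y = nonlocal_block(x, w_theta, w_phi, w_g)     # shape (16, 4)
```

Because each output row is a convex combination of all input positions, such a block can propagate saliency evidence across an image even when local contrast is degraded by fog, which is the intuition the abstract appeals to.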