Unlike object tracking on land, underwater object tracking is challenging because of image attenuation and distortion. The difficulty is compounded by the high degrees of freedom of target motion under water: target rotation, scale change, and occlusion significantly degrade the performance of many tracking methods. To address these problems, this paper proposes a multi-scale underwater object tracking method based on adaptive feature fusion. Gray, HOG (Histogram of Oriented Gradients), and CN (Color Names) features are adaptively fused in the background-aware correlation filter (BACF) model. Moreover, a novel scale estimation method and a high-confidence model update strategy are proposed to address the problems caused by scale changes and background noise. Experimental results show a success rate of 64.1% under the AUC criterion, outperforming the classic BACF and other methods, especially in challenging conditions.
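As a rough illustration of what response-level adaptive feature fusion in a correlation-filter tracker can look like (this is not the authors' implementation; the APCE confidence measure, the normalization rule, and all names below are assumptions made for the sketch), per-feature response maps can be weighted by how reliable their peaks appear before locating the target:

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy: a common confidence measure for
    correlation-filter response maps (higher = sharper, more reliable peak)."""
    peak, trough = response.max(), response.min()
    return (peak - trough) ** 2 / (np.mean((response - trough) ** 2) + 1e-12)

def fuse_responses(responses):
    """Adaptively fuse per-feature response maps (e.g. gray, HOG, CN)
    by turning their confidence scores into normalized fusion weights."""
    weights = np.array([apce(r) for r in responses])
    weights /= weights.sum() + 1e-12
    return sum(w * r for w, r in zip(weights, responses))

# Example with three hypothetical response maps standing in for gray, HOG and CN.
rng = np.random.default_rng(0)
maps = [rng.random((50, 50)) for _ in range(3)]
fused = fuse_responses(maps)
dy, dx = np.unravel_index(fused.argmax(), fused.shape)  # estimated target displacement
```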
Graph-based salient object detection has been widely applied because of its excellent performance and strong theoretical basis. The performance of this type of method depends largely on the correctness of foreground seed selection. To identify seeds lying exactly on foreground objects, recent work has adopted an external prior, the boundary prior, which assumes that the image boundary consists mostly of background and therefore that foreground seeds must lie near the image center. However, this assumption fails when salient objects are spatially close to the image boundary, and the resulting errors can be severe because background noise is then likely to be mixed into the foreground seeds. To solve this problem, we propose a robust foreground seed selection method for salient object detection that combines the external prior with multiple internal image features. Our method relaxes the limitation of the external prior and makes foreground seed selection more adaptive and robust across diverse samples. As a result, the proposed method generates satisfying results regardless of where the salient object is located, as demonstrated by experimental comparisons with several state-of-the-art methods.
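A minimal sketch of boundary-prior-based seed selection that does not force seeds toward the image center (a generic illustration, not the paper's method; the border width, the color-contrast score, and the quantile threshold are assumptions):

```python
import numpy as np

def select_foreground_seeds(image, border=10, keep_ratio=0.1):
    """Pick foreground seed pixels by contrast against a background color model
    built from the image border (boundary prior), so seeds may appear anywhere,
    including near the boundary."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[:border, :] = mask[-border:, :] = True
    mask[:, :border] = mask[:, -border:] = True
    bg_mean = image[mask].mean(axis=0)                  # boundary color model
    contrast = np.linalg.norm(image - bg_mean, axis=2)  # per-pixel contrast to background
    thresh = np.quantile(contrast, 1.0 - keep_ratio)    # keep the most contrasting pixels
    return contrast >= thresh                           # boolean seed mask

# Example: a synthetic image whose salient object touches the image boundary.
img = np.zeros((120, 160, 3), dtype=float)
img[40:120, 100:160] = [0.9, 0.2, 0.2]   # object overlaps the right/bottom border
seeds = select_foreground_seeds(img)
```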
We present a texture-based active contour method for two-phase image segmentation in a statistical framework. The proposed method first combines color, texture, and a saliency weight to form an augmented image, and introduces the joint distribution of these features into the image likelihood term of the energy function. Second, we use the local probability distribution to obtain a smooth label that reduces fragmentation during the initialization and evolution of the segmentation contours. Finally, we propose a simple and efficient geometric prior based directly on the level sets and introduce the related spatial constraints into the Bayesian inference used to estimate the smooth probabilistic label. The image is thus represented by high-dimensional features but segmented in a low-dimensional space. Furthermore, the evolution of the level-set function and the update of the smooth probabilistic label alternate, which keeps the method fast. We experimentally compare our texture-based method with others on complicated natural images and demonstrate its good performance in practice.
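A minimal sketch of alternating a two-phase level-set update with a spatially smoothed probabilistic label (here a Chan-Vese style data term and a Gaussian-smoothed sigmoid label stand in for the paper's joint feature likelihood and Bayes-estimated smooth label; all parameters and the synthetic example are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def evolve_two_phase(feature, iters=200, dt=0.5, sigma=2.0):
    """Two-phase segmentation sketch: a region-statistics level-set update
    alternated with a Gaussian-smoothed soft label that regularizes the contour."""
    h, w = feature.shape
    yy, xx = np.mgrid[:h, :w]
    # Initialize the level set as a centered circle (phi > 0 inside).
    phi = 0.25 * min(h, w) - np.hypot(yy - h / 2, xx - w / 2)
    for _ in range(iters):
        inside, outside = phi > 0, phi <= 0
        c_in = feature[inside].mean() if inside.any() else 0.0
        c_out = feature[outside].mean() if outside.any() else 0.0
        # Data force from the two region statistics (positive where the pixel fits the inside).
        force = (feature - c_out) ** 2 - (feature - c_in) ** 2
        phi = phi + dt * force
        # Smooth probabilistic label: soft region membership, spatially regularized.
        label = gaussian_filter(1.0 / (1.0 + np.exp(-phi)), sigma)
        phi = label - 0.5   # re-embed the smoothed label as the new level set
    return phi > 0

# Example on a synthetic image with a brighter inner region.
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (100, 100))
img[30:70, 30:70] += 0.6
mask = evolve_two_phase(img)
```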