Attention mechanisms and part-based architectures have recently advanced research on person re-identification (Re-ID). However, most attention-based methods extract only first-order information and lack diversity, while classic part-based methods cannot exploit cross-part information because of their uniform partitions; both kinds of methods ignore visual relationships in the global scope. Accordingly, we propose multi-subspace non-local attention (MSNA) and a reinforced loss (R-Loss) to alleviate these issues. MSNA is an improved attention module that can be integrated into existing networks to exploit rich low-level information and extract global relationships from different subspaces. The R-Loss module is designed to reinforce the network's ability to extract fine-grained features by making full use of both intra-part and cross-part information. Combining the two yields a globally and locally reinforced feature extraction strategy. In addition, we design a feature fusion module to combine features from different branches. Equipped with these modules, our model extracts important local and fine-grained features by identifying diverse visual relationships in the global scope. Models with the proposed modules achieve significant improvements over the baselines on four public datasets and establish new state-of-the-art results.
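To make the core idea concrete, the following is a minimal NumPy sketch of what a multi-subspace non-local attention block might look like: channels are split into subspaces, and each subspace computes a global (non-local) affinity over all spatial positions. All names, the random projections, and the subspace count are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_subspace_nonlocal(x, num_subspaces=4, seed=0):
    """Sketch of multi-subspace non-local attention (assumed form).

    x: feature map flattened to (N, C), where N = H*W spatial positions
       and C is the channel dimension (C must be divisible by num_subspaces).
    Each subspace attends over all N positions independently, so the block
    captures global relationships per subspace rather than a single
    first-order summary.
    """
    n, c = x.shape
    d = c // num_subspaces
    rng = np.random.default_rng(seed)
    # Hypothetical learned query/key/value projections, randomly sampled
    # here only so the sketch is runnable.
    w_q, w_k, w_v = (rng.standard_normal((c, c)) / np.sqrt(c) for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    outs = []
    for s in range(num_subspaces):
        qs, ks, vs = (t[:, s * d:(s + 1) * d] for t in (q, k, v))
        attn = softmax(qs @ ks.T / np.sqrt(d))  # (N, N) global affinity map
        outs.append(attn @ vs)                  # aggregate over all positions
    # Concatenate subspace outputs and add a residual connection,
    # as is standard for non-local blocks.
    return x + np.concatenate(outs, axis=1)

# Example: an 8x8 feature map with 32 channels, flattened to (64, 32).
feat = np.random.default_rng(1).standard_normal((64, 32))
out = multi_subspace_nonlocal(feat, num_subspaces=4)
```

Splitting channels into subspaces before computing the affinity is what lets each subspace model a different kind of visual relationship, analogous to multi-head attention.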