Action recognition has wide applications in fields such as human–computer interaction, virtual reality, and robotics. Since human actions can be represented as a sequence of skeleton graphs, approaches based on graph neural networks (GNNs) have attracted considerable attention in action recognition research. Recent studies have demonstrated the effectiveness of two-stream GNNs, in which discriminative features for action recognition are extracted from both the joint stream and the bone stream. Each stream is generated by GNNs that pass messages along fixed connections between vertices. However, existing two-stream approaches have two limitations: no interaction is allowed between the two streams, and temporary contacts between joints or bones cannot be modeled. To address these issues, we propose the interactive two-stream graph neural network, which employs a joint–bone communication block to facilitate interaction between the joint stream and the bone stream. Furthermore, an adaptive strategy is introduced to enable dynamic connections between vertices. Extensive experiments on three large-scale datasets have demonstrated the effectiveness of our proposed method.
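The three ingredients named above (per-stream message passing over a skeleton graph, an adaptive data-dependent adjacency, and a joint–bone communication block) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: all function names, weight shapes, and the particular form of the adaptive term (embedding-similarity softmax added to the fixed skeleton connectivity) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcn_layer(X, A, W):
    # one round of message passing: aggregate neighbour features via the
    # adjacency A, then apply a linear projection W and a nonlinearity
    return np.tanh(A @ X @ W)

def adaptive_adjacency(X, A_fixed, We1, We2):
    # hypothetical "adaptive strategy": a data-dependent term built from the
    # similarity of vertex embeddings, added to the fixed skeleton
    # connectivity so that dynamic (e.g. temporary-contact) edges can form
    A_dyn = softmax((X @ We1) @ (X @ We2).T)
    return A_fixed + A_dyn

def communication_block(H_joint, H_bone, Wj, Wb):
    # hypothetical joint-bone communication: each stream receives a
    # projected residual of the other stream's features
    return H_joint + np.tanh(H_bone @ Wb), H_bone + np.tanh(H_joint @ Wj)
```

In a full model these pieces would run per frame over the skeleton sequence, with the two streams fed joint coordinates and bone vectors respectively, and the communication block applied between GNN layers.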