Sign language recognition is challenging due to the scarcity of annotated corpora and the difficulty of handling a large vocabulary. In this paper, we study the task on a Chinese sign language database, DEVISIGN, which provides only a few samples per class, too few to train a deep network from scratch. First, we segment the hand to eliminate the disturbance of irrelevant factors. By analyzing the characteristic movement tendencies of sign words, we propose two novel key-frame selection schemes. Since no other dataset shares a similar data distribution with our preprocessed data, we introduce a novel cross-sampling approach that effectively prevents overfitting in the small-sample regime. To enhance the diversity of the data, we take several sampling-based videos as input and learn spatiotemporal features with an 18-layer R(2+1)D network, which has proven successful in action recognition tasks. Finally, experiments show that our solution achieves state-of-the-art performance.
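The abstract does not detail the cross-sampling scheme, but the general idea of drawing several temporally offset clips from one video to enlarge a small training set can be sketched as follows; the function name, offsets, and stride policy here are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: draw several clips from one video by shifting
# the temporal sampling grid, so one labeled video yields multiple
# training inputs. The concrete offset/stride policy is an assumption.
def sample_clips(num_frames, clip_len, num_clips):
    """Return `num_clips` lists of frame indices, each `clip_len` long,
    sampled from a video of `num_frames` frames with different offsets."""
    stride = max(1, num_frames // clip_len)
    clips = []
    for k in range(num_clips):
        offset = k % stride  # shift the grid for each clip
        idx = [min(offset + i * stride, num_frames - 1)
               for i in range(clip_len)]
        clips.append(idx)
    return clips
```

Each index list would then be used to gather frames and form one input volume for the spatiotemporal network, so a single annotated video contributes several distinct training samples.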