26 October 2022
Multi-domain vision-based sign language recognition based on auto-labeled hand tracking data learning
Junha Lee, Hong-In Won, Min Young Kim, Byeong Hak Kim
Conference Poster
Abstract
Remote operation and autonomous systems are widely applied across many fields, creating strong demand for human-machine interface and communication technology. To overcome the limitations of conventional keyboard and tablet input devices, various vision sensors and state-of-the-art artificial intelligence image-processing techniques are used to recognize hand gestures. In this study, we propose a method for recognizing a reference sign language using auto-labeled AI model training datasets. This work can be applied to remote-control interfaces between drivers and vehicles, people and home appliances, and gamers and entertainment content, as well as to remote character-input technology for metaverse environments.
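The abstract does not detail the auto-labeling procedure, but one common way to build hand-gesture training data without manual annotation is to record a subject performing a prompted sign, extract hand-landmark coordinates with an off-the-shelf tracker, and label each sample with the prompted sign. The following is a minimal sketch of that idea under those assumptions; the function names, the 21-landmark layout (wrist at index 0), and the normalization scheme are illustrative, not taken from the paper.

```python
# Hypothetical auto-labeling step for hand-gesture training data.
# Input: (x, y) landmark coordinates from any hand tracker, with the
# wrist assumed at index 0. The label comes from the sign the subject
# was prompted to perform, so no manual annotation is required.

def normalize_landmarks(landmarks):
    """Translate landmarks so the wrist is the origin, then scale by
    the largest coordinate magnitude to remove hand-size variation."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def auto_label(landmarks, prompted_sign):
    """Build one training record: normalized features plus the label
    inferred from the recording prompt rather than a human annotator."""
    return {"features": normalize_landmarks(landmarks),
            "label": prompted_sign}

# Example: a toy 3-point "hand" labeled with the prompted sign "A".
record = auto_label([(1.0, 1.0), (3.0, 1.0), (1.0, 3.0)], "A")
```

Normalizing relative to the wrist makes the features invariant to where the hand appears in the frame, which helps a model trained on such records generalize across camera setups.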
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Junha Lee, Hong-In Won, Min Young Kim, and Byeong Hak Kim "Multi-domain vision-based sign language recognition based on auto-labeled hand tracking data learning", Proc. SPIE 12267, Image and Signal Processing for Remote Sensing XXVIII, 1226714 (26 October 2022); https://doi.org/10.1117/12.2638450
KEYWORDS
Data modeling, RGB color model, Sensors, Visual process modeling, Performance modeling, Signal detection, Systems modeling