Poster + Paper
17 May 2022 Multi-level deep learning depth and color fusion for action recognition
A. Zelensky, V. Voronin, M. Zhdanova, N. Gapon, O. Tokareva, E. Semenishchev
Conference Poster
Abstract
Recognizing human actions in video sequences is a key problem on the path to developing and deploying computer vision systems in various spheres of life. Additional sources of information, such as depth and thermal sensors, provide more informative features and thus increase the reliability and stability of recognition. In this research, we focus on how to combine multi-level decomposition of depth and color information to improve state-of-the-art action recognition methods. We present an algorithm that fuses information from visible cameras and depth sensors based on deep learning and the PLIP model (parameterized model of logarithmic image processing), which is close to the human visual system's perception. Experimental results on the test dataset confirmed the high efficiency of the proposed action recognition method compared to state-of-the-art methods that use only one image modality (visible or depth).
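To illustrate the kind of PLIP-domain fusion the abstract refers to, the sketch below combines a visible-band image and a depth map with PLIP addition. This is a minimal illustration only: the paper's deep-learning architecture and multi-level scheme are not reproduced, and the constants `M` (upper bound of the gray-tone range) and `GAMMA` (PLIP parameter) are illustrative assumptions, not values from the paper.

```python
import numpy as np

M = 256.0       # assumed upper bound of the gray-tone range
GAMMA = 300.0   # assumed PLIP parameter (gamma > M)

def gray_tone(image):
    """Map intensities in [0, M) to PLIP gray-tone functions g = M - f."""
    return M - image.astype(np.float64)

def plip_add(g1, g2):
    """PLIP addition: g1 (+) g2 = g1 + g2 - g1*g2 / gamma."""
    return g1 + g2 - (g1 * g2) / GAMMA

def fuse(visible, depth):
    """Fuse a visible-band image and a depth map in the PLIP domain,
    then map back to the intensity range [0, M - 1]."""
    fused_tone = plip_add(gray_tone(visible), gray_tone(depth))
    return np.clip(M - fused_tone, 0.0, M - 1.0)

# Toy inputs standing in for registered visible/depth frames.
rng = np.random.default_rng(0)
vis = rng.integers(0, 256, size=(4, 4)).astype(np.float64)
dep = rng.integers(0, 256, size=(4, 4)).astype(np.float64)
out = fuse(vis, dep)
```

In the actual method, such fusion would be applied per decomposition level and fed to the recognition network; here it only demonstrates the nonlinear, perception-motivated combination rule that distinguishes PLIP arithmetic from plain pixel averaging.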
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
A. Zelensky, V. Voronin, M. Zhdanova, N. Gapon, O. Tokareva, and E. Semenishchev "Multi-level deep learning depth and color fusion for action recognition", Proc. SPIE 12138, Optics, Photonics and Digital Technologies for Imaging Applications VII, 121380Y (17 May 2022); https://doi.org/10.1117/12.2626000
KEYWORDS
Image fusion, Image processing, Computer programming, Neural networks, Process modeling, Image quality, Video