Presentation + Paper
13 October 2020 Human activity recognition for efficient human-robot collaboration
Abstract
A crucial technology in modern smart manufacturing is human-robot collaboration (HRC). In HRC, human operators and robots work together to perform complex tasks under a variety of heterogeneous and dynamic conditions. Machine vision systems play a unique role in implementing the HRC model as a means of sensing: they acquire and process visual information about the environment, analyze images of the working area, transfer this information to the control system, and support decision-making within the framework of the task. Thus, recognizing the actions of a human operator becomes a relevant task for developing a robot control system that implements effective HRC. The commands an operator issues to a robot can take a variety of forms, from simple and concrete to quite abstract. This introduces several difficulties for deploying automated recognition systems in real conditions: heterogeneous backgrounds, uncontrolled work environments, irregular lighting, etc. In this article, we present an algorithm for constructing a video descriptor and solve the problem of classifying a set of actions into predefined classes. The proposed algorithm is based on capturing three-dimensional sub-volumes located inside a patch of the video sequence and computing the intensity differences between these sub-volumes. The video patches and the central coordinates of the sub-volumes are constructed on the principle of VLBP (volume local binary patterns). Representing three-dimensional blocks (patches) of a video sequence by capturing sub-volumes inside each patch, at several scales and orientations, yields an informative description of the scene and the actions taking place in it. Experimental results showed the effectiveness of the proposed algorithm on well-known data sets.
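The abstract only outlines the descriptor, so the following is a minimal sketch of the general VLBP-style idea it describes, not the authors' actual method: for each 3D patch of the video, the mean intensity of small sub-volumes sampled around the patch centre is thresholded against the central sub-volume to form a binary code, and codes are pooled into a histogram. All names, parameters (patch size, radius, number of neighbours), and the single-scale/single-orientation simplification are assumptions made here for illustration.

```python
import numpy as np

def subvolume_mean(vol, c, half=1):
    """Mean intensity of a small cube centred at voxel c, clipped to bounds."""
    t, y, x = c
    T, Y, X = vol.shape
    return vol[max(t - half, 0):min(t + half + 1, T),
               max(y - half, 0):min(y + half + 1, Y),
               max(x - half, 0):min(x + half + 1, X)].mean()

def vlbp_code(patch, radius=2, neighbors=8):
    """VLBP-style binary code for one 3D patch: threshold the intensity
    difference between sub-volumes sampled on a circle around the patch
    centre and the central sub-volume (hypothetical simplification)."""
    T, Y, X = patch.shape
    center = (T // 2, Y // 2, X // 2)
    c_mean = subvolume_mean(patch, center)
    code = 0
    for k in range(neighbors):
        ang = 2 * np.pi * k / neighbors
        cy = int(round(center[1] + radius * np.sin(ang)))
        cx = int(round(center[2] + radius * np.cos(ang)))
        if subvolume_mean(patch, (center[0], cy, cx)) >= c_mean:
            code |= 1 << k  # one bit per sub-volume comparison
    return code

def video_descriptor(video, patch=8, neighbors=8):
    """Normalised histogram of codes over non-overlapping 3D patches."""
    hist = np.zeros(2 ** neighbors)
    T, Y, X = video.shape
    for t in range(0, T - patch + 1, patch):
        for y in range(0, Y - patch + 1, patch):
            for x in range(0, X - patch + 1, patch):
                block = video[t:t + patch, y:y + patch, x:x + patch]
                hist[vlbp_code(block, neighbors=neighbors)] += 1
    return hist / max(hist.sum(), 1)  # L1-normalised descriptor
```

The resulting fixed-length histogram can then be fed to any standard classifier (e.g., an SVM) to assign the video to one of the predefined action classes; the paper's multi-scale, multi-orientation sampling would add further histogram channels on top of this single-scale sketch.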
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
M. Zhdanova, V. Voronin, E. Semenishchev, Yu. Ilyukhin, and A. Zelensky "Human activity recognition for efficient human-robot collaboration", Proc. SPIE 11543, Artificial Intelligence and Machine Learning in Defense Applications II, 115430K (13 October 2020); https://doi.org/10.1117/12.2574133
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Video, Robots, Control systems, Detection and tracking algorithms, Information visualization, Image processing, Machine vision