The Johns Hopkins University multimodal dataset for human action recognition
21 May 2015
Thomas S. Murray, Daniel R. Mendat, Philippe O. Pouliquen, Andreas G. Andreou
Abstract
The Johns Hopkins University MultiModal Action (JHUMMA) dataset contains twenty-one actions recorded with four sensor systems spanning three modalities. The data were collected with an acquisition system comprising three independent active sonar devices operating at three different frequencies and a Microsoft Kinect sensor that provides both RGB and depth data. We have developed algorithms for human action recognition from active acoustics and report baseline recognition performance results as a benchmark.
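The recognition algorithms themselves are described in the paper; as a rough illustration of the kind of processing implied by the keywords below (active acoustics, Doppler effect), the sketch computes a micro-Doppler spectrogram from a simulated narrowband sonar echo. Every parameter here (the carrier frequency fc, sampling rate fs, filter cutoff, and the oscillating-limb signal model) is an illustrative assumption, not a value taken from the JHUMMA dataset.

```python
import numpy as np
from scipy import signal

# Illustrative parameters -- assumed for this sketch, NOT from JHUMMA.
fs = 192_000        # sampling rate (Hz)
fc = 40_000         # sonar carrier frequency (Hz)
duration = 2.0      # seconds of echo data

# Simulate an echo from a limb oscillating toward and away from the
# sonar: a sinusoidal Doppler deviation around the carrier plus noise.
t = np.arange(int(fs * duration)) / fs
doppler = 300.0 * np.sin(2 * np.pi * 1.5 * t)      # +/-300 Hz micro-Doppler
phase = 2 * np.pi * (fc * t + np.cumsum(doppler) / fs)
echo = np.cos(phase) + 0.1 * np.random.randn(t.size)

# Mix down to baseband and low-pass filter, so the spectrogram is
# centered on the Doppler shifts rather than the carrier.
iq = echo * np.exp(-2j * np.pi * fc * t)
b = signal.firwin(129, 2_000, fs=fs)               # 2 kHz low-pass (assumed)
iq = signal.filtfilt(b, [1.0], iq)

# Short-time Fourier transform -> micro-Doppler signature.
f, tt, S = signal.spectrogram(iq, fs=fs, nperseg=4096,
                              noverlap=3072, return_onesided=False)
S = np.fft.fftshift(S, axes=0)                     # center zero Doppler
f = np.fft.fftshift(f)

# |S| over (tt, f) is the kind of time-frequency feature map that
# Doppler-based action classifiers are typically trained on.
print(S.shape)
```

Basebanding before the STFT is a common design choice: it centers the image on the Doppler deviations themselves, so the resulting time-frequency signature is directly usable as input to a classifier.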
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Thomas S. Murray, Daniel R. Mendat, Philippe O. Pouliquen, and Andreas G. Andreou "The Johns Hopkins University multimodal dataset for human action recognition", Proc. SPIE 9461, Radar Sensor Technology XIX; and Active and Passive Signatures VI, 94611U (21 May 2015); https://doi.org/10.1117/12.2189349
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Sensors
Ultrasonography
Modulation
Data acquisition
Acoustics
Doppler effect
Data modeling