FUSION OF DEPTH, SKELETON, AND INERTIAL DATA FOR HUMAN ACTION RECOGNITION
Conference Paper

abstract

  • © 2016 IEEE. This paper presents a human action recognition approach based on the simultaneous deployment of a second-generation Kinect depth sensor and a wearable inertial sensor. Three data modalities, consisting of depth images, skeleton joint positions, and inertial signals, are fused by utilizing three collaborative representation classifiers. A database of 10 actions performed by 6 subjects was collected to carry out two types of testing of the developed fusion approach: subject-generic and subject-specific. The overall recognition rates from both types of testing indicate improvements when fusing all the data modalities compared to using each modality individually. A minimal sketch of such a classifier and fusion scheme is given below.
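  The abstract names collaborative representation classification (CRC) as the per-modality classifier, with the three classifiers' outputs fused. The sketch below is one plausible realization under stated assumptions, not the paper's exact formulation: each modality gets its own CRC (ridge-regression coding over all training samples, scored by per-class reconstruction residual, in the style of CRC-RLS), and the per-modality residual scores are fused at the decision level by simple summation. The function names, the `lam` parameter, and the summation fusion rule are illustrative assumptions.

  ```python
  import numpy as np

  def crc_fit(X, lam=1e-3):
      """Precompute the collaborative-representation projection matrix.
      X: (d, n) array whose columns are training feature vectors.
      Returns P = (X^T X + lam*I)^(-1) X^T, so that alpha = P @ y."""
      n = X.shape[1]
      return np.linalg.inv(X.T @ X + lam * np.eye(n)) @ X.T

  def crc_class_residuals(X, labels, P, y, classes):
      """Code the test vector y over ALL training samples, then score each
      class by the reconstruction residual using only that class's
      coefficients (regularized by the coefficient norm)."""
      alpha = P @ y
      residuals = []
      for c in classes:
          idx = (labels == c)
          r = (np.linalg.norm(y - X[:, idx] @ alpha[idx])
               / np.linalg.norm(alpha[idx]))
          residuals.append(r)
      return np.array(residuals)

  def fuse_and_classify(per_modality_residuals, classes):
      """Decision-level fusion: normalize each modality's residual vector,
      sum across modalities, and predict the class with the smallest
      fused score. (Summation is an assumed fusion rule, for illustration.)"""
      fused = sum(r / r.sum() for r in per_modality_residuals)
      return classes[int(np.argmin(fused))]

  # Hypothetical usage with one (X, labels, test vector) triple per
  # modality, i.e. depth, skeleton, and inertial features:
  #
  #   residuals = [crc_class_residuals(Xm, lm, crc_fit(Xm), ym, classes)
  #                for (Xm, lm, ym) in modalities]
  #   predicted = fuse_and_classify(residuals, classes)
  ```

  Because the CRC coding step is a closed-form ridge regression, the projection matrix P can be precomputed once per modality, which keeps per-test-sample cost to a single matrix-vector product plus class-wise residual evaluations.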

name of conference

  • 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

published proceedings

  • 2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS

altmetric score

  • 3

author list (cited authors)

  • Chen, C., Jafari, R., & Kehtarnavaz, N.

citation count

  • 58

complete list of authors

  • Chen, Chen||Jafari, Roozbeh||Kehtarnavaz, Nasser

publication date

  • March 2016