A survey of depth and inertial sensor fusion for human action recognition

abstract

  • © 2015 Springer Science+Business Media New York. A number of review and survey articles have previously appeared on human action recognition in which either vision sensors or inertial sensors are used individually. Because each sensor modality has its own limitations, a number of previously published papers have shown that fusing vision and inertial sensor data improves recognition accuracy. This survey article provides an overview of recent investigations in which vision and inertial sensors are used simultaneously to perform human action recognition more effectively. The thrust of the survey is on depth cameras and inertial sensors, as these two types of sensors are cost-effective, commercially available, and, more significantly, both provide 3D human action data. An overview of the components necessary to fuse data from depth and inertial sensors is given. In addition, a review of publicly available datasets containing simultaneously captured depth and inertial data is presented.
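
  As a purely illustrative aside (not drawn from the article itself), the sketch below shows one common form of the fusion this survey covers: feature-level fusion, in which per-modality descriptors computed from a depth-camera skeleton window and a wearable accelerometer window are concatenated and passed to a classifier. The synthetic data, feature choices, and classifier here are assumptions for demonstration only.

  ```python
  # Minimal sketch of feature-level depth + inertial fusion for action
  # recognition. All data below is synthetic; real systems would use
  # simultaneously captured, time-aligned windows from both sensors.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import cross_val_score

  rng = np.random.default_rng(0)

  def inertial_features(accel):
      """Per-axis mean/std/min/max of a (T, 3) accelerometer window."""
      return np.concatenate([accel.mean(0), accel.std(0),
                             accel.min(0), accel.max(0)])

  def skeleton_features(joints):
      """Mean and std of frame-to-frame joint displacements
      for a (T, J, 3) depth-skeleton window."""
      disp = np.diff(joints, axis=0).reshape(joints.shape[0] - 1, -1)
      return np.concatenate([disp.mean(0), disp.std(0)])

  # Synthetic stand-in for a dataset of simultaneously captured windows.
  n_samples, T, J = 200, 60, 20
  accel_windows = rng.normal(size=(n_samples, T, 3))
  skeleton_windows = rng.normal(size=(n_samples, T, J, 3))
  labels = rng.integers(0, 5, size=n_samples)  # 5 hypothetical action classes

  # Fusion step: concatenate per-modality descriptors into one feature vector.
  X = np.array([
      np.concatenate([inertial_features(a), skeleton_features(s)])
      for a, s in zip(accel_windows, skeleton_windows)
  ])

  clf = RandomForestClassifier(n_estimators=100, random_state=0)
  print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
  ```

  Decision-level fusion (training one classifier per modality and combining their outputs) is the other broad strategy discussed in surveys of this kind; the feature-level variant is shown only because it is the shorter example.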

published proceedings

  • Multimedia Tools and Applications

altmetric score

  • 6

author list (cited authors)

  • Chen, C., Jafari, R., & Kehtarnavaz, N.

citation count

  • 244

complete list of authors

  • Chen, Chen; Jafari, Roozbeh; Kehtarnavaz, Nasser

publication date

  • February 2017