Fusion of Video and Inertial Sensing for Deep Learning-Based Human Action Recognition. Academic Article

abstract

  • This paper presents the simultaneous use of video images and inertial signals, captured at the same time by a video camera and a wearable inertial sensor, within a fusion framework in order to achieve human action recognition that is more robust than when each sensing modality is used individually. The data captured by these sensors are turned into 3D video images and 2D inertial images, which are then fed as inputs into a 3D convolutional neural network and a 2D convolutional neural network, respectively, for recognizing actions. Two types of fusion are considered: decision-level fusion and feature-level fusion. Experiments are conducted using the publicly available UTD-MHAD dataset, in which simultaneous video images and inertial signals are captured for a total of 27 actions. The results obtained indicate that both the decision-level and feature-level fusion approaches yield higher recognition accuracies than when each sensing modality is used individually. The highest accuracy of 95.6% is obtained with the decision-level fusion approach.
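
A minimal, hypothetical PyTorch sketch of the two fusion schemes named in the abstract is given below: feature-level fusion concatenates the feature vectors of the 3D-CNN video stream and the 2D-CNN inertial stream before a single classifier, while decision-level fusion averages the class probabilities produced by the two streams separately. The layer widths, the 32-frame clip length, the 112x112 frame size, and the 64x64 inertial-image size are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only; architecture details are assumptions, not the paper's.
import torch
import torch.nn as nn

NUM_ACTIONS = 27  # UTD-MHAD contains 27 actions


class VideoStream(nn.Module):
    """3D CNN over a video clip shaped (batch, channels, frames, H, W)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class InertialStream(nn.Module):
    """2D CNN over a 2D 'inertial image' built from accelerometer/gyro signals."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class FeatureFusion(nn.Module):
    """Feature-level fusion: concatenate the two feature vectors, then classify."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.video = VideoStream(feat_dim)
        self.inertial = InertialStream(feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, NUM_ACTIONS)

    def forward(self, clip, inertial_img):
        fused = torch.cat([self.video(clip), self.inertial(inertial_img)], dim=1)
        return self.classifier(fused)


def decision_fusion(video_logits, inertial_logits):
    """Decision-level fusion: average the class probabilities of the two streams."""
    p_video = torch.softmax(video_logits, dim=1)
    p_inertial = torch.softmax(inertial_logits, dim=1)
    return (p_video + p_inertial) / 2


if __name__ == "__main__":
    clip = torch.randn(4, 3, 32, 112, 112)    # batch of 4 video clips (assumed size)
    inertial_img = torch.randn(4, 1, 64, 64)  # batch of 4 inertial images (assumed size)

    model = FeatureFusion()
    print(model(clip, inertial_img).shape)    # torch.Size([4, 27])

    # Decision-level variant: classify each stream independently, then fuse scores.
    head_v = nn.Linear(128, NUM_ACTIONS)
    head_i = nn.Linear(128, NUM_ACTIONS)
    fused_probs = decision_fusion(head_v(model.video(clip)),
                                  head_i(model.inertial(inertial_img)))
    print(fused_probs.shape)                  # torch.Size([4, 27])
```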

published proceedings

  • Sensors (Basel)

author list (cited authors)

  • Wei, H., Jafari, R., & Kehtarnavaz, N.

citation count

  • 34

complete list of authors

  • Wei, Haoran||Jafari, Roozbeh||Kehtarnavaz, Nasser

publication date

  • August 2019

publisher