Evaluation of Head Gaze Loosely Synchronized With Real-Time Synthetic Speech for Social Robots (Academic Article)

abstract

  • © 2014 IEEE. This study demonstrates that robots can achieve socially acceptable interactions using loosely synchronized head gaze-speech acts. Prior approaches used tightly synchronized head gaze-speech, which requires significant human effort and time to manually annotate synchronization events in advance, restricts interactive dialog, or requires the operator to act as a puppeteer. This paper describes how autonomous synchronization of head gaze can be achieved by exploiting affordances in the sentence structure and time delays. A 93-participant user study was conducted in a simulated disaster site, in which the rescue robot 'Survivor Buddy' generated head gaze for a victim management scenario using a 911 dialog. The study used pre- and post-interaction questionnaires to compare the social acceptance of loosely synchronized head gaze-speech against tightly synchronized head gaze-speech (manual annotation) and no head gaze-speech. The results indicated that for Arousal (Self-Assessment Manikin), Robot Likeability, Human-Like Behavior, Understanding Robot Behavior, Gaze-Speech Synchronization, Looking at Objects at Appropriate Times, and Natural Movement, loosely synchronized head gaze-speech was rated similarly to tightly synchronized head gaze-speech and was preferred over no head gaze-speech. This study contributes to a fundamental understanding of the role social head gaze plays in the social acceptance of human-machine interaction and of how social gaze can be produced, and it promotes practical implementation in social robots.
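
  The abstract's key mechanism, triggering head-gaze acts from affordances in sentence structure plus estimated time delays rather than hand-annotated events, can be illustrated with a minimal sketch. The Python below is illustrative only: the clause-splitting rule, the fixed speech-rate constant, the gaze heuristics, and the say/move_head callbacks are assumptions for demonstration, not the authors' implementation.

      import re
      import time

      # Assumed synthetic-speech rate; the paper's actual timing model
      # is not given in the abstract.
      WORDS_PER_SECOND = 2.5

      def clauses(sentence):
          """Split a sentence at punctuation affordances (commas, etc.)."""
          return [c.strip() for c in re.split(r"[,;:]", sentence) if c.strip()]

      def estimate_duration(text):
          """Rough speech duration from word count at the assumed rate."""
          return len(text.split()) / WORDS_PER_SECOND

      def gaze_cue_for(clause):
          """Pick a head-gaze act for a clause (hypothetical heuristic)."""
          if clause.endswith("?"):
              return "look_at_person"
          if any(w in clause.lower() for w in ("here", "there", "this", "that")):
              return "look_at_object"
          return "hold_gaze"

      def speak_with_loose_gaze(sentence, say, move_head):
          """Fire a gaze act near each clause boundary while speech plays.

          say and move_head are robot-specific callbacks; synchronization
          is loose: gaze is scheduled from estimated clause durations,
          not from manually annotated events.
          """
          for clause in clauses(sentence):
              move_head(gaze_cue_for(clause))  # gaze slightly leads the clause
              say(clause)                      # non-blocking TTS assumed
              time.sleep(estimate_duration(clause))

      if __name__ == "__main__":
          speak_with_loose_gaze(
              "Help is on the way, please stay where you are.",
              say=lambda text: print(f"TTS: {text}"),
              move_head=lambda cue: print(f"GAZE: {cue}"),
          )

  Because the delays are derived from the sentence itself, this style of scheduling needs no advance annotation and no puppeteering, which is the practical advantage the abstract claims for loose synchronization.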

published proceedings

  • IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS

author list (cited authors)

  • Srinivasan, V., Bethel, C. L., & Murphy, R. R.

citation count

  • 7

complete list of authors

  • Srinivasan, Vasant; Bethel, Cindy L.; Murphy, Robin R.

publication date

  • December 2014