EMOTION CLASSIFICATION OF SPEECH USING MODULATION FEATURES Conference Paper

abstract

  • © 2014 EURASIP. Automatic classification of a speaker's affective state is one of the major challenges in the signal processing community, since it can improve human-computer interaction and offer insights into the nature of emotions from a psychological perspective. The amplitude and frequency control of sound production strongly influences the affective content of the voice. In this paper, we take advantage of the inherent modulations of speech and propose the use of instantaneous amplitude- and frequency-derived features for efficient emotion recognition. Our results indicate that these features can further increase the performance of the widely used spectral-prosodic information, achieving improvements on two emotional databases: the Berlin Database of Emotional Speech and the recently collected Athens Emotional States Inventory.
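As background to the abstract: instantaneous amplitude and frequency are the basic quantities behind modulation features. A minimal sketch of extracting them via the Hilbert analytic signal is shown below; this is only an illustration of the general concept, not the paper's method (the authors' feature extraction may instead use multiband filtering and energy-operator demodulation), and all names here are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_am_fm(x, fs):
    """Estimate instantaneous amplitude (envelope) and frequency of a signal
    using the analytic signal. Illustrative only; not the paper's algorithm."""
    analytic = hilbert(x)                           # x + j * Hilbert(x)
    inst_amp = np.abs(analytic)                     # amplitude envelope
    phase = np.unwrap(np.angle(analytic))           # continuous phase
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # phase derivative, in Hz
    return inst_amp, inst_freq

# Toy AM signal: 200 Hz carrier with a 5 Hz amplitude modulation.
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 200 * t)
amp, freq = instantaneous_am_fm(x, fs)
```

For this toy signal, the recovered envelope oscillates between roughly 0.5 and 1.5 and the instantaneous frequency stays near the 200 Hz carrier (away from edge effects); statistics of such trajectories are the kind of modulation-derived features the abstract refers to.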

published proceedings

  • 2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO)

author list (cited authors)

  • Chaspari, T., Dimitriadis, D., & Maragos, P.

complete list of authors

  • Chaspari, Theodora||Dimitriadis, Dimitrios||Maragos, Petros

publication date

  • January 2014