Processing emotions in sounds: cross-domain aftereffects of vocal utterances and musical sounds. Academic Article

abstract

  • Nonlinguistic signals in the voice and in musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains is unclear because direct evidence is scarce. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217-223. doi: 10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, then heard a test sound and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction, from vocal utterances to musical sounds, but not vice versa. This effect occurred primarily for angry vocal sounds. It is argued that there is a unidirectional relationship between vocal and musical sounds, whereby emotion processing of vocal sounds encompasses musical sounds but not vice versa.

published proceedings

  • Cogn Emot

altmetric score

  • 2.6

author list (cited authors)

  • Bowman, C., & Yamauchi, T.

citation count

  • 3

complete list of authors

  • Bowman, Casady; Yamauchi, Takashi

publication date

  • December 2017