A comparative analysis of modeling and predicting perceived and induced emotions in sonification

Faranak Abri, Luis Felipe Gutiérrez, Prerit Datta, David R.W. Sears, Akbar Siami Namin, Keith S. Jones

Research output: Contribution to journal › Article › peer-review


Abstract

Sonification is the use of sound to convey information about data or events. Two types of emotions are associated with sounds: (1) “perceived” emotions, which listeners recognize as expressed by the sound, and (2) “induced” emotions, which listeners actually feel in response to the sound. Although listeners may widely agree on the perceived emotion for a given sound, they often disagree about the emotion it induces, which makes induced emotions difficult to model. This paper describes the development of several machine learning and deep learning models that predict the perceived and induced emotions associated with certain sounds, and it analyzes and compares the accuracy of those predictions. The results revealed that models built to predict perceived emotions are more accurate than those built to predict induced emotions. However, the gap in predictive power between the two can be narrowed substantially by optimizing the machine learning and deep learning models. This research has applications in the automated configuration of hardware devices and their integration with software components in the Internet of Things, where security is of utmost importance.
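
To make the modeling task concrete, the following is a minimal sketch of the kind of pipeline the abstract describes: time-averaged acoustic features are extracted from each sound clip and fed to a supervised model that predicts an emotion rating. This is not the authors' implementation; the feature set, the random-forest regressor, the 1-9 valence scale, and the synthetic stand-in data are all illustrative assumptions (real experiments would use labeled corpora such as IADS-E or EmoSoundscape, which appear in the keywords below).

```python
# Minimal sketch of a "perceived emotion" prediction pipeline for sound.
# Not the paper's implementation: feature choices, the model, the rating
# scale, and the synthetic data are illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

SR = 22050  # sample rate in Hz

def extract_features(y: np.ndarray, sr: int = SR) -> np.ndarray:
    """Summarize a clip with common acoustic features, averaged over time."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    rms = librosa.feature.rms(y=y).mean()
    return np.hstack([mfcc, centroid, rolloff, zcr, rms])

# Synthetic stand-in for a labeled corpus (e.g., IADS-E-style valence
# ratings on a 1-9 scale); real work would load rated audio files.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(SR * 2) for _ in range(40)]  # 2-second clips
valence = rng.uniform(1.0, 9.0, size=40)                  # rating per clip

X = np.vstack([extract_features(c) for c in clips])
X_tr, X_te, y_tr, y_te = train_test_split(X, valence, test_size=0.25,
                                          random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```

The same scaffold would apply to induced emotions by swapping in induced-emotion ratings as the target; per the abstract, those labels vary more across listeners, which is what makes that variant of the task harder.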

Original language: English
Article number: 2519
Journal: Electronics (Switzerland)
Volume: 10
Issue number: 20
DOIs
State: Published - Oct 1 2021

Keywords

  • Acoustic features
  • EmoSoundscape
  • Emotion prediction
  • IADSE
  • Internet of Things
  • Security alarm
  • Sonification
  • Sound analysis
