
Emotional States Associated with Music: Classification, Prediction of Changes, and Consideration in Recommendation


Article No. 4, Pages 1–36

Published: 25 March 2015

Abstract

We present several interrelated technical and empirical contributions to the problem of emotion-based music recommendation and show how they can be applied in a possible usage scenario. The contributions are (1) a new three-dimensional resonance-arousal-valence model for the representation of emotion expressed in music, together with methods for automatically classifying a piece of music in terms of this model, using robust regression methods applied to musical/acoustic features; (2) methods for predicting a listener’s emotional state on the assumption that the emotional state has been determined entirely by a sequence of pieces of music recently listened to, using conditional random fields and taking into account the decay of emotion intensity over time; and (3) a method for selecting a ranked list of pieces of music that match a particular emotional state, using a minimization iteration method. A series of experiments yields information about the validity of our operationalizations of these contributions. Throughout the article, we refer to an illustrative usage scenario in which all of these contributions can be exploited, where it is assumed that (1) a listener’s emotional state has been determined entirely by the music that he or she has been listening to and (2) the listener wants to hear additional music that matches his or her current emotional state. The contributions are intended to be useful in a variety of other scenarios as well.
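The decay-aware state estimation and state-matched ranking described in contributions (2) and (3) can be sketched in simplified form. This is a minimal illustration, not the paper's actual model: it assumes exponential decay of emotion intensity, a decay-weighted average in place of the conditional random field, and Euclidean distance in the three-dimensional resonance-arousal-valence space; the function names and the half-life parameter are illustrative assumptions.

```python
import math

def decayed_state(history, now, half_life=300.0):
    """Estimate the listener's current (resonance, arousal, valence) state
    as a decay-weighted average over recently played pieces.

    history: list of (play_time_seconds, (r, a, v)) pairs.
    Assumes (illustratively) exponential decay of emotion intensity.
    """
    lam = math.log(2.0) / half_life
    weighted = [0.0, 0.0, 0.0]
    total_weight = 0.0
    for play_time, rav in history:
        w = math.exp(-lam * (now - play_time))  # older pieces contribute less
        total_weight += w
        for i in range(3):
            weighted[i] += w * rav[i]
    if total_weight == 0.0:
        return (0.0, 0.0, 0.0)  # neutral state when there is no history
    return tuple(x / total_weight for x in weighted)

def rank_by_match(state, candidates):
    """Rank candidate pieces by Euclidean distance to the target state.

    candidates: list of (piece_id, (r, a, v)) pairs; closest match first.
    """
    def dist(rav):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(rav, state)))
    return sorted(candidates, key=lambda item: dist(item[1]))
```

Under these assumptions, a piece played two half-lives ago contributes only a quarter of the weight of a piece finishing now, so the estimated state is pulled toward the most recent listening.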



Published In


ACM Transactions on Interactive Intelligent Systems Volume 5, Issue 1

March 2015

164 pages

Copyright © 2015 ACM.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 March 2015

Accepted: 01 January 2015

Revised: 01 December 2014

Received: 01 February 2012

Published in TIIS Volume 5, Issue 1


Author Tags

  1. musical emotion
  2. affective computing
  3. conditional random fields
  4. emotional state
  5. music emotion recognition
  6. music recommendation


Affiliations

James J. Deng

Hong Kong Baptist University, Hong Kong

Clement H. C. Leung

Hong Kong Baptist University, Hong Kong

Alfredo Milani

University of Perugia, Italy

Li Chen

Hong Kong Baptist University, Hong Kong