Classifying Songs with EEG
Related papers
A machine learning approach to EEG based prediction of user's music preferences
2019
Music has many benefits for our mood and feelings, especially when we get to choose our own favorite music. However, accessing one's favorite music is not equally easy for everyone. For motorically disabled and locked-in people, interacting with devices used for listening to music is challenging, since it requires physical interaction. Machine learning classification methods applied to EEG could prove useful for detecting individual musical preferences without any physical or verbal interaction. The two most common methods in EEG-based classification are artificial neural networks (ANN) and support vector machines (SVM). This study compares the performance of these two methods on the DEAP dataset of EEG-monitored participants watching music videos. The comparison offers insight into which machine learning method is most appropriate for music preference detection, contributing towards more accurate predictions for motorically disabled people...
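A minimal sketch of the kind of ANN-vs-SVM comparison this abstract describes, assuming band-power features per trial; the synthetic matrix stands in for preprocessed DEAP trials, and the feature dimensions and hyperparameters are illustrative choices, not the study's:

```python
# Hypothetical comparison of SVM and a small ANN (MLP) on EEG band-power
# features. Random data stands in for preprocessed DEAP trials.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(320, 160))   # e.g. 32 channels x 5 bands per trial
y = rng.integers(0, 2, size=320)  # like / dislike labels

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))]:
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```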
2021
Music has the ability to evoke a wide variety of emotions in human listeners. Research has shown that treatment for depression and mental health disorders is significantly more effective when it is complemented by music therapy. However, because each person experiences music-induced emotions differently, there is no systematic way to accurately predict how people will respond to different types of music at an individual level. In this experiment, a model is created to predict humans' emotional responses to music from both their electroencephalographic (EEG) data and the acoustic features of the music. Using recursive feature elimination (RFE) to extract the most relevant and best-performing features from the EEG and music, a regression model is fit that accurately correlates patients' actual music-induced emotional responses with the model's predicted responses. Reaching a mean correlation of r = 0.788, this model is significantly more accurate than previous works attempting to predic...
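A hedged sketch of the RFE-plus-regression pipeline the abstract names: select the most informative features, fit a regressor, and score it by the correlation between predicted and reported responses. The feature matrix, Ridge estimator, and feature count are stand-in assumptions:

```python
# Illustrative RFE + regression pipeline with synthetic features.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import RFE
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 80))  # combined EEG + acoustic features per excerpt
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200)  # emotion rating

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
# Recursively eliminate features down to the 10 most informative ones.
selector = RFE(Ridge(alpha=1.0), n_features_to_select=10).fit(X_tr, y_tr)
model = Ridge(alpha=1.0).fit(selector.transform(X_tr), y_tr)
r, p = pearsonr(y_te, model.predict(selector.transform(X_te)))
print(f"r = {r:.3f}, p = {p:.3g}")
```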
IEEE Transactions on Affective Computing, 2013
A time-windowing feature extraction approach based on time-frequency (TF) analysis is adopted here to investigate the time-course of the discrimination between musical appraisal electroencephalogram (EEG) responses, under the parameter of familiarity. An EEG data set, formed by the responses of nine subjects during music listening, along with self-reported ratings of liking and familiarity, is used. Features are extracted from the beta (13-30 Hz) and gamma (30-49 Hz) EEG bands in time windows of various lengths, by employing three TF distributions (spectrogram, Hilbert-Huang spectrum, and Zhao-Atlas-Marks transform). Subsequently, two classifiers (k-NN and SVM) are used to classify feature vectors in two categories, i.e., "like" and "dislike," under three cases of familiarity, i.e., regardless of familiarity (LD), familiar music (LDF), and unfamiliar music (LDUF). Key findings show that the best classification accuracy (CA) is higher and is achieved earlier in the LDF case (91.02 ± 1.45%, 7.5-10.5 s) as compared to the LDUF case (87.10 ± 1.84%, 10-15 s). Additionally, the best CAs in the LDF and LDUF cases are higher as compared to the general LD case (85.28 ± 0.77%). The latter results, along with neurophysiological correlates, are further discussed in the context of the existing literature on the time-course of music-induced affective responses and the role of familiarity.
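A minimal sketch of the windowed band-power idea, using only the spectrogram (one of the paper's three TF distributions): average power in the beta and gamma bands per trial, then classify like/dislike with k-NN and SVM. The sampling rate, single-channel signals, and window settings are assumptions, not the study's parameters:

```python
# Spectrogram band-power features for like/dislike classification.
import numpy as np
from scipy.signal import spectrogram
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

fs = 128  # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
trials = rng.normal(size=(100, fs * 10))  # 100 single-channel 10 s trials
labels = rng.integers(0, 2, size=100)     # like / dislike

def band_power(sig, lo, hi):
    """Mean spectrogram power within [lo, hi] Hz."""
    f, t, S = spectrogram(sig, fs=fs, nperseg=fs)
    return S[(f >= lo) & (f <= hi)].mean()

X = np.array([[band_power(s, 13, 30),    # beta band
               band_power(s, 30, 49)]    # gamma band
              for s in trials])
for clf in (KNeighborsClassifier(5), SVC()):
    print(type(clf).__name__, cross_val_score(clf, X, labels, cv=5).mean())
```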
Music-induced emotions can be predicted from a combination of brain activity and acoustic features
Brain and cognition, 2015
It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions a piece of music will induce in a given individual. We attempt to predict the music-induced emotional response in a listener by measuring activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests...
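A sketch of the fusion idea under assumptions: concatenate EEG features with acoustic descriptors of the music and fit a regression predicting the reported emotion. Both feature blocks are synthetic placeholders; in practice the acoustic descriptors might come from an audio toolkit such as librosa:

```python
# Combining EEG and acoustic features in a single regression model.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
eeg = rng.normal(size=(150, 40))       # e.g. band powers per electrode
acoustic = rng.normal(size=(150, 12))  # e.g. tempo, loudness, spectral stats
y = 0.4 * eeg[:, 0] + 0.6 * acoustic[:, 0] + rng.normal(scale=1.0, size=150)

X = np.hstack([eeg, acoustic])         # simple feature-level fusion
pred = cross_val_predict(LinearRegression(), X, y, cv=5)
r, p = pearsonr(y, pred)
print(f"EEG + acoustic fusion: r = {r:.3f} (p = {p:.3g})")
```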
Advances in Intelligent Systems and Computing, 2021
This paper proposes a methodology for investigating the musical preferences of the 18-24 age group. We conducted an electroencephalogram (EEG) experiment to collect individuals' responses to audio stimuli along with a measure of like or dislike for a piece of music. Machine learning classifiers (multilayer perceptron and support vector machine) and signal processing techniques (independent component analysis, ICA) were applied to the pre-processed dataset of 10 participants' EEG signals and preference ratings. Our classification model classified song preference with high accuracy. The ICA-based EEG signal processing enabled the identification of perceptual patterns via analysis of the spectral peaks, which suggests that the recorded brain activities were dependent on the respective song's rating.
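An illustrative sketch, not the authors' code: unmix each trial's channels with ICA, use component variances as simple features, and classify preference with the two classifiers the paper names. Data, channel counts, and component counts are all simulated assumptions:

```python
# ICA-derived features feeding MLP and SVM preference classifiers.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_trials, n_channels, n_samples = 80, 14, 512
eeg = rng.normal(size=(n_trials, n_channels, n_samples))
ratings = rng.integers(0, 2, size=n_trials)  # like / dislike

ica = FastICA(n_components=8, random_state=4)
feats = []
for trial in eeg:
    sources = ica.fit_transform(trial.T)  # samples x components
    feats.append(sources.var(axis=0))     # per-component power as features
X = np.array(feats)

for clf in (MLPClassifier(hidden_layer_sizes=(32,), max_iter=500), SVC()):
    print(type(clf).__name__, cross_val_score(clf, X, ratings, cv=5).mean())
```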
Frontiers in Neuroscience, 2014
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention nowadays due to its promise of potential applications such as musical affective brain-computer interface (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements. Using solely EEG signals to distinguish emotions has remained challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74-76% using solely the EEG modality was statistically comparable to that of the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61-67% in valence classification and from around 58-67% in arousal classification. The musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical contents in emotion modeling.
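A sketch of the paper's central comparison under assumptions: with features from only a few electrodes, check whether adding music features (e.g., timbre and loudness descriptors) improves valence classification. All features here are simulated placeholders:

```python
# EEG-only vs. EEG + music features, few-electrode setting.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 240
eeg_small = rng.normal(size=(n, 8))   # features from a small electrode set
music = rng.normal(size=(n, 10))      # timbre / loudness descriptors
valence = rng.integers(0, 2, size=n)  # low / high valence

acc_eeg = cross_val_score(SVC(), eeg_small, valence, cv=5).mean()
acc_fused = cross_val_score(SVC(), np.hstack([eeg_small, music]),
                            valence, cv=5).mean()
print(f"EEG-only: {acc_eeg:.2f}  EEG+music: {acc_fused:.2f}")
```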
EEG-Based Emotion Recognition in Music Listening
IEEE Transactions on Biomedical Engineering, 2010
Ongoing brain activity can be recorded as an electroencephalogram (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subjects' self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. A support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an averaged classification accuracy of 82.29% ± 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of emotional states in practical or clinical applications.
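A minimal sketch, not the authors' pipeline: a multi-class SVM over the kind of reduced, subject-independent feature set the paper identifies, predicting the four named emotional states. The feature matrix and labels are synthetic stand-ins:

```python
# Four-class SVM (joy, anger, sadness, pleasure) over 30 EEG features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(260, 30))    # 30 subject-independent features per trial
y = rng.integers(0, 4, size=260)  # 0..3 -> joy / anger / sadness / pleasure

acc = cross_val_score(make_pipeline(StandardScaler(), SVC()), X, y, cv=5)
print(f"4-class accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```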
It has been repeatedly reported that motivation for listening to music is majorly driven by the latter's emotional effect. There is a relative opposition to this approach, however, suggesting that music does not elicit true emotions. Counteracting this notion, contemporary research studies indicate that listeners do respond affectively to music, providing a scientific basis for differentially approaching and registering affective responses to music as of their behavioral or biological states. Nevertheless, no studies exist that combine the behavioral and neuroscientific research domains, offering a cross-referenced neuropsychological outcome, based on a non-personalized approach specifically using a continuous response methodology with ecologically valid musical stimuli for both research domains. Our study, trying to fill this void for the first time, discusses a relevant proof-of-concept protocol, and presents the technical outline on how to multimodally measure elicited responses on evoked emot...
Mental state and emotion detection from musically stimulated EEG
Brain Informatics, 2018
This literature survey attempts to clarify the different approaches taken to study the impact of musical stimuli on the human brain using the EEG modality. It surveys the field through various aspects of such studies, specifically the experimental protocol, the EEG machine, the number of channels investigated, the features extracted, the categories of emotions, the brain area, the brainwaves, the statistical tests, and the machine learning algorithms used for classification and validation of the developed model. The article comments on the particular weaknesses and strengths of these different approaches. Ultimately, the review concludes with a suitable method for studying the impact of musical stimuli on the brain and the implications of such studies.