Extraction of User Preference for Video Stimuli Using EEG-Based User Responses

Emotion recognition based on EEG signals during watching video clips

Brain signal analysis for human emotion recognition plays an important role in psychology, management, and human–machine interface design. The electroencephalogram (EEG) is a reflection of brain activity; by studying and analysing these signals we can detect changes in emotional state. To do so, it is necessary to select appropriate EEG channels, which are placed mostly on the frontal part of the head. In this paper we used video stimuli to induce happy and sad moods in 20 participants. To classify the emotions experienced by the volunteers, we applied five different classification methods to all features extracted from the signals in order to obtain the best result. We observed that the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) achieved the highest emotion-recognition accuracy.
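A comparison like the one described above — SVM versus LDA on features extracted from EEG trials — can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the paper's pipeline: the feature values, trial counts, and channel names are all hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical band-power features from frontal channels (e.g. Fp1, Fp2, F3, F4):
# 40 trials x 8 features, with "happy" trials shifted to simulate a class effect.
X_sad = rng.normal(0.0, 1.0, size=(20, 8))
X_happy = rng.normal(1.5, 1.0, size=(20, 8))
X = np.vstack([X_sad, X_happy])
y = np.array([0] * 20 + [1] * 20)  # 0 = sad, 1 = happy

# 5-fold cross-validated accuracy for each classifier.
svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
```

In practice the features would come from preprocessed EEG segments (e.g. band powers per channel), and cross-validation would be done per participant.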

Emotion recognition based on EEG features in movie clips with channel selection

Brain Informatics

Emotion plays an important role in human interaction. People can express their emotions in words, voice intonation, facial expression, and body language. However, brain-computer interface (BCI) systems have not reached the desired level of interpreting emotions. Automatic emotion recognition based on BCI systems has been a topic of intensive research in the last few decades. Electroencephalogram (EEG) signals are one of the most crucial resources for these systems. The main advantage of using EEG signals is that they reflect real emotion and can easily be processed by computer systems. In this study, EEG signals related to positive and negative emotions have been classified with channel selection as a preprocessing step. The Self-Assessment Manikin was used to determine emotional states. We employed the discrete wavelet transform and machine learning techniques such as a multilayer perceptron neural network (MLPNN) and the k-nearest neighbor (kNN) algorithm to classify EEG signals. The classifier algorithms were initially used for channel selection. EEG channels for each participant were evaluated separately, and the five EEG channels that offered the best classification performance were determined. Final feature vectors were then obtained by combining the features of EEG segments belonging to these channels. The final feature vectors with related positive and negative emotions were classified separately using the MLPNN and kNN algorithms. The classification performances obtained with the two algorithms were computed and compared. The average overall accuracies were 77.14 and 72.92% for MLPNN and kNN, respectively.
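The wavelet-plus-classifier pipeline above can be illustrated compactly. The sketch below uses a hand-rolled Haar DWT (the paper does not specify the wavelet; Haar is chosen only to keep the example dependency-free) to turn each EEG segment into per-band energies, then classifies with kNN. The sampling rate, signal model, and class effect (a stronger 10 Hz component for one class) are all synthetic assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def haar_dwt_energies(signal, levels=4):
    """Decompose a 1-D signal with a Haar DWT and return the energy of
    each detail band plus the final approximation (levels + 1 values)."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if len(x) % 2:                      # pad to even length
            x = np.append(x, x[-1])
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        energies.append(float(np.sum(detail ** 2)))
        x = approx
    energies.append(float(np.sum(x ** 2)))
    return energies

rng = np.random.default_rng(1)
t = np.arange(256) / 128.0  # 2 s at a hypothetical 128 Hz sampling rate

def make_trial(alpha_gain):
    """Synthetic EEG segment: a 10 Hz component plus noise (illustrative)."""
    return alpha_gain * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

# One class gets a stronger 10 Hz component so the bands are separable.
X = np.array([haar_dwt_energies(make_trial(g)) for g in [0.2] * 30 + [2.0] * 30])
y = np.array([0] * 30 + [1] * 30)

# Interleaved train/test split, then kNN as in the study's comparison.
knn = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])
acc = knn.score(X[1::2], y[1::2])
```

Channel selection would repeat this per channel and keep the five channels with the best cross-validated accuracy before concatenating their features.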

EEG-based recognition of video-induced emotions: Selecting subject-independent feature set

2013

Abstract—Emotions are fundamental for everyday life, affecting our communication, learning, perception, and decision making. Including emotions in human-computer interaction (HCI) can be seen as a significant step forward, offering great potential for developing advanced future technologies. Since the electrical activity of the brain is affected by emotions, the electroencephalogram (EEG) offers an interesting channel for improving HCI. In this paper, the selection of a subject-independent feature set for EEG-based emotion recognition is studied. We investigate the effect of different feature sets in classifying a person's arousal and valence while watching videos with emotional content. The classification performance is optimized by applying a sequential forward floating search algorithm for feature selection. The best classification rate (65.1% for arousal and 63.0% for valence) is obtained with a feature set containing power spectral features from the frequency band of 1-32 Hz. The proposed approach substantially improves on the classification rates reported in the literature. In the future, further analysis of the video-induced EEG changes, including the topographical differences in the spectral features, is needed.
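The feature-selection step described above can be sketched with scikit-learn. Note this uses plain sequential forward selection (`SequentialFeatureSelector`), which lacks the conditional-removal "floating" step of SFFS, and the data are synthetic: two of ten hypothetical spectral-power features carry the class information.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

# 60 trials x 10 hypothetical spectral-power features; only features 0 and 3
# are informative, the rest are noise.
n = 60
y = np.array([0] * (n // 2) + [1] * (n // 2))
X = rng.normal(0, 1, size=(n, 10))
X[y == 1, 0] += 2.0
X[y == 1, 3] += 2.0

# Greedily add features that most improve cross-validated accuracy.
selector = SequentialFeatureSelector(
    KNeighborsClassifier(n_neighbors=5),
    n_features_to_select=2, direction="forward", cv=5)
selector.fit(X, y)
chosen = sorted(np.flatnonzero(selector.get_support()))
```

A full SFFS additionally tries dropping previously chosen features after each addition, which helps escape greedy local optima.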

Emotion Analysis from Physiological Signal Using EEG

In the modern world, listening to a song or watching a video has become an essential form of entertainment. Music video content (MVC) must be retrieved based on the emotional information in a person's state of mind. Many studies focus on the relationship between videos and users' induced physiological and psychological responses. The existing system performs emotion analysis using single-trial classification of arousal, valence, and liking, with features extracted from the electroencephalogram (EEG), peripheral physiological signals, and multimedia content analysis (MCA) modalities. It uses a semi-automatic stimuli selection method based on affective tags, which was validated by an analysis of the ratings given by the participants. However, it has limitations regarding signal noise, physiological differences among individuals, and the limited quality of self-assessments. To overcome these limitations, it is necessary to develop a new technique for an effective MVC model. In the proposed work, a new framework for personalized MV affective analysis, visualization, and retrieval is used. By simulating the human affective response mechanism, affective video content analysis extracts the affective information contained in videos, and, with this information, natural, user-friendly, and effective MVC access strategies could be developed. Based on the values retrieved by Independent Component Analysis (ICA), the music video is retrieved from large-scale MV databases. The proposed approach may provide an efficient mechanism for returning search results with high precision and minimal error, helping to overcome the current limitations and improve the final performance of affective computation.

Detecting user attention to video segments using interval EEG features

Expert Systems with Applications, 2019

Highlights:
- A method of detecting the top 20% of viewer attention to video segments is proposed.
- This is the first study to detect viewer attention during video viewing.
- Subject-independent models unbiased toward specific genres are evaluated.
- The all-14-channel, single-channel, and selected multi-channel models are included.
- The interval band ratio features are the most suitable for all the types of models.

Predicting Premature Video Skipping and Viewer Interest from EEG Recordings

Entropy, 2019

Brain–computer interfacing has enjoyed growing attention, not only due to the stunning demonstrations with severely disabled patients, but also to the advent of economically viable solutions in areas such as neuromarketing, mental state monitoring, and future human–machine interaction. An interesting case, at least for neuromarketers, is monitoring a customer's mental state in response to watching a commercial. In this paper, as a novelty, we propose a method to predict from electroencephalography (EEG) recordings whether individuals decide to skip watching a video trailer. Based on multiscale sample entropy and signal power, indices were computed that gauge the viewer's engagement and emotional affect. We then trained a support vector machine (SVM), a k-nearest neighbor (kNN), and a random forest (RF) classifier to predict whether the viewer declares interest in watching the video and whether he/she decides to skip it prematurely. Our model achieved an average single-subject classification accuracy of 75.803% for skipping and 73.3% for viewer interest with the SVM, 82.223% for skipping and 78.333% for viewer interest with the kNN, and 80.003% for skipping and 75.555% for interest with the RF. We conclude that EEG can provide indications of viewer interest and skipping behavior, and we provide directions for future research.
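Sample entropy, the core index mentioned above, measures a signal's irregularity: low values for predictable signals, high values for noisy ones. Below is a compact, slightly simplified implementation (it counts all overlapping templates rather than the classic N−m truncation); the test signals are synthetic and the parameters m = 2, r = 0.2·SD follow common convention rather than the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal; r is a fraction of the signal's SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        # All overlapping templates of length mm.
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        # Chebyshev distance between every pair of templates (i < j).
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        iu = np.triu_indices(len(templ), k=1)
        return np.count_nonzero(d[iu] <= tol)

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(3)
t = np.arange(1000)
regular = np.sin(2 * np.pi * t / 50)   # predictable oscillation -> low SampEn
irregular = rng.normal(size=1000)      # white noise -> high SampEn
se_reg = sample_entropy(regular)
se_irr = sample_entropy(irregular)
```

The multiscale variant repeats this computation on coarse-grained (block-averaged) versions of the signal at several scales.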

Individual Classification of Emotions Using EEG

Many studies suggest that EEG signals provide enough information for the detection of human emotions with feature-based classification methods. However, very few studies have reported a classification method that reliably works for individual participants (classification accuracy well over 90%). Further, a necessary condition for real-life applications is a method that achieves, irrespective of the immense individual differences among participants, minimal variance in individual classification accuracy. We conducted offline computer-aided emotion classification experiments using strict experimental controls. We analyzed EEG data collected from nine participants using validated film clips to induce four different emotional states (amused, disgusted, sad, and neutral). The classification rate was evaluated using both unsupervised and supervised learning algorithms (in total, seven state-of-the-art algorithms were tested). The highest classification accuracy was obtained with a Support Vector Machine, with an average accuracy of 97.2%. The effectiveness of the experimental protocol was further supported by the very small variance among individual participants' classification accuracies (within the interval 96.7%-98.3%). Classification accuracy evaluated on a reduced number of electrodes suggested, consistently with psychological constructionist approaches, that we were able to classify emotions by considering cortical activity from areas involved in emotion representation. The experimental protocol therefore appeared to be a key factor in improving the classification outcome by means of data quality improvements.

Emotion Analysis using Different Stimuli with EEG Signals in Emotional Space

Automatic detection of people's emotional states is one of the difficult tasks for human-machine interfaces. EEG signals, which are very difficult for a person to control consciously, are also used in emotion recognition tasks. In this study, an emotion analysis and classification study was conducted using EEG signals for different types of stimuli. The combination of audio and video information was shown to be more effective for classifying positive/negative (high/low) emotion using the wavelet transform of EEG signals, and a true positive rate of 81.6% was obtained in the valence dimension. Audio information was found to be more effective than video information for classification in the arousal dimension, and a true positive rate of 73.7% was obtained when both the audio and audio+video stimuli were used. Four-class classification performance was also examined in the valence-arousal space.

High-frequency electroencephalographic activity in left temporal area is associated with pleasant emotion induced by video clips

Computational intelligence and neuroscience, 2015

Recent findings suggest that specific neural correlates for the key elements of basic emotions do exist and can be identified by neuroimaging techniques. In this paper, the electroencephalogram (EEG) is used to explore markers for video-induced emotions. The problem is approached from a classifier perspective: the features that perform best in classifying a person's valence and arousal while watching video clips with audiovisual emotional content are searched from a large feature set constructed from the EEG spectral powers of single channels as well as power differences between specific channel pairs. The feature selection is carried out using a sequential forward floating search method and is done separately for the classification of valence and arousal, both derived from the emotional keyword that the subject had chosen after seeing the clips. The proposed classifier-based approach reveals a clear association between the increased high-frequency (15-32 Hz) activity in the left ...
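The two feature families mentioned above — single-channel spectral power and power differences between channel pairs — can be sketched as follows. The sketch computes 15-32 Hz band power for a synthetic "left temporal" channel (labeled T7 here as an assumption; the paper's montage is not given) and its right counterpart, plus their log-power difference as an asymmetry feature.

```python
import numpy as np
from scipy.signal import welch

fs = 128  # hypothetical sampling rate (Hz)
rng = np.random.default_rng(4)
t = np.arange(4 * fs) / fs  # 4 s of signal

def band_power(sig, lo, hi):
    """Mean power spectral density in [lo, hi] Hz using Welch's method."""
    f, pxx = welch(sig, fs=fs, nperseg=fs)
    mask = (f >= lo) & (f <= hi)
    return float(pxx[mask].mean())

# Synthetic channels: the 'left temporal' channel (T7) carries extra
# 20 Hz (high-frequency band) activity relative to its right counterpart (T8).
noise = lambda: rng.normal(0, 1, t.size)
t7 = noise() + 1.5 * np.sin(2 * np.pi * 20 * t)
t8 = noise()

hf_left = band_power(t7, 15, 32)
hf_right = band_power(t8, 15, 32)
asym = np.log(hf_left) - np.log(hf_right)  # channel-pair difference feature
```

Features like `asym`, computed for many channel pairs and frequency bands, form the candidate set that the sequential forward floating search then prunes.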