Affective Recognition in Dynamic and Interactive Virtual Environments

Towards emotion recognition for virtual environments: an evaluation of EEG features on a benchmark dataset

One of the challenges in virtual environments is the difficulty users have in interacting with these increasingly complex systems. Ultimately, endowing machines with the ability to perceive users' emotions will enable a more intuitive and reliable interaction. Consequently, using the electroencephalogram as a bio-signal sensor, the affective state of a user can be modelled and subsequently utilised to achieve a system that can recognise and react to the user's emotions. This paper investigates features extracted from electroencephalogram signals for the purpose of affective state modelling based on Russell's Circumplex Model. Investigations are presented that aim to provide the foundation for future work in modelling user affect to enhance interaction experience in virtual environments. The DEAP dataset was used within this work, along with a Support Vector Machine and Random Forest, which yielded reasonable classification accuracies for Valence and Arousal using feature vectors based on statistical measurements, band power from the α, β, δ, and θ waves, and Higher Order Crossings of the EEG signal.
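As an illustration of the feature pipeline described above, the sketch below computes band-power and Higher Order Crossings features from single EEG epochs and cross-validates an SVM, roughly in the spirit of the paper. It is a minimal sketch, not the authors' exact pipeline: the sampling rate, band edges, HOC order, and the synthetic stand-in data for DEAP epochs are all assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 128  # DEAP's preprocessed sampling rate

def band_power(x, fs, band):
    """Mean power spectral density inside a frequency band (Welch's method)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def hoc_features(x, order=10):
    """Higher Order Crossings: zero-crossing counts of the successively differenced signal."""
    feats, z = [], x - x.mean()
    for _ in range(order):
        signs = np.signbit(z).astype(np.int8)
        feats.append(int(np.sum(np.diff(signs) != 0)))
        z = np.diff(z)
    return feats

def features(epoch, fs=FS):
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    f = [band_power(epoch, fs, b) for b in bands.values()]
    f += [epoch.mean(), epoch.std()]  # simple statistical measures
    return f + hoc_features(epoch)

# Random stand-in for real DEAP epochs: 40 single-channel, 10 s trials, binary valence labels.
rng = np.random.default_rng(0)
X = np.array([features(rng.standard_normal(FS * 10)) for _ in range(40)])
y = rng.integers(0, 2, size=40)
print("CV accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```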

Affective Recognition for Multimedia Environments: A Review

Detecting emotional responses in multimedia environments is an academically and technologically challenging research issue. In the domain of Affective Computing, researchers have employed a wide range of affective stimuli, from non-interactive and static stimuli (e.g. affective images) to highly interactive and dynamic environments (affective virtual realities), to measure and interpret human psychological and physiological emotional behaviours. Various psychophysiological parameters (e.g. Electroencephalography, Galvanic Skin Response, Heart Rate, etc.) have been employed and investigated in order to detect and quantify human affective states. In this paper, we present a detailed literature review of over 33 affective computing studies undertaken since 1993. All aspects of these studies (stimulus type, pre-processing, windowing, features, classification technique, etc.) are reported in detail. We believe that this paper not only summarises the breadth of research over the past...

Optimal Arousal Identification and Classification for Affective Computing Using Physiological Signals: Virtual Reality Stroop Task

IEEE Transactions on Affective Computing, 2010

A closed-loop system that offers real-time assessment and manipulation of a user's affective and cognitive states is very useful in developing adaptive environments which respond in a rational and strategic fashion to real-time changes in user affect, cognition, and motivation. The goal is to progress the user from suboptimal cognitive and affective states toward an optimal state that enhances user performance. In order to achieve this, there is a need to assess both 1) the optimal affective/cognitive state and 2) the observed user state. This paper presents approaches for assessing these two states. Arousal, an important dimension of affect, is focused upon because of its close relation to a user's cognitive performance, as indicated by the Yerkes-Dodson Law. Herein, we make use of a Virtual Reality Stroop Task (VRST) from the Virtual Reality Cognitive Performance Assessment Test (VRCPAT) to identify the optimal arousal level that can serve as the affective/cognitive state goal. Three stimulus presentations (with distinct arousal levels) in the VRST are selected. We demonstrate that when reaction time is used as the performance measure, one of the three stimulus presentations can elicit the optimal level of arousal for most subjects. Further, results suggest that high classification rates can be achieved when a support vector machine is used to classify the psychophysiological responses (skin conductance level, respiration, ECG, and EEG) in these three stimulus presentations into three arousal levels. This research reflects progress toward the implementation of a closed-loop affective computing system.
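The paper's three-class arousal classification from fused psychophysiological channels can be sketched as feature-level fusion followed by a multiclass SVM. The per-channel feature counts and the random stand-in data below are assumptions; the study's actual feature definitions for skin conductance level, respiration, ECG, and EEG are only named in the abstract, not specified here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 90

# Hypothetical per-trial summary features from each sensor stream.
scl  = rng.standard_normal((n_trials, 3))   # skin conductance level stats
resp = rng.standard_normal((n_trials, 4))   # respiration rate/depth stats
ecg  = rng.standard_normal((n_trials, 5))   # e.g. heart rate, HRV measures
eeg  = rng.standard_normal((n_trials, 8))   # e.g. band powers

X = np.hstack([scl, resp, ecg, eeg])        # feature-level fusion
y = rng.integers(0, 3, size=n_trials)       # low / optimal / high arousal

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```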

Physiological Measurement for Emotion Recognition in Virtual Reality

2019 2nd International Conference on Data Intelligence and Security (ICDIS)

In this work, various non-invasive sensors are used to collect physiological data during subject interaction with virtual reality environments. The collected data are used to recognize the subjects' emotional responses to stimuli. The shortcomings and challenges faced during the data collection and labeling process are discussed, and solutions are proposed. A machine learning approach is adopted for emotion classification. Our experiments show that feature extraction is a crucial step in the classification process. A collection of general-purpose features that can be extracted from a variety of physiological biosignals is proposed. Our experimental results show that the proposed feature set achieves better emotion classification accuracy than the traditional domain-specific features used in previous studies.
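The abstract does not list the proposed feature set, but a plausible example of "general-purpose" features, i.e. descriptors computable from any 1-D biosignal regardless of modality, is sketched below; every feature choice here is an assumption made for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def generic_features(x):
    """Signal-agnostic descriptors applicable to any 1-D physiological stream."""
    dx = np.diff(x)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "min": x.min(),
        "max": x.max(),
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "mean_abs_diff": np.abs(dx).mean(),  # first-difference energy
        "rms": np.sqrt((x ** 2).mean()),
    }

# The same extractor applies unchanged to GSR, ECG, respiration, EEG, ...
sig = np.sin(np.linspace(0, 20, 1000)) + 0.1 * np.random.default_rng(2).standard_normal(1000)
print(generic_features(sig))
```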

Automatic detection and classification of emotional states in virtual reality and standard environments (LCD): comparing valence and arousal of induced emotions

Virtual Reality

The following case study was carried out on a sample of one experimental and one control group. The participants of the experimental group watched the movie section from the standardized LATEMO-E database via virtual reality (VR) on Oculus Rift S and HTC Vive Pro devices. In the control group, the movie section was displayed on the LCD monitor. The movie section was categorized according to Ekman's and Russell's classification model of evoking an emotional state. The range of valence and arousal was determined in both observed groups. Valence and arousal were measured in each group using a Self-Assessment Manikin (SAM). The control group was captured by a camera and evaluated by Affdex software from Affectiva in order to compare valence values. The control group showed a very high correlation (0.92) between SAM and Affdex results. Having considered the Affdex results as a reference value, it can be concluded that SAM participants evaluated their emotions objectively. The res...
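As a small illustration of the comparison method, the snippet below computes the Pearson correlation between two valence series, as the study does between SAM self-reports and Affdex estimates. The numbers are invented placeholders, not the study's data; only the reported correlation of about 0.92 comes from the abstract.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant valence scores (the study reports r = 0.92).
sam_valence    = np.array([3.0, 7.5, 5.0, 8.0, 2.5, 6.0, 4.5, 7.0])
affdex_valence = np.array([2.8, 7.2, 5.4, 8.3, 2.1, 6.3, 4.0, 7.4])

r, p = pearsonr(sam_valence, affdex_valence)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```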

Emotion assessment for affective computing based on physiological responses

2012 IEEE International Conference on Fuzzy Systems, 2012

[Front matter and table of contents omitted.]

Chapter 1. In order to obtain such an "emotional machine" it is necessary to include emotions in the human-machine communication loop (see Figure 1.1). This is what is defined as "affective computing". According to Picard [2], affective computing "proposes to give computers the ability to recognize [and] express […] emotions". Synthetic expression of emotions can be achieved by enabling avatars or simpler agents to have facial expressions, different tones of voice, and empathic behaviours [3, 4]. Detection of human emotions can be realized by monitoring and interpreting the different cues given in both verbal and non-verbal communication. However, a system that is "emotionally intelligent" should not only detect and express emotions; it should also take the proper action in response to the detected emotion. Expressing the adequate emotion is thus one of the outputs of this decision making. The proper action to take depends on the application: examples of reactions are providing help when the user feels helpless, or lowering task demand when he or she is highly stressed. Many applications are detailed in Section 1.2.2. Accurately assessing emotions is thus a critical step toward affective computing, since it determines the reaction of the system. The present work focuses on this first step of affective computing by trying to reliably assess emotion from several emotional cues.

[Figure 1.1: Including emotions in the human-machine loop.]

1.2 Emotion assessment. 1.2.1 Multimodal expression of emotion. A modality is defined as a path used to carry information for the purpose of interaction. In HMI there are two possible approaches to defining a modality: from the machine point of view and from the user point of view. On the machine side, a modality refers to the processes that generate information to the physical world and interpret information from it. It is thus possible to distinguish between input and output modalities and to associate them with the corresponding communication devices. A keyboard, a mouse and a pad, with the associated information processes, are typical input modalities, while text, images and music presented on screens and speakers are typical output modalities.

Games are also interesting from an HMI point of view because they are an ideal ground for designing new ways to communicate with the machine. One of the main goals of games is to provide emotional experiences such as fun and excitement, which generally occur when the player is strongly involved in the course of the game action. However, a loss of involvement can occur if the game does not correspond to the player's expectations and competences; he might […]

Measuring various physiological signals requires the use of several sensors that are sometimes quite obtrusive, since they can monopolize the use of one hand and are not comfortable. The price of those sensors should also be taken into account. For these reasons, the issue of determining the most useful sensors is important. Finally, there is also variability in physiological signals, from person to person but also from day to day, which makes it difficult to design a system that […]

[Table (truncated): Reference / Basic emotions / Criteria — Ekman [42]: Anger, …]
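The decision step described above, mapping a detected affective state to an application-appropriate reaction, can be made concrete with a toy policy. This is a hypothetical sketch built from the two examples in the text (offering help to a helpless user, lowering demand for a stressed one); the state names and reactions are assumptions, not the thesis's design.

```python
from enum import Enum, auto

class Affect(Enum):
    HELPLESS = auto()
    STRESSED = auto()
    ENGAGED = auto()

def choose_reaction(state: Affect) -> str:
    """Map a detected affective state to a system reaction (hypothetical policy)."""
    if state is Affect.HELPLESS:
        return "offer contextual help"
    if state is Affect.STRESSED:
        return "lower task demand"
    return "maintain current difficulty"

print(choose_reaction(Affect.STRESSED))  # -> lower task demand
```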

Emotion Interaction with Virtual Reality Using Hybrid Emotion Classification Technique toward Brain Signals

Human-computer interaction (HCI) is considered a central aspect of virtual reality (VR), especially in the context of emotion, where users can interact with a virtual environment through their emotions and those emotions can be expressed within it. Over the last decade, many researchers have focused on emotion classification in order to employ emotion in interaction with virtual reality, with classification based on electroencephalogram (EEG) brain signals. This paper provides a new hybrid emotion classification method that combines self-assessment, the arousal-valence dimensional model, and the variance of brain hemisphere activity to classify users' emotions. Self-assessment is a standard technique for assessing emotion; the arousal-valence dimensional model classifies aroused emotions along two axes; and hemispheric activity classifies emotion according to differences between the right and left hemispheres. The method can classify human emotions, and two basic emotions are highlighted, i.e. happy and sad. EEG brain signals are used to interpret the users' emotional states. Emotion interaction is expressed through the walking style of a 3D model in VR. The results show that the hybrid method classifies the highlighted emotions under different circumstances and that the 3D model changes its walking style according to the classified emotions. Finally, the outcome is believed to offer a new technique for classifying emotions with feedback through the 3D virtual model's walking expression.
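One component of such a hybrid scheme, the hemispheric-activity classifier, can be sketched via the common frontal alpha-asymmetry index, under which relatively greater right-hemisphere alpha power is conventionally read as positive valence (alpha being inversely related to cortical activation). The sketch below assumes F3/F4 electrodes, a 256 Hz sampling rate, and a zero decision threshold; the paper's actual combination with self-assessment and the arousal-valence model is not reproduced.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate

def alpha_power(x, fs=FS):
    """Mean PSD in the 8-13 Hz alpha band."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    return psd[(freqs >= 8) & (freqs <= 13)].mean()

def hemispheric_valence(left_ch, right_ch):
    """Asymmetry index ln(alpha_right) - ln(alpha_left); positive -> positive valence."""
    asym = np.log(alpha_power(right_ch)) - np.log(alpha_power(left_ch))
    return "happy" if asym > 0 else "sad"

rng = np.random.default_rng(3)
f3, f4 = rng.standard_normal(FS * 5), rng.standard_normal(FS * 5)  # stand-ins for F3/F4
print(hemispheric_valence(f3, f4))
```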

Selection of the Most Relevant Physiological Features for Classifying Emotion

With the development of wearable physiological sensors, emotion estimation has become a hot topic in the literature. Databases of physiological signals recorded during emotional stimulation are acquired and machine learning algorithms are applied. Yet which signals are most relevant for detecting emotions is still an open question. In order to better understand the contribution of each signal, and thus each sensor, to the emotion estimation problem, several feature selection algorithms were run on two databases freely available to the research community (DEAP and MAHNOB-HCI). Both databases manipulate emotions by showing participants short videos (video clips or movie excerpts, respectively). Features extracted from the galvanic skin response were found to be relevant for arousal estimation in both databases. Other relevant features were the eye closing rate for arousal and the variance of the zygomatic EMG for valence (those features being available only for DEAP). The heart rate variability power in three frequency bands also appeared to be very relevant, but only for the MAHNOB-HCI database, where heart rate was measured using ECG (whereas DEAP used PPG). This suggests that PPG is not accurate enough to estimate HRV precisely. Finally, we showed on the DEAP database that emotion classifiers need only a few well-selected features to obtain performance similar to classifiers from the literature that use more features.
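A minimal sketch of the selection-then-classification idea follows, using univariate mutual-information ranking inside a cross-validated pipeline. The paper's specific selection algorithms are not named in the abstract, so this choice is an assumption, as are the synthetic data standing in for physiological features.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for a physiological feature matrix (GSR, EMG, HRV bands, ...):
# only 5 of the 40 features actually carry class information.
X, y = make_classification(n_samples=200, n_features=40, n_informative=5,
                           n_redundant=0, random_state=0)

# Selection happens inside the pipeline, so each CV fold selects on its own training split.
full  = cross_val_score(SVC(), X, y, cv=5).mean()
top5  = cross_val_score(make_pipeline(SelectKBest(mutual_info_classif, k=5), SVC()),
                        X, y, cv=5).mean()
print(f"all 40 features: {full:.2f}   top 5 features: {top5:.2f}")
```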

From Physiological Signals to Emotions: Implementing and Comparing Selected Methods for Feature Extraction and Classification

2005 IEEE International Conference on Multimedia and Expo

Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels, such as facial expressions or speech. In this paper, we discuss the most important stages of a fully implemented emotion recognition system including data analysis and classification. For collecting physiological signals in different affective states, we used a music induction method which elicits natural emotional reactions from the subject. Four-channel biosensors are used to obtain electromyogram, electrocardiogram, skin conductivity and respiration changes. After calculating a sufficient amount of features from the raw signals, several feature selection/reduction methods are tested to extract a new feature set consisting of the most significant features for improving classification performance. Three well-known classifiers, linear discriminant function, k-nearest neighbour and multilayer perceptron, are then used to perform supervised classification.
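The classifier comparison can be sketched with scikit-learn equivalents of the three methods named in the abstract. The synthetic data, feature counts, class count, and hyperparameters below are assumptions; only the choice of LDA, kNN, and MLP comes from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for features derived from EMG, ECG, skin conductivity and respiration.
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scaling matters for kNN and MLP
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```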