E. Douglas-Cowie - Academia.edu

Papers by E. Douglas-Cowie

Report on the first HUMAINE summer school

'FEELTRACE': An instrument for recording perceived emotion in real time

FEELTRACE is an instrument developed to let observers track the emotional content of a stimulus as they perceive it over time, allowing the emotional dynamics of speech episodes to be examined. It is based on activation-evaluation space, a representation derived from psychology. The activation dimension measures how dynamic the emotional state is; the evaluation dimension is a global measure of the positive or negative feeling associated with the state. Research suggests that the space is naturally circular, i.e. states which are at the limit of emotional intensity define a circle, with alert neutrality at the centre. To turn those ideas into a recording tool, the space was represented by a circle on a computer screen, and observers described perceived emotional state by moving a pointer (in the form of a disc) to the appropriate point in the circle, using a mouse. Prototypes were tested, and in the light of results, refinements were made to ensure that outputs were as consistent and meaningful as possible. These include colour coding the pointer in a way that users readily associate with the relevant emotional state; presenting key emotion words as 'landmarks' at strategic points in the space; and developing an induction procedure to introduce observers to the system. An experiment assessed the reliability of the developed system. Stimuli were 16 clips from TV programs: two showing relatively strong emotions in each quadrant of activation-evaluation space, each paired with a clip of the same person in a relatively neutral state. 24 raters took part. Differences between clips chosen to contrast were statistically robust. Results were plotted in activation-evaluation space as ellipses, each with its centre at the mean coordinates for the clip and its width proportional to the standard deviation across raters. The size of the ellipses meant that about 25 could be fitted into the space, i.e. FEELTRACE has resolving power comparable to an emotion vocabulary of 20 non-overlapping words, with the advantage of allowing intermediate ratings and, above all, the ability to track impressions continuously.
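The core of such a tool is mapping a mouse position inside the on-screen circle to (evaluation, activation) coordinates. A minimal sketch is below; the pixel geometry (centre, radius, screen-down y-axis) and the rim-clamping rule are illustrative assumptions, not details from the paper.

```python
import math

def feeltrace_coords(x, y, cx, cy, radius):
    """Map a pointer position inside a FEELTRACE-style circle to
    (evaluation, activation) coordinates in [-1, 1].

    (cx, cy) is the circle centre and `radius` its radius in pixels;
    screen y grows downward, so it is flipped for activation.
    Points outside the circle are clamped to its rim.
    """
    ev = (x - cx) / radius   # left-right: negative to positive feeling
    ac = (cy - y) / radius   # down-up: passive to active
    r = math.hypot(ev, ac)
    if r > 1.0:              # clamp to the rim of the circle
        ev, ac = ev / r, ac / r
    return ev, ac
```

For a circle centred at (200, 200) with radius 100, a pointer at (300, 200) maps to full positive evaluation with neutral activation.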

Acoustic correlates of emotion dimensions in view of speech synthesis

7th European Conference on Speech Communication and Technology (Eurospeech 2001)

In a database of emotional speech, dimensional descriptions of emotional states have been correlated with acoustic variables. Many stable correlations have been found. The predictions made by linear regression widely agree with the literature. The numerical form of the description and the choice of acoustic variables studied are particularly well suited for future implementation in a speech synthesis system, possibly …
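A one-predictor version of that kind of regression (one emotion dimension predicting one acoustic variable) can be sketched in a few lines; the numbers in the usage note are invented, not the paper's data.

```python
def linreg(xs, ys):
    """Ordinary least-squares fit ys ~ a*xs + b, e.g. activation
    ratings (xs) predicting mean F0 in Hz (ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx            # slope: Hz per unit of activation
    b = my - a * mx          # intercept: predicted F0 at activation 0
    return a, b
```

With made-up ratings [-1, 0, 1] and mean F0 values [180, 200, 220], the fit recovers a slope of 20 Hz per activation unit and a 200 Hz baseline.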

Detecting Politeness and efficiency in a cooperative social interaction

Interspeech 2010, 2010

We developed a cooperative time-sensitive task to study vocal expression of politeness and efficiency. Sixteen dyads completed 20 trials of the 'Maze Task', where one participant (the 'navigator') gave oral instructions (mainly 'up', 'down', 'left', 'right') for the other (the 'pilot') to follow. For half of the trials, navigators were instructed to be polite, and for the other half to be efficient. The simplicity of the task left few ways to express politeness. Nevertheless, it significantly affected task accuracy, and pilots' subjective ratings indicate that it was perceived. Efficiency was not as clearly perceived. Preliminary acoustic analysis suggests relevant dimensions.

Data-driven clustering in emotional space for affect recognition using discriminatively trained LSTM networks

Interspeech 2009, 2009

In today's affective databases, speech turns are often labelled on a continuous scale for emotional dimensions such as valence or arousal, to better express the diversity of human affect. However, applications like virtual agents usually map the detected emotional user state to rough classes in order to reduce the multiplicity of emotion-dependent system responses. Since these classes often do not optimally reflect emotions that typically occur in a given application, this paper investigates data-driven clustering of emotional space to find class divisions that better match the training data and the area of application. We consider the Belfast Sensitive Artificial Listener database and TV talkshow data from the VAM corpus. We show that a discriminatively trained Long Short-Term Memory (LSTM) recurrent neural net that explicitly learns clusters in emotional space and additionally models context information outperforms both Support Vector Machines and a Regression-LSTM net.
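The class-division idea can be illustrated with a plain k-means over (valence, arousal) annotations; note this is only a stand-in for the data-driven clustering concept, since the paper's clusters are learned discriminatively inside the LSTM.

```python
import math

def kmeans(points, k, iters=50):
    """Plain k-means over (valence, arousal) labels: derive emotion
    classes from the data instead of using fixed quadrants.
    Initialisation is deterministic (first k points) to keep the
    sketch simple and reproducible."""
    centres = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each labelled point to its nearest centre
            i = min(range(k), key=lambda j: math.dist(p, centres[j]))
            groups[i].append(p)
        # recompute each centre as the mean of its group
        centres = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres
```

On four toy points split between the negative-valence/high-arousal and positive-valence/low-arousal corners, the two centres converge to the corner means.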

Intonational Settings as Markers of Discourse Units in Telephone Conversations

Language and Speech, 1998

A study of business telephone calls provides quantitative evidence suggesting “intonational settings”: Certain attributes of intonation are sustained throughout discourse units in the calls (openings, business transactions, preclosures, and final closures), and differentiate one unit from another, as if phonologically significant aspects of intonation are realized within a space controlled by discourse-related parameters. Two types of parameter emerge, one controlling the midpoint of the F0 contour in frequency space, the other controlling the way it fluctuates.
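Those two parameter types can be sketched as summary statistics over a unit's F0 contour; the median and population standard deviation used here are illustrative stand-ins, not the paper's actual measures.

```python
import statistics

def intonational_setting(f0_hz):
    """Summarise one discourse unit's F0 contour by the two parameter
    types the study reports: where the contour sits in frequency
    space (its midpoint) and how much it fluctuates around that
    midpoint."""
    mid = statistics.median(f0_hz)    # midpoint in frequency space
    fluct = statistics.pstdev(f0_hz)  # degree of fluctuation
    return mid, fluct
```

Comparing these two numbers across openings, transactions, and closures is the kind of unit-by-unit contrast the study quantifies.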

On emotion recognition of faces and of speech using neural networks, fuzzy logic and the ASSESS system

Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, 2000

One of the numerous problems in recognizing facial expressions is their inherent ambiguity. Often additional information is needed in order to interpret them, for instance the verbal and nonverbal information accompanying the expression, or some contextual information. In particular, ...

An Intelligent System for Facial Emotion Recognition

2005 IEEE International Conference on Multimedia and Expo

An intelligent emotion recognition system, interweaving psychological findings about emotion representation with analysis and evaluation of facial expressions, has been generated, and its performance has been investigated with real experimental data. Additionally, a fuzzy rule-based system has been created for classifying facial expressions into the six archetypal emotion categories. The continuous 2-D emotion space was then examined, and a pool of known and novel classification and clustering techniques was applied to our data, obtaining high rates in classification and clustering into quadrants of the emotion representation space.
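The quadrant step can be sketched as a crisp (deliberately non-fuzzy) classifier over the continuous 2-D space; the quadrant labels below are illustrative, not the paper's categories.

```python
def quadrant(valence, activation):
    """Map a point in the continuous 2-D emotion space to one of its
    four quadrants. A crude crisp stand-in for the fuzzy rule base;
    labels are illustrative only."""
    if valence >= 0:
        return "positive/active" if activation >= 0 else "positive/passive"
    return "negative/active" if activation >= 0 else "negative/passive"
```

A fuzzy version would replace the hard sign tests with membership functions that overlap near the axes, so borderline expressions receive graded memberships in adjacent quadrants.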

Emotion in Speech: Towards an Integration of Linguistic, Paralinguistic, and Psychological Analysis

Lecture Notes in Computer Science, 2003

If speech analysis is to detect a speaker's emotional state, it needs to derive information from both linguistic information, i.e., the qualitative targets that the speaker has attained (or approximated), conforming to the rules of language; and paralinguistic information, i.e., allowed variations in the way that qualitative linguistic targets are realised. It also needs an appropriate representation of emotional states. The ERMIS project addresses the integration problem that those requirements pose. It mainly comprises a paralinguistic analysis and a robust speech recognition module. Descriptions of emotionality are derived from these modules following psychological and linguistic research that indicates the information likely to be available. We argue that progress in registering emotional states depends on establishing an overall framework of at least this level of complexity.

On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues

Journal on Multimodal User Interfaces, 2009

For many applications of emotion recognition, such as virtual agents, the system must select responses while the user is speaking. This requires reliable on-line recognition of the user's affect. However, most emotion recognition systems are based on turn-wise processing. We present a novel approach to on-line emotion recognition from speech using Long Short-Term Memory Recurrent Neural Networks. Emotion is recognised frame-wise in a two-dimensional valence-activation continuum. In contrast to current state-of-the-art approaches, recognition is performed on low-level signal frames, similar to those used for speech recognition. No statistical functionals are applied to low-level feature contours. Framing at a higher level is therefore unnecessary, and regression outputs can be produced in real time for every low-level input frame. We also investigate the benefits of including linguistic features on the signal frame level obtained by a keyword spotter.

Induction, recording and recognition of natural emotions from facial expressions and speech prosody

Journal on Multimodal User Interfaces, 2013

Recording and annotating a multimodal database of natural expressivity is a task that requires careful planning and implementation, before even starting to apply feature extraction and recognition algorithms. Requirements and characteristics of such databases are inherently different from those of acted behaviour, both in terms of the unconstrained expressivity of the human participants, and in terms of the expressed emotions. In this paper, we describe a method to induce, record and annotate natural emotions, which was used to provide multimodal data for dynamic emotion recognition from facial expressions and speech prosody; results from a dynamic recognition algorithm, based on recurrent neural networks, indicate that multimodal processing surpasses both speech and visual analysis by a wide margin. The SAL database was used in the framework of the Humaine Network of Excellence as a common ground for research in everyday, natural emotions.

Emotion recognition in HCI

The HUMAINE Database: Addressing the Collection and Annotation of Naturalistic and Induced Emotional Data

Affective Computing and Intelligent Interaction

The HUMAINE project is concerned with developing interfaces that will register and respond to emotion, particularly pervasive emotion (forms of feeling, expression and action that colour most of human life). The HUMAINE Database provides naturalistic clips which record that kind of material, in multiple modalities, and labelling techniques that are suited to describing it.

A Response to Goehl and Kaufman (1984)

Journal of Speech and Hearing Disorders, 1986

Tracing Emotion

International Journal of Synthetic Emotions, 2012

Computational research with continuous representations depends on obtaining continuous representations from human labellers. The main method used for that purpose is tracing. Tracing raises a range of challenging issues, both psychological and statistical. Naive assumptions about these issues are easy to make, and can lead to inappropriate requirements and uses. The natural function of traces is to capture perceived affect, and as such they belong in long traditions of research on both perception and emotion. Experiments on several types of material provide information about their characteristics, particularly the ratings on which people tend to agree. Disagreement is not necessarily a problem in the technique. It may correctly show that people’s impressions of emotion diverge more than commonly thought. A new system, Gtrace, is designed to let rating studies capitalise on a decade of experience and address the research questions that are opened up by the data now available.

Emotion Recognition and Synthesis Based on MPEG-4 FAPs

MPEG-4 Facial Animation

An approach to facial expression synthesis that is compatible with the MPEG-4 standard and can be used for emotion understanding.

ASR for emotional speech: Clarifying the issues and enhancing performance

Neural Networks, 2005

There are multiple reasons to expect that recognising the verbal content of emotional speech will be a difficult problem, and recognition rates reported in the literature are in fact low. Including information about prosody improves recognition rate for emotions simulated by actors, but its relevance to the freer patterns of spontaneous speech is unproven. This paper shows that recognition rate for spontaneous emotionally coloured speech can be improved by using a language model based on increased representation of emotional utterances. The models are derived by adapting an already existing corpus, the British National Corpus (BNC). An emotional lexicon is used to identify emotionally coloured words, and sentences containing these words are recombined with the BNC to form a corpus with a raised proportion of emotional material. Using a language model based on that technique improves recognition rate by about 20%.
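The corpus-adaptation idea can be sketched as below; the lexicon, the whitespace token matching, and the repetition factor are simplifying assumptions for illustration, not the paper's actual procedure.

```python
def emotionally_biased_corpus(sentences, lexicon, boost=3):
    """Raise the proportion of emotional material in a training
    corpus: sentences containing a word from an emotion lexicon are
    repeated `boost` times before language-model training, so
    emotional word sequences get higher n-gram counts."""
    out = []
    for s in sentences:
        words = set(s.lower().split())
        # emotional sentences are duplicated, neutral ones kept once
        out.extend([s] * (boost if words & lexicon else 1))
    return out
```

A standard n-gram language model trained on the boosted list then assigns higher probability to emotionally coloured utterances than one trained on the raw corpus.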

A study of speech deterioration in post-lingually deafened adults

The Journal of Laryngology & Otology, 1982

Emotion recognition in human-computer interaction

IEEE Signal Processing Magazine, 2001

Entertainment: Commercially, the first major application of emotion-related technology may well be in entertainment and game programs which respond to the user's state. There is probably an immense market for pets, friends, and dolls which respond even crudely to the owner's mood. Many of those applications could be addressed in a piecemeal way. It seems likely, however, that genuinely satisfying solutions will depend on a solid theoretical base. The main concern of this article is with the development of that kind of base. It is important for both theory and application to recognize that the term "emotion" has a broad and a narrow sense. The narrow sense refers to what might be called full-blown emotion, where emotion is (temporarily) the dominant feature of mental life: it preempts ordinary … Hybrid systems have a particular attraction in that they link two types of elements that are prominent in reactions to emotion: articulate verbal descriptions and explanations, and responses that are felt rather than articulated.

Politeness and social signals

Cognitive Processing, 2011

In the literature, politeness has been researched within many disciplines. Although Brown and Levinson's theory of politeness (1978, 1987) is often cited, it is primarily a linguistic theory and has been criticized for its lack of generalizability to all cultures. Consequently, there is a need for a more comprehensive approach to understand and explain politeness. We suggest applying a social signal framework that considers politeness as a communicative state. By doing so, we aim to unify and explain politeness and its corresponding research and identify further research needed in this area.
