Hatice Gunes | University of Cambridge

Papers by Hatice Gunes

Fully Automatic Analysis of Engagement and Its Relationship to Personality in Human-Robot Interactions

Personality Perception of Robot Avatar Teleoperators in Solo and Dyadic Tasks

Humanoid robot avatars are a potential new telecommunication tool, whereby a user is remotely represented by a robot that replicates their arm, head, and possibly face movements. They have been shown to have a number of benefits over more traditional media such as phones or video calls. However, using a teleoperated humanoid as a communication medium inherently changes the appearance of the operator, and appearance-based stereotypes are used in interpersonal judgments (whether consciously or unconsciously). One such judgment that plays a key role in how people interact is personality. Hence, we were motivated to investigate whether and how using a robot avatar alters the perceived personality of teleoperators. To do so, we carried out two studies in which participants performed three communication tasks, solo in Study 1 and dyadic in Study 2, and were recorded on video both with and without robot mediation. Judges recruited through online crowdsourcing services then made personality judgments of the participants in the video clips. We observed that judges were able to make internally consistent trait judgments in both communication conditions. However, judge agreement was affected by robot mediation, and which traits were affected was highly task dependent. Our most important finding was that, in dyadic tasks, personality trait perception shifted to incorporate cues relating to the robot's appearance when it was used to communicate. Our findings have important implications for telepresence robot design and for personality expression in autonomous robots.

Automatic Subgrouping of Multitrack Audio

Face Alignment Assisted by Head Pose Estimation

Proceedings of the British Machine Vision Conference 2015, 2015

Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience

2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015

MAPTRAITS 2014

Proceedings of the 2014 Workshop on Mapping Personality Traits Challenge and Workshop - MAPTRAITS '14, 2014

Static vs. Dynamic Modelling of Human Nonverbal Behaviour from Multiple Cues and Modalities

Image and Vision Computing, 2009

Human nonverbal behavior recognition from multiple cues and modalities has attracted a lot of interest in recent years. Despite this interest, many questions remain open, including the choice of feature representation, the choice between static and dynamic classification schemes, the number and type of cues or modalities to use, and the optimal way of fusing them. This paper ...

Fourth international workshop on human behavior understanding (HBU 2013)

Proceedings of the 21st ACM international conference on Multimedia - MM '13, 2013

With advances in pattern recognition and multimedia computing, it has become possible to analyze human behavior via multimodal sensors, at different time scales and at different levels of interaction and interpretation. This ability opens up enormous possibilities for multimedia and multimodal interaction, with the potential of endowing computers with the capacity to attribute meaning to users' attitudes, preferences, personality, and social relationships, as well as to understand what people are doing, the activities they have been engaged in, and their routines and lifestyles. This workshop gathers researchers dealing with the problem of modeling human behavior under its multiple facets, with particular attention to interactions in arts, creativity, entertainment and edutainment.

Local Zernike Moment Representation for Facial Affect Recognition

Proceedings of the British Machine Vision Conference 2013, 2013

Dimensional and continuous analysis of emotions for multimedia applications

Proceedings of the 20th ACM international conference on Multimedia - MM '12, 2012

Multimodal Affective Computing

Gesturing at Architecture: Experiences & Issues with New Forms of Interaction

Human Behavior Understanding: 4th International Workshop, HBU 2013, Barcelona, Spain, October 22, 2013. Proceedings

Local Zernike moment representations for facial affect recognition

Being There: Humans and Robots in Public Spaces

Continuous prediction of trait impressions

2014 22nd Signal Processing and Communications Applications Conference (SIU), 2014

Measuring Affect for the Study and Enhancement of Co-present Creative Collaboration

2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013

Introduction to the Special Issue on Affect Analysis in Continuous Input

Image and Vision Computing, 2013

Categorical and dimensional affect analysis in continuous input: Current trends and future directions

Image and Vision Computing, 2013

In the context of affective human behavior analysis, we use the term continuous input to refer to naturalistic settings where explicit or implicit input from the subject is continuously available, and where, in a human-human or human-computer interaction setting, the subject plays the role of either a producer or a recipient of the communicative behavior. As a result, the analysis and the response provided by the automatic system are also envisioned to be continuous over the course of time, within the ...

Editorial: Introduction to the Special Issue on Affect Analysis in Continuous Input

Image and Vision Computing, Feb 1, 2013

