Alexis Heloir - Academia.edu

Papers by Alexis Heloir

Reformulating Clancey's generic tasks for bridging both sides of dual reality

Proceedings of the 29th Conference on l'Interaction Homme-Machine, 2017

Requirements for a Gesture Specification Language: A comparison of two representation formalisms

Realizing Multimodal Behavior: Closing the gap between behavior planning and embodied agent presentation

Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, etc.) is challenging. It requires a high degree of animation control, in particular when reactive behaviors are required. We suggest distinguishing realization planning, where gesture and speech are processed symbolically using the Behavior Markup Language (BML), from presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and directly control presentation. In this paper, we show how to define a behavior lexicon, how this lexicon relates to BML, and how to resolve timing using formal constraint solvers. We conclude by demonstrating how to integrate reactive emotional behaviors.
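The abstract leaves the solver formulation open; purely as a sketch of how gesture timing can be resolved with an off-the-shelf solver, the Python fragment below pins a gesture's stroke onset to a speech sync point and minimizes total gesture duration subject to minimum phase lengths. The phase structure, the numeric values, and the use of SciPy's linear programming routine are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of timing resolution as a linear program, in the spirit of
# the paper's constraint-solver approach. Phase names, durations, and the
# speech sync point are illustrative assumptions, not values from the paper.
from scipy.optimize import linprog

SYNC = 1.4          # time (s) of the stressed word the stroke must align with
MIN_PREP, MIN_STROKE, MIN_RETRACT = 0.2, 0.3, 0.2  # assumed phase minima

# x = [start, stroke_start, stroke_end, end]; minimise end - start,
# i.e. the total duration of the gesture.
c = [-1.0, 0.0, 0.0, 1.0]

# Ordering constraints with minimum phase durations, as A_ub @ x <= b_ub.
A_ub = [[1, -1, 0, 0],   # start - stroke_start <= -MIN_PREP
        [0, 1, -1, 0],   # stroke_start - stroke_end <= -MIN_STROKE
        [0, 0, 1, -1]]   # stroke_end - end <= -MIN_RETRACT
b_ub = [-MIN_PREP, -MIN_STROKE, -MIN_RETRACT]

# Pin the stroke onset to the speech sync point (equality constraint).
A_eq = [[0, 1, 0, 0]]
b_eq = [SYNC]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
start, stroke_s, stroke_e, end = res.x
print(f"gesture: start={start:.2f}s stroke=[{stroke_s:.2f}, {stroke_e:.2f}]s "
      f"end={end:.2f}s")
```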

Write-Once, Transpile-Everywhere: Re-using Motion Controllers of Virtual Humans Across Multiple Game Engines

Transpilation allows developers to write code once and re-use it across multiple runtime environments. In this paper, we propose a software development practice in which the motion controllers of virtual humans are implemented once and re-used across multiple game engines. In a case study, three common human behaviors – blinking, text-to-speech, and eye-gaze – were developed in the Haxe programming language and deployed in the free, open-source Blender Game Engine and the commercial Unity engine. Performance tests show that transpiled code executes between 67% faster and 127% slower than an implementation manually written in the game engines' target languages.
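The paper's implementation is in Haxe; purely as an illustration of the underlying write-once pattern, here is a hypothetical Python sketch in which engine-independent blink logic drives any engine through a thin adapter interface. All class and method names are invented for this example and are not the paper's API.

```python
# Hypothetical sketch of the write-once controller pattern: the blink logic
# is implemented once and drives any engine through a thin adapter interface.
import random
from abc import ABC, abstractmethod

class EngineAdapter(ABC):
    """Per-engine backend; one such adapter per target game engine."""
    @abstractmethod
    def set_blendshape(self, name: str, weight: float) -> None: ...

class BlinkController:
    """Engine-independent blink behaviour, written once."""
    def __init__(self, adapter: EngineAdapter, mean_interval: float = 4.0):
        self.adapter = adapter
        self.mean_interval = mean_interval
        self.next_blink = random.expovariate(1.0 / mean_interval)
        self.closing = 0.0

    def update(self, dt: float) -> None:
        self.next_blink -= dt
        if self.next_blink <= 0.0:           # trigger a blink
            self.closing = 0.15              # eyelids closed for ~150 ms
            self.next_blink = random.expovariate(1.0 / self.mean_interval)
        self.closing = max(0.0, self.closing - dt)
        self.adapter.set_blendshape("eyes_closed", 1.0 if self.closing else 0.0)

class ConsoleAdapter(EngineAdapter):
    """Stand-in adapter; a real one would call Unity or Blender APIs."""
    def set_blendshape(self, name: str, weight: float) -> None:
        if weight:
            print(f"{name} -> {weight}")

controller = BlinkController(ConsoleAdapter())
for _ in range(600):                         # simulate 10 s at 60 fps
    controller.update(1.0 / 60.0)
```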

Personality models to and from virtual characters

In order to be believable, virtual agents must possess both a behavioral model simulating emotions and personality, and convincing aesthetics [4]. Much research already exists on models of emotion, and some seminal work now investigates the role of personality [1, 2]. While emotions are dynamic and variable in time, personality is a static feature of humans, changing only very slowly over the course of a life. The emotional state drives the style of a character's behavior (how it accomplishes actions), while the personality drives the intention of an autonomous agent (what to do next). However, there is little work investigating the relationships between the personality of a virtual agent, its behavior, and its physical appearance. The work that we are conducting in the SLSI group is based on the observation that people very quickly form ideas about the personality of others in zero-acquaintance encounters [11]. The judgment of the personality can be modeled, ...

Towards Monitoring Patients with Alzheimer's Disease Activity using Distributed Tangible Tabletops and Dual Reality

This work presents a baseline for remotely monitoring patients with Alzheimer's disease (AD) in their everyday lives using distributed tangible tabletops. Our proposed system aims to help individuals with AD live at home independently of the physical presence of caregivers, nurses, and/or family members, since the latter can assist patients with AD and monitor their activity through a virtually duplicated (dual reality) environment. This monitoring happens in real time, and monitors can encourage and remind patients with AD to take actions while respecting their privacy. The interaction is achieved by moving physical (tangible) and/or virtual (digital) objects on the tabletop's surface, or by displaying messages and reminders on the surface. A first prototype is described and illustrated, along with propositions to validate and extend the system.
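As a minimal illustration of the dual-reality mirroring described above, the hypothetical Python sketch below replays physical tabletop events into a virtual duplicate observed by a remote monitor; the event schema and class names are assumptions for illustration, not the prototype's actual design.

```python
# Illustrative sketch of dual-reality mirroring: physical tabletop events are
# replayed into a virtual duplicate that remote caregivers can observe.
from dataclasses import dataclass, field

@dataclass
class MoveEvent:
    object_id: str      # tangible or digital object on the tabletop
    x: float            # surface coordinates
    y: float

@dataclass
class VirtualTabletop:
    """Remote duplicate of the physical surface state."""
    positions: dict = field(default_factory=dict)

    def apply(self, ev: MoveEvent) -> None:
        self.positions[ev.object_id] = (ev.x, ev.y)

class Monitor:
    """Caregiver-side observer; could also push reminders back."""
    def __init__(self, mirror: VirtualTabletop):
        self.mirror = mirror

    def notify(self, ev: MoveEvent) -> None:
        self.mirror.apply(ev)
        print(f"{ev.object_id} moved to ({ev.x}, {ev.y})")

mirror = VirtualTabletop()
monitor = Monitor(mirror)
# A physical sensor layer would emit events like this in real time:
monitor.notify(MoveEvent("pill_box", 0.3, 0.7))
```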

Human-Centred Machine Learning

Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 2016

Machine learning is one of the most important and successful techniques in contemporary computer science. It involves the statistical inference of models (such as classifiers) from data. It is often conceived in a very impersonal way, with algorithms working autonomously on passively collected data. However, this viewpoint hides considerable human work of tuning the algorithms, gathering the data, and even deciding what should be modeled in the first place. Examining machine learning from a human-centered perspective includes explicitly recognising this human work, as well as reframing machine learning workflows based on situated human working practices, and exploring the co-adaptation of humans and systems. A human-centered understanding of machine learning in human contexts can lead not only to more usable machine learning tools, but to new ways of framing learning computationally. This workshop will bring together researchers to discuss these issues and suggest future research questions aimed at creating a human-centered approach to machine learning.

Système de communication par agent virtuel : aide à la communication des personnes sourdes

Realizing Multimodal Behavior

Lecture Notes in Computer Science, 2010

Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, etc.) is challenging. It requires a high degree of animation control, in particular when reactive behaviors are required. We suggest distinguishing realization planning, where gesture and speech are processed symbolically using the Behavior Markup Language (BML), from presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and directly control presentation. In this paper, we show how to define a behavior lexicon, how this lexicon relates to BML, and how to resolve timing using formal constraint solvers. We conclude by demonstrating how to integrate reactive emotional behaviors.

Captured motion data processing for real time synthesis of sign language

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2006

Virtual humanoids endowed with expressive communication gestures: The HuGEx project

Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, 2007

This project aims at the creation of a virtual humanoid endowed with expressive gestures. More specifically, we focus our attention on expressiveness (what type of gesture: fluidity, tension, anger) and on its semantic representations. Our approach relies on a data-driven animation scheme. From motion data captured using an optical system and data gloves, we try to extract significant features of communicative gestures and to re-synthesize them afterward with style variation. The proposed model is applied to the generation of a ...
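To make the idea of re-synthesis with style variation concrete, here is a toy Python sketch that re-times and amplifies a captured joint trajectory; the two style parameters are illustrative stand-ins for the paper's extracted features, not its actual model.

```python
# Toy sketch of data-driven re-synthesis with style variation: a captured
# joint trajectory is re-timed (tempo) and amplified (tension/energy).
import numpy as np

def restyle(trajectory: np.ndarray, tempo: float, amplitude: float) -> np.ndarray:
    """Resample a captured trajectory at a new tempo and scale its range."""
    n = len(trajectory)
    t_src = np.linspace(0.0, 1.0, n)
    t_dst = np.linspace(0.0, 1.0, max(2, int(n / tempo)))  # tempo > 1 = faster
    resampled = np.interp(t_dst, t_src, trajectory)
    mean = resampled.mean()
    return mean + amplitude * (resampled - mean)  # exaggerate around the mean

captured = np.sin(np.linspace(0, np.pi, 100))        # stand-in for mocap data
angry = restyle(captured, tempo=1.5, amplitude=1.3)  # faster, larger gesture
fluid = restyle(captured, tempo=0.8, amplitude=0.9)  # slower, softer gesture
print(f"{len(angry)} frames after re-timing, peak amplitude {angry.max():.2f}")
```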

Virtual Agent for Deaf Signing Gestures

In this paper, we describe a system for automatically synthesizing deaf signing animations from motion data captured on real deaf subjects. Moreover, we create a virtual agent endowed with expressive gestures. Our attention is focused on the expressiveness of gesture (what type of gesture: fluidity, tension, anger) and on its semantic representations. Our approach relies on a data-driven animation scheme. From motion data captured using an optical system and data gloves, we try to extract relevant features of communicative ...

Agent virtuel signeur : aide à la communication pour personnes sourdes

Proc. Handicap, 2006

The goal of this project is to design an artificial humanoid endowed with expressive and intelligent gestural behaviors. We are interested here more specifically in the expressive quality of gestures (what style of gesture: fluid, tense, irritated) and in their semantic representations. The intended application is a signing humanoid capable of generating a set of gestures of French Sign Language (LSF). We have built a database of captured gestures comprising the ...

Temporal alignment of communicative gesture sequences

Computer Animation and Virtual Worlds, 2006

In this paper we address the problem of temporal alignment applied to captured communicative gestures conveying different styles. We propose a representation space that may be considered robust to the spatial variability induced by style. We extend a multilevel dynamic time warping algorithm and show how this extension can fulfil the goals of time correspondence between gesture sequences while preventing the jerkiness introduced by standard time warping methods.
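For reference, the following Python sketch implements the standard dynamic time warping baseline that the paper extends; the multilevel and jerk-limiting refinements described in the abstract are not reproduced here.

```python
# Minimal dynamic time warping between two gesture feature sequences,
# the standard algorithm the paper builds on.
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Return the DTW alignment cost between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessors
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

neutral = np.sin(np.linspace(0, np.pi, 50))         # stand-in gesture curves
stressed = np.sin(np.linspace(0, np.pi, 80)) * 1.2  # same gesture, other style
print(f"alignment cost: {dtw(neutral, stressed):.3f}")
```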

Towards an Intuitive Sign Language Animation Authoring Environment For the Deaf

We aim to develop an online collaborative framework allowing Deaf individuals to author intelligible signs using a dedicated authoring interface controlling the animation of a 3D avatar. In this paper, we present the results that we have achieved so far. They mainly focus on the design of a User Interface assisted by novel input devices. In particular, we show, in two user studies, how the Leap Motion and Kinect-like devices can be used together to capture hand trajectories (position and orientation) and facial animation, respectively. We also show how these devices are integrated into a prototype that we will demonstrate during the workshop.

Generation of Virtual Characters from Personality Traits

We present a method to generate a virtual character whose physical attributes reflect public opinion of a given personality profile. An initial reverse correlation experiment trains a model which explains the perception of personality traits from physical attributes. The reverse model, solved using linear programming, allows for the real-time generation of virtual characters from an input personality. The method has been applied to three personality traits (dominance, trustworthiness, and agreeableness) and 14 physical attributes, and verified through both an analytic test and a subjective study.
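As a sketch of the inversion step, the Python fragment below uses a linear program to find attribute values within valid bounds that realise a target trait profile under a linear perception model; the weight matrix and trait values here are random placeholders, whereas the paper learns its model from a reverse correlation experiment.

```python
# Sketch of the inverse step: given a learned linear perception model
# traits = W @ attributes, find attribute values in a valid range that
# realise a target personality profile.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 14))        # 3 traits x 14 physical attributes
target = np.array([0.8, 0.2, 0.5])  # dominance, trustworthiness, agreeableness

# Feasibility LP: any attribute vector in [0, 1]^14 with W @ x = target;
# the (arbitrary) objective just makes the solution unique.
res = linprog(c=np.ones(14), A_eq=W, b_eq=target, bounds=[(0, 1)] * 14)
if res.success:
    print("attributes:", np.round(res.x, 2))
else:
    print("target profile not reachable within attribute bounds")
```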

A Self-Adaptive Architecture for Hand-Tracked 3D Authoring Interface

This paper presents a natural and intuitive interface that uses a consumer-range 3D hand capture device to interactively edit objects in 3D space. After assessing the potential benefit of 3D input interaction in a preliminary study, we propose a self-adaptive architecture that supports intuitive and efficient 3D manipulation while accounting for the user's skills and instantaneous performance. While running, the system monitors the user's behavior and performance to maintain a straightforward user model. This model then drives an online re-arrangement and re-parameterization of a rule-based system driving the interaction.
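A minimal sketch of such a self-adaptive loop in Python, assuming a single snap-to-grid assistance rule and an exponentially smoothed skill estimate (both invented for illustration, not taken from the paper):

```python
# Toy sketch of the self-adaptive loop: a running performance estimate in a
# simple user model re-parameterises an interaction rule online.
class UserModel:
    def __init__(self) -> None:
        self.skill = 0.5                  # running estimate in [0, 1]

    def observe(self, task_success: bool) -> None:
        target = 1.0 if task_success else 0.0
        self.skill += 0.2 * (target - self.skill)   # exponential smoothing

class SnapRule:
    """Snap-to-grid assistance that loosens as the user's skill grows."""
    def __init__(self, model: UserModel) -> None:
        self.model = model

    @property
    def snap_radius(self) -> float:
        # novices get strong snapping, experts near-raw 3D input
        return 0.05 * (1.0 - self.model.skill) + 0.005

model = UserModel()
rule = SnapRule(model)
for ok in [False, False, True, True, True]:
    model.observe(ok)
    print(f"skill={model.skill:.2f} snap_radius={rule.snap_radius:.3f}")
```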

On the importance of gaze and speech alignment for efficient communication

Gaze is known to be an important social cue in face-to-face communication, indicating focus of attention. Speaker gaze can influence object perception and situated utterance comprehension by driving both interlocutors' visual attention towards the same object, hence facilitating grounding and disambiguation [1]. The precise temporal and causal processes involved in on-line gaze-following during concurrent utterance comprehension are, however, still largely unknown. Specifically, the alignment of referential gaze and speech cues may be essential to this benefit. In this paper, we report findings from an eye-tracking study using a virtual character [2] to systematically assess how speaker gaze influences listeners' on-line comprehension.

Generating Virtual Characters from Personality Traits via Reverse Correlation and Linear Programming

This paper presents a system which generates a virtual character defined along three personality traits: Dominance, Trustworthiness, and Agreeableness. From these three traits, 14 surface physical attributes of the target character are automatically inferred. The configuration of our system includes an initial training phase, based on a reverse correlation experiment, from which we infer a multivariate linear model explaining the relationship between the perception of the three personality traits and the 14 physical attributes. The inverse model, solved using linear programming, allows for the real-time generation of virtual characters from an input personality profile.

Ergonomics for the design of multimodal interfaces

The Handbook of Multimodal-Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions - Volume 3
