Alexis Heloir - Academia.edu
Papers by Alexis Heloir
HAL (Le Centre pour la Communication Scientifique Directe), May 3, 2014
Among the different problems related to editing motion data, style translation has drawn particular attention. In this paper we address the problem of temporal alignment applied to captured communicative gestures conveying different styles. We propose a representation space that may be considered robust to the spatial variability induced by style. We then build on this motion representation to characterize the temporal variability between several realizations of a gesture sequence. By extending a multilevel dynamic time warping (DTW) algorithm, we show how this extension fulfills the goal of temporal correspondence between gesture sequences while preventing the jerkiness introduced by classical time warping methods.
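The multilevel extension described above builds on classic dynamic time warping. The sketch below shows only that textbook baseline, in plain Python with illustrative names; the paper's multilevel, jerkiness-preventing variant is not reproduced here.

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping between two sequences.

    Returns the cumulative alignment cost; a and b are lists of
    samples (scalars here, but dist() can compare feature vectors).
    """
    INF = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

For instance, `dtw([0, 1, 2, 3], [0, 0, 1, 2, 3])` is 0.0: the warp absorbs the repeated initial sample, which is exactly the temporal variability DTW is meant to factor out.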
... In this paper, we present a new realizer called EMBR (Embodied Agents Behavior Realizer) and its control language EMBRScript. An embodied agents realizer has particularly demanding requirements: it must run at interactive speed ...
... on the model and aligned along natural axes or using a spatial alignment technique. ... In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 203-212. ... 3D Data Processing, Visualisation and Transmission (3DPVT), 2002.
HAL (Le Centre pour la Communication Scientifique Directe), Nov 11, 2015
This paper introduces the DeEvA platform for the generation of virtual characters. The platform takes personality traits as input and is designed to generate characters whose physical appearance matches people's expectations. Characters generated with the platform can be used as believable Embodied Conversational Agents in interactive applications. The platform combines two crowdsourcing techniques: Reverse Correlation and Interactive Genetic Algorithms. This paper describes the method as well as three working examples.
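The Interactive Genetic Algorithm side of such a pipeline is, at its core, an ordinary generational loop. The sketch below is a minimal, hypothetical illustration: in DeEvA the fitness of a candidate character would come from crowd ratings rather than a function, and the bit genome merely stands in for physical-attribute choices.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40, seed=1):
    """Minimal generational genetic-algorithm loop (sketch).

    Genomes are bit lists; fitness is an ordinary callable here so
    the loop is testable, whereas an interactive GA would query
    human raters instead.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)         # point mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Because survivors pass through unchanged, the best candidate is never lost between generations, which matters when each "evaluation" is an expensive human judgment.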
HAL (Le Centre pour la Communication Scientifique Directe), 2014
Journal on Multimodal User Interfaces, Mar 19, 2021
Robots are increasingly present in many domains of our daily lives. Their usage encompasses industry, home automation, space exploration, and military operations. Robots can also be used in crisis management situations, where the intervention area is impossible or too dangerous for humans to access. The present work compares users' performance with tangible and touch user interfaces for a crisis management application on a tabletop. The studied task consists of remotely controlling robots in a simulated disaster/intervention area using a tabletop equipped with a layer of RFID antennas, by displacing mini-robots on its surface to match the situation of the real robots on the ground. Dual reality enforces an accurate and up-to-date mapping between the real robots and the mini-robots on the tabletop surface. Our findings show that tangible interaction outperforms touch interaction in effectiveness, efficiency, and usability in a task of remote control of one and two robots; only when the user manipulates a single robot does the efficiency dimension remain unchanged between tangible and touch interaction. Results also show that the tangible interaction technique does not significantly lower the users' workload. We finally report post-experiment interview and questionnaire results assessing the participants' overall satisfaction and agreement on using tangible objects on a tabletop.
HAL (Le Centre pour la Communication Scientifique Directe), Mar 5, 2019
In this paper, we present an experimental study comparing tangible interaction and tactile interaction on tabletops in a dual reality environment. We recruited 32 participants for a user study consisting of remotely displacing robots and exploring a simulated disaster area using a tabletop and robot toys on its surface. We present our results and focus on the differences between the two interaction techniques for remote control of robots. Our findings indicate that tangible interaction outperforms tactile interaction in usability and in terms of committed errors, classified by different criteria. They also indicate no significant difference in user workload between tangible and tactile interactions.
HAL (Le Centre pour la Communication Scientifique Directe), Mar 1, 2019
In this paper, we present an experimental study comparing tangible interaction and tactile interaction on tabletops in a dual reality environment. We recruited 32 participants for a user study consisting of remotely displacing robots and exploring a simulated disaster area using a tabletop and robot toys on its surface. We present our results and focus on the differences between the two interaction techniques for remote control of robots. Our findings indicate that tangible interaction outperforms tactile interaction in usability and in terms of committed errors, classified by different criteria. They also indicate no significant difference in user workload between tangible and tactile interactions.
HAL (Le Centre pour la Communication Scientifique Directe), Aug 29, 2017
In everyday life, new interaction techniques are gradually replacing the standard computer keyboard and mouse, using human body gestures (hands, fingers, head, etc.) as alternative ways of interacting on surfaces and in the air. Another type of interaction resides in manipulating everyday objects to interact with digital systems. Interactive tabletops have emerged as new platforms in several domains, offering better usability and facilitating multi-user collaboration thanks to their large display surface and the different interaction techniques they support on their surfaces, such as multi-touch and tangible interaction. Improving interactions on these devices and combining them with other concepts can therefore prove more useful and helpful in the everyday life of users and designers. This thesis focuses on studying user interactions on tangible interactive tabletops in a context of use set in a dual reality environment. Tangible User Interfaces offer users the possibility to apprehend and grasp the meaning of digital information by manipulating insightful tangible representations in our physical world. These interaction metaphors bridge the two environments that constitute the dual reality: the physical world and the virtual world. In this perspective, this work presents a theoretical contribution along with its applications. We propose to combine tangible interaction on tabletops and dual reality in a conceptual framework, primarily intended for application designers, that models and explains the interactions and representations at work in dual reality setups. First, we survey work carried out in the field of tangible interaction in general, then focus on existing work conducted on tabletops. We also list 112 interactive tabletops, classified and characterized by several criteria.
Next, we present the dual reality concept and its possible application domains. Second, we present our proposed framework, illustrate and explain its constituent elements, and show how it can adapt to various dual reality situations, particularly with interactive tabletops equipped with RFID technology. Finally, as application contributions, we present case studies designed on the basis of our proposal, which illustrate implementations of elements from the framework. Research perspectives are highlighted at the end of the manuscript.
arXiv (Cornell University), Aug 28, 2018
Articulated hand pose and shape estimation is an important problem for vision-based applications such as augmented reality and animation. In contrast to existing methods which optimize only for joint positions, we propose a fully supervised deep network which learns to jointly estimate a full 3D hand mesh representation and pose from a single depth image. To this end, a CNN architecture is employed to estimate parametric representations, i.e. hand pose, bone scales and complex shape parameters. Then, a novel hand pose and shape layer, embedded inside our deep framework, produces the 3D joint positions and hand mesh. The lack of sufficient training data with varying hand shapes limits the generalized performance of learning-based methods, and manually annotating real data is suboptimal. Therefore, we present SynHand5M: a million-scale synthetic dataset of depth maps with accurate joint annotations, segmentation masks and mesh files. Among model-based learning (hybrid) methods, we show improved results on our dataset and two public benchmarks, NYU and ICVL. Also, by employing a joint training strategy with real and synthetic data, we recover 3D hand mesh and pose from real images in 3.7 ms.
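A hand pose and shape layer of the kind described maps pose and bone-scale parameters to joint positions through forward kinematics. The planar, single-finger sketch below is a deliberately simplified illustration (2-D, one chain, no mesh deformation), with illustrative names, meant only to show how parameters become joint coordinates.

```python
import math

def finger_fk(bone_lengths, joint_angles):
    """Planar forward kinematics for one finger chain.

    bone_lengths play the role of the bone-scale parameters and
    joint_angles the role of the pose parameters; the function
    returns the 2-D position of each joint, with the root at the
    origin.  A real hand layer does this in 3-D for every finger.
    """
    x = y = 0.0
    angle = 0.0
    joints = [(x, y)]
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta               # angles accumulate along the chain
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        joints.append((x, y))
    return joints
```

With all angles at zero the finger is straight, so `finger_fk([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])` places the fingertip at (3.0, 0.0); supervising such a differentiable mapping end-to-end is what lets a network learn pose and shape jointly.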
arXiv (Cornell University), Jan 19, 2018
Skeletal data acquisition generates a huge amount of high-dimensionality data. In many fields where motion capture techniques are now used, practitioners would greatly benefit from high-level representations of these motion sequences. However, meaningful motion data dimensionality reduction is not a trivial task, and the selection of the best set of features largely depends on the considered use case, hence the need for a fast customization and prototyping tool. In this work, we present a prototyping tool for motion representation and interaction design based on the MotionMachine framework, as well as use cases of new audiovisual instruments that use full-body motion to drive sound and visuals. These instruments have been developed using the proposed prototyping environment. Their development is a proof of concept and demonstrates the potential of an elaborate choice of higher-level feature extraction techniques for improving human/computer interaction and leading to more expressive experiences.
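One example of the higher-level features such a tool can expose is the quantity of motion, a common descriptor that collapses a high-dimensional skeletal stream into one value per frame by summing joint displacements. The function below is a minimal plain-Python illustration, not the MotionMachine API.

```python
def quantity_of_motion(frames):
    """Per-frame quantity of motion for a skeletal sequence.

    frames: list of frames, each a list of (x, y, z) joint positions.
    Returns len(frames) - 1 values, each the summed Euclidean
    displacement of all joints between consecutive frames.
    """
    qom = []
    for prev, cur in zip(frames, frames[1:]):
        total = 0.0
        for (x0, y0, z0), (x1, y1, z1) in zip(prev, cur):
            total += ((x1 - x0) ** 2 + (y1 - y0) ** 2
                      + (z1 - z0) ** 2) ** 0.5
        qom.append(total)
    return qom
```

A single scalar stream like this is far easier to map to sound or visuals than raw joint coordinates, which is the point of choosing features at the right level of abstraction.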
HAL (Le Centre pour la Communication Scientifique Directe), May 7, 2018
We propose to implement a bottom-up animation solution for the AZee system. No low-level AZee animation system exists yet, which hinders its effective use as Sign Language avatar input. This bottom-up approach delivers procedurally computed animations and, because of its procedural nature, it can generate the whole range of gestures covered by AZee's symbolic description. The goal is not to compete on the grounds of naturalness, since movements are bound to look robotic as in all bottom-up systems; rather, it could serve as the missing low-level fallback for an existing top-down system. The proposed animation system is built on top of a freely available 3D authoring tool and takes advantage of the tool's default IK solving routines.
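The IK routines such a system leans on typically reduce, at their core, to analytic two-bone solutions. A textbook planar version is sketched below; the actual solver is the authoring tool's own, and every name here is illustrative.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Analytic two-bone IK in the plane (law of cosines).

    Returns (shoulder, elbow) angles that place the chain tip at
    (tx, ty), or None when the target is out of reach for bones of
    lengths l1 and l2.
    """
    d = math.hypot(tx, ty)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None  # target unreachable with these bone lengths
    # Elbow bend from the law of cosines.
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder: aim at the target, then correct for the elbow bend.
    cos_inner = (d * d + l1 * l1 - l2 * l2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow
```

Clamping the cosines guards against floating-point values straying just outside [-1, 1] at the boundary of the reachable workspace, a standard precaution in production IK code.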
2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS)
This work presents a baseline for remotely monitoring patients with Alzheimer's disease (AD) in everyday life, using distributed tangible tabletops. Our proposed system aims to help individuals with AD live at home independently of the physical presence of caregivers, nurses, and/or family members, since the latter can assist patients with AD and monitor their activity through a virtually duplicated (dual reality) environment. This monitoring happens in real time, and monitors can encourage and remind patients with AD to take actions, while respecting their privacy. The interaction is achieved by moving physical (tangible) and/or virtual (digital) objects on the tabletop's surface, or by displaying messages and reminders on it. A first prototype is described and illustrated, along with propositions to validate and extend the system.
HAL (Le Centre pour la Communication Scientifique Directe), May 8, 2017
The character generation GUI (left) takes a 3-dimensional personality profile as input. It computes a combination of physical attributes compatible with off-the-shelf avatar authoring tools (right).
This paper introduces the DeEvA platform for the generation of virtual characters. The platform takes personality traits as input and is designed to generate characters whose physical appearance matches people's expectations. Characters generated with the platform can be used as believable Embodied Conversational Agents in interactive applications. The platform combines two crowdsourcing techniques: Reverse Correlation and Interactive Genetic Algorithms. This paper describes the method as well as three working examples.
Intelligent Virtual Agents, 2017
We present a method to generate a virtual character whose physical attributes reflect public opinion of a given personality profile. An initial reverse correlation experiment trains a model which explains the perception of personality traits from physical attributes. The reverse model, solved using linear programming, allows for the real-time generation of virtual characters from an input personality. The method has been applied to three personality traits (dominance, trustworthiness, and agreeableness) and 14 physical attributes, and verified through both an analytic test and a subjective study.
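The forward/reverse structure of the method can be illustrated at toy scale. The sketch below fits one trait against one attribute by least squares and inverts the model in closed form with range clamping; the paper itself fits 14 attributes and 3 traits and solves the reverse problem with linear programming, so everything here is a simplified stand-in.

```python
def fit_linear(samples):
    """Least-squares fit of trait = w * attribute + b.

    samples: list of (attribute_value, perceived_trait_score) pairs,
    as gathered in a reverse correlation experiment.
    """
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return w, b

def invert(w, b, target_trait, lo=0.0, hi=1.0):
    """Reverse step: attribute value producing the target trait
    score, clamped to the attribute's valid range [lo, hi]."""
    x = (target_trait - b) / w
    return max(lo, min(hi, x))
```

With many attributes the inversion is under-determined and range constraints apply to every attribute at once, which is why the full method needs linear programming rather than a closed form.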