Michael Gleicher - Academia.edu
Papers by Michael Gleicher
In this paper we consider detecting collisions between characters whose motion is specified by motion capture data. We consider rough collisions, modeling each character as a disk in the floor plane. To provide efficient collision detection, we introduce a hierarchical bounding volume, the Motion Oriented Bounding Box tree (MOBB tree). A MOBB tree stores space-time bounds of a motion clip. In crowd animation tests, MOBB trees provide performance improvements ranging between a factor of two and an order of magnitude.
11th Pacific Conference on Computer Graphics and Applications, 2003. Proceedings.
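The space-time pruning idea behind the MOBB tree can be illustrated with a toy hierarchy. This is a hedged sketch, not the paper's method: it uses axis-aligned rather than oriented boxes, and all names (`build`, `collide`) are invented for illustration. A clip is a list of per-frame (x, y) disk centers; each node bounds a frame range and the swept disk footprint, so whole subtrees are rejected when they fail to overlap in either time or space.

```python
def build(clip, radius, lo, hi):
    """Build a binary tree over frames lo..hi (inclusive) of one clip."""
    xs = [clip[i][0] for i in range(lo, hi + 1)]
    ys = [clip[i][1] for i in range(lo, hi + 1)]
    node = {
        "lo": lo, "hi": hi,
        # Spatial bound: box around every disk center in the range, inflated
        # by the disk radius so it contains the swept footprint.
        "box": (min(xs) - radius, min(ys) - radius,
                max(xs) + radius, max(ys) + radius),
        "kids": [],
    }
    if hi > lo:
        mid = (lo + hi) // 2
        node["kids"] = [build(clip, radius, lo, mid),
                        build(clip, radius, mid + 1, hi)]
    return node

def boxes_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def collide(na, nb, clip_a, clip_b, r):
    """Return frames where the two characters' disks may intersect."""
    # Prune on time first, then on space.
    if na["hi"] < nb["lo"] or nb["hi"] < na["lo"]:
        return []
    if not boxes_overlap(na["box"], nb["box"]):
        return []
    if not na["kids"] and not nb["kids"]:
        # Both leaves cover one frame; time pruning guarantees it's the same
        # frame, so do the exact disk-disk test there.
        f = na["lo"]
        (ax, ay), (bx, by) = clip_a[f], clip_b[f]
        return [f] if (ax - bx) ** 2 + (ay - by) ** 2 <= (2 * r) ** 2 else []
    hits = []
    if na["kids"]:
        for k in na["kids"]:
            hits += collide(k, nb, clip_a, clip_b, r)
    else:
        for k in nb["kids"]:
            hits += collide(na, k, clip_a, clip_b, r)
    return hits
```

For two characters walking through each other, only the frame where the disks actually meet survives the pruning.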
Proceedings of the 2014 ACM/IEEE International Conference, Mar 3, 2014
Gaze aversion, the intentional redirection of gaze away from the face of an interlocutor, is an important nonverbal cue that serves a number of conversational functions, including signaling cognitive effort, regulating a conversation's intimacy level, and managing the conversational floor. In prior work, we developed a model of how gaze aversions are employed in conversation to perform these functions. In this paper, we extend the model to apply to conversational robots, enabling them to achieve some of these functions in conversations with people. We present a system that addresses the challenges of adapting human gaze aversion movements to a robot with very different affordances, such as a lack of articulated eyes. This system, implemented on the NAO platform, autonomously generates and combines three distinct types of robot head movements with different purposes: face-tracking movements to engage in mutual gaze, idle head motion to increase lifelikeness, and purposeful gaze aversions to achieve conversational functions. The results of a human-robot interaction study with 30 participants show that gaze aversions implemented with our approach are perceived as intentional, and that robots can use gaze aversions to appear more thoughtful and effectively manage the conversational floor.
Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Jul 21, 2002
While motion capture is commonplace in character animation, often the raw motion data itself is not used. Rather, it is first fit onto a skeleton and then edited to satisfy the particular demands of the animation. This process can introduce artifacts into the motion. One particularly distracting artifact is when the character's feet move when they ought to remain planted, a condition known as footskate. In this paper we present a simple, efficient algorithm for removing footskate. Our algorithm exactly satisfies footplant constraints without introducing disagreeable artifacts.
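The core of footskate cleanup is enforcing a footplant constraint: solving leg IK so the foot lands exactly on its planted position each frame. The sketch below is a minimal 2D two-link analytic solver, not the paper's full algorithm (which also adjusts the root and blends smoothly between constrained and unconstrained frames); the function names and conventions are assumptions for illustration.

```python
import math

def leg_ik(hip, foot, thigh, shin):
    """Return (hip_angle, knee_bend) placing the ankle exactly at `foot`."""
    dx, dy = foot[0] - hip[0], foot[1] - hip[1]
    d = math.hypot(dx, dy)
    d = min(d, thigh + shin - 1e-9)  # clamp: never demand over-extension
    # Law of cosines on the triangle (hip, knee, ankle) with sides
    # thigh, shin, d. gamma is the interior knee angle; the bend is pi - gamma.
    cos_gamma = (thigh**2 + shin**2 - d**2) / (2 * thigh * shin)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_gamma)))
    # alpha is the interior angle at the hip, between the thigh and the
    # hip-to-ankle direction.
    cos_alpha = (thigh**2 + d**2 - shin**2) / (2 * thigh * d)
    hip_angle = math.atan2(dy, dx) - math.acos(max(-1.0, min(1.0, cos_alpha)))
    return hip_angle, knee

def fk(hip, thigh, shin, hip_angle, knee):
    """Forward kinematics: ankle position from the two joint angles."""
    kx = hip[0] + thigh * math.cos(hip_angle)
    ky = hip[1] + thigh * math.sin(hip_angle)
    ax = kx + shin * math.cos(hip_angle + knee)
    ay = ky + shin * math.sin(hip_angle + knee)
    return ax, ay
```

Because the solution is analytic, the footplant constraint is satisfied exactly rather than approximately, which matches the abstract's claim of exact constraint satisfaction.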
Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '92), Jul 1, 1992
In this paper we introduce through-the-lens camera control, a body of techniques that permit a user to manipulate a virtual camera by controlling and constraining features in the image seen through its lens. Rather than solving for camera parameters directly, constrained optimization is used to compute their time derivatives based on desired changes in user-defined controls. This effectively permits new controls to be defined independent of the underlying parameterization. The controls can also serve as constraints, maintaining their values as others are changed. We describe the techniques in general and work through a detailed example of a specific camera model. Our implementation demonstrates a gallery of useful controls and constraints and provides some examples of how these may be used in composing images and animations.
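The velocity-level idea can be sketched concretely: instead of setting camera parameters q directly, the user moves image-space controls p(q), and we recover dq/dt from the desired dp/dt through the Jacobian J = dp/dq. This toy is a hedged simplification of the paper's formulation: it uses a one-parameter camera (pan angle), a finite-difference Jacobian, and damped least squares instead of the paper's constrained optimization, and all function names are invented.

```python
import math

def project(pan, point_x, point_z, focal=1.0):
    """Screen x of a world point seen by a camera panned by `pan` radians."""
    # Rotate the world point into camera space, then perspective-divide.
    cx = math.cos(pan) * point_x - math.sin(pan) * point_z
    cz = math.sin(pan) * point_x + math.cos(pan) * point_z
    return focal * cx / cz

def ttl_step(pan, target_screen_x, point, dt=0.1, damping=1e-6):
    """One integration step: nudge the on-screen control toward its target."""
    h = 1e-5
    sx = project(pan, *point)
    jac = (project(pan + h, *point) - sx) / h      # dp/dq (scalar here)
    dp = target_screen_x - sx                       # desired screen-space change
    dq = jac * dp / (jac * jac + damping)           # damped least squares
    return pan + dt * dq

def solve(target_screen_x, point, pan=0.0, iters=200):
    """Integrate until the projected control reaches its target."""
    for _ in range(iters):
        pan = ttl_step(pan, target_screen_x, point)
    return pan
```

Dragging the projected point to the screen center drives the pan to atan(x/z), the angle at which the point lies on the optical axis.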
... One interface we have used for this is to attach a control rod (which represents the second parametric derivative) to the surface at the specified point, and let the user pull on the end of the rod. ...
As computational performance becomes more readily available, there will be an increasing variety of interactive graphical applications with iterative numerical techniques at their core. In this paper, we consider how to support the unique demands of such applications. In particular, we focus on how to set up the numerical problems which must be solved. In the context of interactive systems, this requires the ability to dynamically compose systems of equations and rapidly evaluate them and their derivatives. We present an approach called Snap-Together Mathematics for doing this.
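Composing equations at runtime and evaluating both values and derivatives is, in modern terms, an expression graph with automatic differentiation. The sketch below is a minimal forward-mode analogue, offered as an illustration of the idea only; the `Node`/`variable`/`constant` API is invented and is not Snap-Together Mathematics' actual interface.

```python
class Node:
    """An expression node carrying a value and its derivative w.r.t. x."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        # Sum rule: (f + g)' = f' + g'
        return Node(self.value + other.value, self.deriv + other.deriv)
    def __mul__(self, other):
        # Product rule: (f * g)' = f' g + f g'
        return Node(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def variable(v):
    """The quantity we differentiate with respect to: dv/dv = 1."""
    return Node(v, 1.0)

def constant(c):
    """A constant: dc/dv = 0."""
    return Node(c, 0.0)
```

Snapping expressions together with `+` and `*` builds the system dynamically, and one evaluation pass yields both f(x) and f'(x), which is exactly what an interactive solver needs each frame.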
Many motion editing algorithms, including transitioning and multitarget interpolation, can be represented as instances of a more general operation called motion blending. We introduce a novel data structure called a registration curve that expands the class of motions that can be successfully blended without manual input. Registration curves achieve this by automatically determining relationships involving the timing, local coordinate frame, and constraints of the input motions. We show how registration curves improve upon existing automatic blending methods and demonstrate their use in common blending operations.
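One ingredient of the timing relationship a registration curve captures is a timewarp aligning corresponding frames of two motions before blending. This toy uses dynamic time warping over per-frame feature values as a stand-in; the actual registration curve builds a richer correspondence that also aligns root coordinate frames and constraints, so treat this only as a sketch of the alignment step.

```python
def timewarp(a, b):
    """Return the frame-pair path (i, j) minimizing summed frame distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    back = [[None] * m for _ in range(n)]
    cost[0][0] = abs(a[0] - b[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best, step = INF, None
            # Allowed moves: advance both motions, or hold one of them.
            for pi, pj in ((i - 1, j - 1), (i - 1, j), (i, j - 1)):
                if pi >= 0 and pj >= 0 and cost[pi][pj] < best:
                    best, step = cost[pi][pj], (pi, pj)
            cost[i][j] = best + abs(a[i] - b[j])
            back[i][j] = step
    path, ij = [], (n - 1, m - 1)
    while ij is not None:       # recover the alignment by backtracking
        path.append(ij)
        ij = back[ij[0]][ij[1]]
    return path[::-1]
```

Given two walks where one lingers an extra frame at the start, the warp holds the faster motion on that frame so corresponding poses line up before blending.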
Computer Graphics Forum
A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: ‘The face is the portrait of the mind; the eyes, its informers’. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human–human interactions. This review article provides an overview of the efforts made in tackling this demanding task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, ...
Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems - CHI '12, 2012
Virtual agents hold great promise in human-computer interaction with their ability to afford embodied interaction using nonverbal human communicative cues. Gaze cues are particularly important to achieve significant high-level outcomes such as improved learning and feelings of rapport. Our goal is to explore how agents might achieve such outcomes through seemingly subtle changes in gaze behavior and what design variables for gaze might lead to such positive outcomes. Drawing on research in human physiology, we developed a model of gaze behavior to capture these key design variables. In a user study, we investigated how manipulations in these variables might improve affiliation with the agent and learning. The results showed that an agent using affiliative gaze elicited more positive feelings of connection, while an agent using referential gaze improved participants' learning. Our model and findings offer guidelines for the design of effective gaze behaviors for virtual agents.
ACM Transactions on Interactive Intelligent Systems, 2015
To facilitate natural interactions between humans and embodied conversational agents (ECAs), we need to endow the latter with the same nonverbal cues that humans use to communicate. Gaze cues in particular are integral in mechanisms for communication and management of attention in social interactions, which can trigger important social and cognitive processes, such as establishment of affiliation between people or learning new information. The fundamental building blocks of gaze behaviors are gaze shifts: coordinated movements of the eyes, head, and body toward objects and information in the environment. In this article, we present a novel computational model for gaze shift synthesis for ECAs that supports parametric control over coordinated eye, head, and upper body movements. We employed the model in three studies with human participants. In the first study, we validated the model by showing that participants are able to interpret the agent's gaze direction accurately. In the second and third studies, we showed that by adjusting the participation of the head and upper body in gaze shifts, we can control the strength of the attention signals conveyed, thereby strengthening or weakening their social and cognitive effects. The second study shows that manipulation of eye-head coordination in gaze enables an agent to convey more information or establish stronger affiliation with participants in a teaching task, while the third study demonstrates how manipulation of upper body coordination enables the agent to communicate increased interest in objects in the environment.
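The parametric control over eye-head participation can be illustrated with a toy split of a gaze shift between eyes and head. This is a hedged sketch under stated assumptions: a single yaw axis, a fixed oculomotor range, and a scalar head-contribution parameter; the actual model also handles velocity profiles, timing, and upper-body participation, and the constant below is assumed, not taken from the paper.

```python
EYE_LIMIT = 35.0   # degrees; an assumed oculomotor range for the sketch

def gaze_shift(target_yaw, head_contribution):
    """Split a target yaw (degrees) into (eye_yaw, head_yaw).

    head_contribution in [0, 1] sets how much the head participates;
    raising it conveys a stronger attention signal with the same gaze target.
    """
    head = head_contribution * target_yaw
    eyes = target_yaw - head
    if abs(eyes) > EYE_LIMIT:
        # Eyes saturate at their range; the head must cover the remainder.
        eyes = EYE_LIMIT if eyes > 0 else -EYE_LIMIT
        head = target_yaw - eyes
    return eyes, head
```

The invariant is that eyes plus head always sum to the target, so the agent looks at the same place while the head's share, and with it the perceived strength of the attention cue, varies.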
Lecture Notes in Computer Science, 2013
In conversation, people avert their gaze from one another to achieve a number of conversational functions, including turn-taking, regulating intimacy, and indicating that cognitive effort is being put into planning an utterance. In this work, we enable virtual agents to effectively use gaze aversions to achieve these same functions in conversations with people. We extend existing social science knowledge of gaze aversion by analyzing video data of human dyadic conversations. This analysis yielded precise timings of speaker and listener gaze aversions, enabling us to design gaze aversion behaviors for virtual agents. We evaluated these behaviors for their ability to achieve positive conversational functions in a laboratory experiment with 24 participants. Results show that virtual agents employing gaze aversion are perceived as thinking, are able to elicit more disclosure from human interlocutors, and are able to regulate conversational turn-taking.
Proceedings of the 2007 symposium on Interactive 3D graphics and games - I3D '07, 2007
Figure: An interactively controllable walking character using parametric motion graphs to smoothly move through an environment. The character is turning around to walk in the user-requested travel direction, depicted by the red arrow on the ground.
ACM SIGGRAPH Computer Graphics, 1992
In this paper we introduce through-the-lens camera control, a body of techniques that permit a user to manipulate a virtual camera by controlling and constraining features in the image seen through its lens. Rather than solving for camera parameters directly, constrained optimization is used to compute their time derivatives based on desired changes in user-defined controls. This effectively permits new controls to be defined independent of the underlying parameterization. The controls can also serve as constraints, maintaining their values as others are changed. We describe the techniques in general and work through a detailed example of a specific camera model. Our implementation demonstrates a gallery of useful controls and constraints and provides some examples of how these may be used in composing images and animations.
Journal of Vision, 2014
Our visual system can extract statistical properties of large collections of objects. Most studies of this ability focus on mean value judgments across a constrained set of dimensions [1-3]. We explore how two visual representations of a set, line height and color, influence viewers' abilities to visually extract different properties from the set.
2014 IEEE Conference on Visual Analytics Science and Technology (VAST), 2014
Computer Graphics Forum: Journal of the European Association for Computer Graphics, 2014
Many bioinformatics applications construct classifiers that are validated in experiments that compare their results to known ground truth over a corpus. In this paper, we introduce an approach for exploring the results of such classifier validation experiments, focusing on classifiers for regions of molecular surfaces. We provide a tool that allows for examining classification performance patterns over a test corpus. The approach combines a summary view that provides information about an entire corpus of molecules with a detail view that visualizes classifier results directly on protein surfaces. Rather than displaying miniature 3D views of each molecule, the summary provides 2D glyphs of each protein surface arranged in a reorderable, small-multiples grid. Each summary is specifically designed to support visual aggregation to allow the viewer to both get a sense of aggregate properties as well as the details that form them. The detail view provides a 3D visualization of each protein ...