Clifton Forlines - Academia.edu

Papers by Clifton Forlines

Research paper thumbnail of GhostID

Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017

Current touch devices are adept at tracking finger touches, but cannot distinguish whether multiple touches are caused by different fingers on a single hand, by fingers from both hands of a single user, or by different users. This limitation significantly reduces the possibilities for interaction techniques in touch interfaces. We present GhostID, a capacitive sensor that can differentiate the origins of multiple simultaneous touches. Our approach analyzes the signal ghosting already present as an artifact in a frequency-division touch controller to differentiate touches from the same hand or different hands of a single user (77% reliability at 60 fps) or from two different users (95% reliability at 60 fps). In addition to GhostID, we also develop a framework of user-differentiation capabilities for touch input devices, and illustrate a set of interaction techniques enabled by GhostID.

Research paper thumbnail of How pairs interact over a multimodal digital table

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2007

Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single-user applications. We contribute to the understanding of these environments in two ways. First, we saw that speech and gesture commands served double duty as both commands to the computer and as implicit communication to others. Second, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, and they performed interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to a deeper understanding of multi-user, multimodal digital table interaction.

Research paper thumbnail of Three modes of multisurface interaction and visualization

Chia Shen, Alan Esenther, Clifton Forlines, Kathy Ryall. Figure, from left to right: (a) Parsons Brinckerhoff PB CAVE (Computer Analysis and Visualization Environment) for urban design [8]; (b) document triage [1]; (c) ad hoc group meeting [6]; (d) NYPD Real-Time Crime Center [7] ...

Research paper thumbnail of Stratégies de décimation pour traitement d'événements d'entrée

According to one embodiment, the invention relates to a touch device that comprises a touch interface with rows and columns, and a signal generator for generating unique orthogonal signals on a plurality of the rows, respectively. A touch processor identifies a contact on the touch interface by processing the touch signals present on the columns, and outputs a stream of touch events. A decimator receives the touch-event stream, selectively identifies one or more of the touch events in the stream, and outputs a modified touch-event stream for use by the touch device.

Research paper thumbnail of Hammer Time!

Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016

We report on the Latency Hammer, a low-cost yet high-accuracy, high-precision automated tool that measures the interface latency of touchscreen devices. The Hammer directly measures latency by triggering a capacitive touch event on a device using an electrically actuated touch simulator, and uses a photosensor to monitor the screen for a visual response. This allows us to measure the full end-to-end latency of a touchscreen system exactly as it would be experienced by a user. The Hammer does not require human interaction to perform a measurement, enabling the acquisition of large datasets. We present the operating principles of the Hammer, and discuss its design and construction; full design documents are available online. We also present a series of tools and equipment that were built to assess and validate the performance of the Hammer, and demonstrate that it provides reliable latency measurements.

Research paper thumbnail of Informing the Design of Direct-Touch Tabletops

IEEE Computer Graphics and Applications, 2006

Tables provide a large and natural interface for supporting direct manipulation of visual content in human-to-human interactions. Such surfaces also support collaboration, coordination, and parallel problem solving. However, the direct-touch table metaphor also presents considerable challenges, including the need for input methods that transcend traditional mouse- and keyboard-based designs.

Research paper thumbnail of DiamondSpin

Proceedings of the 2004 conference on Human factors in computing systems - CHI '04, 2004

Research paper thumbnail of Under the table interaction

Proceedings of the 19th annual ACM symposium on User interface software and technology - UIST '06, 2006

Research paper thumbnail of Direct-touch vs. mouse input for tabletop displays

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2007

Research paper thumbnail of Enabling interaction with single user applications through speech and gestures on a multi-user tabletop

Proceedings of the working conference on Advanced visual interfaces - AVI '06, 2006

Co-located collaborators often work over physical tabletops with rich geospatial information. Previous research shows that people use gestures and speech as they interact with artefacts on the table and communicate with one another. With the advent of large multi-touch surfaces, developers are now applying this knowledge to create appropriate technical innovations in digital table design. Yet they are limited by the difficulty of building a truly useful collaborative application from the ground up. In this paper, we circumvent this difficulty by: (a) building a multimodal speech and gesture engine around the DiamondTouch multi-user surface, and (b) wrapping existing, widely used, off-the-shelf single-user interactive spatial applications with a multimodal interface created from this engine. Through case studies of two quite different geospatial systems, Google Earth and Warcraft III, we show the new functionalities, feasibility, and limitations of leveraging such single-user applications within a multi-user, multimodal tabletop. This research informs the design of future multimodal tabletop applications that can exploit single-user software conveniently available in the market. We also contribute (1) a set of technical and behavioural affordances of multimodal interaction on a tabletop, and (2) lessons learnt from the limitations of single-user applications.

Research paper thumbnail of Multimodal multiplayer tabletop gaming

Computers in Entertainment, 2007

There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this article we argue that a digital table is a conducive form factor for general co-located home gaming, as it affords: (a) seating in collaboratively relevant positions that gives everyone equal opportunity to reach onto the surface and share a common view; (b) rich whole-handed gesture input usually seen only when handling physical objects; (c) the ability to monitor how others use space and access objects on the surface; and (d) the ability to communicate with each other and interact on top of the surface via gestures and verbal utterances. Our thesis is that multimodal gesture and speech input benefits collaborative interaction over such a digital table. To investigate this thesis, we designed a multimodal, multiplayer gaming environment that allows players to interact directly atop a digital table via speech and rich whole-hand ...

Research paper thumbnail of A speech-in list-out approach to spoken user interfaces

Proceedings of HLT-NAACL 2004: Short Papers on XX - HLT-NAACL '04, 2004

Spoken user interfaces are conventionally either dialogue-based or menu-based. In this paper we propose a third approach, in which the task of invoking responses from the system is treated as one of retrieval from the set of all possible responses. Unlike conventional spoken user interfaces that return a unique response to the user, the proposed interface returns a shortlist of possible responses, from which the user makes the final selection. We refer to such interfaces as Speech-In List-Out, or SILO, interfaces. Experiments show that SILO interfaces can be very effective, are highly robust to degraded speech-recognition performance, and can impose significantly lower cognitive load on the user than menu-based interfaces.

Research paper thumbnail of How Much Faster is Fast Enough?

Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015

This paper reports on two experiments designed to further our understanding of users' perception of latency in touch-based systems. The first experiment extends previous efforts to measure latency perception by reporting on a unified study in which direct and indirect form factors are compared for both tapping and dragging tasks. Our results show significant effects of both form factor and task, and inform system designers as to what input latencies they should aim to achieve in a variety of system types. A follow-up experiment investigates people's ability to perceive small improvements to latency in direct and indirect form factors for tapping and dragging tasks. Our results give system designers guidance on the relative value of latency improvements that reduce but do not fully eliminate lag from their systems.

Research paper thumbnail of iLamps

ACM SIGGRAPH 2003 Papers on - SIGGRAPH '03, 2003

Research paper thumbnail of Context aware projector

Research paper thumbnail of Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces

Freehand gestural interaction with direct-touch computation surfaces has been the focus of significant recent research activity. While many interesting gestural interaction techniques have been proposed, their design has been mostly ad hoc and has not been presented within a constructive design framework. In this paper, we develop and articulate a set of design principles for constructing, in a systematic and extensible manner, multi-hand gestures on touch surfaces that can sense multiple points and shapes and can also accommodate conventional point-based input. To illustrate the generality of these design principles, a set of bimanual continuous gestures that embody them is developed and explored within a prototype tabletop publishing application. We carried out a user evaluation to assess the usability of these gestures, and we use the results and observations to suggest future design guidelines.

Research paper thumbnail of Experiences with and observations of direct-touch tabletops

The design of multi-touch, multi-user tabletop user interfaces is still in its infancy and is not yet well understood. To date, published experimental results have primarily focused on controlled user studies. In this paper, we present observations of user experience "in the wild" on interactive tables in four different real-world contexts, all non-controlled settings. We reflect upon our collective experience, report our observations, and summarize lessons learned by identifying design considerations relating to several aspects of interactive tables, such as simultaneous touching, ambiguous input, one-fingered touch, finger resolution, alternate touch input, crowding and clutter, text input, orientation, multi-user coordination, occlusion, ergonomic issues, and mental models.

Research paper thumbnail of Release, Relocate, Reorient, Resize: Fluid Techniques for Document Sharing on Multi-User Interactive Tables

CHI '04 Extended Abstracts on Human Factors in Computing Systems, 2004

Research paper thumbnail of Handbook of research on user interface design and evaluation for mobile technology

Choice Reviews Online, 2008

In this chapter we discuss a variety of topics relating to speech-based user interfaces for use in an automotive environment. We begin by presenting a number of principles for the design of such interfaces, derived from several decades of combined experience in the development and evaluation of spoken user interfaces (UIs) for automobiles, along with three case studies of current automotive navigation interfaces. Finally, we present a new model for speech-based user interfaces in automotive ...

Research paper thumbnail of Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces

First IEEE International Workshop on Horizontal Interactive Human-Computer Systems (TABLETOP '06), 2006

Freehand gestural interaction with direct-touch computation surfaces has been the focus of significant recent research activity. While many interesting gestural interaction techniques have been proposed, their design has been mostly ad hoc and has not been presented within a constructive design framework. In this paper, we develop and articulate a set of design principles for constructing, in a systematic and extensible manner, multi-hand gestures on touch surfaces that can sense multiple points and shapes and can also accommodate conventional point-based input. To illustrate the generality of these design principles, a set of bimanual continuous gestures that embody them is developed and explored within a prototype tabletop publishing application. We carried out a user evaluation to assess the usability of these gestures, and we use the results and observations to suggest future design guidelines.
