Modal spaces
Related papers
Perspective Cursor: Perspective-Based Interaction for Multi-Display Environments
Proceedings of the …, 2006
Multi-display environments and smart meeting rooms are now becoming more common. These environments build a shared display space from a variety of devices: tablets, projected surfaces, tabletops, and traditional monitors. Since the different display surfaces are usually not organized in a single plane, traditional schemes for stitching the displays together can cause problems for interaction. However, there is a more natural way to compose display space: using perspective. In this paper, we develop interaction techniques for multi-display environments that are based on the user's perspective on the room. We designed the Perspective Cursor, a mapping of the cursor to display space that appears natural and logical from wherever the user is located. We conducted an experiment to compare two perspective-based techniques, the Perspective Cursor and a beam-based technique, with traditional stitched displays. We found that both perspective techniques were significantly faster for targeting tasks than the traditional technique, and that Perspective Cursor was the most preferred method. Our results show that integrating perspective into the design of multi-display environments can substantially improve performance.
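The core idea of a perspective-based cursor mapping can be pictured with a short sketch. This is a minimal illustration, assuming each display is a flat rectangle with a known pose in room coordinates and the user's eye position is tracked: a ray is cast from the eye through the cursor's 3D position and intersected with the display plane. All names and the plane-intersection math here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class Display:
    """A flat display with a known pose in room coordinates (illustrative)."""
    def __init__(self, origin, x_axis, y_axis, width, height):
        self.origin = np.asarray(origin, dtype=float)   # top-left corner, metres
        self.x_axis = np.asarray(x_axis, dtype=float)   # unit vector along width
        self.y_axis = np.asarray(y_axis, dtype=float)   # unit vector along height
        self.normal = np.cross(self.x_axis, self.y_axis)
        self.width, self.height = width, height

def perspective_hit(eye, cursor_3d, display):
    """Cast a ray from the user's eye through the cursor onto the display plane.

    Returns (u, v) coordinates on the display surface in metres, or None if the
    ray misses the display or runs parallel to / away from its plane.
    """
    direction = cursor_3d - eye
    denom = np.dot(display.normal, direction)
    if abs(denom) < 1e-9:                      # ray parallel to the plane
        return None
    t = np.dot(display.normal, display.origin - eye) / denom
    if t <= 0:                                 # plane lies behind the viewer
        return None
    hit = eye + t * direction
    u = np.dot(hit - display.origin, display.x_axis)
    v = np.dot(hit - display.origin, display.y_axis)
    if 0 <= u <= display.width and 0 <= v <= display.height:
        return (u, v)
    return None

# Usage: a 0.5 m x 0.3 m monitor one metre in front of the viewer.
monitor = Display(origin=[-0.25, 0.15, 1.0], x_axis=[1, 0, 0],
                  y_axis=[0, -1, 0], width=0.5, height=0.3)
eye = np.array([0.0, 0.0, 0.0])
cursor = np.array([0.05, 0.02, 0.5])           # between the eye and the screen
print(perspective_hit(eye, cursor, monitor))   # on-screen (u, v) or None
```

Because the mapping is defined from the viewer's position rather than by stitching screen edges together, the same routine can be run against every display in the room and the cursor lands wherever it visually appears to be from the user's seat.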
Interaction Techniques for Wall-Sized Screens
Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces - ITS '15, 2015
Large screen displays are part of many future visions, such as i-LAND [11], which describes a possible workspace of the future. Research has shown that wall-sized screens provide clear benefits for data exploration, collaboration, and organizing work in office environments. With increasing computational power and falling display prices, wall-sized screens are currently making the step out of research labs and specific settings into office environments and private life. Today, there is no standard set of interaction techniques for interacting with wall-sized displays, and it is even unclear whether a single mode of input is suitable for all potential applications. In this workshop, we will bring together researchers from academia and industry who work on large screens. Together, we will survey current research directions, review promising interaction techniques, and identify the underlying fundamental research challenges.
Interacting in multi-display environments
Proceedings of the Doctoral Colloquium at Ubicomp, 2007
With the introduction of mobile devices as well as large public displays, we are surrounded by multiple displays throughout the day. Interacting with displayed information on one screen at a time is very common, but interacting across multiple displays simultaneously is difficult due to reachability constraints, hard-to-use techniques, and limited input capabilities. In my work I will investigate and propose new techniques that allow copying, moving, or manipulating information across multiple displays. In addition, I will explore ways to allow remote control of (partially) unreachable displays.
An interaction model for universal interaction and control in multi display environments
2003
In this paper we propose a framework for interaction within multi-display environments. Each component of the framework is detailed, with comparisons to existing research in ubiquitous computing software infrastructures. We identify a key component of our framework: the interaction manager, which is designed to allow users to interact within multi-display environments with multiple devices exhibiting multiple modalities. This is substantiated with a mock-up of a scenario in which we foresee this research being used.
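The interaction manager named in this abstract can be pictured with a short sketch. This is a minimal illustration, assuming the manager's job is to accept input events from heterogeneous devices, each tagged with a modality, and route them to a handler for the target display; every class, field, and method name here is hypothetical and not drawn from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class InputEvent:
    device: str      # e.g. "phone-1", "laser-pointer"
    modality: str    # e.g. "touch", "speech", "pointing"
    target: str      # logical name of the destination display
    payload: dict = field(default_factory=dict)

class InteractionManager:
    """Routes events from many devices and modalities to displays (hypothetical)."""
    def __init__(self):
        self._handlers: Dict[str, Callable[[InputEvent], None]] = {}

    def register_display(self, name: str, handler: Callable[[InputEvent], None]):
        self._handlers[name] = handler

    def dispatch(self, event: InputEvent):
        handler = self._handlers.get(event.target)
        if handler is None:
            raise KeyError(f"no display registered under {event.target!r}")
        handler(event)

# Usage: two displays, one touch event routed from a phone to the wall.
manager = InteractionManager()
manager.register_display("wall", lambda e: print("wall got", e.modality, e.payload))
manager.register_display("tabletop", lambda e: print("tabletop got", e.modality))
manager.dispatch(InputEvent("phone-1", "touch", "wall", {"x": 120, "y": 80}))
```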
Interaction techniques in large display environments using hand-held devices
2006
Hand-held devices hold large potential as interaction devices given their ubiquity today, and, as smart devices with multi-modal sensing and display capabilities, they present us with an opportunity to devise new and unique ways of interacting. This paper introduces user interaction techniques (for selection, translation, scaling, and rotation of objects) using a camera-equipped hand-held device, such as a mobile phone or a PDA, for large shared environments.
Interaction and visualisation across multiple displays in ubiquitous computing environments
Proceedings of the 4th international conference on Computer graphics, virtual reality, visualisation and interaction in Africa - Afrigaph '06, 2006
This paper describes the Universal Interaction Controller (UIC), a user interface framework and device designed to support interactions in ubiquitous computing environments, and the in-situ visualisation of ambient information in environments equipped with multiple heterogeneous displays. We describe the device and the infrastructure we have created to support it. We present the use of augmented reality to display information that is outside the bounds of traditional display surfaces.
Eliciting usable gestures for multi-display environments
Proceedings of the 2012 ACM international conference on Interactive tabletops and surfaces, 2012
Multi-display environments (MDEs) have advanced rapidly in recent years, incorporating multi-touch tabletops, tablets, wall displays, and even position-tracking systems. Designers have proposed a variety of interesting gestures for use in an MDE, some of which involve a user moving their hands, arms, body, or even a device itself. These gestures are often used as part of interactions to move data between the various components of an MDE, which is a longstanding research problem. But designers, not users, have created most of these gestures, and concerns over implementation issues such as recognition may have influenced their design. We performed a user study to elicit these gestures directly from users, but found a low level of convergence among the gestures produced. This lack of agreement is important, and we discuss its possible causes and the implications it has for designers. To assist designers, we present the most prevalent gestures and some of the underlying conceptual themes behind them. We also analyse how factors such as distance and device type impact the choice of gestures, and discuss how to apply these findings to real-world systems.
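The "low level of convergence" reported here is commonly quantified in elicitation studies with an agreement score such as the one proposed by Wobbrock et al.: for each referent, sum the squared fractions of participants who proposed the same gesture. The sketch below computes that standard measure; the example proposals are invented, and the abstract does not say this paper used exactly this score.

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement for one referent: sum over identical-gesture groups of
    (group size / total proposals) squared (Wobbrock et al.'s measure)."""
    total = len(proposals)
    counts = Counter(proposals)
    return sum((n / total) ** 2 for n in counts.values())

# Usage: 10 participants propose gestures for "move object to wall display".
proposals = ["flick", "flick", "drag", "flick", "pinch-throw",
             "drag", "flick", "tap-tap", "drag", "flick"]
print(round(agreement_score(proposals), 3))  # 0.36: far from full agreement (1.0)
```

A score near 1.0 means nearly everyone proposed the same gesture; the low convergence the authors describe corresponds to scores well below that.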
Comparing gestures and traditional interaction modalities on large displays
Human-Computer Interaction–INTERACT 2009, 2009
Interfaces based on gesture recognition offer a simple and intuitive alternative to the use of traditional menus, keyboard, and mouse. In this paper we explore the field of gestural interaction on large screen displays, presenting the results of a study in which users employed gestures to perform common actions in various applications suited to large displays. Results show for which actions gestural interaction is a better choice than traditional interaction.