Design and Development of Multimodal Applications: A Vision on Key Issues and Methods
Related papers
Designing, Developing & Evaluating Multimodal Applications
2000
Our group has been working with multimodal pen and voice applications since 1994. In this paper, we briefly describe several implemented systems, provide an overview of the infrastructure used for these projects, and then outline a novel methodology that encourages an incremental, integrated approach for simultaneously designing, developing and evaluating multimodal systems. We finish by listing a …
Software Design and Development of Multimodal Interaction
IFIP International Federation for Information Processing, 2004
The multimodal dimension of a user interface raises numerous problems that are not present in more traditional interfaces. In this paper, we briefly review the current approaches in software design and modality integration techniques for multimodal interaction. We then propose a simple framework for describing multimodal interaction designs and for combining sets of user inputs of different modalities. We show that the proposed framework can help designers reason about synchronization pattern problems and test interaction robustness.
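A common building block in frameworks like the one this abstract describes is temporal fusion: grouping inputs from different modalities that arrive within a shared time window. The sketch below is illustrative only; the event type, field names, and window value are assumptions, not details from the paper.

```python
from dataclasses import dataclass

# Hypothetical event type for illustration; the paper's actual
# framework and data model are not specified here.
@dataclass
class InputEvent:
    modality: str     # e.g. "speech", "pen"
    payload: str
    timestamp: float  # seconds

def fuse(events, window=1.5):
    """Group events from different modalities whose timestamps fall
    within a shared window, a common way to pair e.g. a spoken
    command with a pointing gesture."""
    events = sorted(events, key=lambda e: e.timestamp)
    groups, current = [], []
    for e in events:
        if current and e.timestamp - current[0].timestamp > window:
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups

events = [
    InputEvent("speech", "put that", 0.2),
    InputEvent("pen", "point:(120,80)", 0.9),
    InputEvent("speech", "there", 3.0),
    InputEvent("pen", "point:(300,40)", 3.4),
]
for group in fuse(events):
    print([(e.modality, e.payload) for e in group])
```

Varying the window length is one way to probe the synchronization patterns and robustness questions the abstract mentions: a window that is too short splits intended speech/pen pairs, while one that is too long merges unrelated commands.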
Multimodality and Multi-device Interfaces
This position paper discusses how to take into account multimodality when designing multi-device user interfaces. In particular, it focuses on graphical and vocal interaction and discusses how to obtain interfaces in either one modality or their combination starting with logical descriptions, such as the indications of the tasks to perform in order to reach the user's goals. It also introduces how such an approach can enable achieving context-dependent or migratory interfaces exploiting various modalities.
Designing and evaluating multimodal interaction for mobile contexts
Proceedings of the 10th international conference on Multimodal interfaces - IMCI '08, 2008
In this paper we report on our experience with the design and evaluation of multimodal user interfaces in various contexts. We introduce a novel combination of existing design and evaluation methods in the form of a 5-step iterative process, and show the feasibility of this method and some of the lessons learned through the design of a messaging application for two contexts (in car, walking). The iterative design process we employed included the following five basic steps: 1) identification of the limitations affecting the usage of different modalities in various contexts (contextual observations and context analysis), 2) identifying and selecting suitable interaction concepts and creating a general design for the multimodal application (storyboarding, use cases, interaction concepts, task breakdown, application UI and interaction design), 3) creating modality-specific UI designs, 4) rapid prototyping, and 5) evaluating the prototype in naturalistic situations to find key issues to be taken into account in the next iteration. We have not only found clear indications that context affects users' preferences in the usage of modalities and interaction strategies but have also identified some of these. For instance, while speech interaction was preferred in the car environment, users did not consider it useful when they were walking. 2D (finger strokes) and especially 3D (tilt) gestures were preferred by walking users.
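The context-dependent preferences reported above (speech in-car, 2D/3D gestures while walking) can be operationalized as a simple ranked lookup when an application chooses a default modality. The preference table below paraphrases the paper's findings; the function itself is a hypothetical sketch, not the authors' implementation.

```python
# Rankings paraphrase the paper's reported preferences; the
# selection logic is an illustrative assumption.
PREFERENCES = {
    "car":     ["speech", "2d_gesture", "3d_gesture"],
    "walking": ["3d_gesture", "2d_gesture", "speech"],
}

def pick_modality(context, available):
    """Return the highest-ranked modality for the given context
    that the device currently supports, or None if no ranked
    modality is available."""
    for m in PREFERENCES.get(context, []):
        if m in available:
            return m
    return None

# A walking user on a device without tilt sensing falls back to
# finger strokes rather than speech.
print(pick_modality("walking", {"speech", "2d_gesture"}))
```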
Technical Requirements for a Successful Multimodal Interaction
2001
Combining several modalities in the same interface requires input and output devices to have certain characteristics and to provide specific information that is important at the technical level. Unfortunately, several current devices do not provide such information. The reason is simple: they were designed to be used in isolation, not in combination with other devices. In this paper, we describe the technical requirements revealed by our practical experience in designing multimodal interfaces.
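One concrete instance of the missing information this abstract alludes to is timing: a device built for isolated use may deliver raw events without timestamps, which makes cross-modality correlation impossible. The adapter below is a hypothetical sketch of how such information can be retrofitted; none of the names come from the paper.

```python
import time
from dataclasses import dataclass

# Hypothetical wrapper type for illustration.
@dataclass
class TimedInput:
    device: str
    data: object
    timestamp: float

def wrap(device_name, raw):
    """Attach a wall-clock timestamp to a raw device event so it
    can later be correlated with events from other modalities."""
    return TimedInput(device_name, raw, time.time())

e = wrap("touchscreen", (120, 80))
print(e.device, e.data)
```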
Embarking on multimodal interface design
Proceedings. Fourth IEEE International Conference on Multimodal Interfaces
Designers increasingly face the challenge of targeting multimodal applications, those that span heterogeneous devices and use multimodal input, yet lack tools to support this work. We studied the early-stage work practices of professional multimodal interaction designers. We noted the variety of artifacts produced, such as design sketches and paper prototypes. Additionally, we observed Wizard of Oz techniques that are sometimes used to simulate an interactive application from these sketches. These studies have led to our development of a technique for interface designers to consider as they embark on creating multimodal applications.
Human Factors and Design Issues in Multimodal (Speech/Gesture) Interface
Multimodal interfaces are an emerging technology that offers expressive, transparent, efficient, robust, and mobile human-computer interaction. In this paper, we systematically describe a speech/gesture-based multimodal interface from the human factors point of view. To design a more practical and efficient multimodal interface, human factors issues such as user modeling, usability studies, speech and gesture interaction/integration, and redundancy are discussed. This paper can be helpful to researchers working to enhance the performance of multimodal interaction.
Designing Mobile Multimodal Applications
Multimodality in Mobile …, 2009
In this chapter we describe a set of techniques and tools that aim at supporting designers while creating mobile multimodal applications. We explain how the additional difficulties that designers face during this process, especially those related to multimodality, can be tackled. In particular, we present a scenario generation and context definition framework that can be used to drive design and support evaluation within realistic settings, promoting in-situ design and richer results. In conjunction with the scenario framework, we detail a prototyping tool that was developed to support the early-stage prototyping and evaluation process of mobile multimodal applications, from the first sketch-based prototypes up to the final quantitative analysis of usage results. As a case study, we describe a mobile application for accessing and reading rich digital books. The application aims at offering users, in particular blind users, a means to read and annotate digital books, and it was designed to be used on Pocket PCs and Smartphones, including a set of features that enhance both the content and usability of traditional books.
Multimodal interfaces with voice and gesture input
1995 IEEE International Conference on Systems, Man and Cybernetics. Intelligent Systems for the 21st Century
Building Supportive Multimodal User Interfaces
In this paper, we describe and discuss solutions capable of helping in the development of supportive multimodal user interfaces. Based on the specifications and design of the European Union-funded project GUIDE (Gentle User Interfaces for Elderly People), we show how it is possible to use several modalities of interaction, as well as adapting UIs, as a means of providing users with ideal interaction in every application, and of preventing or resolving errors resulting from missed or wrong user-device inputs.