MIMI: A Multimodal and Adaptive Interface for an In-Car Communication System

Comparison of manual vs. speech-based interaction with in-vehicle information systems

Accident Analysis and Prevention, 2009

This study examined whether speech-based interfaces for different in-vehicle information systems (IVIS) reduce the distraction these systems cause. For three frequently used systems (audio, telephone with name selection, and navigation with address entry and point-of-interest selection), speech control, manual control, and driving without IVIS (baseline) were compared. The Lane Change Task was used to assess driving performance; gaze behavior and a subjective measure of distraction were also analyzed. Speech interfaces improved driving performance, gaze behavior, and subjective distraction for all systems, with the exception of point-of-interest entry. However, these improvements were overall not strong enough to reach the baseline performance level; only in easy segments of the driving task was performance comparable to baseline. Speech-based IVIS therefore have to be developed further to keep cognitive complexity at a level that does not disturb driving. Nevertheless, given these benefits, speech control is a must for the car of the future.
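The Lane Change Task used here quantifies driving performance as the lateral deviation between the driven path and a normative lane-change path (the method standardized in ISO 26022). As a rough illustration of the core metric only, here is a minimal sketch; the function name and sample values are hypothetical, not from the study:

```python
import numpy as np

def mean_lane_deviation(driven_y, normative_y):
    """Mean absolute lateral deviation (metres) between the driven path
    and the normative lane-change path, sampled at the same longitudinal
    positions -- the core Lane Change Task performance metric."""
    driven_y = np.asarray(driven_y, dtype=float)
    normative_y = np.asarray(normative_y, dtype=float)
    return float(np.mean(np.abs(driven_y - normative_y)))

# Illustrative lateral positions (metres) sampled along the track:
driven = [0.0, 0.4, 1.8, 3.4, 3.6, 3.5]
normative = [0.0, 0.0, 1.75, 3.5, 3.5, 3.5]
print(f"mean deviation: {mean_lane_deviation(driven, normative):.2f} m")
```

A larger mean deviation while operating an IVIS, relative to the baseline drive, indicates greater distraction.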

Development and Evaluation of Automotive Speech Interfaces: Useful Information from the Human Factors and the Related Literature

International Journal of Vehicular Technology, 2013

Drivers often use infotainment systems in motor vehicles, such as systems for navigation, music, and phones. However, operating visual-manual interfaces for these systems can distract drivers. Speech interfaces may be less distracting. To help design easy-to-use speech interfaces, this paper identifies key speech interfaces (e.g., CHAT, Linguatronic, SYNC, Siri, and Google Voice), their features, and what was learned from evaluating them and other systems. Also included is information on key technical standards (e.g., ISO 9921, ITU P.800) and relevant design guidelines. The paper also describes relevant design and evaluation methods (e.g., Wizard of Oz) and how to make driving studies replicable (e.g., by referencing SAE J2944). Throughout the paper, linguistic terms (e.g., turn-taking) and principles (e.g., Grice’s Conversational Maxims) are discussed as a basis for describing user-device interactions and errors in evaluations.

Are mobile in-car communication systems feasible?

Proceedings of the South African Institute for Computer Scientists and Information Technologists Conference (SAICSIT '12), 2012

The issue of driver distraction remains critical despite efforts to reduce its effects. In-Car Communication Systems (ICCS) were introduced to address the visual and manual distraction that occurs when using a mobile phone while driving. ICCS running on mobile phones have increased the number of people using ICCS, as they can be installed at no cost and the quality of speech recognition on mobile devices is improving. Little research, however, has been conducted to investigate usability problems with mobile ICCS. This paper proposes a new model, the multimodal interface for mobile info-communication with context (MIMIC), to address some of the issues found with current mobile ICCS, and discusses the design and usability evaluation of a prototype mobile ICCS built using MIMIC. Several tasks were evaluated using different metrics, including time on task, task completion, task success, number of errors, flexibility, user satisfaction, and workload. The results show that users gave the mobile ICCS a good overall rating, which indicates that users will readily accept such technology. Future work will include redesigning the speech user interface to address the usability issues found with the current prototype.
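For readers unfamiliar with how such task-level usability metrics are typically aggregated, the sketch below computes completion rate, mean time on task, and mean error count from task logs. The record fields and values are illustrative assumptions, not data from the study:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskRecord:
    """One participant's attempt at one task (hypothetical schema)."""
    completed: bool
    duration_s: float
    errors: int

def summarise(records):
    """Aggregate the per-task logs into the usual summary metrics."""
    done = [r for r in records if r.completed]
    return {
        "task_completion_rate": len(done) / len(records),
        "mean_time_on_task_s": mean(r.duration_s for r in done) if done else None,
        "mean_errors": mean(r.errors for r in records),
    }

logs = [TaskRecord(True, 12.4, 0), TaskRecord(True, 15.1, 1), TaskRecord(False, 30.0, 3)]
print(summarise(logs))
```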

Developing a Conversational In-Car Dialog System

12th World Congress …, 2005

In recent years, an increasing number of new devices have found their way into the cars we drive. Speech-operated devices in particular provide a great service to drivers by minimizing distraction, so that they can keep their hands on the wheel and their eyes ...

The Impact of an Adaptive User Interface on Reducing Driver Distraction

This paper discusses the impact of an adaptive prototype in-car communication system (ICCS), called MIMI (Multimodal Interface for Mobile Info-communication), on driver distraction. Existing ICCSs attempt to minimise visual and manual distraction, but more research needs to be done to reduce cognitive distraction. MIMI was designed to address usability and safety issues with existing ICCSs, few of which consider the driver’s context in the design of the user interface. An adaptive user interface (AUI) was designed and integrated into a conventional dialogue system to prevent the driver from receiving calls and sending text messages under high-distraction conditions. The current distraction level is detected by a neural network using the driving speed and steering wheel angle of the car as inputs. An adaptive version of MIMI was compared to a non-adaptive version in a user study conducted using a simple driving simulator. The results showed that the adaptive version provided several usability and safety benefits, including reduced cognitive load, and that users preferred it.
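As a rough illustration of the kind of classifier described above, here is a minimal sketch of a small feedforward network that maps speed and steering wheel angle to a distraction score and gates incoming calls on it. The architecture, scaling constants, and threshold are assumptions for illustration, and the weights are untrained placeholders, not MIMI's actual model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Untrained placeholder weights; in practice these would be learned
# from driving data labelled with distraction levels.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden
W2, b2 = rng.normal(size=8), 0.0               # hidden -> output

def distraction_level(speed_kmh, steering_angle_deg):
    """Return a score in [0, 1]; higher means conditions are judged
    more demanding. Scaling constants are hypothetical."""
    x = np.array([speed_kmh / 120.0, steering_angle_deg / 90.0])
    h = np.tanh(x @ W1 + b1)
    return float(sigmoid(h @ W2 + b2))

def allow_incoming_call(speed_kmh, steering_angle_deg, threshold=0.5):
    """Adaptive gating in the spirit of the AUI described in the
    abstract: defer calls/texts when the score exceeds the (assumed)
    threshold."""
    return distraction_level(speed_kmh, steering_angle_deg) < threshold

print(allow_incoming_call(speed_kmh=60.0, steering_angle_deg=5.0))
```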

Gesturing on the steering wheel, a comparison with speech and touch interaction modalities

2015

This paper compares an emergent interaction modality for the In-Vehicle Infotainment System (IVIS), i.e., gesturing on the steering wheel, with two more popular modalities in modern cars: touch and speech. We conducted a between-subjects experiment with 20 participants per modality to assess the interaction performance with the IVIS and the impact on driving performance. Moreover, we compared the three modalities in terms of usability, subjective workload, and emotional response. The results showed no statistically significant differences between the three interaction modalities in the various indicators of driving task performance, while significant differences were found in measures of IVIS interaction performance: users performed fewer interactions to complete the secondary tasks with the speech modality, while, on average, a lower task completion time was registered with the touch modality. The three interfaces were comparable in all the subjective metrics.

Human Computer Interaction in the Car

2010

Cars have become complex interactive systems. Mechanical controls and electrical systems have been transformed into the digital realm. It is common for drivers to operate a vehicle and, at the same time, interact with a variety of devices and applications. Texting while driving, looking up an address for the navigation system, and taking a phone call are just some common examples that add value for the driver but also increase the risks of driving. Novel interaction technologies create many opportunities for designing useful and attractive in-car user interfaces. With technologies that assist the user in driving, such as adaptive cruise control and lane keeping, the user interface is essential to the way people perceive the driving experience. New means of user interface development and interaction design are required, as the number of factors influencing the design space for automotive user interfaces is increasing. In comparison to other domains, a trial-and-error approach while the product is already on the market is not acceptable, as the cost of failure may be fatal. User interface design in the automotive domain is relevant across many areas, ranging from primary driving control to assisted functions, navigation, information services, entertainment, and games.

Multimodal Interaction in Modern Automobiles

This paper describes a few innovative solutions for applying multimodal interaction techniques in modern automobiles to ensure driving comfort, safety, and security. The solutions are based on computer vision and speech processing techniques.

A Software System for Designing and Evaluating In-car Information System Interfaces

IFAC Proceedings Volumes, 1992

To integrate the requirements of ergonomists, car designers, and computer scientists for psycho-ergonomic studies and the design of in-car information systems, we have developed a set of software tools called "IN-CAR-DISPLAYS". The purpose of this article is to describe the functionality and applications of this set of tools. More precisely, the software system provides tools covering a wide range of functions that enable: easy and rapid prototyping of displays for different information sets and different types of dialogues for in-car information systems; testing of possible physical designation devices; real-time processing for dynamic management of the different modalities used to present information and to conduct dialogues; and recording of driver reaction rates and manipulatory responses during dialogues to assist with the evaluation of responses. The originality of this work is that its software development is based on a systemic philosophy which integrates the points of view of in-car human factors specialists and experts in user interfaces.

A comparison of three interaction modalities in the car

2016

This paper compares an emergent interaction modality for the In-Vehicle Infotainment System (IVIS), i.e., gesturing on the steering wheel, with two more popular modalities in modern cars: touch in the central dashboard and speech. We conducted a between-subjects experiment with 20 participants per modality to assess the interaction performance with the IVIS and the impact on driving performance. Moreover, we compared the three modalities in terms of usability, subjective workload, and emotional response. The results showed no statistically significant differences between the three interaction modalities in the various indicators of driving task performance, while significant differences were found in measures of IVIS interaction performance: users performed fewer interactions to complete the secondary tasks with the speech modality, while, on average, a lower task completion time was registered with the touch modality. The three interfaces were comparable in terms of perceived usability, mental workload, and emotional response.