Are mobile in-car communication systems feasible?

MIMI: A Multimodal and Adaptive Interface for an In-Car Communication System

satnac.org.za

The number of cars equipped with in-car communication systems has increased noticeably. Research has shown that using a mobile phone whilst driving raises usability and safety issues. In order to minimise these issues, user interfaces that keep the driver's hands and eyes dedicated solely to the driving task have been proposed. This paper discusses the design of MIMI (Multimodal Interface for Mobile Infocommunication), a prototype multimodal in-car communication system built around a speech interface and supplemented with steering wheel button input. The design of MIMI is discussed together with its architecture, including the dialogue manager, the adaptive module and the workload manager. The adaptive module contains a workload manager that helps MIMI present information to the driver in a way that reduces driver distraction.
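
The modular breakdown above (dialogue manager, adaptive module, workload manager) can be pictured as a small pipeline in which a workload estimate gates when the dialogue manager is allowed to interrupt the driver. The following Python sketch is purely illustrative; the class names, thresholds and the workload heuristic are assumptions, not MIMI's actual implementation.

```python
# Illustrative sketch only: class and method names are invented, not MIMI's actual API.

class WorkloadManager:
    """Estimates current driver workload from vehicle signals (assumed interface)."""
    def estimate(self, speed_kmh: float, steering_angle_deg: float) -> float:
        # Placeholder heuristic: faster driving and larger steering corrections -> higher workload.
        return min(1.0, speed_kmh / 120.0 + abs(steering_angle_deg) / 90.0)

class AdaptiveModule:
    """Decides whether information may be presented, given the estimated workload."""
    def __init__(self, workload_manager: WorkloadManager, threshold: float = 0.7):
        self.workload_manager = workload_manager
        self.threshold = threshold

    def allow_interruption(self, speed_kmh: float, steering_angle_deg: float) -> bool:
        return self.workload_manager.estimate(speed_kmh, steering_angle_deg) < self.threshold

class DialogueManager:
    """Routes notifications and defers them when the driver is under high workload."""
    def __init__(self, adaptive_module: AdaptiveModule):
        self.adaptive_module = adaptive_module
        self.pending = []

    def notify(self, message: str, speed_kmh: float, steering_angle_deg: float) -> str:
        if self.adaptive_module.allow_interruption(speed_kmh, steering_angle_deg):
            return f"SPEAK: {message}"
        self.pending.append(message)  # hold the message until workload drops
        return "DEFERRED"

dm = DialogueManager(AdaptiveModule(WorkloadManager()))
print(dm.notify("New text message", speed_kmh=110.0, steering_angle_deg=20.0))  # DEFERRED
print(dm.notify("New text message", speed_kmh=30.0, steering_angle_deg=2.0))    # SPEAK: ...
```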

Comparison of manual vs. speech-based interaction with in-vehicle information systems

Accident Analysis and Prevention, 2009

This study examined whether speech-based interfaces for different in-vehicle information systems (IVIS) reduce the distraction caused by these systems. For three frequently used systems (audio, telephone with name selection, and navigation with address entry and point-of-interest selection), speech control, manual control and driving without IVIS (baseline) were compared. The Lane Change Task was used to assess driving performance. Additionally, gaze behavior and a subjective measure of distraction were analyzed. Speech interfaces improved driving performance, gaze behavior and subjective distraction for all systems, with the exception of point-of-interest entry. However, these improvements were overall not strong enough to reach the baseline performance level; only in easy segments of the driving task was performance comparable to baseline. Thus, speech-based IVIS have to be developed further to keep cognitive complexity at a level that does not disturb driving. Nevertheless, given these benefits, speech control is a must for the car of the future.

Development and Evaluation of Automotive Speech Interfaces: Useful Information from the Human Factors and the Related Literature

International Journal of Vehicular Technology, 2013

Drivers often use infotainment systems in motor vehicles, such as systems for navigation, music, and phones. However, operating visual-manual interfaces for these systems can distract drivers. Speech interfaces may be less distracting. To support the design of easy-to-use speech interfaces, this paper identifies key speech interfaces (e.g., CHAT, Linguatronic, SYNC, Siri, and Google Voice), their features, and what was learned from evaluating them and other systems. Also included is information on key technical standards (e.g., ISO 9921, ITU P.800) and relevant design guidelines. This paper also describes relevant design and evaluation methods (e.g., Wizard of Oz) and how to make driving studies replicable (e.g., by referencing SAE J2944). Throughout the paper, there is discussion of linguistic terms (e.g., turn-taking) and principles (e.g., Grice's Conversational Maxims) that provide a basis for describing user-device interactions and errors in evaluations.

Developing a Conversational In-Car Dialog System

12th World Congress …, 2005

In recent years, an increasing number of new devices have found their way into the cars we drive. Speech-operated devices in particular provide a great service to drivers by minimizing distraction, so that they can keep their hands on the wheel and their eyes ...

In-Car Dictation and Driver’s Distraction: A Case Study

Human-Computer Interaction. Towards Mobile and Intelligent Interaction Environments, 2011

We describe a prototype dictation UI for use in cars and evaluate it by measuring (1) driver distraction, (2) task completion time, and (3) task completion quality. We use a simulated lane change test (LCT) to assess driving quality while using the prototype, while texting on a cell phone, and when just driving. The prototype was used in two modes: with and without a display (eyes-free). Several statistics were collected from the reference and distracted-driving LCT trips for a group of 11 test subjects. These statistics include the driver's mean deviation from the ideal path, the standard deviation of the driver's lateral position on the road, reaction times, and the amount and quality of entered text. We confirm that driving performance was significantly better when using a speech-enabled UI compared to texting on a cell phone. Interestingly, we measured a significant improvement in driving quality when the same dictation prototype was used in eyes-free mode.
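
The driving-quality statistics named above (mean deviation from the ideal path, standard deviation of lateral position, reaction times) are straightforward to compute from logged LCT data. The Python sketch below uses made-up sample values rather than data from the study; the function name and the simplified definitions are assumptions.

```python
# Illustrative sketch only: simplified versions of the LCT-style statistics named in the abstract.
import numpy as np

def lct_statistics(lateral_pos_m: np.ndarray, ideal_path_m: np.ndarray,
                   reaction_times_s: np.ndarray) -> dict:
    """Mean deviation from the ideal path, SD of lateral position, and mean reaction time."""
    deviation = np.abs(lateral_pos_m - ideal_path_m)
    return {
        "mean_deviation_m": float(np.mean(deviation)),
        "sd_lateral_position_m": float(np.std(lateral_pos_m, ddof=1)),
        "mean_reaction_time_s": float(np.mean(reaction_times_s)),
    }

# Hypothetical samples for one trip (not data from the study).
lateral = np.array([0.1, 0.3, 0.2, 0.5, 0.4])
ideal = np.array([0.0, 0.0, 0.0, 0.25, 0.5])
reactions = np.array([0.9, 1.1, 1.0])
print(lct_statistics(lateral, ideal, reactions))
```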

The Impact of an Adaptive User Interface on Reducing Driver Distraction

This paper discusses the impact of an adaptive prototype in-car communication system (ICCS), called MIMI (Multimodal Interface for Mobile Info-communication), on driver distraction. Existing ICCSs attempt to minimise the visual and manual distraction, but more research needs to be done to reduce cognitive distraction. MIMI was designed to address usability and safety issues with existing ICCSs. Few ICCSs available today consider the driver’s context in the design of the user interface. An adaptive user interface (AUI) was designed and integrated into a conventional dialogue system in order to prevent the driver from receiving calls and sending text messages under high distraction conditions. The current distraction level is detected by a neural network using the driving speed and steering wheel angle of the car as inputs. An adaptive version of MIMI was compared to a non-adaptive version in a user study conducted using a simple driving simulator. The results obtained showed that the adaptive version provided several usability and safety benefits, including reducing the cognitive load, and that the users preferred the adaptive version.
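
The abstract states that the distraction level is detected by a neural network with driving speed and steering wheel angle as inputs, but does not publish the network's topology or weights. The sketch below is therefore only a shape-level illustration of such a two-input classifier; the normalisation constants, hidden-layer size and random weights are assumptions.

```python
# Illustrative sketch only: MIMI's actual network topology and weights are not given in the paper.
import numpy as np

def estimate_distraction(speed_kmh: float, steering_angle_deg: float,
                         W1: np.ndarray, b1: np.ndarray,
                         W2: np.ndarray, b2: np.ndarray) -> float:
    """Tiny feed-forward network: 2 inputs -> hidden layer -> distraction score in [0, 1]."""
    x = np.array([speed_kmh / 120.0, steering_angle_deg / 90.0])  # crude normalisation (assumed)
    h = np.tanh(W1 @ x + b1)                                      # hidden-layer activations
    z = float((W2 @ h + b2)[0])                                   # output logit
    return float(1.0 / (1.0 + np.exp(-z)))                        # sigmoid -> score in [0, 1]

# Random, untrained weights just to show the shapes involved (4 hidden units).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
print(f"estimated distraction: {estimate_distraction(90.0, 15.0, W1, b1, W2, b2):.2f}")
```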

Commute UX: Voice enabled in-car infotainment system

2009

Voice-enabled dialog systems are well suited for in-car applications. Driving is an eyes-busy and hands-busy task, and the only wideband communication channel left is speech. Such systems are in the midst of a transformation from a cool gadget to an integral part of the modern automobile. In this paper we highlight the major requirements for an in-car dialog system, including usability under conditions of cognitive load, efficiency through a multimodal user interface, dealing with locations, and handling in-car noise with better sound capture and robust speech recognition. We then present Commute UX, our prototype multimodal dialog system for in-car infotainment.
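
One recurring requirement above, robustness to recognition errors in a noisy cabin combined with a multimodal fallback, is commonly handled with a confidence-gated confirmation strategy. The sketch below is a generic illustration of that pattern, not the Commute UX implementation; the function name and thresholds are assumptions.

```python
# Illustrative sketch only: a generic confirm-or-fallback pattern, not the Commute UX design.

def handle_utterance(asr_hypothesis: str, asr_confidence: float,
                     confirm_threshold: float = 0.6, accept_threshold: float = 0.85) -> str:
    """Accept high-confidence speech, confirm medium confidence, fall back to buttons otherwise."""
    if asr_confidence >= accept_threshold:
        return f"EXECUTE: {asr_hypothesis}"
    if asr_confidence >= confirm_threshold:
        return f"CONFIRM BY VOICE OR STEERING-WHEEL BUTTON: {asr_hypothesis}"
    return "REPROMPT: please repeat, or use the buttons"

print(handle_utterance("call Alice mobile", 0.91))
print(handle_utterance("navigate to Fifth Avenue", 0.70))
```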

Design and Elimination of Driving Distraction

IJAEM, 2023

Driving is already a complex task that imposes varying degrees of cognitive and physical stress. With the advancement of technology, the automobile has become a place of media consumption, a communication centre and an interconnected hub, and the number of in-car features has grown accordingly. As a result, user interaction in the car has become crowded and complicated, which increases distracted driving and, with it, the number of traffic accidents caused by distraction. This paper focuses on two main aspects of the current automotive environment, multimodal interaction (MMI) and advanced driver assistance systems (ADAS), as means of reducing distraction. It also provides in-depth market research on future trends in smart-car technology. The analysis found that screens layered with dense graphical information, centre stacks crowded with small buttons, and poor voice recognition (VR) lead to high cognitive load, and these are causes of driver distraction. While VR has become a standard technology, its current state reflects function-driven design and sales-driven approaches. Most automakers have focused on improving VR, but perfecting VR alone is not the answer, as there are inherent challenges and limitations with respect to the in-car environment and cognitive load.

Evaluating Demands Associated with the Use of Voice-Based In-Vehicle Interfaces

Proceedings of the Human Factors and Ergonomics Society ... Annual Meeting, 2016

This panel addresses current efforts to evaluate the demands associated with the use of voice-based in-vehicle interfaces. As generally implemented, these systems are perhaps best characterized as mixed-mode interfaces drawing upon varying levels of auditory, vocal, visual, manual and cognitive resources. Numerous efforts have quantified the demands associated with these systems and several have proposed evaluation methods. However, there has been limited discussion in the scientific literature on the benefits and drawbacks of various measures of workload; appropriate reference points for comparison (e.g., just driving, or visual-manual versions of the task one is looking to replace); the relationship of demand characteristics to safety; and practical design considerations that can be gleaned from efforts to date. Panelists will discuss scientific progress in these topic areas. Each panelist is expected to present a brief perspective followed by discussion and Q&A.

Figure 1: Re-conceptualization of DVI demands: (a) the "traditional viewpoint" of visual-auditory-cognitive-psychomotor dimensions; (b) a proposed conceptualization of modern multi-modal system demands.