Voice interfaced vehicle user help
Related papers
Comparison of manual vs. speech-based interaction with in-vehicle information systems
Accident Analysis and Prevention, 2009
This study examined whether speech-based interfaces for different in-vehicle information systems (IVIS) reduce the distraction caused by these systems. For three frequently used systems (audio, telephone with name selection, navigation system with address entry and point-of-interest selection), speech control, manual control, and driving without IVIS (baseline) were compared. The Lane Change Task was used to assess driving performance. Additionally, gaze behavior and a subjective measure of distraction were analyzed. Speech interfaces improved driving performance, gaze behavior, and subjective distraction for all systems, with the exception of point-of-interest entry. However, these improvements were overall not strong enough to reach the baseline performance level; only in easy segments of the driving task was performance comparable to baseline. Thus, speech-based IVIS must be developed further to keep cognitive complexity at a level that does not disturb driving. Nevertheless, given the benefits, speech control is a must for the car of the future.
An Innovative Vocal Interface for Automotive Information Systems
Proc of the 6th Intern. Conference on Enterprise Information Systems (ICEIS 2004), 2004
The design of interfaces for automotive information systems is a critical task. In the vehicular domain, the user is occupied with the primary task of driving, and any visual distraction induced by telematic systems can lead to serious consequences. Since road safety is paramount, new interaction metaphors that do not add to the driver's visual workload, such as auditory interfaces, must be defined. In this paper we propose an innovative automotive auditory interaction paradigm whose main goals are to require no visual attention, to be efficient for expert users, and to be easy to use for inexperienced users. This is achieved by a new atomic dialogue paradigm, based on a help-on-demand mechanism, that provides vocal support to users in trouble. Finally, we present some examples of dialogue based on this approach.
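The help-on-demand mechanism described above can be sketched as one atomic dialogue step: known commands execute directly for expert users, while saying "help" lists only the options valid in the current dialogue state. The menu states and command names below are invented for illustration; the paper does not publish this API.

```python
# Hypothetical sketch of a help-on-demand atomic dialogue turn.
# States and commands are illustrative assumptions, not the paper's design.

MENU = {
    "main": {"radio": "radio", "phone": "phone", "navigation": "navigation"},
    "radio": {"next station": "radio", "volume up": "radio", "back": "main"},
}

def handle_utterance(state, utterance):
    """Return (new_state, spoken_response) for one dialogue turn."""
    commands = MENU[state]
    if utterance == "help":
        # Help on demand: enumerate only what is valid right now,
        # so inexperienced users are never read the whole grammar.
        return state, "You can say: " + ", ".join(sorted(commands))
    if utterance in commands:
        return commands[utterance], "OK, " + utterance
    return state, "Sorry, I did not understand. Say 'help' for options."

state = "main"
state, reply = handle_utterance(state, "radio")   # expert path: no help needed
```

An expert user never triggers the help branch, which keeps dialogues short; a user in trouble gets a context-restricted option list instead of a full manual.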
Evaluating Demands Associated with the Use of Voice-Based In-Vehicle Interfaces
Proceedings of the Human Factors and Ergonomics Society ... Annual Meeting, 2016
This panel addresses current efforts to evaluate the demands associated with the use of voice-based in-vehicle interfaces. As generally implemented, these systems are perhaps best characterized as mixed-mode interfaces drawing upon varying levels of auditory, vocal, visual, manual, and cognitive resources. Numerous efforts have quantified demands associated with these systems and several have proposed evaluation methods. However, there has been limited discussion in the scientific literature on the benefits and drawbacks of various measures of workload; appropriate reference points for comparison (i.e., just driving, visual-manual versions of the task one is looking to replace, etc.); the relationship of demand characteristics to safety; and practical design considerations that can be gleaned from efforts to date. Panelists will discuss scientific progress in the topic areas. Each panelist is expected to present a brief perspective followed by discussion and Q&A.
Figure 1: Re-conceptualization of DVI demands: (a) "traditional viewpoint" of visual-auditory-cognitive-psychomotor dimensions; (b) proposed conceptualization of modern multi-modal system demands
International Journal of Vehicular Technology, 2013
Drivers often use infotainment systems in motor vehicles, such as systems for navigation, music, and phones. However, operating visual-manual interfaces for these systems can distract drivers. Speech interfaces may be less distracting. To help design easy-to-use speech interfaces, this paper identifies key speech interfaces (e.g., CHAT, Linguatronic, SYNC, Siri, and Google Voice), their features, and what was learned from evaluating them and other systems. Also included is information on key technical standards (e.g., ISO 9921, ITU P.800) and relevant design guidelines. This paper also describes relevant design and evaluation methods (e.g., Wizard of Oz) and how to make driving studies replicable (e.g., by referencing SAE J2944). Throughout the paper, there is discussion of linguistic terms (e.g., turn-taking) and principles (e.g., Grice's Conversational Maxims) that provide a basis for describing user-device interactions and errors in evaluations.
CHAT: a conversational helper for automotive tasks
2006
Spoken dialogue interfaces, mostly command-and-control, are becoming more visible in applications where attention must be shared with other tasks, such as driving a car. The deployment of simple dialog systems, rather than more sophisticated ones, is partly because the computing platforms used for such tasks have been less powerful and partly because certain issues arising from these cognitively challenging tasks have not been well addressed even in the most advanced dialog systems. This paper reports the progress of our research effort in developing a robust, wide-coverage, and cognitive-load-sensitive spoken dialog interface called CHAT: Conversational Helper for Automotive Tasks. Our research over the past few years has led to promising results, including a high task completion rate, dialog efficiency, and improved user experience.
Developing a Conversational In-Car Dialog System
12th World Congress …, 2005
In recent years, an increasing number of new devices have found their way into the cars we drive. Speech-operated devices in particular provide a great service to drivers by minimizing distraction, so that they can keep their hands on the wheel and their eyes ...
Analyzing the effects of spoken dialog systems on driving behavior
2006
This paper presents an evaluation of a spoken dialog system for automotive environments. Our overall goal was to measure the impact of user-system interaction on the user's driving performance, and to determine whether adding context-awareness to the dialog system might reduce the degree of user distraction during driving. To address this issue, we incorporated context-awareness into a spoken dialog system and implemented three system features using user context, network context, and dialog context. A series of experiments was conducted under three different configurations: driving without a dialog system, driving while using a context-aware dialog system, and driving while using a context-unaware dialog system. We measured the differences between the three configurations by comparing the average car speed, the frequency of speed changes, and the angle between the car's direction and the centerline of the road. The results indicate that context-awareness can reduce the degree of user distraction when a dialog system is used during driving.
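The three driving measures compared above (average speed, frequency of speed changes, and the angle between the car's direction and the centerline) can be computed roughly as follows. The sampling format and the speed-change threshold are assumptions of this sketch, not the paper's method.

```python
# Illustrative computation of the three distraction measures named above.
# Inputs are assumed to be uniformly sampled logs; the 2 km/h change
# threshold is an invented parameter, not taken from the paper.

def driving_metrics(speeds_kmh, headings_deg, road_heading_deg,
                    speed_change_threshold=2.0):
    """Summarize one driving run into the three comparison measures."""
    avg_speed = sum(speeds_kmh) / len(speeds_kmh)
    # Frequency of speed changes: count sample-to-sample jumps
    # larger than the threshold.
    speed_changes = sum(
        1 for a, b in zip(speeds_kmh, speeds_kmh[1:])
        if abs(b - a) > speed_change_threshold
    )
    # Mean absolute angle between the car's heading and the centerline.
    mean_angle = sum(abs(h - road_heading_deg)
                     for h in headings_deg) / len(headings_deg)
    return {"avg_speed": avg_speed,
            "speed_changes": speed_changes,
            "mean_angle_deg": mean_angle}

baseline = driving_metrics([100, 101, 100, 99], [0.5, -0.3, 0.2, 0.1], 0.0)
```

Comparing these dictionaries across the three configurations (no dialog system, context-aware, context-unaware) mirrors the study's analysis: a distracted run shows more speed changes and a larger mean centerline angle.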
Voice Operated Information System (VOIS) for driver's route guidance
Mathematical and Computer Modelling, 1995
This paper describes work performed at U.C. Davis in the area of Advanced Travel Information Systems (ATIS). The project develops a Voice Operated Information System (VOIS) for drivers' information and guidance. The principal aim of this work was to develop a suitable interface for the untrained user (driver), and to investigate the degree to which dialogue control can be used to compensate for deficiencies in existing information systems' interfaces. The focus of this work has been on providing pre-trip or en-route information. However, the techniques developed here are believed to be equally applicable to a wide range of other information systems (electronic yellow pages, route navigation systems, etc.). In this work, more emphasis is placed on the media used to interface with the information system than on developing an extensive database. In other words, the database is small and the options in the information system are few. This helps in evaluating the benefits and drawbacks of using voice as the user interface medium. The system is composed of several subsystems (modules): the user input and output subsystems, the dialogue controller, the database, etc. The dialogue controller is an independent unit with well-defined interfaces to the other system components. The dialogue controller outputs a question to the speech output subsystem, and simultaneously outputs a set of syntax rules to the speech input subsystem. These rules define the subset of the total user input language which the dialogue controller is prepared to interpret at that point in the dialogue. Using these rules as guidance, the speech input subsystem processes the user's response and returns it to the dialogue controller as a frame-like structure. These frames carry information about the user's request. The dialogue controller interprets the reply frame, and the cycle then repeats until the user's query is fully established.
The above outline presents the broad framework in which we have developed a dialogue-controlled pre-trip information system. Such systems are very useful because the driver does not need to divert attention from the driving task to interact with the information system. Once this system is fully established, we plan to use it as one of the primary user interfaces for subsequent prototype developments.
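The question/rules/frame cycle described above can be sketched as a simple slot-filling loop: the controller emits a question plus the syntax rules for the next turn, the speech-input stage returns a frame-like structure, and the loop repeats until the query is established. All class names, slots, and the frame format below are assumptions for illustration; VOIS's actual modules are not specified here.

```python
# Minimal sketch of the VOIS dialogue-controller cycle. The route-query
# slots, prompts, and frame fields are invented for this example.

class DialogueController:
    """Toy controller that fills three slots of a route query."""

    SLOTS = ["origin", "destination", "departure_time"]
    PROMPTS = {
        "origin": "Where are you starting from?",
        "destination": "Where would you like to go?",
        "departure_time": "When do you want to leave?",
    }

    def __init__(self):
        self.query = {}

    def query_complete(self):
        return all(slot in self.query for slot in self.SLOTS)

    def next_turn(self):
        slot = next(s for s in self.SLOTS if s not in self.query)
        # Syntax rules restrict what the recognizer will accept next.
        return self.PROMPTS[slot], {"expect_slot": slot}

    def interpret(self, frame):
        self.query[frame["slot"]] = frame["value"]


def simulated_speech_input(rules, scripted_answers):
    """Stand-in for the speech-input subsystem: returns a frame."""
    slot = rules["expect_slot"]
    return {"slot": slot, "value": scripted_answers[slot]}


def run_dialogue(controller, answers):
    """Repeat the question/rules/frame cycle until the query is complete."""
    transcript = []
    while not controller.query_complete():
        question, rules = controller.next_turn()
        transcript.append(question)                     # speech output
        frame = simulated_speech_input(rules, answers)  # constrained input
        controller.interpret(frame)
    return controller.query, transcript
```

Constraining the recognizer with per-turn syntax rules is what makes the loop robust for untrained users: at each point only a small subset of the input language needs to be understood.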
Automotive User Interfaces in the Age of Automation (Dagstuhl Seminar 16262)
2016
The next big change in the automotive domain will be the move towards automated and semi-automated driving. We can expect an increasing level of autonomous driving in the coming years, resulting in new opportunities for the car as an infotainment platform once standard driving tasks are automated. This change also comes with a number of challenges for automotive user interfaces. Core challenges for the assistance system and the user interface will be distributing tasks between the assistance system and the driver, re-engaging drivers in semi-automated driving back into the driving task, and collaborative driving in which cars collectively work together (e.g., platoons). Overall, in the coming years we will need to design interfaces and applications that make driving safe while enabling communication, work, and play in human-operated vehicles. This Dagstuhl seminar brought together researchers from human computer interaction, cognitive psychology, human factors psychology a...
2015
Experiment 4 was undertaken as an exploratory study of driver behavior with and without ACC active during single-task baseline driving and when interacting with voice-involved and primarily visual-manual infotainment secondary tasks. An analysis sample of 24 participants, equally balanced by gender and two age groups (20-29 and 60-69), was given training exposure to a production ACC system in a 2014 Chevrolet Impala under highway driving conditions through controlled interaction with a confederate lead vehicle. Assessment periods with and without ACC followed. While participants reported high levels of trust in this automated technology, heart rate and skin conductance levels showed modest but highly consistent and statistically significant elevations when ACC was active. Self-report measures suggested that participants felt more support from the assistive technology when engaged in voice-based tasks that generally allowed for continued visual orientation toward the roadway than during visual-manual secondary tasks. ACC status had no significant effect on glance activity during secondary tasks, but visual scanning was observed to change during single-task driving when ACC was active. Specifically, drivers shifted more of their visual attention off the forward roadway. Although the observed shift in the distribution of glances and time looking off the forward roadway may well be appropriate to the conditions, developing a better understanding of how automation influences the distribution of attention seems appropriate.