Touchless Selection Schemes for Intelligent Automotive User Interfaces With Predictive Mid-Air Touch
Related papers
Intent Inference for Hand Pointing Gesture-Based Interactions in Vehicles
IEEE transactions on cybernetics, 2015
Using interactive displays, such as a touchscreen, in vehicles typically requires dedicating a considerable amount of visual as well as cognitive capacity and undertaking a hand pointing gesture to select the intended item on the interface. This can distract from the primary task of driving and consequently have serious safety implications. Due to road and driving conditions, the user input can also be highly perturbed, resulting in erroneous selections that compromise the system's usability. In this paper, we propose intent-aware displays that utilize a pointing-gesture tracker in conjunction with suitable Bayesian destination inference algorithms to determine the item the user intends to select, which can be achieved with high confidence remarkably early in the pointing gesture. This can drastically reduce the time and effort required to successfully complete an in-vehicle selection task. In the proposed probabilistic inference framework, the likelihood of all the nominal...
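A minimal sketch of this style of Bayesian destination inference (the likelihood model, noise scale, and item layout below are illustrative assumptions, not the authors' exact method): maintain a posterior over the selectable items and update it as noisy pointing observations arrive, so a confident prediction can emerge before the gesture completes.

```python
import numpy as np

def update_posterior(prior, observations, items, sigma=1.0):
    """Posterior over candidate items given fingertip positions seen so far.

    prior        : (K,) prior probabilities over the K selectable items
    observations : (T, 2) observed 2D fingertip positions
    items        : (K, 2) on-screen item locations
    sigma        : assumed observation-noise scale (illustrative)
    """
    log_post = np.log(prior)
    for k, dest in enumerate(items):
        # Toy likelihood: the closer the partial trajectory runs to a
        # destination, the more probable that destination becomes.
        d = np.linalg.norm(observations - dest, axis=1)
        log_post[k] += -np.sum(d ** 2) / (2 * sigma ** 2)
    post = np.exp(log_post - log_post.max())  # normalize stably
    return post / post.sum()

# Example: three items in a row; the partial gesture heads toward item 1.
items = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
prior = np.full(3, 1 / 3)
obs = np.array([[8.0, 4.0], [7.0, 3.0], [6.0, 2.0]])
print(update_posterior(prior, obs, items))  # mass concentrates on item 1
```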
Experimental Evaluations of Touch Interaction Considering Automotive Requirements
Lecture Notes in Computer Science, 2011
Three usability studies present evaluation methods for cross-domain human-computer interaction. The first study compares input devices such as a touch screen, a turn-push controller, and handwriting recognition with regard to human error probability, input speed, and subjective usability. The other experiments focused on typical automotive issues: interruptibility and the influence of cockpit oscillations on the interaction.
Touch versus mid-air gesture interfaces in road scenarios - measuring driver performance degradation
2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), 2016
We present a study comparing the degradation of the driver's performance during touch-gesture vs. mid-air-gesture use for infotainment system control. To this end, 17 participants were asked to perform the Lane Change Test, which requires each participant to steer a vehicle in a simulated driving environment while interacting with an infotainment system via touch and mid-air gestures. The decrease in performance is measured as the deviation from an optimal baseline. The study finds comparable deviations from the baseline for the secondary task of infotainment interaction with both interaction variants. This is significant because all participants were experienced in touch interaction but had no prior experience with mid-air gesture interaction, which favors mid-air gestures in the long-term scenario.
Data-Centric Engineering, 2020
In various scenarios, the motion of a tracked object, for example a pointing apparatus, pedestrian, animal, or vehicle, is driven by a premeditated goal such as reaching a destination, despite the many possible trajectories to that endpoint. This paper presents a generic Bayesian framework that utilizes stochastic models able to capture the influence of intent (viz., destination) on the object's behavior. It leads to simple algorithms to infer, as early as possible, the intended endpoint from noisy sensory observations, with relatively low computational and training-data requirements. The framework is introduced in the context of the novel predictive touch technology for intelligent user interfaces and touchless interactions. It can determine, early in the interaction task or pointing gesture, the interface item the user intends to select on the display (e.g., touchscreen) and accordingly simplify as well as expedite the selection task. This is shown to...
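The inference at the heart of such a framework can be summarized as a posterior over candidate endpoints; the notation below is mine, a generic sketch rather than the paper's exact formulation:

```latex
% Posterior over K candidate destinations d_i given noisy observations x_{1:t}
p(\mathcal{D} = d_i \mid x_{1:t}) \;\propto\; p(\mathcal{D} = d_i)\, p(x_{1:t} \mid \mathcal{D} = d_i),
\qquad i = 1, \dots, K
```

When each destination-conditioned motion model is linear-Gaussian (e.g., a mean-reverting process pulled toward d_i), the marginal likelihoods p(x_{1:t} | D = d_i) can be evaluated sequentially by a bank of K Kalman filters, which fits the low computational and training-data requirements the abstract emphasizes.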
Natural, intuitive finger based input as substitution for traditional vehicle control
Both the amount and the dynamicity of content to be displayed in a car increase steadily, forcing manufacturers to change over to customizable screens integrated in the dashboard and center console instead of dozens to hundreds of individual controls. In addition, new requirements such as Internet access in the car and web services accessible while driving invalidate rudimentary display formats. Traditional forms of interaction such as buttons or knobs are unsuitable for responding to dynamic content shown on digital screens, demanding new mechanisms for distraction-free yet effective user (driver) input. We address this problem by introducing a novel sensing device that allows natural, contactless, and eyes-free operation by relating finger movements in the area of the gearshift to screen coordinates. To assess the quality of this interface, two research questions were formulated: (i) that such a device allows natural, intuitive mouse-pointer control in a manner similar to traditional forms of input, and (ii) that the interface is insusceptible to varying workload conditions of the driver. With respect to the first hypothesis, experimental results revealed that proximity sensing in a two-dimensional plane is a viable approach to directly controlling a mouse cursor on a screen integrated into the dashboard. A generally valid conclusion on the assumption that the index of performance of the interface does not change with varying workload (hypothesis ii) cannot be drawn. To simulate different workload conditions, a dual-task signal-response setting was used.
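The index of performance in hypothesis (ii) is Fitts' throughput; a short worked computation follows, with illustrative numbers rather than values from the study:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def index_of_performance(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty over observed movement time."""
    return index_of_difficulty(distance, width) / movement_time

# Illustrative: a 24 cm pointer movement to a 3 cm target completed in 0.9 s.
print(f"IP = {index_of_performance(24.0, 3.0, 0.9):.2f} bits/s")  # ~3.52
```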
A Wearable Virtual Touch System for Cars
ArXiv, 2021
In the automotive domain, secondary tasks such as operating the infotainment system, adjusting the air-conditioning vents, and adjusting the side mirrors distract drivers from driving. Though existing modalities like gesture and speech recognition facilitate undertaking secondary tasks by reducing eyes-off-the-road time, they often require remembering a set of gestures or screen sequences. In this paper, we propose two modalities that let drivers virtually touch the dashboard display using a laser tracker, one with a mechanical switch and one with an eye-gaze switch. We compared the performance of our proposed modalities against the conventional touch modality in an automotive environment by comparing pointing and selection times for a representative secondary task, and we also analysed the effect on driving performance in terms of deviation from the lane, average speed, variation in perceived workload, and system usability. We did not find a significant difference in driving and pointing performance between l...
SN Applied Sciences
Methods of information presentation in the automotive space have been evolving continuously in recent years. As technology pushes forward the boundaries of what is possible, automobile manufacturers are trying to keep up with current trends. Traditionally, the often long development and quality-control cycles of the automotive sector ensured slow yet steady progress. However, the exponential advancement in mobile and hand-held computing over the last 10 years has put immense pressure on automobile manufacturers to catch up. For this reason, we now see manufacturers exploring new techniques for in-vehicle interaction (IVI) that were ignored in the past. However, recent attempts have either simply extended the interaction model already used on mobile or handheld computing devices or increased visual-only presentation of information, with limited expansion to other modalities (i.e., audio or haptics). This is also true for system interaction, which generally happens within complex driving environments, making the driver's primary task (driving) even more challenging. Essentially, there is an inherent need to design and research IVI systems that complement and natively support a multimodal interaction approach, providing all the necessary information without increasing the driver's cognitive load or, at a bare minimum, their visual load. In this research we focus on a key element of IVI systems, touchscreen interaction, by developing prototype devices that can complement the conventional visual and auditory modalities in a simple and natural manner. Instead of adding primitive touch-feedback cues that increase redundancy or complexity, we approach the issue by examining the current requirements of interaction and complementing the existing system with natural and intuitive input and output methods that are less affected by environmental noise than traditional multimodal systems.
Surface gesture interaction in the automotive context is still exploratory and lacking guidelines. To address this issue, a guessability study was developed to associate end-user gestures with functionalities of an in-vehicle HMI system. Interaction with the system was performed indirectly, through surface gestures. Participants were presented with instructions, followed by a static interface image of a menu (e.g., a music list or contact list), and prompted to create a gesture that would allow them to respond to the instruction (e.g., "Select previous" or "Make a call"). Results demonstrated that the gestures proposed in the concept phase were simple and familiar, and they allowed the creation of a taxonomy of gestures for adjustment, acceptance, refusal, and navigation actions. The guessability methodology proved useful and demonstrated how user-centered design can improve the usability of an interaction even at an advanced stage of the design and development process.
Developing Predictive Equations to Model the Visual Demand of In-Vehicle Touchscreen HMIs
International Journal of Human–Computer Interaction, 2017
Touchscreen HMIs are commonly employed as the primary control interface and touchpoint of vehicles. However, there has been very little theoretical work to model the demand associated with such devices in the automotive domain. Instead, touchscreen HMIs intended for deployment within vehicles tend to undergo time-consuming and expensive empirical testing and user trials, typically requiring fully functioning prototypes, test rigs and extensive experimental protocols. While such testing is invaluable and must remain within the normal design/development cycle, there are clear benefits, both fiscal and practical, to the theoretical modelling of human performance. We describe the development of a preliminary model of human performance that makes a priori predictions of the visual demand (total glance time, number of glances and mean glance duration) elicited by in-vehicle touchscreen HMI designs, when used concurrently with driving. The model incorporates information theoretic components based on Hick-Hyman Law decision/search time and Fitts' Law pointing time, and considers the anticipation afforded by structuring and repeated exposure to an interface. Encouraging validation results, obtained by applying the model to a real-world prototype touchscreen HMI, suggest that it may provide an effective design and evaluation tool, capable of making valuable predictions regarding the limits of visual demand/performance associated with in-vehicle HMIs, much earlier in the design cycle than traditional design evaluation techniques. Further validation work is required to explore the behaviour associated with more complex tasks requiring multiple screen interactions, as well as other HMI design elements and interaction techniques. Results are discussed in the context of facilitating the design of in-vehicle touchscreen HMIs to minimise visual demand.
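A sketch of how such an additive prediction can be assembled from the named components; the coefficients, the anticipation discount, and the additive structure below are placeholder assumptions, not the authors' fitted equations:

```python
import math

def hick_hyman_time(n_items, a=0.2, b=0.15):
    """Decision/search time (s) for choosing among n equally likely items."""
    return a + b * math.log2(n_items + 1)

def fitts_time(distance, width, a=0.1, b=0.12):
    """Pointing time (s) to acquire a target of a given width and distance."""
    return a + b * math.log2(distance / width + 1)

def predicted_demand(n_items, distance, width, anticipation=1.0):
    """Predicted visual-demand proxy for one selection; `anticipation` < 1
    discounts the search term for structured or repeatedly used interfaces."""
    return anticipation * hick_hyman_time(n_items) + fitts_time(distance, width)

# Example: pick 1 of 12 menu items, then a 15 cm reach to a 2 cm target.
print(f"{predicted_demand(12, 15.0, 2.0):.2f} s")       # first exposure
print(f"{predicted_demand(12, 15.0, 2.0, 0.5):.2f} s")  # after repeated use
```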
2007
The increasing quantity and complexity of in-vehicle systems create a demand for user interfaces that are suited to driving. The steering wheel is a common location for buttons to control navigation, entertainment, and environmental systems, but what about a small touchpad? To investigate this question, we embedded a Synaptics StampPad in a computer-game steering wheel and evaluated seven methods for selecting from a list of over 3000 street names.