Learning force patterns with a multimodal system using contextual cues

Haptic Feedback Enhances Force Skill Learning

Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC'07), 2007

This paper explores the use of haptic feedback to teach an abstract motor skill that requires recalling a sequence of forces. Participants are guided along a trajectory and are asked to learn a sequence of one-dimensional forces via three paradigms: haptic training, visual training, or combined visuohaptic training. The extent of learning is measured by accuracy of force recall. We find that recall following visuohaptic training is significantly more accurate than recall following visual or haptic training alone, although haptic training alone is inferior to visual training alone. This suggests that, in conjunction with visual feedback, haptic training may be an effective tool for teaching sensorimotor skills that have a force-sensitive component, such as surgery. We also present a dynamic programming paradigm to align and compare spatiotemporal haptic trajectories.
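
The abstract does not detail the alignment procedure; the sketch below illustrates one standard dynamic-programming approach, a DTW-style alignment of two sampled one-dimensional force trajectories. The function name and the synthetic profiles are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def align_force_trajectories(ref, test):
    """Dynamic-programming (DTW-style) alignment of two 1-D force
    trajectories sampled at possibly different rates.

    Returns the cumulative alignment cost and the warping path as a
    list of (ref_index, test_index) pairs.
    """
    n, m = len(ref), len(test)
    # D[i, j] is the best cost of aligning ref[:i] with test[:j].
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - test[j - 1])        # local force mismatch
            D[i, j] = d + min(D[i - 1, j],           # skip a reference sample
                              D[i, j - 1],           # skip a test sample
                              D[i - 1, j - 1])       # match the two samples
    # Backtrack to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# Example: compare a recalled force profile against the trained one.
trained = np.sin(np.linspace(0, np.pi, 100))
recalled = 0.9 * np.sin(np.linspace(0, np.pi, 80))
cost, path = align_force_trajectories(trained, recalled)
print(f"alignment cost: {cost:.2f}, path length: {len(path)}")
```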

Imitation Learning of Positional and Force Skills Demonstrated via Kinesthetic Teaching and Haptic Input

Advanced Robotics, 2011

A method to learn and reproduce robot force interactions in a human-robot interaction setting is proposed. The method allows a robotic manipulator to learn to perform tasks that require exerting forces on external objects by interacting with a human operator in an unstructured environment. This is achieved by learning two aspects of a task: the positional profile and the force profile. The positional profile is obtained from task demonstrations via kinesthetic teaching; the force profile is obtained from additional demonstrations via a haptic device, which a human teacher uses to input the desired forces that the robot should exert on external objects during task execution.
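
The encoding of the two profiles is not specified in this summary; as a minimal sketch, assume each demonstration is resampled onto a common phase axis and the positional and force demonstrations are averaged into a single reference that a controller could track. The resampling, the averaging, and all names below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def build_reference(position_demos, force_demos, n_samples=200):
    """Merge kinesthetic position demonstrations and haptic force
    demonstrations into one phase-indexed reference profile.

    position_demos : list of (T_i, 3) arrays of end-effector positions
    force_demos    : list of (T_j,) arrays of desired contact forces
    """
    def resample(demo, n):
        # Linearly interpolate a demonstration onto a shared phase axis.
        demo = np.atleast_2d(np.asarray(demo, dtype=float).T).T
        t_old = np.linspace(0.0, 1.0, len(demo))
        t_new = np.linspace(0.0, 1.0, n)
        return np.column_stack([np.interp(t_new, t_old, demo[:, k])
                                for k in range(demo.shape[1])])

    pos_ref = np.mean([resample(d, n_samples) for d in position_demos], axis=0)
    force_ref = np.mean([resample(d, n_samples) for d in force_demos], axis=0)
    return pos_ref, force_ref.ravel()

# Example with two synthetic demonstrations of each modality.
t = np.linspace(0, 1, 150)
pos_demos = [np.column_stack([t, np.sin(t), 0 * t]),
             np.column_stack([t, 1.05 * np.sin(t), 0 * t])]
force_demos = [5.0 * np.ones(120), 5.5 * np.ones(180)]
pos_ref, force_ref = build_reference(pos_demos, force_demos)
print(pos_ref.shape, force_ref.shape)  # (200, 3) (200,)
```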

WYFIWIF: A haptic communication paradigm for collaborative motor skills learning

2010

Motor skills transfer is a challenging issue for many applications such as surgery, design, and industry. To design virtual environments that support motor skills learning, a deep understanding of human haptic interactions is required. To ensure skills transfer, experts and novices need to collaborate, which requires constructing a common frame of reference between the teacher and the learner so that they can understand each other. In this paper, human-human haptic collaboration is investigated in order to understand how haptic information is exchanged. Furthermore, WYFIWIF (What You Feel Is What I Feel), a haptic communication paradigm, is introduced. The paradigm is based on a hand-guidance metaphor and helps operators construct an efficient common frame of reference by allowing direct haptic communication. A learning virtual environment is used to evaluate this paradigm: 60 volunteer students performed a needle insertion learning task. The results of this experiment show that, compared to conventional methods, the learning method based on haptic communication improves novices' performance in such a task. We conclude that the WYFIWIF paradigm facilitates expert-novice haptic collaboration for teaching motor skills.
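
The hand-guidance metaphor is described only at a high level; one common way to realize direct haptic coupling between an expert's and a novice's device is a virtual spring-damper linking the two tool positions, sketched below. The coupling law and gains are assumptions, not taken from the paper.

```python
import numpy as np

def coupling_forces(x_expert, v_expert, x_novice, v_novice,
                    k=150.0, b=8.0):
    """Symmetric spring-damper coupling between two haptic devices.

    Each side feels a force pulling it toward the other side's tool,
    so the novice feels the expert's gesture and vice versa.
    """
    stretch = x_expert - x_novice
    rate = v_expert - v_novice
    f_on_novice = k * stretch + b * rate     # pulls novice toward expert
    f_on_expert = -f_on_novice               # equal and opposite reaction
    return f_on_expert, f_on_novice

# One control-loop tick with made-up device states (metres, m/s).
f_e, f_n = coupling_forces(np.array([0.10, 0.00, 0.02]),
                           np.array([0.05, 0.00, 0.00]),
                           np.array([0.08, 0.01, 0.02]),
                           np.array([0.00, 0.00, 0.00]))
print("force on novice device (N):", np.round(f_n, 2))
```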

Motor Skill Training Assistance Using Haptic Attributes

First Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2005

In this paper we describe our efforts to develop a new strategy for providing assistance using haptics in a virtual environment when training for a motor skill. Using a record-and-play strategy, the proposed assistance method provides the closest possible replication of an expert's skill. We define a new paradigm called "Haptic Attributes," in which a unique haptic force profile is related to every task performed using motor skills. This is combined with an earlier concept called Sympathetic Haptic to develop a new paradigm for training complex skill-based tasks such as writing, surgery, or playing musical instruments. As a demonstration, a virtual environment for training handwriting was designed and implemented. Position-based feedback assistance and training with no assistance were tested against our method in a series of human subject tests. Results show our method to be superior to the tested training methods that use position-based assistance or no assistance.
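
The record-and-play strategy is described only in outline; a minimal sketch is to store the expert's positions and forces and replay them as an assistive force that pulls the trainee toward the recorded path while overlaying the recorded force profile. The nearest-point lookup and the gain are assumptions, not the paper's controller.

```python
import numpy as np

class RecordAndPlayAssist:
    """Replay a recorded expert demonstration as haptic assistance."""

    def __init__(self, expert_positions, expert_forces, kp=200.0):
        self.expert_positions = np.asarray(expert_positions, float)  # (T, 3)
        self.expert_forces = np.asarray(expert_forces, float)        # (T, 3)
        self.kp = kp

    def assist_force(self, x_trainee):
        """Guidance force for the trainee's current tool position."""
        # Find the closest point on the recorded expert trajectory.
        d = np.linalg.norm(self.expert_positions - x_trainee, axis=1)
        i = int(np.argmin(d))
        # Pull the trainee toward the expert path and overlay the
        # force the expert applied at that point.
        corrective = self.kp * (self.expert_positions[i] - x_trainee)
        return corrective + self.expert_forces[i]

# Example with a synthetic straight-line handwriting stroke.
T = 100
path = np.column_stack([np.linspace(0, 0.1, T), np.zeros(T), np.zeros(T)])
forces = np.tile([0.0, 0.0, -1.0], (T, 1))   # constant pen pressure
assist = RecordAndPlayAssist(path, forces)
print(np.round(assist.assist_force(np.array([0.05, 0.005, 0.0])), 3))
```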

Design of a Multi-Modal Dexterity Training Interface for Medical and Biological Sciences

Advances in Medical Technologies and Clinical Practice book series, 2016

This chapter presents an overview of the design of an interactive medical/biological training environment using a multi-modal user interface. We describe the software architecture required to develop such an environment and then introduce the physics-based models of the objects interacting in the virtual scenes. We discuss the implementation of the dexterity-enhancing training tasks together with the associated definitions of metrics, which can be used as part of the score-keeping operation. A virtual mentoring agent is used throughout the training tasks for guidance in the form of multi-modal feedback, including graphic, haptic, and audio cues. A fuzzy-logic-based method is used to evaluate and compare the trainee's performance metrics relative to both novice and expert users.
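
The fuzzy-logic evaluation is not detailed in this summary; the sketch below assumes simple triangular membership functions over normalized performance metrics and a weighted defuzzification into a single skill score. The membership breakpoints and the two metrics are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rate_trainee(completion_time, path_error):
    """Fuzzy rating of a trainee from two metrics normalized to [0, 1]
    (0 = best observed expert value, 1 = worst observed novice value)."""
    metrics = {"time": completion_time, "error": path_error}
    expert_like = min(tri(v, -0.01, 0.0, 0.5) for v in metrics.values())
    intermediate = min(tri(v, 0.1, 0.5, 0.9) for v in metrics.values())
    novice_like = min(tri(v, 0.5, 1.0, 1.01) for v in metrics.values())
    total = expert_like + intermediate + novice_like or 1.0
    # Defuzzify to a single skill score (1 = expert-like, 0 = novice-like).
    return (1.0 * expert_like + 0.5 * intermediate + 0.0 * novice_like) / total

print(round(rate_trainee(completion_time=0.2, path_error=0.3), 2))
```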

Learning force-based robot skills from haptic demonstration

Frontiers in Artificial Intelligence and Applications, 2010

Locally weighted learning algorithms as well as Gaussian mixture models are suitable strategies for trajectory learning and skill acquisition in the context of programming by demonstration. Input streams other than the visual information used in most applications to date prove quite useful in trajectory-learning experiments where visual sources are not available. For the first time, force/torque feedback through a haptic device has been used to teach a teleoperated robot to empty a rigid container. The memory-based LWPLS and the non-memory-based LWPR algorithms [1,2,3], as well as both the batch and the incremental versions of GMM/GMR [4,5], were implemented; their comparison led to very similar results, with the same pattern across both the involved robot joints and the different initial experimental conditions. Tests in which the teacher was instructed to follow a strategy, compared to others in which he was not, led to useful conclusions that inform the next research stages, in which the taught motion will be refined by autonomous robot rehearsal through reinforcement learning.
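
As a rough illustration of the GMM/GMR side of the comparison, the sketch below fits a Gaussian mixture over (time, force) samples from several demonstrations and uses standard Gaussian mixture regression to predict the expected force at query times. It relies on scikit-learn's GaussianMixture and synthetic data, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmr_force(gmm, t_query):
    """Gaussian mixture regression: E[force | time] under a GMM fitted
    on 2-D (time, force) samples."""
    t_query = np.atleast_1d(t_query)
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros_like(t_query, dtype=float)
    for n, t in enumerate(t_query):
        # Responsibility of each component for this time value.
        h = np.array([w * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0])
                      / np.sqrt(2 * np.pi * c[0, 0])
                      for w, m, c in zip(weights, means, covs)])
        h /= h.sum()
        # Conditional mean of force given time, per component.
        cond = [m[1] + c[1, 0] / c[0, 0] * (t - m[0])
                for m, c in zip(means, covs)]
        out[n] = np.dot(h, cond)
    return out

# Three noisy demonstrations of a force profile over normalized time.
t = np.linspace(0, 1, 100)
demos = [np.column_stack([t, 4 * np.sin(np.pi * t) + 0.2 * np.random.randn(100)])
         for _ in range(3)]
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(np.vstack(demos))
print(np.round(gmr_force(gmm, [0.25, 0.5, 0.75]), 2))
```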

Shared Control in Haptic Systems for Performance Enhancement and Training

Journal of Dynamic Systems, Measurement, and Control (Transactions of the ASME), 2006

This paper presents a shared-control interaction paradigm for haptic interface systems, with experimental data from two user studies. Shared control, evolved from its initial telerobotics applications, is adapted as a form of haptic assistance in which the haptic device contributes to execution of a dynamic manual target-hitting task via force commands from an automatic controller. Compared to haptic virtual environments, which merely display the physics of the virtual system, or to passive methods of haptic assistance for performance enhancement based on virtual fixtures, the shared-control approach offers a method for actively demonstrating desired motions during virtual environment interactions. The paper presents a thorough review of the literature related to haptic assistance. In addition, two experiments were conducted to independently verify the efficacy of the shared-control approach for performance enhancement and for improving training effectiveness on the task. In the first experiment, shared control was found to be as effective as virtual fixtures for performance enhancement, with both methods resulting in significantly better performance, in terms of time between target hits, than sessions in which subjects felt only the forces arising from the mass-spring-damper system dynamics. Since shared control is more general than virtual fixtures, this approach may be highly beneficial for performance enhancement in virtual environments. In terms of training enhancement, shared control and virtual fixtures were no better than practice in an unassisted mode. For manual control tasks, such as the one described in this paper, shared control is beneficial for performance enhancement but may not be viable for enhancing training effectiveness.
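
The controller details are not given in this summary; a minimal sketch of the shared-control idea is a weighted blend of the virtual environment's dynamics force and an automatic controller's force that demonstrates the desired motion. The blending weight and the PD demonstration controller are assumptions, not the paper's controller design.

```python
import numpy as np

def shared_control_force(x, v, x_des, v_des, f_env,
                         alpha=0.5, kp=120.0, kd=6.0):
    """Blend the environment dynamics force with an automatic
    controller that demonstrates the desired motion.

    alpha = 0  -> pure virtual environment (physics only)
    alpha = 1  -> pure automatic demonstration
    """
    f_controller = kp * (x_des - x) + kd * (v_des - v)
    return (1.0 - alpha) * f_env + alpha * f_controller

# One tick of a 1-D target-hitting task: spring-damper environment
# reaction force plus a controller steering toward the moving target.
x, v = 0.02, 0.0
f_env = -50.0 * x - 2.0 * v                  # mass-spring-damper reaction
f = shared_control_force(x, v, x_des=0.05, v_des=0.3, f_env=f_env)
print(f"commanded force: {f:.2f} N")
```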