A Virtual Reality-Based Framework for Experiments on Perception of Manual Gestures

Captured motion data processing for real time synthesis of sign language

Lecture Notes in Computer Science, 2006

This study proposes a roadmap for the creation and specification of a virtual humanoid capable of performing expressive gestures in real time. We present a gesture motion data acquisition protocol capable of handling the main articulators involved in human expressive gesture (whole body, fingers and face). The focus is then shifted to the postprocessing of captured data, leading to a motion database that complies with our motion specification language and can feed data-driven animation techniques.
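
To make the idea of a channel-structured motion database concrete, here is a minimal sketch in Python; the clip layout, the channel names ("body", "fingers", "face") and the MotionClip type are illustrative assumptions, not the authors' actual data model.

```python
# A minimal sketch (not the authors' implementation) of how post-processed
# motion clips might be organized into a database keyed by articulator
# channel, so a data-driven animation engine can retrieve them independently.
from dataclasses import dataclass, field


@dataclass
class MotionClip:
    gloss: str                        # linguistic label of the captured sign
    fps: int                          # capture rate of the mocap session
    channels: dict = field(default_factory=dict)  # channel -> list of frames


database: dict[str, MotionClip] = {}


def add_clip(gloss: str, fps: int, body, fingers, face) -> None:
    """Store one post-processed capture under its gloss, split by articulator."""
    database[gloss] = MotionClip(
        gloss=gloss,
        fps=fps,
        channels={"body": body, "fingers": fingers, "face": face},
    )


def frames(gloss: str, channel: str):
    """Retrieval by channel lets the engine mix articulators freely."""
    return database[gloss].channels[channel]
```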

3D avatar for automatic synthesis of signs for the sign languages

2015

This paper discusses a synthesis system that generates, from an XML input representing gesture descriptors, a vector of configuration parameters that are executed by a 3D avatar for use in the animation of sign languages. The development of virtual agents able to reproduce the gestures of sign languages is very important to the deaf community, since its members often also have difficulty reading conventional texts. In this research project, a combination of the 3D editor Blender, the CMarkup parser and the Irrlicht graphics engine was used to develop a novel approach to sign synthesis, based on a recent XML model that describes hand gestures using shape, location, movement and orientation descriptors. The described experiments validate the proposed implementation model, which constitutes a promising alternative in the area of sign synthesis for computational applications of sign languages.
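
As an illustration of the descriptor-to-parameter mapping this abstract describes, the following sketch parses a hypothetical XML sign description into a flat configuration vector; the element names, lookup tables and numeric encoding are assumptions, not the paper's actual schema.

```python
# A minimal sketch, assuming a hypothetical XML layout with <shape>,
# <location>, <movement> and <orientation> descriptors; the paper's real
# schema and parameter encoding are not reproduced here.
import xml.etree.ElementTree as ET

SIGN_XML = """
<sign gloss="HOUSE">
  <shape hand="right" value="flat"/>
  <location value="chest"/>
  <movement value="arc_up"/>
  <orientation palm="down"/>
</sign>
"""

# Illustrative lookup tables mapping descriptor values to numeric codes
# that a 3D avatar's animation engine could consume.
SHAPES = {"flat": 0, "fist": 1, "index": 2}
LOCATIONS = {"chest": 0, "head": 1, "neutral": 2}
MOVEMENTS = {"arc_up": 0, "straight": 1, "circle": 2}
ORIENTATIONS = {"down": 0, "up": 1, "side": 2}


def sign_to_parameters(xml_text: str) -> list[int]:
    """Translate one XML sign description into a flat configuration vector."""
    root = ET.fromstring(xml_text)
    return [
        SHAPES[root.find("shape").get("value")],
        LOCATIONS[root.find("location").get("value")],
        MOVEMENTS[root.find("movement").get("value")],
        ORIENTATIONS[root.find("orientation").get("palm")],
    ]


print(sign_to_parameters(SIGN_XML))  # -> [0, 0, 0, 0]
```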

An automated technique for real-time production of lifelike animations of American Sign Language

Universal Access in the Information Society, 2015

Generating sentences from a library of signs implemented through a sparse set of key frames derived from the segmental structure of a phonetic model of ASL has the advantage of flexibility and efficiency, but lacks the lifelike detail of motion capture. These difficulties are compounded when faced with real-time generation and display. This paper describes a technique for automatically adding realism without the expense of manually animating the requisite detail. The new technique layers transparently over and modifies the primary motions dictated by the segmental model, and does so with very little computational cost, enabling real-time production and display. The paper also discusses avatar optimizations that can lower the rendering overhead in real-time displays.
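
The layering idea can be sketched as follows: sparse key frames drive the primary motion, and a cheap procedural layer adds detail on top of the interpolated pose. The linear interpolation and sinusoidal perturbation below are illustrative stand-ins, not the paper's actual technique.

```python
# A minimal sketch of layered animation: key frames define the primary
# motion, and a low-cost secondary layer perturbs it for lifelike detail.
import math


def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t


def primary(keyframes: list[tuple[float, float]], t: float) -> float:
    """Piecewise-linear interpolation between (time, value) key frames."""
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return lerp(v0, v1, (t - t0) / (t1 - t0))
    return keyframes[-1][1]


def with_secondary(keyframes, t, amp=0.02, freq=1.3):
    """Layer a small oscillation over the primary motion at negligible cost."""
    return primary(keyframes, t) + amp * math.sin(2 * math.pi * freq * t)


# One joint angle driven by two key frames, sampled at 60 Hz:
keys = [(0.0, 0.0), (1.0, 1.2)]
samples = [with_secondary(keys, i / 60) for i in range(61)]
```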

Sign Language 3D Virtual Agent

iiis.org

Accessibility is a growing concern in several research areas. In computer science, programs that provide accessibility are those that enable people with disabilities to use computing resources efficiently. Since virtual environments rely primarily on the visual mode, usually through written content, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who became deaf before learning any language, written information is often less accessible than information presented in signing. Further, for this community, signing is the language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. The aim of this work is to develop a 3D virtual agent for presenting sign language information. The avatar consists of an articulated model, representing a human character, which is able to articulate signs. The virtual agent is controlled through text scripts that describe the signs in a notation developed in the context of this work. A sign language 3D virtual agent provides an important improvement in human-computer interaction for deaf people, and may also be a tool for teaching and research of sign languages.
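
A rough sketch of script-driven control, assuming a hypothetical line-oriented notation of the form "gloss handshape location"; the notation actually developed in the paper is not reproduced here.

```python
# A minimal sketch of parsing a sign script into articulation commands
# for an avatar. The three-column notation is an illustrative assumption.
def parse_script(script: str) -> list[dict]:
    """Turn a sign script into a list of articulation commands."""
    commands = []
    for line in script.strip().splitlines():
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        gloss, handshape, location = line.split()
        commands.append(
            {"gloss": gloss, "handshape": handshape, "location": location}
        )
    return commands


script = """
# gloss  handshape  location
HELLO    open       forehead
WORLD    fist       neutral
"""
for cmd in parse_script(script):
    print(cmd)
```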

The SignCom system for data-driven animation of interactive virtual signers

ACM Transactions on Interactive Intelligent Systems, 2011

In this article we present a multichannel animation system for producing utterances signed in French Sign Language (LSF) by a virtual character. The main challenges of such a system are simultaneously capturing data for the entire body, including the movements of the torso, hands, and face, and developing a data-driven animation engine that takes into account the expressive characteristics of signed languages. Our approach consists of decomposing motion along different channels, representing the body parts that correspond to the linguistic components of signed languages. We show the ability of this animation system to create novel utterances in LSF, and present an evaluation by target users which highlights the importance of the respective body parts in the production of signs. We validate our framework by testing the believability and intelligibility of our virtual signer.
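
The channel-wise recombination idea can be sketched as follows; the channel names and the dict-of-frame-lists representation are assumptions for illustration, not the SignCom engine's actual interface.

```python
# A minimal sketch of multichannel recombination: motions retrieved for
# different body parts (channels) are merged frame by frame into one pose.
CHANNELS = ("torso", "right_hand", "left_hand", "face")


def recombine(clips: dict[str, list], n_frames: int) -> list[dict]:
    """Build a new utterance by pairing, per frame, one clip per channel."""
    poses = []
    for i in range(n_frames):
        pose = {}
        for channel in CHANNELS:
            frames = clips[channel]
            # Hold the last frame if this channel's clip is shorter.
            pose[channel] = frames[min(i, len(frames) - 1)]
        poses.append(pose)
    return poses


# Example: torso from one recording, hands and face from others.
novel = recombine(
    {
        "torso": ["T0", "T1", "T2"],
        "right_hand": ["R0", "R1"],
        "left_hand": ["L0", "L1", "L2"],
        "face": ["F0"],
    },
    n_frames=3,
)
```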

Animating Sign Language in the Real Time

The paper presents selected problems of visualizing animated sign language sentences in real time. The presented solution is part of a system for translating texts into sign language. The animation and graphical techniques applied in the system are briefly presented, but the main problems discussed are how to specify the sign language and how to interpret such a specification. A concise, easy-to-use gestographic notation by Szczepankowski, widely used in the Polish deaf community, has been adopted. It was originally intended for use by humans; thus part of the information it holds is incomplete, inexact, and in many cases highly intuitive. The automatic interpretation has to reconstruct all the missing information. Another group of problems we have encountered involves issues of kinematics: the motion is to be generated from information that is very general. Techniques such as inverse kinematics and collision detection and avoidance have to be applied…
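
As an illustration of the kinematic machinery such an interpreter needs, here is a standard planar two-link inverse kinematics solution for placing the wrist at a target point; it is a textbook formulation with made-up link lengths and targets, not the system described in the paper.

```python
# A minimal sketch of two-link inverse kinematics (shoulder-elbow-wrist in
# a plane), the kind of computation needed to place the hand at a location
# that the notation specifies only loosely.
import math


def two_link_ik(x: float, y: float, l1: float, l2: float) -> tuple[float, float]:
    """Return (shoulder, elbow) angles putting the wrist at (x, y)."""
    d2 = x * x + y * y
    # Clamp to the reachable range to stay robust near the workspace edge.
    c = max(-1.0, min(1.0, (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)))
    elbow = math.acos(c)
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )
    return shoulder, elbow


# Place the wrist 0.5 m forward, 0.3 m up, with 0.3 m upper arm and forearm.
print(two_link_ik(0.5, 0.3, 0.3, 0.3))
```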

Linguistic modelling and language-processing technologies for Avatar-based sign language presentation

Universal Access in the Information Society, 2008

Sign languages are the native languages of many pre-lingually deaf people and must be treated as genuine natural languages worthy of academic study in their own right. For such pre-lingually deaf people, whose familiarity with their local spoken language is that of a second-language learner, written text is much less useful than is commonly thought. This paper presents research into sign language generation from English text at the University of East Anglia that has involved sign language grammar development to support synthesis and visual realisation of sign language by a virtual human avatar. One strand of research in the ViSiCAST and eSIGN projects has concentrated on the real-time generation of sign language performance by a virtual human (avatar) given a phonetic-level description of the required sign sequence. A second strand has explored generation of such a phonetic description from English text. The utility of the conducted research is illustrated in the context of sign language synthesis by a preliminary consideration of plurality and placement within a grammar for British Sign Language (BSL). Finally, ways in which the animation generation subsystem has been used to develop signed content on public-sector Web sites are also illustrated.
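
A rough sketch of assembling a phonetic-level sign sequence for an avatar to perform; the element names below are simplified stand-ins rather than the exact SiGML/HamNoSys vocabulary used in ViSiCAST and eSIGN.

```python
# A minimal sketch of building a phonetic-level description of a sign
# sequence as XML. Element and attribute names are illustrative only.
import xml.etree.ElementTree as ET


def make_sign(gloss: str, handshape: str, location: str, movement: str) -> ET.Element:
    """Describe one sign by its phonetic components."""
    sign = ET.Element("sign", gloss=gloss)
    ET.SubElement(sign, "handshape", value=handshape)
    ET.SubElement(sign, "location", value=location)
    ET.SubElement(sign, "movement", value=movement)
    return sign


def make_utterance(signs: list[ET.Element]) -> str:
    """Wrap a sign sequence so an animation subsystem can play it in order."""
    utterance = ET.Element("utterance")
    utterance.extend(signs)
    return ET.tostring(utterance, encoding="unicode")


print(make_utterance([
    make_sign("BOOK", "flat", "neutral", "close"),
    make_sign("GIVE", "flat", "neutral", "toward_addressee"),
]))
```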