Animating Sign Language in Real Time
Related papers
Designing an Animated Character System for American Sign Language
Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility
Sign languages lack a standard written form, preventing millions of Deaf people from accessing text in their primary language. A major barrier to adoption is the difficulty of learning a system which represents complex 3D movements with stationary symbols. In this work, we leverage the animation capabilities of modern screens to create the first animated character system prototype for sign language, producing text that combines iconic symbols and movement. Using animation to represent sign movements can increase resemblance to the live language, making the character system easier to learn. We explore this idea through the lens of American Sign Language (ASL), presenting 1) a pilot study underscoring the potential value of an animated ASL character system, 2) a structured approach for designing animations for an existing ASL character system, and 3) a design probe workshop with ASL users eliciting guidelines for the animated character system design.
Translation System for Sign Language Learning
International journal of scientific research in computer science, engineering and information technology, 2024
Sign language display software converts text/speech to animated sign language to support the special needs population, aiming to enhance communication comfort, health, and productivity. Advancements in technology, particularly computer systems, enable the development of innovative solutions to address the unique needs of individuals with special requirements, potentially enhancing their mental well-being. Using Python and NLP, a process has been devised to detect text and live speech, converting it into animated sign language in real time. Blender is utilized for animation and video processing, while datasets and NLP are employed to train the system and convert text to animation. This project aims to cater to a diverse range of users across different countries where various sign languages are prevalent. By bridging the gap between linguistic and cultural differences, such software not only facilitates communication but also serves as an educational tool. Overall, it offers a cost-effective and widely applicable solution to promote inclusivity and accessibility.
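The abstract above describes a text-to-animation pipeline without giving implementation details. As a minimal sketch of how such a front end could map tokenized text to pre-rendered animation clips (the clip filenames, dictionary, and fingerspelling fallback are illustrative assumptions, not the paper's code):

```python
import re

# Hypothetical dictionary mapping word tokens to pre-rendered Blender clips.
SIGN_CLIPS = {"hello": "hello.blend", "thank": "thank.blend", "you": "you.blend"}

def text_to_clip_sequence(text):
    """Map input text to a sequence of animation clip files,
    fingerspelling any word that has no dictionary entry."""
    tokens = re.findall(r"[a-z]+", text.lower())
    clips = []
    for tok in tokens:
        if tok in SIGN_CLIPS:
            clips.append(SIGN_CLIPS[tok])
        else:
            # Fall back to fingerspelling, letter by letter.
            clips.extend(f"letter_{ch}.blend" for ch in tok)
    return clips
```

A real system would insert NLP steps (gloss reordering, lemmatization) between tokenization and clip lookup; this sketch only shows the final dictionary-with-fallback stage.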
An Animator of Gestures Applied to Sign Languages
2014
In this paper we discuss some of the major issues linked to the unwritten status of signed languages and to the inadequacy of the notation and transcription tools that are most widely used. Drawing on previous and ongoing research, we propose that the development of a written form appears to be necessary for defining more appropriate representational tools for research purposes.
Auslan Jam: A Graphical Sign Language Display System
2002
Australian sign language, Auslan, uses a combination of hand shapes, positions and movements, as well as facial expressions to communicate. Our ongoing research is to develop an automatic translator that graphically translates English to Auslan. Currently, we have implemented object-oriented C++ graphics libraries to build a whole upper body model. The model is a kinematic tree that allows physiologically possible movements that are necessary for Auslan sign display. We have experimented with two angle representations: Euler and quaternion. Using interpolation algorithms, we determined that the quaternion angle representation is more reliable than the Euler angle representation. This paper presents the design and implementation of the Auslan Jam system and our research into angle representation techniques. It also discusses future development of the model and the sign translator interface, and identifies possible improvements.
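The Auslan Jam abstract reports that quaternion interpolation proved more reliable than Euler-angle interpolation for joint rotations. A minimal sketch of the standard quaternion technique, spherical linear interpolation (slerp), which avoids the gimbal-lock and non-uniform-speed artifacts of interpolating Euler angles componentwise (this is generic slerp, not the paper's implementation):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)
    for t in [0, 1], giving constant angular velocity along the shorter arc."""
    dot = sum(a * b for a, b in zip(q0, q1))
    # Negate one quaternion if needed so we interpolate along the shorter arc.
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:
        # Nearly parallel rotations: fall back to normalized linear interpolation.
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        norm = math.sqrt(sum(c * c for c in out))
        return tuple(c / norm for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

For example, interpolating halfway between the identity and a 90° rotation about one axis yields exactly the 45° rotation, with no dependence on axis ordering.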
An automated technique for real-time production of lifelike animations of American Sign Language
Universal Access in the Information Society, 2015
Generating sentences from a library of signs implemented through a sparse set of key frames derived from the segmental structure of a phonetic model of ASL has the advantage of flexibility and efficiency, but lacks the lifelike detail of motion capture. These difficulties are compounded when faced with real-time generation and display. This paper describes a technique for automatically adding realism without the expense of manually animating the requisite detail. The new technique layers transparently over and modifies the primary motions dictated by the segmental model, and does so with very little computational cost, enabling real-time production and display. The paper also discusses avatar optimizations that can lower the rendering overhead in real-time displays.
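The key idea above is a cheap motion layer that modifies, rather than replaces, the primary motion defined by sparse key frames. A minimal sketch under assumed simplifications (one joint angle, linear key-frame interpolation, a periodic perturbation standing in for the realism layer; function names are illustrative):

```python
import math

def primary_angle(keyframes, t):
    """Linearly interpolate a joint angle from sparse (time, angle) key frames,
    i.e. the primary motion dictated by the segmental model."""
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return a0 + u * (a1 - a0)
    return keyframes[-1][1]

def layered_angle(keyframes, t, amplitude=0.02, frequency=1.3):
    """Layer a small, cheap-to-evaluate perturbation over the primary motion.
    The layer is additive, so the underlying key-frame pose is preserved."""
    return primary_angle(keyframes, t) + amplitude * math.sin(2.0 * math.pi * frequency * t)
```

Because the layer is a per-frame additive term, its cost is constant per joint, which is consistent with the paper's claim of very little computational overhead for real-time display.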
Animation Generation Process for Sign Language Synthesis
2009
In this paper, we propose a new process for Sign Language synthesis. Our approach is based on a geometric description of signs to animate a signing avatar. We work as follows: the signs are divided in timing units and treated separately in a sequential process. The final results are merged into a global animation file describing the whole sign in terms of articulatory angles for a skeleton.
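The sequential merge described above can be sketched as concatenating per-unit angle tracks into one global track per joint. The data layout (a dict of per-frame joint angles) and the hold-last-angle rule for joints missing from a unit are assumptions for illustration, not the paper's format:

```python
def merge_timing_units(units):
    """Concatenate per-unit animations ({joint: [angle per frame]}) into one
    global per-joint track, processing units sequentially."""
    joints = {j for unit in units for j in unit}
    merged = {j: [] for j in joints}
    for unit in units:
        length = max(len(track) for track in unit.values())
        for j in joints:
            track = unit.get(j)
            if track is None:
                # Joint not animated in this unit: hold its last known angle.
                last = merged[j][-1] if merged[j] else 0.0
                track = [last] * length
            merged[j].extend(track)
    return merged
```

Usage: merging `[{"elbow": [0, 10]}, {"elbow": [10, 20], "wrist": [5, 5]}]` yields a continuous elbow track `[0, 10, 10, 20]` and a wrist track held at rest until its unit begins.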
Text-to-Sign Language Synthesis Tool
This document presents an approach for generating VRML animation sequences from Sign Language notation, based on MPEG-4 Face and Body Animation. The proposed application aims to provide computer-based sign-language synthesis output for the deaf and the hearing impaired. Moreover, the application may be used as a teaching tool for relatives of deaf people as well as for people interested in learning sign language. The application receives text sentences as input and produces 3D animated VRML sequences that can be visualised in any VRML-compliant browser.