A Real-Time Automatic Translation of Text to Sign Language
Related papers
Sign Language e-Learning system for hearing-impaired community of Pakistan
International Journal of Information Technology, 2018
A rising rate of illness and injury has led to an increase in the number of people disabled from birth or early childhood. In this work we present a web-based e-learning system devoted entirely to sign language. In Pakistan, there is a scarcity of sign language resources available to assist deaf children in strengthening the language skills necessary for academic advancement. The e-Learning system for hearing-impaired people is therefore an application developed not only for deaf children but also for normal people who wish to communicate with the deaf. It is a web-based application focused on Pakistani sign language that allows non-vocal information exchange. The purpose of this research is twofold: first, to provide an e-learning system for teaching sign language to deaf and hard-of-hearing people in Pakistan; second, to develop a text translation system that converts English into the equivalent Pakistani sign language gestures in order to overcome communication barriers. The system also collects feedback from normal and disabled users to improve the quality of the application. The proposed application is distributed in modular client-server entities, and the framework is specially designed to help parents teach their children the indispensable communication skills. We believe that proper communication lays the foundation for accelerated learning and reduced frustration.
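A minimal sketch of how such a word-to-gesture lookup could be structured is shown below; the dictionary entries and clip file names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: map English words to Pakistani Sign Language gesture clips.
# The dictionary contents and file names are illustrative only.
GESTURE_CLIPS = {
    "hello": "psl/hello.mp4",
    "school": "psl/school.mp4",
    "teacher": "psl/teacher.mp4",
}

def translate_to_gestures(sentence: str) -> list[str]:
    """Return an ordered list of gesture clips for the words in a sentence."""
    clips = []
    for word in sentence.lower().split():
        word = word.strip(".,!?")
        if word in GESTURE_CLIPS:
            clips.append(GESTURE_CLIPS[word])
    return clips

print(translate_to_gestures("Hello teacher"))  # ['psl/hello.mp4', 'psl/teacher.mp4']
```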
Application for Iraqi sign language translation on Android system
International Journal of Electrical and Computer Engineering (IJECE), 2020
Deaf people suffer from difficulty in social communication, especially those who lost their hearing before acquiring spoken language and before learning to read and write. To put mobile devices to work for these people, their teachers, and everyone in contact with them, this research aims to design an application for social communication and learning that translates Iraqi sign language into Arabic text and vice versa. Iraqi sign language was chosen because of the lack of applications in this field; to the best of our knowledge, the current research is the first of its kind in Iraq. The application is open source; words that are not found in the application database are handled by translating them letter by letter. The importance of the application lies in the fact that it serves as a means of communication and e-learning through Iraqi sign language and through reading and writing in Arabic, and likewise as a means of social communication between deaf people and those with normal hearing. The application was developed in Java and tested on several deaf students at the Al-Amal Institute for Special Needs Care in Mosul, Iraq, where it was well understood and accepted.
1. INTRODUCTION It is in human nature to seek to help others as far as one's abilities, expertise, and knowledge allow. Just as medical experts play an important role in treating patients, and biomedical engineers in manufacturing tools and instruments to help the sick and injured, such as prostheses for the handicapped and hearing aids for the hearing impaired, programmers have a major role in creating and developing applications and technologies that help people with special needs and make their lives easier. For deaf people this means applications that help them communicate with each other and with people with normal hearing, and that facilitate their education. The deaf undoubtedly constitute a large and influential segment of society, yet they suffer from difficulty in communication. Sign language, as a means of communication, has developed over thousands of years; it is defined as movements of one or both hands, sometimes accompanied by facial expressions [1]. With the spread of the Internet and the development of technology, the deaf have also benefited from tools for education and communication, and this has become an important area attracting the attention of many scientists and researchers working in the field.
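The letter-by-letter fallback described above might look roughly like the following sketch; the word/sign database and file names are assumptions for illustration.

```python
# Illustrative sketch of the alphabetic fallback: known words map to a whole-word
# sign image, unknown words are spelled out letter by letter. All entries are examples.
SIGN_DB = {
    "سلام": "signs/salam.png",
    "مدرسة": "signs/madrasa.png",
}

def lookup_signs(word: str) -> list[str]:
    """Return sign images for a word; unknown words fall back to letter signs."""
    if word in SIGN_DB:
        return [SIGN_DB[word]]
    return [f"signs/letters/{letter}.png" for letter in word]

for w in ["سلام", "كتاب"]:
    print(w, "->", lookup_signs(w))
```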
Indian Sign Language (ISL) Translation System For Sign Language Learning
Sign language is a language which uses visually transmitted sign patterns to convey meaning. It is the combination of hand shapes, orientation and movement of the hands, arms or body, and facial expressions. Our system is capable of recognizing sign-language symbols and can be used as a means of communication with hard-of-hearing people. Our paper proposes a system that helps normal people communicate easily with hard-of-hearing people, using only a camera and a microphone to implement the Indian Sign Language (ISL) system. The ISL translation system translates voice into Indian Sign Language. It uses a microphone or USB camera to capture images or continuous video (from normal people) which the application interprets. Acquired images are assumed to be translation-, scale-, and rotation-invariant. The translation pipeline comprises image acquisition, binarisation, hand-shape edge detection, feature extraction, and classification; the extracted feature vectors are then matched against an existing database. A GUI application displays and sends the message to the receiver. This system enables normal people to communicate easily with deaf/dumb persons, and it also helps hard-of-speaking and hard-of-hearing people during video calls or chats.
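The image-processing steps named in this abstract (binarisation, edge detection, feature extraction, matching against a database) might be sketched roughly as follows; the OpenCV calls, thresholds, and the crude 16x16 feature grid are assumptions for illustration, not the paper's actual implementation.

```python
# Rough sketch of the pipeline named above: binarise the hand image, detect edges,
# derive a small feature vector, and match it against stored templates.
import cv2
import numpy as np

def extract_features(gray: np.ndarray) -> np.ndarray:
    """Binarise the hand image, find edges, and return a fixed-size feature vector."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 100, 200)
    # Downsample the edge map to a small grid and flatten it as a crude feature vector.
    return cv2.resize(edges, (16, 16)).flatten().astype(np.float32) / 255.0

def match_sign(frame: np.ndarray, database: dict[str, np.ndarray]) -> str:
    """Compare the frame's features against stored vectors and return the closest label."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    query = extract_features(gray)
    return min(database, key=lambda label: np.linalg.norm(query - database[label]))
```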
A Translator for Indian Sign Language to Text and Speech
International Journal for Research in Applied Science and Engineering Technology (IJRASET), 2020
Verbal communication is the primary way in which people have interacted with each other over the years, but the case is different for the disabled. The barrier created between the impaired and normal people is one of the setbacks of society. For impaired (deaf and mute) people, sign language is the only way to communicate. In order to help the deaf and mute communicate efficiently with normal people, an effective solution has been devised. Our aim is to design a system which analyses and recognizes various alphabets from a database of sign images. To accomplish this, the application uses image-processing techniques such as segmentation and feature extraction, and a machine learning technique, the Convolutional Neural Network, for detection of sign language. We crop away the background so that only the gesture remains, then convert the gesture image to black and white and store it in PNG format at 55x60 resolution. This system will help to eradicate the barrier between deaf-mute and normal people, and it will help standardize Indian Sign Language in India. It will also improve the quality of teaching and learning in deaf and mute institutes. Just as Hindi is recognized as the standard language for conversation throughout India, ISL will be recognized as the standard sign language throughout India. The main aim of this work is to serve mankind by providing better teaching and better learning.
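A minimal sketch of a CNN classifier for such 55x60 single-channel gesture images might look like the following; Keras is assumed here, and the layer sizes and 26-letter output are illustrative, not the paper's exact network.

```python
# Minimal CNN sketch for 55x60 grayscale gesture images with 26 alphabet classes.
# Layer sizes and class count are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int = 26) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(60, 55, 1)),          # height x width x channels
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```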
2015
Communication is the primary means by which people interact with each other. The problem arises when normal people and deaf-dumb people want to communicate with each other. Sign language is the language used for communication by deaf and dumb people, and this project aims to reduce the communication barrier between deaf-dumb people and normal people. The sign language interpreter we developed uses a hand glove fitted with flex sensors that can interpret English letters and numbers in American Sign Language (ASL) and some one-handed letters in Indian Sign Language (ISL).
Index Terms--Gesture, Flex Sensor, ARM7TDMI, Text-to-Speech Conversion
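One plausible way the host side of such a glove could read and classify sensor data is sketched below; the serial port, baud rate, bend threshold, and the tiny two-entry lookup table are all assumptions, not the paper's actual mapping.

```python
# Illustrative sketch: read five comma-separated flex-sensor values over a serial
# link and map the bend pattern to a letter. All names and values are assumptions.
import serial  # pyserial

# Each key is a tuple of bent(1)/straight(0) flags for the five fingers.
PATTERN_TO_LETTER = {
    (1, 1, 1, 1, 0): "A",
    (0, 0, 0, 0, 1): "B",
}

def classify(values: list[int], bend_threshold: int = 500) -> str:
    pattern = tuple(1 if v > bend_threshold else 0 for v in values)
    return PATTERN_TO_LETTER.get(pattern, "?")

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    line = port.readline().decode().strip()        # e.g. "612,233,198,220,731"
    if line:
        print(classify([int(v) for v in line.split(",")]))
```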
Smart Phone Based Indian Sign Language Maker for Aiding Deaf and Dumb People
Sign language is the native language of deaf and dumb people, and it is what they prefer to use in their daily life. This paper describes the architecture of a smart-phone-based Indian sign language maker for aiding deaf and dumb people (special people). The system comprises two features: speech to gesture (Indian Sign Language) and text to speech. Speech-to-gesture conversion is performed by a speech recognizer, a semantic analyzer, a gesture sequence generator, and a gesture player; text to speech is handled by the Google text-to-speech engine. The aim of the project is to aid special people by translating speech into Indian Sign Language and converting text to voice. Speech recognition and semantic analysis are done with the help of Google Voice, while gesture sequence generation and gesture playing are where we focused our work. The gesture sequence is generated from the semantic analyzer output in a form that the gesture sequence generator can recognize. The gesture sequence generator produces sequences that are read by the gesture player, an animated human agent that performs the hand signs corresponding to the given input. The gesture sequence generator also learns from the user when speech is not properly recognized or when words are not found in its dictionary. This paper presents the initial stage of the project, in which gesture generation is done for particular domains such as railway, airport, and bus transport kiosks, where a deaf person can respond to phone calls from various customers. Future work will extend this to a complete system that can produce hand signs for any domain.
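The gesture-sequence-generation step described here could be sketched roughly as follows; the dictionary contents and gesture identifiers are hypothetical, as is the simple queue used to collect words the user will later teach.

```python
# Sketch of gesture sequence generation: recognised words are mapped to gesture
# identifiers for the animated agent, and unknown words are queued so the user
# can teach them later. All names are illustrative.
GESTURE_DICTIONARY = {
    "train": "GESTURE_TRAIN",
    "platform": "GESTURE_PLATFORM",
    "ticket": "GESTURE_TICKET",
}

unknown_words: list[str] = []

def generate_sequence(recognised_text: str) -> list[str]:
    sequence = []
    for word in recognised_text.lower().split():
        gesture = GESTURE_DICTIONARY.get(word)
        if gesture is None:
            unknown_words.append(word)   # to be learned from the user later
        else:
            sequence.append(gesture)
    return sequence

print(generate_sequence("ticket to platform"))   # ['GESTURE_TICKET', 'GESTURE_PLATFORM']
print(unknown_words)                              # ['to']
```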
CENTRAL ASIAN JOURNAL OF MATHEMATICAL THEORY AND COMPUTER SCIENCES, 2021
This study was conducted with the aim of developing a web-based real-time application which recognizes Filipino Sign Language (FSL) and converts it into text. Purposive sampling was used to determine a total of 30 respondents: 9 special education students, 7 special education teachers, and 14 non-disabled people. The study focused on the following variables: the independent variable, the level of acceptability in terms of content, design, and functionality; and the dependent variable, SPEAK THE SIGN: A Real-Time Sign Language to Text Converter Application for Basic Filipino Words and Phrases. A researcher-made questionnaire was used to gather data on both variables, and the statistical tools used were frequency count, sum, percentage, and mean. The results show that, according to the three sets of respondents, the level of acceptability of the web-based real-time converter application in terms of content, design, and functionality falls under the "Very Highly Acceptable" bracket. This very high acceptability among the three sets of respondents suggests that the application is user-friendly and beneficial in closing the communication gap. It can also be an excellent way for non-disabled people to fully understand and appreciate the importance of learning primary Filipino Sign Language (FSL).
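The descriptive statistics named above (frequency count, percentage, mean) amount to the following kind of computation, shown here on made-up 5-point ratings rather than the study's data.

```python
# Illustrative computation of frequency count, percentage, and mean
# for hypothetical 5-point acceptability ratings (not the study's data).
from collections import Counter

ratings = [5, 4, 5, 5, 4, 5, 3, 5, 4]
counts = Counter(ratings)
total = len(ratings)

for score in sorted(counts, reverse=True):
    print(f"score {score}: n={counts[score]} ({100 * counts[score] / total:.1f}%)")
print(f"mean = {sum(ratings) / total:.2f}")
```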
Automatic sign language translator model
2014
In this paper we present the overall study, which includes the model developed (VS, the Virtual Sign model) and the experiments performed with an automatic bidirectional sign language translator between written and sign language, supervised by the research group GILT (Graphics, Interaction & Learning Technologies) within the frame of a national project called Virtual Sign (VS project). This project aims to develop and evaluate a model that facilitates access by deaf and hearing-impaired people to digital content, in particular educational content and learning objects, creating the conditions for greater social inclusion of deaf and hearing-impaired people. Access to digital content will be supported by an automatic translator between written Portuguese (LEP) and Portuguese Sign Language (LGP), supported by an interaction model.
An Integrated Two Way ISL (Indian Sign Language) Translation System - A New Approach
Sign language is a language which uses visually transmitted sign patterns to convey meaning. It is the combination of hand shapes, orientation and movement of the hands, arms or body, and facial expressions. Our system is capable of recognizing sign-language symbols and can be used as a means of communication with hard-of-hearing and hard-of-speaking people. Our paper proposes a system to help those people communicate with normal people without sophisticated devices such as power gloves, data gloves, or coloured finger caps; instead, we use a camera and a microphone to implement the Integrated Two Way ISL (Indian Sign Language) system. The Two Way ISL translation system has two parts: translation of sign language into voice, and the reverse. It uses a webcam or USB camera to capture images or continuous video (from the hard-of-speaking person) which the application interprets. Acquired images are assumed to be translation-, scale-, and rotation-invariant, and the signer is assumed to be at a right angle to the camera. The translation pipeline comprises image acquisition, binarisation, hand-shape edge detection, feature extraction, and classification; the extracted feature vectors are then matched against an existing database. The interpreted symbols (meaning words) are translated into text (words in English or Tamil), and, using text-to-voice synthesis, the text is converted into voice output or a voice file (for the normal person). A GUI application displays and sends the message to the receiver as a text or voice message. In the reverse direction, voice input (from the normal person) is converted to text and matched against the database to find the sign symbol to display (to the hard-of-hearing receiver). This system enables deaf/dumb people to communicate easily with normal speaking persons, and it also helps hard-of-speaking and hard-of-hearing people during video calls or chats.
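The text-to-voice step in this pipeline can be approximated with an off-the-shelf synthesiser such as pyttsx3, used here purely as a stand-in since the abstract does not name the engine the authors used.

```python
# Minimal sketch of the text-to-speech step: convert the recognised text into
# spoken output. pyttsx3 is an assumed stand-in for the authors' synthesiser.
import pyttsx3

def speak(text: str) -> None:
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

speak("Hello, how can I help you?")
```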
A prototype text to British Sign Language (BSL) translation system
Proceedings of the 41st Annual Meeting on …, 2003
We demonstrate a text to sign language translation system for investigating sign language (SL) structure and assisting in the production of sign narratives and informative presentations. The system is demonstrable on a conventional laptop PC.