Development of Simultaneous Translation Algorithm for Sign Language: An Example of International Language

Translation of Sign Language for Deaf and Dumb People

International Journal of Recent Technology and Engineering (IJRTE), 2020

Deaf-mute people can communicate with normal people with the help of sign languages. Our project objective is to analyse and translate sign language, that is, hand gestures, into text and voice. For this process, a real-time image made by a deaf-mute person is captured and given as input to the pre-processor. Then, feature extraction is performed using Otsu's algorithm and classification using an SVM (Support Vector Machine). After that, the text for the corresponding sign is produced. The obtained text is converted into voice with the use of a MATLAB function. Thus, hand gestures made by deaf-mute people are analysed and translated into text and voice for better communication.
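The pipeline described above (Otsu segmentation, SVM classification, then text-to-speech) can be sketched roughly as follows. This is a minimal illustration, assuming OpenCV and scikit-learn in place of the paper's MATLAB implementation; the feature layout, file names and labels are placeholders.

```python
# Sketch: capture -> Otsu thresholding -> SVM classification of the sign.
# Assumes a flattened binary mask as the feature vector; the paper's actual
# feature set is not specified here.
import cv2
import numpy as np
from sklearn.svm import SVC

def extract_features(frame_bgr, size=(64, 64)):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu's method selects the segmentation threshold automatically.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.resize(mask, size).flatten() / 255.0

# Training on labelled gesture images (paths and labels are placeholders).
X = np.array([extract_features(cv2.imread(p)) for p in ["a1.png", "b1.png"]])
y = ["A", "B"]
clf = SVC(kernel="linear").fit(X, y)

# Classify a new frame; the predicted text would then be passed to a
# text-to-speech step (a MATLAB function in the paper).
letter = clf.predict([extract_features(cv2.imread("test_sign.png"))])[0]
print("Recognised sign:", letter)
```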

A Real-Time Automatic Translation of Text to Sign Language

Computers, Materials & Continua, 2022

Communication is a basic need of every human being; by it, they can learn, express their feelings and exchange their ideas, but deaf people cannot listen and speak. For communication, they use various hand gestures, also known as Sign Language (SL), which they learn from special schools. Since normal people have not taken SL classes, they are unable to perform signs of daily routine sentences (e.g., what are the specifications of this mobile phone?). A technological solution can help overcome this communication gap so that normal people can communicate with deaf people. This paper presents an architecture for an application named Sign4PSL that translates sentences to Pakistan Sign Language (PSL) for deaf people, with visual representation using a virtual signing character. This research aims to develop a generic, independent application that is lightweight and reusable on any platform, including web and mobile, with the ability to perform offline text translation. Sign4PSL relies on a knowledge base that stores both a corpus of PSL words and their coded form in the notation system. Sign4PSL takes English-language text as input, performs the translation to PSL through sign language notation, and displays gestures to the user using a virtual character. The system was tested on deaf students at a special school. The results have shown that the students were able to understand the story presented to them appropriately.
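A rough sketch of the knowledge-base lookup at the core of such an architecture is shown below. The word list and notation codes are invented placeholders, not the actual Sign4PSL corpus or notation system.

```python
# Illustrative knowledge base: English words -> coded PSL notation entries.
# The codes below are placeholders, not real PSL notation.
PSL_KNOWLEDGE_BASE = {
    "hello": "SIGN_HELLO_001",
    "phone": "SIGN_PHONE_014",
    "specifications": "SIGN_SPEC_027",
}

def translate_to_psl(english_text):
    """Return the sequence of notation codes for the known words.

    A full system would handle unknown words (e.g. by fingerspelling)."""
    codes = []
    for word in english_text.lower().split():
        code = PSL_KNOWLEDGE_BASE.get(word.strip("?.,!"))
        if code:
            codes.append(code)
    return codes

# The virtual signing character would then render the gesture for each code.
print(translate_to_psl("Hello, what are the specifications of this phone?"))
```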

Automatic Translation of Real-Time Voice to Indian Sign Language for Deaf and Dumb People

International Journal of Innovative Research in Science, Engineering and Technology (IJIRSET), 2023

Sign language is a universal way of communication for challenged people with speaking and hearing limitations. Multiple mediums are available to translate or recognize sign language and convert it to text. However, text-to-sign conversion systems have rarely been developed; this is often due to the scarcity of any sign language dictionary. Our project aims at creating a system that consists of a module that initially transforms voice input to English text and parses the sentence, to which Indian Sign Language grammar rules are then applied. This is done by eliminating stop words from the reordered sentence. Indian Sign Language (ISL) does not sustain the inflections of the word. Hence, stemming is applied to reduce the words to their root/stem class. All words of the sentence are then checked against the labels in the dictionary containing videos representing each of the words. The present systems are limited to only straight conversion of words into ISL, whereas the proposed system is innovative, as it aims to rework these sentences into ISL as per grammar in the real domain.
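The stop-word removal, stemming and dictionary-lookup stage could look roughly like the sketch below. NLTK's English stop-word list and Porter stemmer are stand-ins for whatever tooling the paper actually used, and the video paths are placeholders.

```python
# Sketch of the text-processing stage: remove stop words, stem to root forms,
# then look up a video clip for each remaining root word.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()
ISL_VIDEO_DICT = {"go": "videos/go.mp4", "school": "videos/school.mp4"}

def text_to_isl_clips(sentence):
    words = [w for w in sentence.lower().split() if w not in STOP_WORDS]
    stems = [STEMMER.stem(w) for w in words]
    return [ISL_VIDEO_DICT[s] for s in stems if s in ISL_VIDEO_DICT]

print(text_to_isl_clips("I am going to the school"))  # ['videos/go.mp4', 'videos/school.mp4']
```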

Text Analysis and Speech to Sign Language Translation

2019

American Sign Language is the most widely known sign language, and there are more sign languages in use as well. The current scenario is that deaf individuals have already learned the language and use it for their daily communication; the only hurdle is that a normal person has to learn the sign language. In this paper, an architecture based on machine learning is proposed. The system is designed in three modules: speech to text, text to sign, and sign to animation. The first module is implemented using a speech recognition API, the second through a machine learning algorithm, and the final module consists of a 3D animated avatar.
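One plausible way to wire the three modules together is sketched below. The SpeechRecognition library stands in for the unnamed speech recognition API, and the gloss and animation names are illustrative assumptions, not the paper's actual model.

```python
# Three-module pipeline sketch: speech -> text -> sign glosses -> animation clips.
# Requires: pip install SpeechRecognition pyaudio
import speech_recognition as sr

def speech_to_text():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    # Online Google Web Speech API, one of several recognisers the library wraps.
    return recognizer.recognize_google(audio)

def text_to_sign(text):
    # A trained model would map words to sign glosses; here a toy dictionary.
    gloss_map = {"hello": "HELLO", "friend": "FRIEND"}
    return [gloss_map[w] for w in text.lower().split() if w in gloss_map]

def sign_to_animation(glosses):
    # The 3D avatar would play one animation clip per gloss.
    return [f"{g}.anim" for g in glosses]

if __name__ == "__main__":
    sentence = speech_to_text()
    print(sign_to_animation(text_to_sign(sentence)))
```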

Translation System for Sign Language Learning

International journal of scientific research in computer science, engineering and information technology, 2024

Sign language display software converts text/speech to animated sign language to support the special-needs population, aiming to enhance communication comfort, health, and productivity. Advancements in technology, particularly computer systems, enable the development of innovative solutions to address the unique needs of individuals with special requirements, potentially enhancing their mental wellbeing. Using Python and NLP, a process has been devised to detect text and live speech, converting it into animated sign language in real time. Blender is utilized for animation and video processing, while datasets and NLP are employed to train and convert text to animation. This project aims to cater to a diverse range of users across different countries where various sign languages are prevalent. By bridging the gap between linguistic and cultural differences, such software not only facilitates communication but also serves as an educational tool. Overall, it offers a cost-effective and widely applicable solution to promote inclusivity and accessibility.
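The NLP step that turns recognised text into a playlist of pre-rendered Blender animation clips might look like the sketch below. spaCy is used here as a stand-in for the paper's unspecified NLP stack, and the clip filenames are placeholders for Blender-rendered animations.

```python
# Sketch: lemmatise the input text, drop stop words, and map each remaining
# lemma to a pre-rendered Blender animation clip.
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

CLIP_LIBRARY = {"drink": "renders/drink.mp4", "water": "renders/water.mp4"}

def text_to_clip_playlist(text):
    doc = nlp(text)
    lemmas = [tok.lemma_.lower() for tok in doc if not tok.is_stop and tok.is_alpha]
    return [CLIP_LIBRARY[lemma] for lemma in lemmas if lemma in CLIP_LIBRARY]

print(text_to_clip_playlist("She is drinking some water"))  # drink.mp4, water.mp4
```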

An Open CV Framework of Automated Sign language to Text translator for Speech and Hearing Impaired

Generally, hearing impaired people use sign language for communication, but they find difficulty in communicating with others who don't understand sign language. This project aims to lower this barrier in communication. It is based on the need for an electronic device that can translate sign language into text in order to make communication between the mute community and the general public possible. Computer recognition of sign language is an important research problem for enabling communication with hearing impaired people. This project introduces an efficient and fast algorithm for identifying the number of fingers opened in a gesture representing text of the Binary Sign Language. The system does not require the hand to be perfectly aligned to the camera or any specific background for the camera. The project uses an image processing system to identify, in particular, the English alphabetic sign language used by deaf people to communicate. The basic objective of this project is to develop a computer-based intelligent system that will enable hearing impaired people to communicate with others using their natural hand gestures. The idea consisted of designing and building an intelligent system using image processing, machine learning and artificial intelligence concepts to take visual inputs of sign language's hand gestures and generate easily recognizable outputs. Hence the objective of this project is to develop an intelligent system which can act as a translator between the sign language and the spoken language dynamically and can make the communication between people with hearing impairment and normal people both effective and efficient. The system we are implementing is for Binary Sign Language, but it can detect any sign language with prior image processing.
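Finger counting of the kind described is often done with contour convexity defects in OpenCV; a minimal sketch under that assumption is given below (the exact algorithm used in the project is not specified). Threshold values are rough illustrative choices.

```python
# Sketch: count extended fingers in a binarised hand mask using the gaps
# (convexity defects) between fingertips.
import cv2

def count_fingers(binary_mask):
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)       # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    deep_defects = 0
    for start, end, far, depth in defects[:, 0]:
        # Deep defects correspond to the valleys between extended fingers;
        # depth is in fixed-point 1/256 pixel units.
        if depth > 10000:
            deep_defects += 1
    return min(deep_defects + 1, 5)
```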

Indian Sign Language (ISL) Translation System For Sign Language Learning

Sign language is a language which uses visually transmitted sign patterns to convey meaning. It is the combination of hand shapes, orientation and movement of hands, arms or body, and facial expressions. Our system, which is capable of recognizing sign-language symbols, can be used as a means of communication with hard of hearing people. Our paper proposes a system to help normal people communicate easily with hard of hearing people. We are using a camera and a microphone as devices to implement the Indian Sign Language (ISL) system. The ISL translation system translates voice into Indian Sign Language. It uses a microphone or USB camera to get images or continuous video (from normal people) which can be interpreted by the application. Acquired images are assumed to be translation, scale and rotation invariant. In this process, the steps of translation are image acquisition, binarization, classification, hand-shape edge detection and feature extraction. After the feature-extraction stage produces vectors, pattern matching is done by comparing them against the existing database. The GUI application displays and sends the message to the receiver. This system enables normal people to communicate easily with a deaf/dumb person. This application also helps people with speaking and hearing difficulties in video calling or chatting.
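The binarize-extract-match stage could be approximated as below. Matching contours by Hu moments via cv2.matchShapes is used here as one simple, translation-, scale- and rotation-invariant stand-in for the paper's pattern matching; the template paths and labels are placeholders.

```python
# Sketch: binarise each image, keep the largest contour as the hand shape,
# and match a query shape against a small template database.
import cv2

def hand_contour(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

# Template database of sign shapes (paths and labels are placeholders).
templates = {"HELLO": hand_contour("db/hello.png"),
             "THANKS": hand_contour("db/thanks.png")}

query = hand_contour("query.png")
best = min(templates,
           key=lambda label: cv2.matchShapes(query, templates[label],
                                             cv2.CONTOURS_MATCH_I1, 0.0))
print("Best matching sign:", best)
```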

A new technology on translating Indonesian spoken language into Indonesian sign language system

International Journal of Electrical and Computer Engineering (IJECE), 2021

People with hearing disabilities are those who are unable to hear, resulting in their inability to communicate using spoken language. The solution offered in this research is to create a one-way translation technology to interpret spoken language into the Indonesian sign language system (SIBI). The mechanism applied here is to capture the sentences (audio) spoken by the general public and convert them to text using speech recognition. The text is then processed in a text-processing stage to select the input texts. The next stage is stemming the text into prefixes, basic words, and suffixes. Each word is then indexed and matched to SIBI. Afterwards, the system arranges the words into SIBI sentences based on the original sentences, so that people with hearing disabilities can get the information contained within the spoken language. The success rate of this technology was tested using a confusion matrix, which resulted in a precision of 76%, an accuracy of 78%, and a recall of 79%. This technology has been tested at SMP-LB Karya Mulya on nine seventh-grade students. From the test, 86% of the students stated that this technology runs very well.
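For reference, the reported figures follow directly from the standard confusion-matrix formulas; the sketch below shows the arithmetic with illustrative counts chosen only so that they reproduce roughly 76%/78%/79% (the paper's actual confusion matrix is not given here).

```python
# Precision, accuracy and recall from a binary confusion matrix.
def confusion_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, accuracy, recall

# Illustrative counts: precision 38/50 = 0.76, accuracy 78/100 = 0.78,
# recall 38/48 ~= 0.79.
print(confusion_metrics(tp=38, fp=12, fn=10, tn=40))
```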

Automatic sign language translator model

2014

In this paper we present the overall study, which includes the developed model (VS, the Virtual Sign Model) and the experiments performed with an automatic bidirectional sign language translator between written and sign language, supervised by the research group GILT (Graphics, Interaction & Learning Technologies) under the frame of a national project called Virtual Sign (VS project). This project aims to develop and evaluate a model that facilitates access for the deaf and hearing impaired to digital content, in particular educational content and learning objects, creating the conditions for greater social inclusion of deaf and hearing impaired people. Access to digital content will be supported by an automatic translator between written Portuguese (LEP) and Portuguese Sign Language (LGP), supported by an interaction model.

A Survey on Sign Language Translation Systems

International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022

Sign language is a way of communicating using hand gestures, movements and facial expressions instead of spoken words. It is the medium of communication used by people who are deaf or have hearing impairments to exchange information within their own community and with normal people. In order to bridge the communication gap between people with hearing and speaking disabilities and people who do not use sign language, a lot of research work using machine learning algorithms has been done. Hence, the sign language translator came into the picture. Sign language translators are generally used to interpret signs and gestures from deaf and hard of hearing people and convert them into text.