Speech and sliding text aided sign retrieval from hearing impaired sign news videos

Conversion of Sign Language Video to Text and Speech

International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022

Sign Language Recognition (SLR) is a significant and promising technique to facilitate communication for hearing-impaired people. Here, we are dedicated to finding an efficient solution to the gesture recognition problem. This work develops a sign language (SL) recognition framework with deep neural networks, which directly transcribes videos of SL signs into words. We propose a novel approach that uses video sequences containing both temporal and spatial features, and accordingly train two different models, one for each kind of feature. To learn the spatial features of the video sequences, we use a Convolutional Neural Network (CNN) model, trained on the frames extracted from the video sequences of the training data. To learn the temporal features, we use a Recurrent Neural Network (RNN). The trained CNN model makes predictions for individual frames, producing a sequence of predictions or pooling-layer outputs for each video. This sequence is then given to the RNN to learn the temporal features. Thus, we perform sign language translation: given an input video, the sign shown in the video is recognized using the CNN and RNN and converted to text and speech.
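As a rough illustration of the CNN-plus-RNN pipeline this abstract describes, the following is a minimal sketch in Keras. The layer sizes, vocabulary size (NUM_CLASSES), sequence length, and frame resolution are assumptions for illustration, not the paper's actual configuration.

```python
# Sketch of the two-stage pipeline: a CNN for per-frame (spatial) features,
# an RNN over the resulting feature sequence for temporal modelling.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 50   # assumed vocabulary of sign words
SEQ_LEN = 30       # assumed number of frames sampled per video
IMG_SIZE = 64      # assumed frame resolution

# CNN trained on individual frames (spatial features).
cnn = models.Sequential([
    layers.Input((IMG_SIZE, IMG_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),          # pooled features per frame
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Feature extractor: reuse the trained CNN up to its pooling layer.
feature_extractor = models.Model(cnn.input, cnn.layers[-2].output)

# RNN consumes the per-frame feature sequence (temporal features).
rnn = models.Sequential([
    layers.Input((SEQ_LEN, feature_extractor.output_shape[-1])),
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
rnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# For one video: extract frame features, then classify the whole sequence.
frames = np.random.rand(SEQ_LEN, IMG_SIZE, IMG_SIZE, 3).astype("float32")
seq = feature_extractor.predict(frames, verbose=0)    # (SEQ_LEN, FEAT_DIM)
word_probs = rnn.predict(seq[np.newaxis], verbose=0)  # (1, NUM_CLASSES)
```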

Real Time Sign Language Recognition and Translation to Text for Vocally and Hearing Impaired People

International Journal of Engineering Research and Technology, 2020

With a world population of around 7.8 billion today, communication is a strong means of understanding each other. Millions of individuals are vocally and hearing impaired. Because of this impediment, there is a communication gap between vocally handicapped people and typical individuals. Gesture-based communication is the fundamental method of communication for this section of our society. This language uses a set of representations, which are finger signs, expressions, or a mixture of both, to convey information to others. This system presents a unique approach to application-based translation of sign actions: analysis, recognition, and generation of a text description in English. It involves two important steps, training and testing. In training, a set of video samples from fifty different domains is collected; each domain contains 5 samples, a category of words is assigned to every video sample, and the samples are stored in the database. In testing, the test sample under...
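A minimal sketch of how the training database described above might be organized (fifty domains, five video samples each, one word category per sample); the data/<domain>/<sample>.mp4 layout and the use of the folder name as the word category are assumptions for illustration.

```python
# Build a simple lookup from video sample to its assigned word category.
from pathlib import Path

database = {}  # maps video path -> assigned word category
for domain_dir in sorted(Path("data").iterdir()):      # 50 domain folders (assumed)
    for sample in sorted(domain_dir.glob("*.mp4")):    # 5 samples per domain (assumed)
        database[str(sample)] = domain_dir.name        # word category = folder name

# At test time, an unseen sample would be matched against these entries.
```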

An OpenCV Framework of Automated Sign Language to Text Translator for Speech and Hearing Impaired

Generally, hearing-impaired people use sign language for communication, but they find difficulty in communicating with others who don't understand sign language. This project aims to lower this barrier in communication. It is based on the need to develop an electronic device that can translate sign language into text in order to make communication between the mute community and the general public possible. Computer recognition of sign language is an important research problem for enabling communication with hearing-impaired people. This project introduces an efficient and fast algorithm for identifying the number of fingers opened in a gesture representing text of the Binary Sign Language. The system does not require the hand to be perfectly aligned to the camera, nor any specific background for the camera. The project uses an image processing system to identify, in particular, the English alphabetic sign language used by deaf people to communicate. The basic objective of this project is to develop a computer-based intelligent system that will enable the hearing impaired to communicate with others using their natural hand gestures. The idea consists of designing and building an intelligent system using image processing, machine learning, and artificial intelligence concepts to take visual inputs of sign language hand gestures and generate easily recognizable outputs. Hence the objective of this project is to develop an intelligent system that can act as a translator between sign language and spoken language dynamically, making communication between people with hearing impairment and hearing people both effective and efficient. The system we are implementing targets Binary Sign Language, but it can detect any sign language with prior image processing.
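A common OpenCV approach to counting opened fingers uses convexity defects between fingers; the sketch below illustrates that idea, though the paper's exact algorithm is not specified. The HSV skin-colour range and the defect-depth threshold are assumptions for illustration.

```python
# Count opened fingers from a single BGR frame via convexity defects.
import cv2
import numpy as np

def count_fingers(frame_bgr):
    # Segment the hand with a simple skin-colour mask in HSV space (assumed range).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)  # largest blob = the hand

    # Deep convexity defects correspond to the gaps between extended fingers.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    gaps = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20)  # depth threshold in px (assumed)
    return gaps + 1 if gaps > 0 else 0
```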

Transcribing of Sign Language to Human and Computer Readable Text in Realtime using Computer Vision

IRJET, 2020

Gesture-based communication is useful for correspondence between mute/deaf individuals and non-mute individuals. Different research ventures on various gesture-based communication systems are in progress around the world. The goal of this work is to structure a framework that can decipher Indian Sign Language (ISL) precisely, so that hearing-impaired individuals will have the option to interact with people and different technologies without the need of a translator, and to use services like retail stores, banks, and so forth with ease. The system introduced here portrays a framework for automatic recognition of Indian gesture-based communication, where a normal camera is utilized to obtain the signs. To utilize the work in real-life conditions, we first created a database containing 2000 signs, with 200 pictures for every sign. Direct pixel comparison and K-Nearest Neighbours (KNN) techniques were utilized, and the signs were identified.
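The direct-pixel-plus-KNN classification mentioned above could look roughly like the following scikit-learn sketch; the image size, the value of k, and the signs/<label>/*.png dataset layout are assumptions for illustration.

```python
# Classify sign images by raw pixel vectors with K-Nearest Neighbours.
from pathlib import Path

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def to_vector(path, size=(32, 32)):
    # Flatten a grayscale sign image into a raw pixel vector.
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    return cv2.resize(img, size).flatten()

# Assumed layout: one folder per sign, each holding its sample images.
train_vectors, train_labels = [], []
for sign_dir in Path("signs").iterdir():
    for img_path in sign_dir.glob("*.png"):
        train_vectors.append(to_vector(img_path))
        train_labels.append(sign_dir.name)

knn = KNeighborsClassifier(n_neighbors=5)  # assumed k
knn.fit(np.array(train_vectors), np.array(train_labels))

# Predict the sign shown in a new image.
predicted_sign = knn.predict([to_vector("test_sign.png")])[0]
```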

Sign Language Interface System for Hearing Impaired People

IRJET, 2022

Communicating with people is a fundamental form of interaction, and doing so with people who have hearing impairments has proven to be a significant challenge. As a result, sign language serves as a critical tool for bridging the communication gap between hearing and deaf people. Communicating through sign language is difficult, and people with hearing impairments struggle to share their feelings and opinions with people who do not know the language. To address this issue, we require a versatile and robust product. We propose a system that aims to translate hand signals into their corresponding words as accurately as possible, using collected data and prior research. This can be accomplished through the use of various machine learning algorithms and techniques for classifying data into words. For our proposed system, we use the Convolutional Neural Network (CNN) algorithm. The process entails capturing multiple images of various hand gestures using a webcam, with the system predicting and displaying the corresponding letters/words of the captured hand-gesture images on which it has previously been trained, along with audio output. Once an image is captured, it goes through a series of procedures that include computer vision techniques such as grayscale conversion, mask operation, and dilation. The main goal and purpose of this project is to remove the barrier between deaf and mute people and hearing people.
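The capture-and-preprocess steps named above (grayscale conversion, mask operation, dilation) map naturally onto OpenCV. Below is a minimal sketch; the Otsu threshold as the mask operation, the input size, and the cnn_model handle are assumptions for illustration.

```python
# Capture one webcam frame and preprocess it for a trained CNN.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # webcam
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("webcam capture failed")

# Grayscale conversion, mask operation, and dilation, as described above.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # mask (assumed Otsu)
dilated = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)

# Shape the preprocessed frame for the previously trained CNN (assumed 64x64 input).
x = cv2.resize(dilated, (64, 64)).astype("float32") / 255.0
x = x.reshape(1, 64, 64, 1)
# probs = cnn_model.predict(x)   # cnn_model is assumed to be loaded elsewhere
```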