Sign Language in the Intelligent Sensory Environment

Real-Time Sign Language Detector

IRJET, 2022

People who are deaf or hard of hearing, as well as others who cannot communicate verbally, use sign language to interact with people in their communities. Sign languages are a group of natural languages that use a visual-manual modality to convey information. This paper examines the problem of real-time finger-spelling recognition in sign language. Using webcam images, we produced a dataset of common hand gestures used in Indian Sign Language (ISL), covering 36 distinct gestures (alphabets and numbers). Given a hand gesture as input, the system instantly displays the recognized character on the screen. This human-computer interaction (HCI) project aims to recognize the alphabets (a-z), digits (0-9), and a number of common ISL hand gestures. We applied transfer learning by fine-tuning a pre-trained SSD MobileNet V2 architecture on our own dataset, and obtained a robust model that classifies sign language correctly in the vast majority of cases. Earlier work in this field has relied on sensors (such as glove sensors) and other image processing methods (such as edge detection and the Hough transform), but these technologies are relatively expensive and out of reach for many people. During the project, different human-computer interaction methods for posture recognition were researched and assessed, and a combination of image processing strategies with human movement classification proved to be the best solution. The system can identify selected sign language signs with an accuracy of 70–80% even in low light and without a controlled background. Since few people are conversant in sign language, deaf signers often need an interpreter, which can be inconvenient and expensive; we therefore developed this free and user-friendly application. By developing algorithms that can instantly predict alphanumeric hand gestures in sign language, this research aims to close the communication gap. The main objective of this project is an intelligent computer-based system that enables deaf people to communicate efficiently with others using hand gestures.
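To make the detection pipeline concrete, here is a minimal inference-loop sketch, not the authors' code: it loads a pre-trained SSD MobileNet V2 detector from TensorFlow Hub and runs it on webcam frames with OpenCV. The TF Hub handle (a generic COCO checkpoint), the 0.5 confidence threshold, and the webcam index are assumptions; the paper instead fine-tunes the detector on its own ISL gesture dataset.

```python
# Sketch: real-time detection with a pre-trained SSD MobileNet V2.
# The model handle and threshold are assumptions; the paper fine-tunes
# on a custom ISL dataset rather than using this COCO checkpoint.
import cv2
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

cap = cv2.VideoCapture(0)  # assumed default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = tf.expand_dims(tf.convert_to_tensor(rgb, dtype=tf.uint8), 0)
    out = detector(batch)  # dict of detection tensors
    boxes = out["detection_boxes"][0].numpy()    # [N, 4], normalized coords
    scores = out["detection_scores"][0].numpy()
    classes = out["detection_classes"][0].numpy().astype(int)
    h, w = frame.shape[:2]
    for box, score, cls in zip(boxes, scores, classes):
        if score < 0.5:  # assumed confidence threshold
            continue
        y1, x1, y2, x2 = box
        cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                      (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
        cv2.putText(frame, f"class {cls}: {score:.2f}",
                    (int(x1 * w), int(y1 * h) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("ISL detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In the fine-tuned setting, the detected class index would map to one of the 36 ISL characters and be displayed on screen, as the abstract describes.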

Sign Language Recognition System

For many deaf and speech-impaired people, sign language is the principal means of communication, yet most hearing people cannot understand it and face problems when communicating with speech-impaired people. Our proposed system automatically recognizes sign language to help the two groups communicate more effectively. It recognizes hand signs with the help of specially designed gloves and translates the recognized gestures into text and voice in real time, thereby reducing the communication gap between hearing and speech-impaired people.
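The abstract gives no implementation details, so the following is a purely hypothetical sketch of how a glove-based recognizer of this kind might work: flex-sensor readings arrive over a serial port, a nearest-template match assigns a gesture label, and the label is rendered as text and speech. The serial port, the five-sensor layout, the gesture templates, and the distance threshold are all illustrative assumptions.

```python
# Hypothetical glove-based recognizer sketch: serial port, sensor layout,
# templates, and threshold are illustrative assumptions, not from the paper.
import serial   # pyserial
import pyttsx3

# Reference flex-sensor readings (one value per finger) per gesture.
TEMPLATES = {
    "HELLO":     [120, 118, 125, 122, 119],
    "THANK YOU": [300, 295, 310, 305, 298],
}

def classify(reading, max_dist=50.0):
    """Nearest-template match on raw sensor values."""
    best, best_d = None, float("inf")
    for label, ref in TEMPLATES.items():
        d = sum((a - b) ** 2 for a, b in zip(reading, ref)) ** 0.5
        if d < best_d:
            best, best_d = label, d
    return best if best_d <= max_dist else None

engine = pyttsx3.init()
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port
while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line:
        continue
    reading = [int(v) for v in line.split(",")]  # e.g. "120,118,125,122,119"
    label = classify(reading)
    if label:
        print(label)          # text output
        engine.say(label)     # voice output
        engine.runAndWait()
```

A real system would likely replace the nearest-template match with a trained classifier, but the text-plus-voice output path matches what the abstract describes.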

A Review Paper on Real Time Sign Language Detection for the Deaf and Dumb

International Journal of Advanced Trends in Computer Science and Engineering, 2022

Hand gestures are one of the modalities used in sign language for non-verbal communication. Sign language is most commonly used by deaf and dumb people, who have hearing or speech problems, to communicate among themselves or with hearing people. A plethora of sign language systems have been developed by makers around the world, but they are neither flexible nor cost-effective for the end users. This paper therefore presents a software prototype that can automatically recognize sign language, helping deaf and dumb people communicate more effectively with each other and with hearing people. Dumb people are generally deprived of normal communication with other people in society, and hearing people in turn find it hard to understand and communicate with them; such people must depend on an interpreter or on some form of visual communication. An interpreter is not always available, and visual communication is mostly difficult to understand. Because an average person is unaware of the grammar or meaning of the many gestures that make up a sign language, its use is primarily limited to signers' families and the deaf and dumb community.

Real Time Sign Language Recognition System for Hearing and Speech Impaired People

International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022

Sign language is used globally by more than 70 million hearing- and speech-impaired people and is characterized by fast, highly articulated hand gestures that are difficult for verbal speakers to understand. This limitation, combined with verbal speakers' lack of knowledge of sign language, creates a divide in which neither party can communicate effectively. To overcome it, we propose a new method for sign language recognition that uses OpenCV (a Python library) to pre-process images and extract hands of different skin tones from the background. In this method, hand gestures forming signs are detected by the YOLOv5 object detection algorithm, one of the fastest detectors to date, while convolutional neural networks (CNNs) are trained on the gestures and classify the images. We further propose a system that translates speech into sign language so that a verbal speaker's words can be conveyed to deaf/mute users: it first detects speech with the JavaScript Web Speech API and converts it to text, processes the recognized text with the Natural Language Toolkit, aligns the resulting tokens with videos from a sign language library, and finally displays the compiled output as an avatar animation for the deaf/dumb person. The proposed system offers portability, a user-friendly interface, and a voice module. The software is also very cost-effective, requiring only a laptop camera or webcam; its accuracy was compared against high-quality methods and found to be the best.
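As a rough illustration of the recognition half of this pipeline, the sketch below combines an HSV skin mask in OpenCV with YOLOv5 detection loaded via torch.hub. The HSV bounds are assumed values, and the stock `yolov5s` checkpoint is used only so the sketch runs; the paper trains YOLOv5 on its own sign-gesture dataset (which would correspond to loading custom weights instead).

```python
# Illustrative sketch: OpenCV skin segmentation + YOLOv5 detection.
# HSV bounds and the stock checkpoint are assumptions; the paper uses
# a YOLOv5 model trained on its own gesture dataset.
import cv2
import numpy as np
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Rough skin segmentation: threshold in HSV space (assumed bounds).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    skin = cv2.bitwise_and(frame, frame, mask=mask)

    # Detect on the masked frame and draw boxes with class labels.
    results = model(cv2.cvtColor(skin, cv2.COLOR_BGR2RGB))
    for *box, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = map(int, box)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 0, 0), 2)
        cv2.putText(frame, f"{model.names[int(cls)]} {conf:.2f}",
                    (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6,
                    (255, 0, 0), 2)
    cv2.imshow("sign detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The reverse, speech-to-sign direction described in the abstract would sit outside this loop: recognized text is tokenized (e.g. with nltk.word_tokenize) and each token looked up in the library of sign videos.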

Transcribing of Sign Language to Human and Computer Readable Text in Realtime using Computer Vision

IRJET, 2020

Sign language is useful for communication between mute/deaf individuals and non-mute individuals, and various research projects on different sign language systems are underway around the world. The goal of this work is to design a framework that can interpret Indian Sign Language (ISL) accurately, so that less fortunate individuals have the option to interact with people and with different technologies, and to use services such as retail stores and banks, without the need for a translator. The system introduced here describes a framework for automatic recognition of Indian Sign Language in which an ordinary camera is used to acquire the signs. To make the project usable in real-life conditions, we first created a database containing 2000 signs, with 200 pictures for every sign. Direct pixel comparison and KNN techniques were used to identify the signs.
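Direct-pixel KNN is simple enough to show end to end. The sketch below, under assumptions not stated in the abstract (a dataset/<label>/<image> folder layout, 64x64 grayscale resizing, and k=3), flattens each image into a raw pixel vector and classifies with scikit-learn's KNeighborsClassifier.

```python
# Sketch of direct-pixel KNN classification. The folder layout, image
# size, and k are illustrative assumptions, not details from the paper.
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def load_dataset(root="dataset", size=(64, 64)):
    X, y = [], []
    for label in sorted(os.listdir(root)):
        for name in os.listdir(os.path.join(root, label)):
            img = cv2.imread(os.path.join(root, label, name),
                             cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            # "Direct pixel" features: the resized image, flattened.
            X.append(cv2.resize(img, size).flatten())
            y.append(label)
    return np.array(X), np.array(y)

X, y = load_dataset()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

knn = KNeighborsClassifier(n_neighbors=3)  # assumed k
knn.fit(X_train, y_train)
print("accuracy:", knn.score(X_test, y_test))
```

Raw-pixel distances are sensitive to lighting and background, which is consistent with the camera-based, controlled-capture setup the abstract describes.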