A Deep Learning Framework for Real-Time Sign Language Recognition Based on Transfer Learning
Related papers
Sign Language Recognition using Deep Learning
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022
Millions of people with speech and hearing impairments communicate with sign languages every day. For hearing-impaired people, gesture recognition is a natural way of communicating, much like voice recognition is for most people. In this study, we look at the problem of translating sign language to text and propose an improved solution based on machine learning techniques. We aim to build a system that hearing-impaired people can use in their everyday lives to promote communication and collaboration between hearing-impaired people and people who are not trained in American Sign Language (ASL). To develop a deep learning model for the ASL dataset, we use Transfer Learning in combination with Data Augmentation.
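A minimal sketch of the transfer-learning-with-augmentation recipe this abstract describes, assuming a folder of ASL images with one subdirectory per class; the MobileNetV2 backbone, class count, and hyperparameters are illustrative assumptions, not the paper's stated choices:

```python
# Transfer learning with data augmentation for ASL classification (sketch).
# Assumes "asl_train/" contains one subdirectory per class; the backbone and
# hyperparameters below are assumptions, not taken from the paper.
import tensorflow as tf

NUM_CLASSES = 29  # e.g., 26 letters + space/delete/nothing in common ASL datasets

train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_train/", image_size=(224, 224), batch_size=32)

# Augmentation layers are only active during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pre-trained features; fine-tune later if needed

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```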
Sign Language Recognition System Using TensorFlow Object Detection API
Communications in Computer and Information Science
Communication is the act of sharing or exchanging information, ideas, or feelings. To establish communication between two people, both must know and understand a common language. In the case of deaf and dumb people, however, the means of communication are different: deaf is the inability to hear and dumb is the inability to speak. They communicate using sign language among themselves and with other people, but most people do not take the importance of sign language seriously. Not everyone knows and understands sign language, which makes communication between a hearing person and a deaf and dumb person difficult. To overcome this barrier, one can build a model based on machine learning: a model can be trained to recognize different gestures of sign language and translate them into English, helping many people communicate and converse with deaf and dumb people. Existing Indian Sign Language recognition systems are designed using machine learning algorithms with single- and double-handed gestures, but they are not real-time. In this paper, we propose a method to create an Indian Sign Language dataset using a webcam and then, using transfer learning, train a TensorFlow model to create a real-time Sign Language Recognition system. The system achieves a good level of accuracy even with a limited-size dataset.
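A sketch of the webcam dataset-collection step this abstract describes; the gesture labels, image counts, and paths are assumptions. The collected images would then typically be annotated (e.g., with a tool such as LabelImg) before fine-tuning a pre-trained detector with the TensorFlow Object Detection API:

```python
# Webcam image collection for a sign-language dataset (sketch).
# Labels, counts, and the capture cadence are illustrative assumptions.
import os
import time
import cv2

LABELS = ["hello", "thanks", "yes", "no"]   # hypothetical ISL gesture classes
IMAGES_PER_LABEL = 20

cap = cv2.VideoCapture(0)
for label in LABELS:
    os.makedirs(os.path.join("dataset", label), exist_ok=True)
    print(f"Collecting images for '{label}' ...")
    time.sleep(3)  # time to get the gesture ready
    for i in range(IMAGES_PER_LABEL):
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join("dataset", label, f"{label}_{i}.jpg"), frame)
        cv2.imshow("capture", frame)
        cv2.waitKey(500)  # capture roughly two frames per second
cap.release()
cv2.destroyAllWindows()
```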
American Sign Language Recognition Based on Transfer Learning Algorithms
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2023
This research focuses on recognizing American Sign Language (ASL) letters and numbers, addressing the evolving technology landscape and the growing demand for improved user experiences among those primarily using sign language for communication. Leveraging deep learning, particularly through transfer learning, this study aims to enhance ASL recognition technology. Various deep learning models, including VGG16, ResNet50, MobileNetV2, InceptionV3, and CNN, are evaluated using an ASL dataset sourced from the Modified National Institute of Standards and Technology (MNIST) database, featuring ASL alphabetic letters represented through hand gestures. InceptionV3 emerges as the top-performing model, achieving an accuracy of 0.96. Transfer learning, which fine-tunes pre-trained models with ASL data, significantly improves recognition accuracy, making it especially valuable when labeled ASL data is limited. While InceptionV3 stands out, other models like VGG16, MobileNetV2, and ResNet50 demonstrate acceptable performance, offering flexibility for model selection based on specific application needs and computational resources. These findings underscore the effectiveness of deep learning and transfer learning techniques, providing a foundation for intuitive sign language recognition systems and contributing to breaking down communication barriers for the deaf and mute community.
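A minimal sketch of the backbone comparison this abstract describes, not the paper's actual code. It assumes Sign Language MNIST-style data: 28x28 grayscale images with 24 static letter classes (J and Z involve motion and are excluded); images are resized and repeated to 3 channels to satisfy the ImageNet backbones' input requirements, and per-backbone preprocessing is omitted for brevity:

```python
# Comparing pre-trained backbones via transfer learning (sketch).
# Input sizing, training schedule, and the placeholder data are assumptions.
import numpy as np
import tensorflow as tf

def build_transfer_model(backbone_fn, num_classes=24, size=96):
    base = backbone_fn(input_shape=(size, size, 3),
                       include_top=False, weights="imagenet")
    base.trainable = False  # keep ImageNet features frozen
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

backbones = {
    "VGG16": tf.keras.applications.VGG16,
    "ResNet50": tf.keras.applications.ResNet50,
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "InceptionV3": tf.keras.applications.InceptionV3,
}

# Placeholder arrays standing in for the Sign Language MNIST data.
x_train = np.random.rand(256, 28, 28).astype("float32")
y_train = np.random.randint(0, 24, size=256)

x = tf.image.resize(x_train[..., None], (96, 96))  # 28x28 -> 96x96
x = tf.repeat(x, 3, axis=-1)                       # grayscale -> 3 channels

for name, fn in backbones.items():
    model = build_transfer_model(fn)
    hist = model.fit(x, y_train, epochs=1, verbose=0)
    print(name, "train acc:", hist.history["accuracy"][-1])
```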
Transfer learning for Azerbaijani Sign Language Recognition
Informatics and control problems, 2022
The goal of sign language technologies is to bridge the communication gap between the hearing-impaired community and the rest of society. Real-time Sign Language Recognition (SLR) is a state-of-the-art subject that promises to facilitate communication between the hearing-impaired community and others. Our research uses transfer learning to provide vision-based sign language recognition. We investigated recent works that use CNN-based methods and provide a literature review of deep learning systems for the SLR problem. This paper discusses the architecture of deep learning methods for SLR systems and explains a transfer learning application for fingerspelling sign classification. For the experiments, we used the Azerbaijani Sign Language Fingerspelling dataset and achieved 88.0% accuracy.
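A sketch of a common two-stage transfer-learning recipe for fingerspelling classification of the kind this abstract describes; the ResNet50 backbone, the 32-class head (the Azerbaijani alphabet has 32 letters), the layer counts, and the learning rates are all assumptions, since the abstract does not specify them:

```python
# Two-stage transfer learning for fingerspelling classification (sketch).
# Backbone, class count, layer counts, and learning rates are assumptions.
import tensorflow as tf

base = tf.keras.applications.ResNet50(input_shape=(224, 224, 3),
                                      include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="softmax"),  # one unit per sign (assumed)
])

# Stage 1: train only the new classifier head.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds/val_ds assumed

# Stage 2: unfreeze the top of the backbone and fine-tune at a low learning rate.
base.trainable = True
for layer in base.layers[:-30]:   # keep all but the last ~30 layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```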
Real Time Video Recognition of Signs for Deaf and Dumb Using Deep Learning
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022
Communication is one of the essential prerequisites for survival in society. Sign language is a typical communication method for the deaf and dumb community. It comprises a wide range of gestures, actions, and even facial expressions. Sign language is used by 70 million people around the world. Understanding sign language is one of the primary enablers in helping sign language users communicate with the rest of society. The deaf and dumb community lags behind in interaction with ordinary people, which creates a huge gap between them, since most of society has no knowledge of sign language. In this project, an application is created that will serve as a learning tool for beginners in sign language and that involves hand recognition. The application translates sign language to text: it uses the mobile camera to capture an image of the hand gesture, the captured image then goes through a series of operations, and a CNN model is used to extract the features of the captured image and translate it into text.
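A minimal sketch of the capture, preprocess, predict, and display pipeline the abstract outlines; the saved model file, fixed hand region, input size, and label set are illustrative assumptions:

```python
# Camera -> preprocess -> CNN -> text overlay (sketch).
# Model file, region of interest, and labels are assumptions.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("sign_cnn.h5")   # hypothetical trained CNN
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]                    # fixed hand region (assumed)
    x = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[None, ...], verbose=0)[0]
    text = LABELS[int(np.argmax(probs))]
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, text, (100, 90), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("sign-to-text", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```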
Isolated sign language recognition using hidden transfer learning
Zenodo (CERN European Organization for Nuclear Research), 2023
Sign language (SL) is a visual language that people with speech and hearing disabilities use in their everyday conversations. It is an entirely visual means of communication with its own native grammar. Sadly, learning and practicing sign language is not common in our society; as a result, this research created a prototype for sign language recognition. Hand detection was used to create a system that will serve as a learning tool for sign language beginners. We have created an improved Deep CNN model that can recognize which letter, word, or digit of American Sign Language (ASL) is being signed from an image of a signing hand. We extracted features from the images using Transfer Learning and built a model using a Deep Convolutional Neural Network (Deep CNN). We evaluated the proposed model on our custom dataset as well as on an existing dataset. Our improved Deep CNN model gives only a 4.95% error rate. In addition, we compared the improved Deep CNN model with other traditional methods; the improved Deep CNN model achieved an accuracy rate of 95% and outperforms the other models.
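A short sketch of the evaluation step behind figures like those above, i.e. computing accuracy and its complementary error rate on a held-out set; the saved model name, test directory, and input size are assumptions:

```python
# Held-out evaluation: accuracy and error rate (sketch).
# Model file, dataset path, and image size are assumptions.
import tensorflow as tf

model = tf.keras.models.load_model("improved_deep_cnn.h5")  # hypothetical
test_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_test/", image_size=(64, 64), batch_size=32, shuffle=False)

loss, accuracy = model.evaluate(test_ds)
print(f"accuracy:   {accuracy:.2%}")
print(f"error rate: {1 - accuracy:.2%}")   # the paper reports about 4.95%
```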
Sign Language Recognition System for Dumb and Deaf using Deep Learning
Zenodo (CERN European Organization for Nuclear Research), 2023
The process of communication involves sharing and exchanging information, thoughts, and ideas. Communication is categorized into two basic types: verbal, in which we listen to others to understand their meaning, and non-verbal, in which the message is transmitted through signals. Sign language is a non-verbal form of communication. It is the language used by the dumb and deaf communities and is among the oldest languages used for communication. People outside these communities typically cannot understand the sign language used by dumb and deaf people. Sign languages comprise several components, such as word-level sign vocabulary, fingerspelling, and non-manual features. We have implemented American Sign Language, which is a fingerspelling sign language. In our method, we created a dataset, applied filters to the dataset, trained the model, and predicted the output in a text format that is understandable to everyone.
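A sketch of the kind of filtering step the abstract mentions applying to the dataset; the specific chain here (grayscale, Gaussian blur, adaptive threshold) is a common choice for isolating hand shapes in fingerspelling data and is an assumption, not the paper's stated filters:

```python
# Per-image filtering before training (sketch).
# The filter chain and sample paths are assumptions.
import cv2

def preprocess(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # drop color information
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)         # suppress sensor noise
    binary = cv2.adaptiveThreshold(                     # isolate the hand shape
        blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 11, 2)
    return cv2.resize(binary, (64, 64))

mask = preprocess("dataset/A/A_0.jpg")   # hypothetical sample path
cv2.imwrite("processed/A_0.png", mask)
```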
Real Time Sign Language Recognition Using Deep Learning
Sign language is an extremely important communication tool for many deaf and mute people, so we propose a model to recognize sign gestures using YOLOv5 (You Only Look Once, version 5). This model can detect sign gestures in complex environments as well. The model achieved an accuracy of 88.4%, with a precision of 76.6% and a recall of 81.2%. The proposed model was evaluated on a labeled dataset from Roboflow; additionally, we added some images for training and testing to improve accuracy. We compared our model with a CNN (convolutional neural network), which achieved an accuracy of 52.98%. We also checked this model for real-time detection and obtained accurate results.
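A minimal sketch of real-time sign detection with YOLOv5 loaded via torch.hub; the weights file "sign_yolov5s.pt" is a hypothetical model fine-tuned on a sign dataset such as the Roboflow one mentioned above:

```python
# Real-time sign detection with a custom YOLOv5 model (sketch).
# The weights file is a hypothetical fine-tuned checkpoint.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="sign_yolov5s.pt")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # YOLOv5 expects RGB
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
    cv2.imshow("YOLOv5 signs", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```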
Real Time Sign Language Detection
IRJET, 2022
Over the years, communication has played a vital role in the exchange of information and feelings in everyday life. Sign language is the main medium through which deaf and mute individuals can interact with the rest of the world, using various hand motions. With the advances in machine learning, it is possible to detect sign language in real time. We have utilized the OpenCV Python library, the TensorFlow Object Detection pipeline, and transfer learning to train a deep learning model that detects sign language in real time.
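A sketch of the real-time detection loop such a system runs, assuming a detector already fine-tuned and exported with the TensorFlow Object Detection API; the exported SavedModel path, label map, and score threshold are assumptions:

```python
# Real-time inference with a TF Object Detection API SavedModel (sketch).
# Model path, label map, and threshold are assumptions.
import cv2
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")
LABELS = {1: "hello", 2: "thanks", 3: "yes", 4: "no"}   # hypothetical label map

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    inp = tf.convert_to_tensor(frame[None, ...], dtype=tf.uint8)
    out = detect_fn(inp)
    h, w = frame.shape[:2]
    for box, cls, score in zip(out["detection_boxes"][0].numpy(),
                               out["detection_classes"][0].numpy(),
                               out["detection_scores"][0].numpy()):
        if score < 0.6:
            continue
        y1, x1, y2, x2 = (box * [h, w, h, w]).astype(int)  # boxes are normalized
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{LABELS.get(int(cls), '?')} {score:.2f}",
                    (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```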
SILINGO - SIGN LANGUAGE DETECTION/RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS
Sign Language is a medium of conversation used by deaf and mute people that relies on hand gestures, movements, and expressions. Hearing- and speech-impaired individuals have difficulty conveying their thoughts and messages to other people. Sign language recognition is a topic of deep research and will help people who cannot understand sign language, breaking down the communication barrier between deaf/mute people and others. Sign Language Recognition using hand gestures presents a novel, organic, interactive, and easy-to-use method of engaging with computers that is more natural to humans. Human-machine interfaces, sign languages, and immersive game technology are all examples of applications for gesture recognition. People who are not deaf, on the other hand, find it difficult or impossible to converse with deaf people; they must depend on an interpreter, which is both costly and inconvenient for those trying to converse with deaf or mute people. This project aims to provide a method that can employ the layers of a Convolutional Neural Network (CNN) to detect and identify hand signs captured in real time using a device with a camera.
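A minimal sketch of the kind of layered CNN such a project builds; the filter counts, kernel sizes, input resolution, and 26-letter output head are illustrative assumptions rather than the project's documented architecture:

```python
# A small layered CNN for hand-sign classification (sketch).
# Architecture details below are assumptions, not the project's exact design.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),           # grayscale hand image
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # low-level edges
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # hand-shape patterns
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),  # sign-level features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(26, activation="softmax"),    # one output per letter
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```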