A Saudi Sign Language Recognition System based on Convolutional Neural Networks

Gesture based Arabic Sign Language Recognition for Impaired People based on Convolution Neural Network

International Journal of Advanced Computer Science and Applications, 2021

Research on Arabic Sign Language has achieved notable results in identifying gestures and hand signs using deep learning. The term "forms of communication" refers to the actions hearing-impaired people use to communicate, actions that are difficult for ordinary people to comprehend. Recognition of Arabic Sign Language (ArSL) has become a challenging research subject because ArSL varies from one territory to another, and even within states. The proposed system encapsulates a Convolutional Neural Network (CNN) and is based on machine learning; wearable sensors are used for recognition, so the approach can accommodate the full range of Arabic gestures and could serve hearing-impaired members of the local Arabic community. A deep convolutional network is first developed to extract features from the data gathered by the sensing devices; these sensors can reliably recognize the 30 hand-sign letters of the Arabic sign language. The hand movements in the dataset were captured using DG5-V hand gloves fitted with wearable sensors, and the CNN is used for classification. The suggested system takes Arabic Sign Language hand gestures as input and produces vocalized speech as output, achieving a recognition rate of 90%.
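
The feature-extraction step described above can be illustrated with a minimal sketch: a 1D convolution plus ReLU over one glove-sensor channel, the kind of computation a CNN's first layer performs on sequential sensor data. The signal values and kernel are illustrative stand-ins, not the paper's trained weights.

```python
# Toy 1D convolution feature extractor over glove-sensor readings.
# Values are hypothetical; a trained CNN learns its kernels from data.

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Standard non-linearity applied after the convolution."""
    return [max(0.0, x) for x in xs]

# One finger-flex sensor channel sampled over time (made-up values).
flex_channel = [0.1, 0.9, 0.8, 0.2, 0.1, 0.7]
edge_kernel = [1.0, -1.0]   # responds to rapid changes in flexion

features = relu(conv1d(flex_channel, edge_kernel))
```

Stacking many such kernels, followed by pooling and fully connected layers, yields the deep feature hierarchy the paper feeds into its classifier.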

A Moroccan Sign Language Recognition Algorithm Using a Convolution Neural Network

Journal of ICT Standardization

Gesture recognition is an open problem in computer vision and a topic of current interest. It has many applications: interpreting sign language for deaf people, human-computer interaction, and immersive game technology. For this purpose, we developed an image-processing gesture-recognition model based on artificial neural networks, covering data collection, identification, tracking, and classification of gestures through to display of the obtained results. We propose an approach that contributes to translating sign language into voice/text format. In this paper, we present a Moroccan sign language recognition system using a Convolutional Neural Network (CNN). The system includes an important dataset of more than 20 files, each containing 1,000 static images of a sign collected with the camera from several different angles. Different sign language models were evaluated and compared wi...
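
A class-per-folder image dataset like the one described (20+ signs, roughly 1,000 images each) is typically indexed into (path, label) pairs before training. The sketch below assumes that layout and demonstrates it on a throwaway two-folder example; the folder names are placeholders, not the paper's actual classes.

```python
# Hypothetical sketch: index a class-per-folder image dataset into
# (image_path, integer_label) pairs, as commonly done before CNN training.
import tempfile
from pathlib import Path

def index_dataset(root):
    """Map each class folder to an integer label and list its image paths."""
    root = Path(root)
    classes = sorted(p.name for p in root.iterdir() if p.is_dir())
    label_of = {name: i for i, name in enumerate(classes)}
    samples = [(img, label_of[name])
               for name in classes
               for img in sorted((root / name).glob("*.png"))]
    return samples, label_of

# Demonstrate on a disposable two-class layout (names are placeholders).
with tempfile.TemporaryDirectory() as tmp:
    for cls in ("alif", "ba"):
        d = Path(tmp) / cls
        d.mkdir()
        (d / "img0.png").touch()
        (d / "img1.png").touch()
    samples, label_of = index_dataset(tmp)
```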

Arabic Sign Language Recognition through Deep Neural Networks Fine-Tuning

International Journal of Online and Biomedical Engineering (iJOE)

Sign language is considered the main communication tool for deaf or hearing-impaired people. It is a visual language that uses the hands and other parts of the body to give those who need it full access to communication with the world. Accordingly, automating sign language recognition has become one of the important applications of Artificial Intelligence and Machine Learning. Specifically, Arabic sign language recognition has been studied and applied using various intelligent and traditional approaches, but with few attempts to improve the process using deep learning networks. This paper uses transfer learning and fine-tuning of deep convolutional neural networks (CNNs) to improve the accuracy of recognizing 32 hand gestures from the Arabic sign language. The proposed methodology works by creating models matching the VGG16 and ResNet152 structures; then, the pre-trained model weights are loaded into the layers of each network, and finall...
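
The fine-tuning recipe sketched in the abstract, loading pre-trained weights, freezing the convolutional base, and training only a fresh classification head, can be shown schematically. The layer names and weight values below are placeholders, not the actual VGG16/ResNet152 graphs.

```python
# Schematic fine-tuning setup: frozen pre-trained base + trainable head.
# Layer names and weights are illustrative stand-ins.

PRETRAINED = {                      # stand-in for ImageNet weights
    "conv_block1": [0.12, -0.40],
    "conv_block2": [0.55, 0.03],
}

def build_finetune_model(num_classes):
    # Copy the pre-trained base and mark it frozen (not updated in training).
    layers = {name: {"weights": list(w), "trainable": False}
              for name, w in PRETRAINED.items()}
    # New head sized for the 32 ArSL gestures; randomly initialised in
    # practice, zeros here for determinism.
    layers["classifier"] = {"weights": [0.0] * num_classes, "trainable": True}
    return layers

model = build_finetune_model(num_classes=32)
trainable = [name for name, layer in model.items() if layer["trainable"]]
```

Freezing the base preserves the generic visual features learned on a large dataset, so only the small head must be fit to the comparatively small sign-language dataset.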

Indonesia Sign Language Recognition using Convolutional Neural Network

International Journal of Advanced Computer Science and Applications, 2021

In daily life, the deaf use sign language to communicate with others. However, the non-deaf experience difficulties in understanding this communication. To overcome this, sign recognition via human-machine interaction can be utilized. In Indonesia, the deaf use a specific language, referred to as Indonesia Sign Language (BISINDO). However, only a few studies have examined this language. Thus, this study proposes a deep learning approach, namely, a new convolutional neural network (CNN), to recognize BISINDO. There are 26 letters and 10 numbers to be recognized. A total of 39,455 data points were obtained from 10 respondents by varying the lighting and the perspective of the person: specifically, bright and dim lighting, and first- and second-person perspectives. The architecture of the proposed network consists of four convolutional layers, three pooling layers, and three fully connected layers. This model was tested against two common CNN models, AlexNet and VGG-16. The results indicated that the proposed network is superior to a modified VGG-16, with a loss of 0.0201. The proposed network also has a smaller number of parameters than a modified AlexNet, thereby reducing computation time. Further, the model was tested on held-out data, achieving an accuracy of 98.3%, precision of 98.3%, recall of 98.4%, and F1-score of 99.3%. The proposed model could recognize BISINDO in both dim and bright lighting, as well as signs from the first- and second-person perspectives.
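
The evaluation metrics reported above (precision, recall, F1) reduce to simple counts over predictions. The sketch below shows the binary per-class case with made-up labels; macro-averaging over all 36 BISINDO classes is the multi-class analogue.

```python
# Precision, recall, and F1 from raw label lists (binary, one-vs-rest).
# The example labels are made up for illustration.

def prf1(y_true, y_pred, positive):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)                 # of predicted positives, how many correct
    recall = tp / (tp + fn)                    # of true positives, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

y_true = ["A", "A", "B", "A", "B"]
y_pred = ["A", "B", "B", "A", "B"]
precision, recall, f1 = prf1(y_true, y_pred, positive="A")
```

Note that F1, as the harmonic mean, always lies between precision and recall for a single class, which is a useful sanity check when reading reported scores.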

Sign Language Recognition Using Convolutional Neural Networks

Sign language is a lingua franca among the speech- and hearing-impaired community. It is hard for most people unfamiliar with sign language to communicate without an interpreter. Sign language recognition involves tracking and recognizing the meaningful gestures humans make with the fingers, hands, head, arms, face, etc. The technique proposed in this work transcribes gestures from a sign language into a spoken language that is easily understood by the hearing. The translated gestures include alphabets and words from static images. This becomes especially important when people who rely completely on gestural sign language for communication try to communicate with a person who does not understand sign language. We aim to learn feature representations with convolutional neural networks (CNNs), which contain four types of layers: convolution layers, pooling/subsampling layers, non-linear layers, and fully connected layers. The new representation is expected to capture various image features and complex non-linear feature interactions. A softmax layer is used to recognize signs.
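
The softmax layer mentioned above converts the final layer's raw scores (logits) into class probabilities, from which the predicted sign is the most probable class. The logit values below are arbitrary examples.

```python
# Numerically stable softmax over a vector of logits, as used in the
# final classification layer. Example scores are arbitrary.
import math

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
predicted = probs.index(max(probs))       # index of the most likely sign class
```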

A Deep Learning based Approach for Recognition of Arabic Sign Language Letters

International Journal of Advanced Computer Science and Applications

No one can deny that the deaf-mute community faces communication problems in daily life. Advances in artificial intelligence over the past few years have broken through this communication barrier. The principal objective of this work is to create an Arabic Sign Language Recognition (ArSLR) system for recognizing Arabic letters. The ArSLR system is developed with our image pre-processing method to extract the exact position of the hand, and we propose a Deep Convolutional Neural Network (CNN) architecture that uses depth data. The goal is to make it easier for people with hearing problems to interact with others. Based on user input, our method automatically detects and recognizes hand-sign letters of the Arabic alphabet. The suggested model delivers encouraging results, recognizing Arabic sign language with an accuracy of 97.07%. We conducted a comparison study to evaluate the proposed system; the results demonstrate that this method recognizes static signs with greater accuracy than similar studies on the same dataset.
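
One reason depth data helps with extracting the hand's position is that the signing hand is usually the object closest to the sensor, so thresholding around the minimum depth isolates it. The toy sketch below illustrates that idea only; the paper's actual preprocessing is more involved, and the depth values and tolerance are assumptions.

```python
# Toy depth-based hand segmentation: keep pixels within a band of the
# closest depth value. Values (in mm) and the tolerance are made up.

def segment_hand(depth_map, tolerance=30):
    """Return a binary mask of pixels near the closest depth value."""
    nearest = min(min(row) for row in depth_map)
    return [[1 if d <= nearest + tolerance else 0 for d in row]
            for row in depth_map]

depth = [[900, 880, 400],
         [870, 420, 410],
         [860, 850, 840]]          # hypothetical 3x3 depth map, millimetres
mask = segment_hand(depth)
```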

Automatic recognition of Arabic alphabets sign language using deep learning

International Journal of Electrical and Computer Engineering (IJECE), 2022

Technological advancements are helping people with special needs overcome many communication obstacles. Deep learning and computer vision models are enabling unprecedented advances in human interaction. The Arabic language remains a rich research area. In this paper, different deep learning models were applied to test the accuracy and efficiency achievable in automatic Arabic sign language recognition. We provide a novel framework for the automatic detection of Arabic sign language based on transfer learning applied to popular deep learning models for image processing: we train AlexNet, VGGNet, and GoogleNet/Inception models, and also test the efficiency of shallow learning approaches based on support vector machines (SVM) and nearest-neighbor algorithms as baselines. As a result, we propose a novel approach for the automatic recognition of Arabic alphabets in sign language based on the VGGNet architecture, which outpe...
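
One of the shallow baselines mentioned above, nearest neighbors, fits in a few lines: classify a query by the label of its closest training example. The feature vectors below are tiny made-up stand-ins for real image features, and the class names are placeholders.

```python
# Minimal 1-nearest-neighbor classifier, the kind of shallow baseline
# compared against the deep models. Features and labels are made up.

def nearest_neighbor(train, query):
    """train: list of (feature_vector, label); return the closest label."""
    def dist2(a, b):
        # Squared Euclidean distance; monotone in distance, so no sqrt needed.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist2(item[0], query))[1]

train = [([0.0, 0.0], "alif"),
         ([1.0, 1.0], "ba"),
         ([0.9, 0.1], "ta")]
label = nearest_neighbor(train, [0.8, 0.2])
```

On raw pixels such baselines usually trail CNNs, which is exactly the comparison the paper uses to justify the deep models.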

SILINGO - Sign Language Detection/Recognition using Convolutional Neural Networks

Sign language is a medium of conversation used by deaf and mute people that relies on hand gestures, movements, and expressions. Hearing- and speech-impaired individuals have difficulty conveying their thoughts and messages to others. Sign language recognition is a topic of deep research: it will help people who cannot understand sign language and break down the communication barrier between deaf and mute people and everyone else. Sign language recognition using hand gestures presents a novel, organic, interactive, and easy-to-use method of engaging with computers that is more natural to humans. Human-machine interfaces, sign language interpretation, and immersive game technology are all applications of gesture recognition. Hearing people, for their part, find it difficult or impossible to converse with deaf people; they must depend on an interpreter, which is both costly and inconvenient for those trying to converse. This project aims to provide a method that employs the layers of a Convolutional Neural Network (CNN) to detect and identify hand signs captured in real time using a device with a camera.

Arabic Sign Language Recognition using Lightweight CNN-based Architecture

International Journal of Advanced Computer Science and Applications, 2022

Communication is a critical skill for humans. People who are deprived of communicating through spoken words usually use sign language, whose main sign features are handshape, location, movement, orientation, and the non-manual component. The vast spread of mobile phones presents an opportunity for hearing-disabled people to engage more with their communities, so designing and implementing a novel Arabic Sign Language (ArSL) recognition system would significantly affect their quality of life. Deep learning models are usually too heavy for mobile phones: the more layers a neural network has, the heavier it is, yet a typical deep neural network needs a large number of layers to attain adequate classification performance. This project addresses the Arabic Sign Language recognition problem while ensuring a trade-off between optimizing classification performance and scaling down the deep network's architecture to reduce computational cost. Specifically, we adapted Efficient Network (EfficientNet) models and generated lightweight deep learning models to classify Arabic Sign Language gestures. Furthermore, a real dataset was collected from many different signers performing hand gestures for thirty different Arabic alphabet letters. Appropriate performance metrics were then used to assess the classification outcomes of the proposed lightweight models, and preprocessing and data augmentation techniques were investigated to enhance the models' generalization. The best results were obtained using the EfficientNet-Lite 0 architecture with label smoothing as the loss function. Our model achieved 94% accuracy and proved to be effective against background variations.
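
Label smoothing, the loss choice reported above, replaces the one-hot target with a slightly softened distribution before applying cross-entropy, which discourages over-confident predictions. The smoothing factor and example probabilities below are illustrative.

```python
# Label-smoothing cross-entropy in miniature. The epsilon value and the
# example predicted probabilities are illustrative assumptions.
import math

def smooth_targets(num_classes, true_class, eps=0.1):
    """One-hot target softened: eps spread uniformly, 1-eps on the true class."""
    off = eps / num_classes
    targets = [off] * num_classes
    targets[true_class] += 1.0 - eps
    return targets

def cross_entropy(targets, probs):
    return -sum(t * math.log(p) for t, p in zip(targets, probs))

targets = smooth_targets(num_classes=4, true_class=2)
loss = cross_entropy(targets, [0.1, 0.1, 0.7, 0.1])
```

Because the target never reaches exactly 1.0, the loss stays bounded away from zero, nudging the model toward calibrated rather than extreme probabilities.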

Deep Learning Approach For Sign Language Recognition

Jurnal Ilmiah Teknik Elektro Komputer dan Informatika (JITEKI), 2023

Sign language is a method of communication between people with hearing loss that uses hand gestures. Each hand sign represents one meaning, but several terms have no dedicated sign, so they must be spelled out alphabetically. Problems occur in communication between hearing people and people with hearing loss because not everyone understands sign language, so a model is needed to recognize sign language, and also to serve as a learning tool for beginners who want to learn it, especially alphabetic sign language. This study aims to create a hand sign language recognition model for alphabetic letters using a deep learning approach. The main contributions of this research are real-time hand sign language image acquisition and a hand sign language recognition model for the alphabet. The model is a seven-layer Convolutional Neural Network (CNN), trained on the ASL alphabet database, which consists of 27 categories of 3,000 images each, for a total of 87,000 hand gesture images measuring 200×200 pixels. First, background correction is carried out and the input image is resized to 32×32 pixels using bicubic interpolation. Next, the dataset is split into 75% for training and 25% for validation. Finally, the model is tested on hand sign language images captured from a web camera. The test results show that the proposed model performs well, with an accuracy of 99%. The experimental results show that image preprocessing using background correction can improve model performance.
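
The 75%/25% train/validation split described above can be sketched as a deterministic shuffle followed by a cut; the sample list here is a placeholder for the image set, and the seed is an assumption added for reproducibility.

```python
# Reproducible 75/25 train/validation split. The sample list is a
# stand-in for the 87,000 images; the seed is an illustrative choice.
import random

def split_dataset(samples, train_frac=0.75, seed=42):
    items = list(samples)                 # copy so the caller's list is untouched
    random.Random(seed).shuffle(items)    # fixed seed => same split every run
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

samples = list(range(100))                # placeholder sample ids
train, val = split_dataset(samples)
```

Shuffling before cutting matters: without it, class-ordered datasets would put whole classes entirely in one partition.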