A COMPARATIVE STUDY USING 2D CNN AND TRANSFER LEARNING TO DETECT AND CLASSIFY ARABIC-SCRIPT-BASED SIGN LANGUAGE
Related papers
Automatic recognition of Arabic alphabets sign language using deep learning
International Journal of Electrical and Computer Engineering (IJECE), 2022
Technological advancements are helping people with special needs overcome many communication obstacles. Deep learning and computer vision models now enable unprecedented advances in human interaction. The Arabic language remains a rich research area. In this paper, different deep learning models were applied to test the accuracy and efficiency obtained in automatic Arabic sign language recognition. We provide a novel framework for the automatic detection of Arabic sign language, based on transfer learning applied to popular deep learning models for image processing. Specifically, we trained AlexNet, VGGNet, and GoogleNet/Inception models, and tested the efficiency of shallow learning approaches based on support vector machines (SVM) and nearest-neighbor algorithms as baselines. As a result, we propose a novel approach for the automatic recognition of Arabic alphabets in sign language based on the VGGNet architecture, which outpe...
Improved Recognition of Kurdish Sign Language Using Modified CNN
MDPI, 2024
The deaf community supports Sign Language Recognition (SLR), since it is used to educate individuals for communication, education, and socialization. In this study, the results of using a modified Convolutional Neural Network (CNN) technique to develop a model for real-time Kurdish sign recognition are presented. Recognizing the Kurdish alphabet is the primary focus of this investigation. Using a variety of activation functions over several iterations, the model was trained and then used to make predictions on the KuSL2023 dataset. The dataset contains a total of 71,400 pictures, drawn from two separate sources, representing the 34 signs of the Kurdish alphabet. A large collection of real user images is used to evaluate the accuracy of the suggested strategy. A novel Kurdish Sign Language (KuSL) classification model is presented in this research. Furthermore, the hand region must be identified in pictures with complex backdrops, including lighting, ambience, and image color changes of varying intensities. With its use of a genuine public dataset, real-time classification, and signer independence while maintaining high classification accuracy, the proposed technique improves on previous research on KuSL detection. The collected findings demonstrate that the proposed system performs well, with an average training accuracy of 99.05% for both classification and prediction models. Compared to earlier research on KuSL, these outcomes indicate very strong performance.
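Comparing activation functions, as this study does across training runs, amounts to building otherwise-identical CNNs with the activation swapped. A minimal sketch (the layer sizes are illustrative, not the paper's architecture; only the 34-class output matches the abstract):

```python
import torch
import torch.nn as nn

def make_cnn(activation: nn.Module, num_classes: int = 34) -> nn.Sequential:
    """Small CNN with a pluggable activation; layer sizes are illustrative."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), activation,
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), activation,
        nn.MaxPool2d(2),
        nn.Flatten(),
        # 64x64 input halved twice by pooling -> 16x16 feature maps.
        nn.Linear(32 * 16 * 16, num_classes),
    )

# Instantiate one model per candidate activation, as the study varies them.
for act in (nn.ReLU(), nn.Tanh(), nn.LeakyReLU()):
    net = make_cnn(act)
    out = net(torch.randn(1, 3, 64, 64))
```

In a real comparison each variant would be trained to convergence and scored on a held-out split before picking the best activation.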
Arabic Sign Language Recognition using Faster R-CNN
International Journal of Advanced Computer Science and Applications
Deafness does not limit its negative effects to a person's hearing; it touches every aspect of daily life. Moreover, hearing people aggravate the issue through their reluctance to learn sign language. This has resulted in a constant need for human translators to assist deaf people, which represents a real obstacle to their social life. Therefore, automatic sign language translation has emerged as an urgent need for the community. The availability and widespread use of mobile phones equipped with digital cameras have promoted the design of image-based Arabic Sign Language (ArSL) recognition systems. In this work, we introduce a new ArSL recognition system that is able to localize and recognize the alphabet of the Arabic sign language using a Faster Region-based Convolutional Neural Network (R-CNN). Specifically, Faster R-CNN is designed to extract and map image features and learn the position of the hand in a given image. Additionally, the proposed approach alleviates two challenges: the choice of the relevant features used to encode the sign's visual descriptors, and the segmentation task intended to determine the hand region. For the implementation and assessment of the proposed Faster R-CNN-based sign recognition system, we exploited VGG-16 and ResNet-18 models and collected a real ArSL image dataset. The proposed approach yielded 93% accuracy and confirmed the robustness of the model against drastic background variations in the captured scenes.
Effective Kurdish Sign Language Detection and Classification Using Convolutional Neural Networks
Sign Language Recognition (SLR) has an important role in the deaf-dumb community, since sign language is used as a medium of instruction for daily activities such as communication, teaching, learning, and social interaction. In this paper, a real-time model has been implemented for Kurdish sign recognition using a Convolutional Neural Network (CNN) algorithm. The main objective of this study is to recognize the Kurdish alphabet. The model has been trained and evaluated on the KuSL2022 dataset using different activation functions over a number of epochs. The dataset consists of 71,400 images of the 34 Kurdish sign letters, collected from two different datasets. The accuracy of the proposed method is evaluated on a dataset of real images collected from many users. The obtained results show that the proposed system's performance increased for both classification and prediction models, with an average train accuracy of 99.91%. These results outperform previous studies...
Kurdish Sign Language Recognition Based on Transfer Learning
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2023
Sign language is used to communicate with deaf and dumb people; it is difficult for ordinary people to communicate with them otherwise. Hence, computer vision and automatic identification can reduce the difficulty of reaching them. Deep learning algorithms have been used to recognize sign language across different languages and styles. Convolutional Neural Networks (CNNs), particularly pre-trained models, are widely used in computer vision. This research proposes using transfer learning and machine learning to recognize Kurdish Sign Language (KSL). A KSL dataset was created to characterize the Kurdish language at the level of numbers and letters, using pre-trained networks for feature extraction and machine learning algorithms for classification. The proposed method was tested on two datasets: KSL and American Sign Language (ASL). The VGG19 and ResNet101 networks are implemented in the feature-extraction phase with pre-trained weights. Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF) algorithms are applied in the classification stage, and a CNN is designed for the KSL model. The efficiency of the proposed models is evaluated using accuracy, recall, precision, and F1-score metrics. The outcomes illustrate that VGG19 outperforms ResNet101 and the proposed CNN for feature extraction, and that random forest is the best classifier, achieving an accuracy of 95% at the number level and 97% at the letter level for KSL and ASL.
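The pretrained-features-plus-classical-classifier pipeline this abstract describes can be sketched with random vectors standing in for VGG19 activations, which keeps the example dependency-light; the feature dimension, class count, and sample counts are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for features from a pretrained VGG19 (in the paper these come
# from the network's penultimate layer); here each class is a separable
# Gaussian cluster so the sketch runs without any image data.
n_classes, per_class, dim = 10, 30, 512
X = np.concatenate(
    [rng.normal(loc=c, size=(per_class, dim)) for c in range(n_classes)]
)
y = np.repeat(np.arange(n_classes), per_class)

# Classical classifier on top of the deep features, as in the paper's
# best-performing VGG19 + Random Forest combination.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Swapping `RandomForestClassifier` for `sklearn.svm.SVC` or `sklearn.tree.DecisionTreeClassifier` reproduces the other two classification stages the study compares.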
Arabic Sign Language Recognition using Lightweight CNN-based Architecture
International Journal of Advanced Computer Science and Applications, 2022
Communication is a critical skill for humans. People who are deprived of communicating through spoken words usually use sign language. In sign language, the main sign features are the handshape, the location, the movement, the orientation, and the non-manual component. The vast spread of mobile phones presents an opportunity for hearing-disabled people to engage more with their communities. Designing and implementing a novel Arabic Sign Language (ArSL) recognition system would significantly affect their quality of life. Deep learning models are usually too heavy for mobile phones: the more layers a neural network has, the heavier it is, yet a typical deep neural network needs a large number of layers to attain adequate classification performance. This project addresses the Arabic Sign Language recognition problem while ensuring a trade-off between optimizing classification performance and scaling down the deep network architecture to reduce computational cost. Specifically, we adapted Efficient Network (EfficientNet) models and generated lightweight deep learning models to classify Arabic Sign Language gestures. Furthermore, a real dataset was collected from many different signers performing hand gestures for thirty different Arabic letters. Appropriate performance metrics were then used to assess the classification outcomes obtained by the proposed lightweight models. In addition, preprocessing and data augmentation techniques were investigated to enhance the models' generalization. The best results were obtained using the EfficientNet-Lite 0 architecture with label smoothing as the loss function. Our model achieved 94% accuracy and proved to be effective against background variations.
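Label smoothing, which the abstract credits for the best results, softens the one-hot training targets so the model is penalized for over-confident predictions. A minimal sketch of the effect using PyTorch's built-in support (the logits are toy values, not the paper's):

```python
import torch
import torch.nn as nn

# A confidently-correct toy prediction over three classes.
logits = torch.tensor([[4.0, 0.5, 0.1]])
target = torch.tensor([0])

plain = nn.CrossEntropyLoss()(logits, target)
smooth = nn.CrossEntropyLoss(label_smoothing=0.1)(logits, target)

# With smoothing=0.1, a fraction of the target probability mass is
# spread over the wrong classes, so a very confident correct prediction
# incurs a higher loss than under the plain one-hot objective.
```

This discourages the network from saturating its outputs, which tends to improve generalization on small gesture datasets.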
Arabic Sign Language Recognition through Deep Neural Networks Fine-Tuning
International Journal of Online and Biomedical Engineering (iJOE)
Sign language is considered the main communication tool for deaf or hearing-impaired people. It is a visual language that uses the hands and other parts of the body to give those who need it full access to communication with the world. Accordingly, the automation of sign language recognition has become one of the important applications in artificial intelligence and machine learning. Specifically, Arabic sign language recognition has been studied and applied using various intelligent and traditional approaches, but with few attempts to improve the process using deep learning networks. This paper utilizes transfer learning and fine-tuning of deep convolutional neural networks (CNN) to improve the accuracy of recognizing 32 hand gestures from the Arabic sign language. The proposed methodology works by creating models matching the VGG16 and ResNet152 structures; then the pre-trained model weights are loaded into the layers of each network, and finall...
ArSL-CNN a convolutional neural network for Arabic sign language gesture recognition
Indonesian Journal of Electrical Engineering and Computer Science, 2021
Sign language (SL) is a visual means of communication for people who are Deaf or have hearing impairments. In Arabic-speaking countries there are many Arabic sign languages (ArSL), and these use the same alphabet. This study proposes ArSL-CNN, a deep learning model based on a convolutional neural network (CNN) for translating Arabic SL (ArSL). Experiments were performed using a large ArSL dataset (ArSL2018) that contains 54,049 images of 32 sign language gestures, collected from forty participants. The first experiments with the ArSL-CNN model returned train and test accuracies of 98.80% and 96.59%, respectively. The results also revealed the impact of imbalanced data on model accuracy. For the second set of experiments, various re-sampling methods were applied to the dataset. Results revealed that applying the synthetic minority oversampling technique (SMOTE) improved the overall test accuracy from 96.59% to 97.29%, yielding a statistically significant...
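The core idea behind SMOTE, which this paper uses against class imbalance, is to synthesize new minority-class samples by interpolating between existing ones rather than duplicating them. A minimal NumPy sketch of that idea (real SMOTE interpolates toward one of the k nearest neighbours, e.g. via `imblearn.over_sampling.SMOTE`; the random-partner choice here is a simplification):

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_like(X_min: np.ndarray, n_new: int) -> np.ndarray:
    """SMOTE-style oversampling sketch: each synthetic sample lies on the
    segment between a minority sample and a randomly chosen partner."""
    idx = rng.integers(0, len(X_min), size=(n_new, 2))
    lam = rng.random((n_new, 1))  # interpolation factor in [0, 1)
    return X_min[idx[:, 0]] + lam * (X_min[idx[:, 1]] - X_min[idx[:, 0]])

# Toy imbalance: only 10 minority samples; synthesize 90 more so the
# minority class matches a 100-sample majority class.
X_minority = rng.normal(size=(10, 4))
X_synth = smote_like(X_minority, n_new=90)
```

Because the synthetic points stay inside the minority class's region of feature space, the classifier sees a balanced class distribution without verbatim duplicates.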
A Deep Learning based Approach for Recognition of Arabic Sign Language Letters
International Journal of Advanced Computer Science and Applications
No one can deny that the deaf-mute community faces communication problems in daily life. Advances in artificial intelligence over the past few years have broken through this communication barrier. The principal objective of this work is to create an Arabic Sign Language Recognition (ArSLR) system for recognizing Arabic letters. The ArSLR system is developed using our image pre-processing method to extract the exact position of the hand, and we propose a deep Convolutional Neural Network (CNN) architecture that uses depth data. The goal is to make it easier for people with hearing problems to interact with hearing people. Based on user input, our method detects and recognizes hand-sign letters of the Arabic alphabet automatically. The proposed model delivers encouraging results in the recognition of Arabic sign language, with an accuracy score of 97.07%. We conducted a comparative study in order to evaluate the proposed system; the obtained results demonstrated that this method is able to recognize static signs with greater accuracy than similar studies on the same dataset.
Integrated Mediapipe with a CNN Model for Arabic Sign Language Recognition
Journal of Electrical and Computer Engineering, 2023
Deaf and dumb people struggle with communicating on a day-to-day basis. Current advancements in artificial intelligence (AI) have allowed this communication barrier to be removed. A letter recognition system for Arabic sign language (ArSL) has been developed as a result of this effort. The deep convolutional neural network (CNN) structure is used by the ArSL recognition system in order to process depth data and improve the ability of the hearing-impaired to communicate with others. In the proposed model, letters of the hand-sign alphabet and the Arabic alphabet are recognized and identified automatically based on user input. The proposed model identifies ArSL with an accuracy of 97.1%. In order to test our approach, we carried out a comparative study and discovered that it differentiates between static signs with a higher level of accuracy than prior studies achieved on the same dataset.
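The MediaPipe-plus-CNN pipeline in this paper's title rests on MediaPipe Hands emitting 21 (x, y, z) landmarks per detected hand, i.e. a 63-dimensional feature vector per frame, which is then classified. The landmark stage needs real video, so this sketch substitutes random vectors for the extracted landmarks and a k-NN classifier for the paper's CNN, purely to show the data flow; all counts are illustrative:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Stand-ins for flattened MediaPipe hand landmarks: 21 points x 3 coords
# = 63 features per sample, one tight cluster per letter class.
n_letters, per_letter = 28, 20
X = np.concatenate(
    [rng.normal(loc=i, scale=0.1, size=(per_letter, 63)) for i in range(n_letters)]
)
y = np.repeat(np.arange(n_letters), per_letter)

# Classifier over the landmark vectors (the paper trains a CNN here;
# k-NN keeps the sketch dependency-light).
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
pred = clf.predict(X[:1])
```

Working from landmarks rather than raw pixels is what lets such systems tolerate background and lighting changes: the hand geometry is already isolated before classification begins.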