Improved Recognition of Kurdish Sign Language Using Modified CNN
Related papers
Effective Kurdish Sign Language Detection and Classification Using Convolutional Neural Networks
Sign Language Recognition (SLR) plays an important role in the deaf community, since sign language is the medium through which daily activities such as communication, teaching, learning, and social interaction are carried out. In this paper, a real-time model for Kurdish sign language recognition has been implemented using the Convolutional Neural Network (CNN) algorithm. The main objective of this study is to recognize the Kurdish alphabet. The model has been trained and evaluated on the KuSL2022 dataset using different activation functions over a number of epochs. The dataset consists of 71,400 images covering the 34 signs of the Kurdish sign language alphabet, collected from two different datasets. The accuracy of the proposed method is evaluated on a dataset of real images collected from many users. The obtained results show that the proposed system's performance improved for both the classification and prediction models, with an average training accuracy of 99.91%. These results outperform previous studies...
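The abstract does not spell out the network itself; below is a minimal Keras sketch of a 34-class alphabet classifier of the kind described, with the activation function exposed as a parameter so different choices can be compared as in the study. The 64x64 grayscale input and layer sizes are assumptions, not the paper's configuration.

```python
# Minimal sketch of a 34-class sign-alphabet CNN; the 64x64 grayscale
# input and layer sizes are assumptions, not the paper's configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_cnn(num_classes=34, activation="relu"):
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation=activation),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Compare activation functions, as the study does:
for act in ("relu", "tanh", "elu"):
    model = build_sign_cnn(activation=act)
```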
Indonesia Sign Language Recognition using Convolutional Neural Network
International Journal of Advanced Computer Science and Applications, 2021
In daily life, deaf people use sign language to communicate with others. However, non-deaf people experience difficulties in understanding this communication. To overcome this, sign recognition via human-machine interaction can be utilized. In Indonesia, the deaf community uses a specific language, referred to as Indonesia Sign Language (BISINDO). However, only a few studies have examined this language. Thus, this study proposes a deep learning approach, namely a new convolutional neural network (CNN), to recognize BISINDO. There are 26 letters and 10 numbers to be recognized. A total of 39,455 data points were obtained from 10 respondents, accounting for the lighting and the perspective of the signer: specifically, bright and dim lighting, and first- and second-person perspectives. The architecture of the proposed network consists of four convolutional layers, three pooling layers, and three fully connected layers. This model was tested against two common CNN models, AlexNet and VGG-16. The results indicated that the proposed network is superior to a modified VGG-16, with a loss of 0.0201. The proposed network also has a smaller number of parameters than a modified AlexNet, thereby reducing computation time. Further, the model was evaluated on testing data, achieving an accuracy of 98.3%, precision of 98.3%, recall of 98.4%, and F1-score of 99.3%. The proposed model can recognize BISINDO in both dim and bright lighting, as well as signs from the first- and second-person perspectives.
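As an illustration of the stated topology (four convolutional layers, three pooling layers, three fully connected layers, 36 output classes), here is a hedged Keras sketch; the filter counts, kernel sizes, and 128x128 RGB input are assumptions, since the abstract does not give them.

```python
# Sketch of the stated topology: four convolutional layers, three pooling
# layers, three fully connected layers, 36 classes (26 letters + 10 numbers).
# Filter counts, kernel sizes, and the 128x128 RGB input are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),   # conv 1
    layers.MaxPooling2D(),                                     # pool 1
    layers.Conv2D(64, 3, activation="relu", padding="same"),   # conv 2
    layers.MaxPooling2D(),                                     # pool 2
    layers.Conv2D(128, 3, activation="relu", padding="same"),  # conv 3
    layers.Conv2D(128, 3, activation="relu", padding="same"),  # conv 4
    layers.MaxPooling2D(),                                     # pool 3
    layers.Flatten(),
    layers.Dense(256, activation="relu"),                      # fc 1
    layers.Dense(128, activation="relu"),                      # fc 2
    layers.Dense(36, activation="softmax"),                    # fc 3
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```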
A Saudi Sign Language Recognition System based on Convolutional Neural Networks
International journal of engineering research and technology, 2020
Sign language is the main communication method for deaf people. It is a collection of signs that deaf people use to communicate with each other. Deaf people find it difficult to communicate with hearing people, as most hearing people do not understand the signs of the sign language. Sign language recognition systems translate the signs into natural languages and thus narrow the gap between deaf and hearing people. Many studies have been done on different sign languages, and there is a considerable number of studies on standard Arabic sign language. In Saudi Arabia, however, deaf people use the Saudi sign language, which is different from standard Arabic. This study proposes a smart recognition system for Saudi sign language based on convolutional neural networks. The system is based on the Saudi Sign Language dictionary, which was published recently in 2018. In this study, we constructed a dataset of 40 Saudi signs with about 700 images for each sign. We then developed a deep convolutional neural network and tr...
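A dataset organized as one folder per sign, as described (40 signs, roughly 700 images each), could be loaded with a few lines of Keras; the directory name and image size below are placeholders, not details from the paper.

```python
# Hypothetical loading of a directory-per-sign dataset like the one described
# (40 signs, ~700 images each); the path and image size are placeholders.
import tensorflow as tf

train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    "saudi_signs/",          # hypothetical layout: one sub-folder per sign
    validation_split=0.2,
    subset="both",
    seed=42,
    image_size=(128, 128),
    batch_size=32,
)
print(train_ds.class_names)  # expected: the 40 sign labels
```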
Turkish sign language digits classification with CNN using different optimizers
International advanced researches and engineering journal, 2020
Sign language is a way for hearing-impaired people to communicate among themselves and with people without hearing impairment. Communication with sign language is difficult because few people know the language, and the language does not have universal patterns. Sign language interpretation is the translation of visible signs into speech or writing, and the interpretation process has reached a practical solution with the help of computer vision technology. One model widely used in computer vision, mimicking the work of the human eye in a computer environment, is deep learning. Convolutional neural networks (CNN), which belong to deep learning technology, give successful results in sign language recognition as well as other image recognition applications. In this study, a dataset containing 2,062 images of Turkish sign language digits was classified with the developed CNN model. One of the important parameters used to minimize the network error of the CNN model during training is the learning rate. The learning rate is a coefficient used to update the other parameters in the network depending on the network error. Optimizing the learning rate is important to achieve rapid progress without getting stuck in local minima while reducing network error. There are several optimization techniques used for this purpose. In this study, the success of four different training and test processes, performed with the SGD, RMSprop, Adam, and Adamax optimizers, was compared. The Adam optimizer, which is widely used today for its high performance, was found to be the most successful technique in this study, with 98.42% training and 98.55% test accuracy.
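A sketch of the comparison the study performs might look like the following; the small model, stand-in data, and learning rates are illustrative placeholders, not the paper's exact configuration.

```python
# Illustrative comparison of the four optimizers the study evaluates;
# the small model, stand-in data, and learning rates are placeholders,
# not the paper's exact configuration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_digit_cnn(num_classes=10):
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Random stand-in data in place of the 2,062-image Turkish sign-digit set.
x_train = np.random.rand(64, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 10, size=64)
x_test = np.random.rand(16, 64, 64, 1).astype("float32")
y_test = np.random.randint(0, 10, size=16)

candidates = {
    "SGD": optimizers.SGD(learning_rate=0.01),
    "RMSprop": optimizers.RMSprop(learning_rate=0.001),
    "Adam": optimizers.Adam(learning_rate=0.001),
    "Adamax": optimizers.Adamax(learning_rate=0.002),
}

for name, opt in candidates.items():
    model = build_digit_cnn()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train, epochs=5,
                        validation_data=(x_test, y_test), verbose=0)
    print(name, max(history.history["val_accuracy"]))
```

On a real run, the loop reports the best validation accuracy per optimizer, mirroring the comparison in which Adam came out ahead.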
A Sign Language Prediction Model using Convolution Neural Network
IJID (International Journal on Informatics for Development), 2021
The barrier between the hearing and the deaf communities in Kenya is a major challenge, creating a gap in the communication sector where the deaf community is left out, which leads to inequality. The study used primary and secondary data sources to obtain information about this problem, including online books, articles, conference materials, research reports, and journals on sign language and hand gesture recognition systems. To tackle the problem, a CNN was used. Naturally captured hand gesture images were converted to grayscale and used to train a classification model able to identify the English alphabet letters A-Z; the identified letters are then used to construct sentences. This is a first step towards breaking the communication barrier and the resulting inequality. A sign language recognition model will assist in bridging the exchange of information between deaf and hearing people in Kenya. The model was trained and tested on various metrics, achieving an accuracy of 99% when run for 10 epochs; the log loss metric returned a value of 0, meaning the model predicts the actual hand gesture images, and the AUC of the ROC curve reached 0.99, which is excellent.
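The evaluation step described above (log loss and ROC AUC over the 26 letter classes) can be reproduced with scikit-learn; the labels and predicted probabilities below are random stand-ins for a trained model's test output.

```python
# Reproducing the evaluation metrics described above with scikit-learn;
# y_true and probs are random stand-ins for a trained model's test output.
import numpy as np
from sklearn.metrics import log_loss, roc_auc_score

num_classes = 26                                  # letters A-Z
y_true = np.arange(500) % num_classes             # every class represented
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(num_classes), size=500)  # fake softmax output

print("log loss:", log_loss(y_true, probs, labels=np.arange(num_classes)))
print("macro ROC AUC:", roc_auc_score(y_true, probs,
                                      multi_class="ovr", average="macro"))
```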
Sign Language Recognition System using Convolutional Neural Network and Computer Vision
International Journal of Engineering Research and Technology (IJERT), 2020
https://www.ijert.org/sign-language-recognition-system-using-convolutional-neural-network-and-computer-vision
https://www.ijert.org/research/sign-language-recognition-system-using-convolutional-neural-network-and-computer-vision-IJERTV9IS120029.pdf
Conversing with a person with a hearing disability is always a major challenge. Sign language has indelibly become the ultimate panacea and is a very powerful tool for individuals with hearing and speech disabilities to communicate their feelings and opinions to the world. It makes the integration process between them and others smooth and less complex. However, the invention of sign language alone is not enough: the sign gestures often get mixed up and confused for someone who has never learnt the language or knows it in a different language. This communication gap, which has existed for years, can now be narrowed with the introduction of techniques to automate the detection of sign gestures. In this paper, we introduce a sign language recognition system for American Sign Language. In this study, the user must be able to capture images of the hand gesture using a web camera, and the system shall predict and display the name of the captured image. We use the HSV colour model to detect the hand gesture and set the background to black. The images undergo a series of processing steps, including computer vision techniques such as conversion to grayscale, dilation, and mask operations, and the region of interest, which in our case is the hand gesture, is segmented. The features extracted are the binary pixels of the images. We make use of a Convolutional Neural Network (CNN) for training and to classify the images. We are able to recognise 10 American Sign Language alphabet gestures with high accuracy. Our model has achieved a remarkable accuracy of above 90%.
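An illustrative OpenCV version of the preprocessing pipeline described above (HSV-based hand detection, black background, grayscale, dilation, masking, and binary features) might look like this; the HSV thresholds are generic skin-tone heuristics and the file name is hypothetical, neither taken from the paper.

```python
# Illustrative OpenCV version of the described pipeline: HSV hand detection,
# black background, grayscale, dilation, masking, and binary features.
# The HSV thresholds are generic skin-tone heuristics, not the paper's values.
import cv2
import numpy as np

def preprocess_hand(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([20, 150, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)               # skin-coloured pixels
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=2)
    segmented = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)  # black bg
    gray = cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)   # binary pixels
    return cv2.resize(binary, (64, 64))                 # CNN-sized input

frame = cv2.imread("gesture.jpg")                       # hypothetical image
if frame is not None:
    roi = preprocess_hand(frame)
```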
Acta Informatica Malaysia (AIM), 2023
Nowadays, Sign Language Recognition (SLR) has a significant impact in the disabled community because it is utilized as a learning tool for everyday tasks such as interaction, education, training, and human activities. Arabic, Persian, and Kurdish all share the same writing system, the Arabic script. In order to categorize sign languages written in the Arabic alphabet, this article employs convolutional neural network (CNN) and transfer learning (MobileNet) methods. The study's primary goal is to develop a common standard for alphabetic sign language in Arabic, Persian, and Kurdish. Different activation functions were used throughout the model's extensive training on the ASSL2022 dataset. There are a total of 81,857 images in the collection, gathered from two sources and representing the 40 Arabic-script-based alphabet signs. As can be seen from the results obtained, the proposed models perform well, with an average training accuracy of 99.7% for the CNN and 99.32% for transfer learning. Compared with other research on languages written in the Arabic script, this work achieves better detection and identification accuracy.
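In the spirit of the MobileNet transfer-learning approach described above, a minimal Keras sketch would freeze an ImageNet-pretrained backbone and train a new 40-class head; the input size, dropout rate, and head layers are assumptions.

```python
# Minimal transfer-learning sketch: frozen ImageNet MobileNet backbone plus
# a new 40-class head. Input size, dropout, and head layers are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNet(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(40, activation="softmax"),  # 40 Arabic-script alphabet signs
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```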
SILINGO -SIGN LANGUAGE DETECTION/ RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS
Sign language is a medium of conversation used by deaf and mute people that relies on hand gestures, movements, and expressions. Hearing- and speech-impaired individuals have difficulty conveying their thoughts and messages to other people. Sign language recognition is a topic of active research that will help people who cannot understand sign language, breaking down the communication barrier between deaf or mute people and others. Sign language recognition using hand gestures is a system that presents a novel, organic, interactive, and easy-to-use method of engaging with computers that is more natural for humans. Human-machine interfaces, sign language, and immersive game technology are all examples of applications for gesture recognition. Hearing people, on the other hand, find it difficult or impossible to converse with deaf people; they must depend on an interpreter, which is both costly and inconvenient. This project aims to provide a method that employs the layers of a Convolutional Neural Network (CNN) to detect and identify hand signs captured in real time using a device with a camera.
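The real-time capture-and-classify loop such a project describes could be sketched as follows; the saved model file, 64x64 grayscale input, and A-Z label set are hypothetical placeholders for a trained CNN.

```python
# Sketch of a real-time capture-and-classify loop; the saved model file,
# 64x64 grayscale input, and A-Z label set are hypothetical placeholders.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("sign_cnn.h5")   # hypothetical model file
labels = [chr(ord("A") + i) for i in range(model.output_shape[-1])]

cap = cv2.VideoCapture(0)                           # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    roi = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
    probs = model.predict(roi[None, ..., None], verbose=0)[0]
    cv2.putText(frame, labels[int(np.argmax(probs))], (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("sign", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):           # quit on 'q'
        break
cap.release()
cv2.destroyAllWindows()
```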
A Moroccan Sign Language Recognition Algorithm Using a Convolution Neural Network
Journal of ICT Standardization
Gesture recognition is an open problem in computer vision and one of the topics of current interest. Gesture recognition has many applications: the interpretation of sign language for deaf people, human-computer interaction, and immersive game technology. For this purpose, we have developed an image-processing model for gesture recognition based on artificial neural networks, covering data collection, identification, tracking, and classification of gestures, through to the display of the obtained results. We propose an approach to contribute to the translation of sign language into voice/text format. In this paper, we present a Moroccan sign language recognition system using a Convolutional Neural Network (CNN). This system includes a substantial dataset of more than 20 files, each containing 1,000 static images of one sign that we collected with the camera from several different angles. Different sign language models were evaluated and compared wi...