NATURAL LANGUAGE PROCESSING SERVICES IN ASSISTIVE TECHNOLOGY
Related papers
Literation Hearing Impairment (I-Chat Bot): Natural Language Processing (NLP) and Naïve Bayes Method
Journal of Physics: Conference Series, volume 1201, issue 1, IOP Publishing, 2019
Apart from advances in Artificial Intelligence and Natural Language Processing, the designed CHATBOT tries to understand user requests accurately so that it does not give wrong answers or fail to respond. Specific information about hearing impairments is still difficult to obtain: the natural approach is to ask someone knowledgeable, and such answers cannot easily be found through internet search engines, the favorite tool for most people when seeking information. Using Machine Learning with the help of Artificial Intelligence, a question-and-answer (conversation) interaction mechanism is created to provide the literacy knowledge that supports the educational process. The CHATBOT application was therefore developed in this study as a medium for retrieving information about hearing loss. The NLP method and the Naïve Bayes algorithm are used to classify user input into the classes handled by I-Chat Bot, and the extended Technology Acceptance Model is used to test the hypotheses. The result is an I-Chat Bot with artificial intelligence that understands user input, provides an appropriate response, and yields a system model that users find preferable and easy to use when searching for information about hearing impairments. Testing gave a precision of 98.6%, a recall of 88.75%, and an accuracy of 88.75%.
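To make the classification step concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of classifying a user question into an intent class with a bag-of-words model and Multinomial Naïve Bayes in scikit-learn; the example phrases and intent labels are invented for illustration.

# Minimal sketch of an intent classifier for a hearing-impairment chatbot,
# using a bag-of-words model and Multinomial Naive Bayes (scikit-learn).
# The example phrases and intent labels are illustrative, not the paper's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_phrases = [
    "what causes hearing loss",
    "how is deafness diagnosed",
    "which hearing aids are available",
    "how much does a hearing aid cost",
]
intents = ["causes", "diagnosis", "devices", "devices"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(training_phrases, intents)

# Classify a new user question into one of the known intent classes.
print(classifier.predict(["what are common causes of deafness"])[0])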
Virtual Assistant for Deaf and Dumb
International Journal for Research in Applied Science and Engineering Technology
Voice is the future of computing interfaces such as robotics and the Internet of Things, but voice-only interaction is very difficult for those who have hearing or speaking disabilities. Almost all products and applications being developed today have voice-controlled features, which creates a divide between people without disabilities and those with hearing or speaking impairments. These applications should be usable by all potential users equally, so we propose a system that bridges the gap between all potential users, whether or not they have hearing or speaking impairments. Our system takes sign-language video as its first input, breaks the gesture video into frames, and applies a Convolutional Neural Network (CNN) to these gesture frames to extract the corresponding text, which is then spoken aloud by the device running the system. This speech is fed to Google Assistant/Amazon Alexa as a second input; the response is converted to text, displayed on the screen, and also spoken aloud by the device. In this way, people with hearing or speaking disabilities can communicate with a virtual assistant.
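As a rough sketch of the frame-extraction and recognition step described above (not the authors' code), the snippet below uses OpenCV to split a gesture video into frames and a pre-trained Keras CNN to label sampled frames; the model file, input size, and gesture vocabulary are placeholders, and the recognized text would then be spoken aloud and forwarded to the assistant.

# Sketch: break a sign-language gesture video into frames and classify each
# sampled frame with a pre-trained CNN. The model file and label list are placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["hello", "thanks", "help"]          # hypothetical gesture vocabulary
model = load_model("gesture_cnn.h5")          # assumed pre-trained CNN

def video_to_text(path, frame_step=10):
    cap = cv2.VideoCapture(path)
    words = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_step == 0:
            # Resize and normalize the frame to the CNN's expected input shape.
            x = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
            probs = model.predict(np.expand_dims(x, axis=0), verbose=0)[0]
            words.append(LABELS[int(np.argmax(probs))])
        idx += 1
    cap.release()
    return " ".join(words)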
Desktop Voice Assistant System with the Help of Natural Language Processing
In the era of technology, everyone aspires to a more comfortable, convenient, and tech-savvy life, and in this hustle everyone needs a personal assistant to help with everyday tasks. A personal voice assistant is software that can carry out various tasks and provide services to a person based on spoken commands. This is accomplished through a synchronous process that involves recognizing speech patterns, creating synthetic speech, and then carrying out the given command. These assistants let users automate operations such as texting, mailing, dialing emergency contacts, suggesting home remedies for illnesses, and media playback; they may also be emotionally responsive. People are becoming reliant on technology as it advances, and the computer or laptop is one of the most widely used platforms, so we want to make using these computers more pleasant and easy. The usual way to give a command to a computer is to type it, but a more convenient approach is to speak it. Voice input is useful not just for typical users but also for those who are visually impaired and unable to provide input with a keyboard. India alone has 18.4 million blind individuals, accounting for one-fourth of all blind people worldwide; India is home to one in every three blind people. This necessitates a voice assistant that can not only receive commands via speech but also execute the required instructions and provide output by voice or another method.
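A minimal sketch of the listen, recognize, act, and speak loop such an assistant typically runs is shown below, using the Python SpeechRecognition and pyttsx3 packages; the command handling is only a stub and is not the system described in the paper.

# Sketch of a listen -> recognize -> act -> speak loop for a desktop assistant.
# Requires the SpeechRecognition and pyttsx3 packages and a working microphone.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def speak(text):
    tts.say(text)
    tts.runAndWait()

def handle(command):
    # Stub command dispatch; a real assistant would map commands to actions
    # such as sending mail, dialing contacts, or playing media.
    if "time" in command:
        from datetime import datetime
        return datetime.now().strftime("It is %H:%M")
    return "Sorry, I do not know that command yet."

with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    audio = recognizer.listen(mic)
    try:
        command = recognizer.recognize_google(audio).lower()
        speak(handle(command))
    except sr.UnknownValueError:
        speak("I did not catch that.")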
IRJET - Storytelling App for Children with Hearing Impairment Using Natural Language Processing (NLP)
IRJET, 2020
During childhood, children's teachers, parents, or grandparents read them many wonderful stories, back when the children could not yet read for themselves. Unfortunately, not everyone is blessed with the ability to hear, and children with hearing impairments may never have had the chance to know such stories, even in childhood. This project is based on an app that narrates children's stories to hearing-impaired children by taking stories as text input and producing images of sign language gestures, together with speech, as output. In this paper, Kahani, a platform that translates written English into Indian Sign Language, is presented.
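A simplified sketch of the core translation step (text in, sign-gesture images and speech out) is given below; the gesture image folder and file naming are assumptions, and a real Indian Sign Language translator such as Kahani would also handle grammar and word order, which this sketch does not.

# Sketch: map each word of a story sentence to a stored sign-gesture image
# and speak the sentence aloud. The image directory layout is hypothetical.
import os
import pyttsx3

GESTURE_DIR = "isl_gestures"   # assumed folder of <word>.png gesture images

def sentence_to_gestures(sentence):
    frames = []
    for word in sentence.lower().split():
        path = os.path.join(GESTURE_DIR, f"{word}.png")
        if os.path.exists(path):
            frames.append(path)          # show the sign image for this word
        else:
            # Fall back to fingerspelling: one image per letter.
            frames.extend(os.path.join(GESTURE_DIR, f"{c}.png") for c in word)
    return frames

tts = pyttsx3.init()
sentence = "the fox ran home"
print(sentence_to_gestures(sentence))    # ordered list of gesture images to display
tts.say(sentence)                        # narrate the sentence as speech
tts.runAndWait()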
A Text to Speech and Speech to Text Application for Students with Hearing and Speaking Impairments
2021
Hearing impairment (HI) is a condition in which a person cannot hear properly or can only hear partially; it is usually measured by the hearing threshold, expressed in decibels (dB). Deafness can be caused by accidents, old age, or illness. Speech impairment is a condition in which a person cannot speak properly or has a deficiency in the power of speech. Many text-to-speech (TTS) and speech-to-text (STT) applications have been built to make communication easier, but hardly any include a feature to record previous conversations. This study employed an iterative development model to build a text-to-speech and speech-to-text mobile application able to track and record previous conversations, which would benefit the academic and social lives of hearing- and speech-impaired students. The existing systems were analyzed, and the proposed system was designed using unified modeling language (UML) tools and technologies such as the Ionic Framework, Angular JS, Hyper...
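The conversation-recording idea can be illustrated with a small sketch in which every spoken or synthesized utterance is appended to a timestamped log that can be reviewed later; the JSON-lines storage format here is an assumption, not the application's actual design.

# Sketch: record every utterance of a TTS/STT conversation so previous
# exchanges can be reviewed later. The JSON-lines log format is an assumption.
import json
from datetime import datetime

LOG_FILE = "conversation_log.jsonl"

def log_utterance(speaker, text):
    entry = {"time": datetime.now().isoformat(), "speaker": speaker, "text": text}
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def load_history():
    try:
        with open(LOG_FILE, encoding="utf-8") as f:
            return [json.loads(line) for line in f]
    except FileNotFoundError:
        return []

log_utterance("student", "Could you repeat the assignment?")
log_utterance("teacher", "Read chapter three before Friday.")
for turn in load_history():
    print(f'{turn["time"]} {turn["speaker"]}: {turn["text"]}')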
Assistive Technology for Deaf People Based on Android Platform
Procedia Computer Science, 2016
Social communication is one of the most important pillars on which our society is based. Language is the only way to communicate and interact with each other, whether verbally or non-verbally. People with special needs are members of this society and have the right to communicate with the external environment in an easy and professional manner. This paper aims to provide an application that guarantees smooth communication with disabled users and vice versa. The key feature of this application is employing the Arabic language as the medium of communication for all the sign language terms. The power of this application appears in two aspects. First, people without any previous knowledge of sign language can communicate with the targeted users, either through voice recognition of words or by typing the words in Arabic; the application then displays the appropriate image(s) in sign language. Secondly, and more importantly, people with special needs communicate with hearing people by choosing, from the numerous categories stored in the database, the sign images on their phones that express their ideas and thoughts; the chosen set of images is then transformed into a text paragraph. We evaluated the application by testing it with real deaf and mute users in carefully created, realistic scenarios. The early results are promising: all deaf participants found the proposed technology useful, and 90% of them wanted to use it on a daily basis.
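The two translation directions described above can be sketched as a simple bidirectional word-to-image lookup; the dictionary below is a toy placeholder and does not reflect the application's real database.

# Sketch of the two translation directions: Arabic word -> sign image for
# hearing users, and selected sign images -> text for deaf users.
# The word/image dictionary is a toy placeholder.
WORD_TO_SIGN = {
    "مرحبا": "signs/hello.png",
    "شكرا": "signs/thanks.png",
    "مساعدة": "signs/help.png",
}
SIGN_TO_WORD = {img: word for word, img in WORD_TO_SIGN.items()}

def text_to_signs(sentence):
    # A hearing user types (or dictates) Arabic; show the matching sign images.
    return [WORD_TO_SIGN.get(w, "signs/fingerspell.png") for w in sentence.split()]

def signs_to_text(selected_images):
    # A deaf user taps sign images; assemble them back into a text sentence.
    return " ".join(SIGN_TO_WORD.get(img, "?") for img in selected_images)

print(text_to_signs("مرحبا شكرا"))
print(signs_to_text(["signs/help.png", "signs/thanks.png"]))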
Application of Natural Language Processing Techniques to Augmentative Communication Systems
2012
Natural language processing is a clear example of interaction between humans and computers: the computer fetches the input, transforms it into meaningful information, and finally produces natural language as output. NLP falls under the category of computational linguistics, and it offers many techniques and applications. The goal of this paper is to identify the potential natural language processing techniques that can be incorporated into augmentative communication systems. This research can extend the communication rate of children and individuals with physical disabilities.
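One NLP technique commonly used in augmentative communication systems to raise the communication rate is next-word prediction; the sketch below shows a tiny bigram-based predictor trained on an invented illustrative corpus, not a technique taken from the paper itself.

# Sketch: bigram-based next-word prediction, a common NLP technique used in
# augmentative communication systems to speed up message composition.
from collections import Counter, defaultdict

corpus = "i want water i want food i need help please help me".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word, k=3):
    # Suggest the k most likely continuations of the current word.
    return [w for w, _ in bigrams[prev_word].most_common(k)]

print(predict("i"))      # ['want', 'need']
print(predict("help"))   # ['please', 'me']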
An Artificial Intelligence-Based Speaking System for Dumb People
Zenodo (CERN European Organization for Nuclear Research), 2023
Gestures are the main mode of communication for deaf-mute people worldwide. This gesture-based voice system uses machine learning to provide a real-time, vision-based system for monitoring hand and finger gestures. It was developed in Python on a Raspberry Pi with a camera module, using the OpenCV computer vision library. The Raspberry Pi runs an image processing technique that tracks the fingers of the hand using extracted attributes. The main purpose of a gesture-based speaking system is to enable communication between humans and computers; this leads to a system that can recognize and track known objects and has surveillance and application capabilities. The major goal of the proposed work was to allow the system to function properly: to recognize and track specific properties of objects captured by the Raspberry Pi camera module using an appropriate image processing technique. The feature extraction technique of the OpenCV library for Python runs on the Raspberry Pi with an external camera. A gesture-based speaking system using machine learning provides a new, intuitive, and simple way to communicate with computers in a more human-like manner.
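As a hedged illustration of the vision step (not the paper's exact pipeline), the sketch below thresholds a camera frame, finds the largest contour, counts fingers from OpenCV convexity defects, and maps the count to a spoken phrase; the thresholds and phrase mapping are assumptions.

# Sketch: count raised fingers in a camera frame with OpenCV convexity
# defects and map the count to a phrase. Thresholds and phrases are assumptions.
import cv2
import numpy as np

PHRASES = {1: "yes", 2: "no", 3: "thank you", 4: "help", 5: "hello"}

def count_fingers(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)       # assume the hand is the largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each deep convexity defect roughly corresponds to a gap between fingers.
    gaps = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
    return min(gaps + 1, 5)

cap = cv2.VideoCapture(0)              # Pi camera or USB webcam
ok, frame = cap.read()
cap.release()
if ok:
    print(PHRASES.get(count_fingers(frame), "unknown gesture"))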