B. Vidhyasagar - Academia.edu



Papers by B. Vidhyasagar

Video Captioning Based on Sign Language Using YOLOV8 Model

Springer, Cham, 2023

Sign language recognition is one of the fastest-growing research areas, and many novel techniques have recently been developed in this field. Sign language is the primary means of communication for people who are deaf or unable to speak, so real-time recognition is essential for them; others must also be able to understand this language, since it is often these individuals' only means of communication. Hand gestures are one of the non-verbal communication methods used in sign language. In this work, we propose designing and implementing a model that provides transcripts of the sign language used by such individuals during a live meeting or video conference. The dataset used in this study was downloaded from the Roboflow website and split for training and testing. Transfer learning is a key idea here, since a pretrained model is fine-tuned to recognize the hand signals. The YOLOv8 model, created by Ultralytics, is employed for this purpose and translates the letters of the alphabet (A-Z) into their corresponding text in real time. In our method, the 26 ASL signs are recognized by first extracting the essential features of each sign from the real-time input video, which are then fed into the YOLOv8 deep learning model to identify the sign. The output is matched against the signs the network was trained on and classified into the appropriate sign by comparing the extracted features with the original signs in the database.
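To make the pipeline concrete, the following is a minimal sketch of transfer learning and real-time inference with the Ultralytics YOLO API, as the abstract describes. The dataset path, class layout (a data.yaml listing the 26 classes A-Z from a Roboflow YOLO-format export), and training parameters are illustrative assumptions, not the settings used in the paper.

# Sketch: fine-tune YOLOv8 on an ASL alphabet dataset, then caption
# live video frame by frame. Paths and hyperparameters are assumed.
from ultralytics import YOLO

# Transfer learning: start from COCO-pretrained YOLOv8 weights.
model = YOLO("yolov8n.pt")
model.train(data="asl-dataset/data.yaml", epochs=50, imgsz=640)

# Real-time inference on the webcam (source=0); stream=True yields
# one Results object per frame.
for result in model.predict(source=0, stream=True, conf=0.5):
    for box in result.boxes:
        letter = result.names[int(box.cls)]  # e.g. "A" ... "Z"
        print(letter, float(box.conf))       # detected sign + confidence

In a captioning setting, the printed letters would instead be buffered and rendered as subtitle text over the meeting video; that overlay step is not shown here.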
