A Survey on Assistance System for Visually Impaired People for Indoor Navigation
Related papers
Mobile Assistive Application for Blind People in Indoor Navigation
Lecture Notes in Computer Science, 2020
Navigation is an important human task that relies on the sense of vision. In this context, recent technological developments provide technical assistance to support the visually impaired in their daily tasks and improve their quality of life. In this paper, we present a mobile assistive application called "GuiderMoi" that retrieves information about directions using color targets and identifies the next orientation for the visually impaired. In order to avoid detection failures and the inaccurate tracking caused by the mobile camera, the proposed method, based on the CamShift algorithm, aims to provide better location and identification of color targets. Tests were conducted in natural indoor scenes. The results, which depend on the distance and the angle of view, defined the values needed to achieve the highest rate of target recognition. This work has future perspectives, such as incorporating augmented reality and intelligent navigation based on machine learning and real-time processing.
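To make the tracking idea concrete, the following is a minimal, illustrative sketch of CamShift-based colour-target tracking with OpenCV in Python. The initial window position and the HSV range are placeholder assumptions; the paper's actual target colours and calibration are not given in the abstract.

```python
# Illustrative CamShift colour-target tracking sketch (not the paper's code).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # mobile/USB camera stream
track_window = (200, 150, 80, 80)              # initial (x, y, w, h) guess

# Build a hue histogram of the target region from the first frame.
ok, frame = cap.read()
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
x, y, w, h = track_window
roi = hsv[y:y + h, x:x + w]
mask = cv2.inRange(roi, np.array([0, 60, 32]), np.array([180, 255, 255]))
roi_hist = cv2.calcHist([roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation to follow the target.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("target", frame)
    if cv2.waitKey(30) & 0xFF == 27:           # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```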
Indoor Navigation Control System for Visually Impaired People
Blindness affects the perception of the surrounding environment. The primary requirement of any visual aid for mobility is obstacle detection. This work proposes an indoor navigation system for visually impaired people. The system presented in this study is a robust, independent and portable aid that assists the user to navigate with auditory guidance. Computer-based algorithms developed in C# for the Microsoft Xbox Kinect 360 sensor allow a device to be built for navigational purposes. The Kinect sensor streams both colour and depth data from the surrounding environment in real time, which is then processed to provide the user with directional feedback through wireless earphones. The effectiveness of the system was tested in experiments conducted with six blindfolded volunteers who successfully navigated across various indoor locations. Moreover, the user could also follow a specific individual using the output generated from the processed images.
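As a rough illustration of how depth frames can be turned into directional cues, the sketch below splits a depth image into three zones and suggests the clearest one. The zone split, the 1200 mm obstacle threshold and the synthetic frame are assumptions for demonstration; they are not taken from the paper.

```python
# Minimal sketch: depth frame -> spoken direction cue (zones and thresholds assumed).
import numpy as np

def direction_from_depth(depth_mm: np.ndarray, obstacle_mm: int = 1200) -> str:
    """Split the frame into left/centre/right zones and suggest the clearest one."""
    h, w = depth_mm.shape
    zones = {
        "left":   depth_mm[:, : w // 3],
        "centre": depth_mm[:, w // 3 : 2 * w // 3],
        "right":  depth_mm[:, 2 * w // 3 :],
    }
    # Median depth per zone; ignore zero pixels (no depth reading).
    clearance = {name: float(np.median(z[z > 0])) if np.any(z > 0) else 0.0
                 for name, z in zones.items()}
    best, dist = max(clearance.items(), key=lambda kv: kv[1])
    if dist < obstacle_mm:
        return "stop, obstacle ahead"
    return "walk straight" if best == "centre" else f"walk {best}"

# Example with a synthetic 480x640 frame: closer obstacle on the right.
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[:, 430:] = 800
print(direction_from_depth(frame))   # -> "walk left"
```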
Information and Assisted Navigation System for Blind People
Nowadays, public buildings change constantly, and people often have to take different routes to reach known destinations. At the same time, new services and places are made available to attract more people to the shopping center. This dynamic environment is usually signalled and labelled with visual marks and signs that are not appropriate for blind persons. Therefore, blind users are unintentionally deprived of full participation in society. With the purpose of equalizing access to services and spaces among all persons, this work proposes an innovative indoor navigation and information system for public buildings, namely shopping centers, based on existing technologies not previously used for this purpose. Intended to provide a comfortable and helpful aid on blind persons' trips to the shopping center, the proposed system relies on the user's smartphone and wireless sensors deployed in the environment.
IRJET- Navigation and Camera Reading System for Visually Impaired
IRJET, 2020
Today, many of the aid systems deployed for visually impaired people are made for a single purpose, be it navigation, object detection, or distance perception. Also, most of the deployed aid systems use indoor navigation that requires prior knowledge of the environment, so they often fail to help visually impaired people in unfamiliar scenarios. In this paper, we propose an aid system developed using object detection and depth perception to navigate a person without colliding with an object. The prototype developed detects different types of objects and computes their distances from the user. We also implemented a navigation feature that takes input from the user about the target destination and navigates the impaired person to his/her destination. With this system, we built a multi-feature, high-accuracy navigational aid that uses image processing to help visually impaired people in their daily life by navigating them effortlessly to their desired destination and reading text from images.
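One way such a system can attach a distance to each detected object is to read a depth map inside each bounding box, as in the hedged sketch below. The detector itself is omitted and the boxes, labels and depth values are placeholders, not the paper's implementation.

```python
# Hedged sketch: attach a median depth (metres) to detector bounding boxes.
import numpy as np

def object_distances(depth_m: np.ndarray, detections: list[dict]) -> list[dict]:
    """Return detections augmented with a 'distance_m' field."""
    out = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        patch = depth_m[y1:y2, x1:x2]
        valid = patch[patch > 0]                 # drop missing depth readings
        dist = float(np.median(valid)) if valid.size else float("nan")
        out.append({**det, "distance_m": round(dist, 2)})
    return out

# Synthetic depth map and placeholder detections for illustration only.
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
boxes = [{"label": "chair", "box": (100, 200, 180, 300)},
         {"label": "door",  "box": (400, 50, 560, 400)}]
for d in object_distances(depth, boxes):
    print(f"{d['label']} about {d['distance_m']} m ahead")
```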
Real Time Indoor Navigation System For Visually Impaired
International Journal of Engineering and Advanced Technology, 2019
Indoor navigation systems are gaining a lot of importance these days. They are particularly important for locating places inside a large university campus, airport, railway station or museum. Many mobile applications using different techniques have been developed recently. The work proposed in this paper focuses on the needs of visually challenged people while navigating in indoor environments. The approach proposed here implements the system using beacons. The application developed with the system gives audio guidance to the user for navigation.
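A simple form of beacon-based guidance is to pick the strongest beacon in a scan and announce the location mapped to it. The sketch below assumes hypothetical beacon IDs, RSSI values and a pyttsx3 voice output purely for illustration; the paper's actual beacon deployment and app logic are not described in the abstract.

```python
# Illustrative beacon-to-audio sketch (IDs, RSSI values and TTS choice assumed).
import pyttsx3

BEACON_LOCATIONS = {
    "beacon-01": "library entrance",
    "beacon-02": "lecture hall A",
    "beacon-03": "cafeteria",
}

def announce_nearest(scan_results: dict[str, int]) -> str:
    """scan_results maps beacon id -> RSSI in dBm (closer to 0 means nearer)."""
    nearest = max(scan_results, key=scan_results.get)
    place = BEACON_LOCATIONS.get(nearest, "unknown area")
    message = f"You are near the {place}"
    engine = pyttsx3.init()
    engine.say(message)
    engine.runAndWait()
    return message

print(announce_nearest({"beacon-01": -78, "beacon-02": -54, "beacon-03": -90}))
```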
An electronic travel aid for navigation of visually impaired persons
2011 Third International Conference on Communication Systems and Networks (COMSNETS 2011), 2011
This paper presents an electronic travel aid that helps blind people navigate safely and quickly; it combines an obstacle detection system using ultrasonic sensors with USB camera based visual navigation. The proposed system detects obstacles up to 300 cm via sonar and sends feedback (a beep sound) to inform the person about their location. In addition, a USB webcam is connected to an eBox 2300™ embedded system for capturing the field of view of the user, which is used to determine the properties of the obstacle, in particular, in the context of this work, to locate a human being. Identification of human presence is based on face detection and cloth texture analysis. The major constraints for these algorithms to run on the embedded system are the small image frame (160x120) with correspondingly small faces, limited memory, and the very little processing time available to meet real-time image processing requirements. The algorithms are implemented in C++ using the Visual Studio 5.0 IDE and run in a Windows CE™ environment.
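The sonar part of such an aid reduces to converting an echo round-trip time into a distance and alerting inside the 300 cm range mentioned in the abstract. The sketch below uses simulated echo times; the real hardware wiring (embedded GPIO, sensor driver) is outside this illustration.

```python
# Simple sonar ranging sketch: echo time -> distance, beep within 300 cm.
SPEED_OF_SOUND_CM_S = 34300      # at roughly 20 degrees C
ALERT_RANGE_CM = 300

def echo_to_distance_cm(echo_time_s: float) -> float:
    """Round-trip echo time -> one-way distance in centimetres."""
    return (echo_time_s * SPEED_OF_SOUND_CM_S) / 2

def feedback(echo_time_s: float) -> str:
    d = echo_to_distance_cm(echo_time_s)
    if d <= ALERT_RANGE_CM:
        return f"BEEP: obstacle at {d:.0f} cm"
    return f"clear ({d:.0f} cm)"

for t in (0.004, 0.012, 0.020):          # simulated echo round-trip times
    print(feedback(t))                   # 69 cm and 206 cm beep; 343 cm is clear
```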
AIDING VISUALLY CHALLENGED INDIVIDUAL FOR OBJECT DETECTION AND NAVIGATION USING ASSISTIVE TECHNOLOGY
Visual impairment is not a new phenomenon in society. It is a condition of deficient visual perception. Accessibility in unfamiliar environments is especially critical for visually challenged individuals; they are more prone to accidents since they cannot discern their surroundings. The widely available equipment that helps blind people is quite expensive and cannot be easily extended. Moreover, most of the prevailing techniques that aid them with IR sensors have a drawback: in the presence of sunlight they do not produce efficient results. To overcome these drawbacks, the current paper contributes two concepts. The first deals with the identification of hindrances such as tree trunks, open doors, staircases, etc. This is done with the help of an Obstacle Detection Unit (ODU), which discovers the obstacles in front of the visually impaired person by capturing an image of the object present. In the latter case, a Navigation Tracking Device (NTD) delivers a route to wherever the impaired person wishes to proceed, without any human assistance.
Intelligent Image Processing Approach for Visually Challenged Persons for Navigation Assistance
International Journal of Engineering Research and Technology (IJERT), 2014
https://www.ijert.org/intelligent-image-processing-approach-for-visually-challenged-persons-for-navigation-assistance
https://www.ijert.org/research/intelligent-image-processing-approach-for-visually-challenged-persons-for-navigation-assistance-IJERTV3IS20594.pdf
This paper examines intelligent image processing constraints which may need to be considered for visual prosthesis development and proposes a display framework that incorporates context, task and alerts related to a scene. A simulation device to investigate this framework is also described. Mobility requirements, assessment, and devices are discussed to identify the functions required by a prosthesis, and an overview of state-of-the-art visual prostheses is provided. Two main computer vision approaches are discussed with application to a visual prosthesis: information reduction and scene understanding. Further enhancement of this research is still in progress.
An automated navigation system for blind people
Bulletin of Electrical Engineering and Informatics, 2022
Proper navigation and detailed perception of familiar or unfamiliar environments play a central role in human life. The sense of sight helps humans avoid all kinds of dangers and navigate indoor and outdoor environments; these are challenging activities for blind people in all environments. Many assistive tools, such as braille compasses and white canes, have been developed thanks to technology and help them navigate around the environment. A vision and cloud-based navigation system for visually impaired or blind persons was developed. Our aim was not only to navigate them but also to let them perceive the environment in as much detail as a sighted person. The proposed system includes ultrasonic sensors to detect obstacles and a stereo camera to capture video, with the environment perceived using deep learning algorithms. A face recognition approach identifies known faces in front of the user. Blind people interact with the whole system through a speech recognition module and all th...
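A common way to implement the "known face" step is to compare a camera frame against a small gallery of reference encodings. The sketch below uses the face_recognition library with placeholder file names and a default tolerance; the abstract does not state which face recognition method the system actually uses.

```python
# Hedged known-face sketch using the face_recognition library (gallery files assumed).
import face_recognition

# Gallery of known people, one reference photo each (placeholder file names).
known = {
    "Alice": face_recognition.face_encodings(
        face_recognition.load_image_file("alice.jpg"))[0],
    "Bob": face_recognition.face_encodings(
        face_recognition.load_image_file("bob.jpg"))[0],
}

frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(list(known.values()), encoding,
                                             tolerance=0.6)
    names = [name for name, hit in zip(known, matches) if hit]
    print("Recognised:", names[0] if names else "unknown person")
```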
Smart Navigation Assistance System for Blind People
IRJET, 2022
Real-time object detection is a diverse and complex area of computer vision. If there is a single object to be detected in an image, the task is known as image localization; if there are multiple objects, it is object detection, which finds the semantic objects of a class in digital images and videos. The applications of real-time object detection include object tracking, video surveillance, pedestrian detection, people counting, self-driving cars, face detection, ball tracking in sports and many more. Convolutional Neural Networks are a representative deep learning tool for detecting objects using OpenCV (Open Source Computer Vision), a library of programming functions mainly aimed at real-time computer vision. This project proposes an integrated guidance system involving computer vision and natural language processing. A system equipped with vision, language and intelligence capabilities is attached to the blind person to capture surrounding images and is connected to a central server programmed with a Faster Region-based Convolutional Neural Network algorithm and an image detection algorithm to recognize images and multiple obstacles. The server sends the results back to the smartphone, where they are converted into speech for the blind person's guidance. Existing systems help visually impaired people, but they are not effective enough and are also expensive. We aim to provide a more effective and cost-efficient system which will not only make their lives easy and simple, but also enhance mobility in unfamiliar terrain.
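The server-side step described above can be sketched as "run a detector on an uploaded frame, then phrase the labels as a sentence for the phone to speak". The snippet below uses torchvision's COCO-pretrained Faster R-CNN with an illustrative 0.7 score threshold as a stand-in; the paper's actual model, server code and speech pipeline are not given in the abstract.

```python
# Minimal server-side sketch: Faster R-CNN detection -> spoken-summary string.
import torch
from PIL import Image
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                           FasterRCNN_ResNet50_FPN_Weights)
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]

def describe(image_path: str, threshold: float = 0.7) -> str:
    """Detect objects in an image and return a sentence suitable for TTS."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]
    names = [labels[int(i)] for i, s in zip(pred["labels"], pred["scores"])
             if s >= threshold]
    if not names:
        return "No obstacles detected ahead."
    return "Detected " + ", ".join(sorted(set(names))) + " ahead."

print(describe("frame.jpg"))   # e.g. "Detected chair, person ahead."
```

In a deployed system the returned sentence would be sent back to the smartphone and passed to a text-to-speech engine, as the abstract describes.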