Navigation Assistance for the Visually Impaired Using RGB-D Sensor With Range Expansion

Indoor assistance for visually impaired people using a RGB-D camera

2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), 2016

In this paper a navigational aid for visually impaired people is presented. The system uses an RGB-D camera to perceive the environment and implements self-localization, obstacle detection and obstacle classification. The novelty of this work is threefold. First, self-localization is performed by means of a novel camera tracking approach that uses both depth and color information. Second, to provide the user with semantic information, obstacles are classified as walls, doors, steps and a residual class that covers isolated objects and bumpy parts on the floor. Third, to guarantee real-time performance, the system is accelerated by offloading parallel operations to the GPU. Experiments demonstrate that the whole system runs at 9 Hz.

Indoor Navigation Assistance for Visually Impaired People via Dynamic SLAM and Panoptic Segmentation with an RGB-D Sensor

Lecture Notes in Computer Science, 2022

Exploring an unfamiliar indoor environment and avoiding obstacles is challenging for visually impaired people. Several current approaches achieve avoidance of static obstacles based on mapping of indoor scenes. To address the issue of distinguishing dynamic obstacles, we propose an assistive system with an RGB-D sensor that detects dynamic information in a scene. Once the system captures an image, panoptic segmentation is performed to obtain prior dynamic object information. With sparse feature points extracted from the images and the depth information, the user's pose can be estimated. After the egomotion estimation, dynamic objects can be identified and tracked. Then, the poses and speeds of tracked dynamic objects can be estimated and passed to the user through acoustic feedback.
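The masking step the abstract describes — using panoptic segmentation output to separate a-priori dynamic objects from the static scene before pose estimation — can be sketched as follows. This is a generic illustration, not the paper's implementation; the boolean mask, the (row, col) point format and the "person" class are assumptions.

```python
import numpy as np

def split_static_dynamic(points, dynamic_mask):
    """Split sparse feature points into static and dynamic sets.

    points       : (N, 2) integer array of (row, col) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where a panoptic segmenter
                   labelled an a-priori dynamic class (e.g. 'person')
    """
    on_dynamic = dynamic_mask[points[:, 0], points[:, 1]]
    return points[~on_dynamic], points[on_dynamic]

# Toy example: a 4x4 frame with one 2x2 "person" region.
mask = np.zeros((4, 4), dtype=bool)
mask[0:2, 0:2] = True
pts = np.array([[0, 0], [3, 3], [1, 1], [2, 3]])
static_pts, dynamic_pts = split_static_dynamic(pts, mask)
print(len(static_pts), len(dynamic_pts))  # 2 2
```

Only the static points would then feed the egomotion estimate, while the dynamic ones seed per-object tracking.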

Demo: Assisting Visually Impaired People Navigate Indoors

2016

Research in Artificial Intelligence, Robotics and Computer Vision has recently made great strides in improving indoor localization. Publicly available technology now allows for indoor localization with very small margins of error. In this demo, we show a system that uses state-of-the-art technology to help visually impaired people navigate indoors. Our system takes advantage of spatial representations from CAD files or floor plan images to extract valuable information that can later be used to improve navigation and human-computer interaction. Using depth information, our system is capable of detecting obstacles and guiding the user to avoid them.

An RGB-D Camera-based Travel Aid for the Blind

Open Access Biostatistics & Bioinformatics, 2018

In this paper, we present the application of an RGB-D camera in developing a travel aid for blind or visually impaired people. The goal of the proposed travel aid is to provide the user with information about obstacles along the travelled path so that he or she may walk safely and quickly in an unfamiliar environment. The performance of the travel aid greatly depends on whether the ground plane can be successfully detected. In this paper, we present a new approach to detecting the ground plane from a depth image. Two different experiments were designed to test the performance of the proposed travel aid.
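The paper proposes its own ground-plane detector; a common baseline for the same task is to fit the dominant plane in the depth point cloud with RANSAC. The sketch below is that generic baseline with illustrative thresholds, not the paper's method.

```python
import numpy as np

def ransac_ground_plane(points, iters=200, thresh=0.02, seed=0):
    """Fit a dominant plane to a 3-D point cloud with RANSAC.

    points : (N, 3) array of (x, y, z) coordinates in metres.
    Returns (normal, d) for the plane n.p + d = 0, plus the inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# Synthetic scene: a flat floor at y = 0 plus a box-shaped obstacle.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-2, 2, 500),
                         np.zeros(500),
                         rng.uniform(0.5, 4, 500)])
box = np.column_stack([rng.uniform(-0.2, 0.2, 60),
                       rng.uniform(0.2, 0.6, 60),
                       rng.uniform(1, 1.4, 60)])
cloud = np.vstack([floor, box])
plane, inliers = ransac_ground_plane(cloud)
print(inliers.sum())  # 500: all floor points, no box points
```

Everything outside the fitted plane is then a candidate obstacle, which is why the abstract ties the aid's overall performance to ground-plane detection.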

A Kinect Based Indoor Navigation System for the Blind

2015

Thesis by Alexander Belov, Guodong Fu, Janani Gururam, Francis Hackenburg, Yael Osman, John Purtilo, Nicholas Rossomando, Ryan Sawyer, Ryan Shih, Emily True, Agnes Varghese and Yolanda Zhang; directed by Professor Rama Chellappa, Chair, Department of Electrical and Computer Engineering. Team NAVIGATE aims to create a robust, portable navigational aid for the blind. Our prototype uses depth data from the Microsoft Kinect to perform real-time obstacle avoidance in unfamiliar indoor environments. The device augments the white cane by performing two significant functions: detecting overhanging objects and identifying stairs. Based on interviews with blind individuals, we found a combined audio and haptic feedback system best for communicating environmental information. Our prototype uses vibration motors to indicate the presence of an obstacle and an auditory command to alert the user to stairs ahead. Through multiple trials ...

Depth Camera Based Navigation System for Blind People Assistance

United International Journal for Research & Technology, 2021

Blind people are confronted with numerous difficulties in going about their daily lives. This research study investigates the efficacy of the diverse systems developed for the navigational assistance of blind and visually impaired people and proposes a new system. While traditional systems rely on assistive hardware such as canes, suits and sticks, our proposed system explores the use of depth cameras. In contrast to general cameras, depth cameras provide the distance from the camera to each object in the field of view. This information, in conjunction with image processing, is used in the current work for efficient identification of obstacles and provision of accurate navigational directives. Currently, the proposed system can inform the user of any object(s) appearing in their frontal path and suggest directions to bypass the object. The system was validated in real time, and the results suggest it has high potential for practical use.
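A minimal sketch of how a depth frame can be turned into a bypass directive of the kind the abstract describes: split the frame into left/centre/right thirds and steer towards the clearer side when the centre is blocked. The thirds-based split and the one-metre threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def bypass_directive(depth, obstacle_dist=1.0):
    """Suggest a direction from a depth frame (metres, H x W).

    If the centre third contains anything closer than `obstacle_dist`,
    steer towards whichever side has more mean clearance.
    """
    left, centre, right = np.array_split(depth, 3, axis=1)
    if centre.min() >= obstacle_dist:
        return "forward"
    return "left" if left.mean() > right.mean() else "right"

frame = np.full((12, 12), 3.0)      # open corridor, 3 m everywhere
print(bypass_directive(frame))      # forward
frame[:, 4:8] = 0.5                 # obstacle dead ahead
frame[:, 8:] = 2.0                  # right side partially blocked
print(bypass_directive(frame))      # left
```

A real system would also filter sensor noise and invalid (zero) depth readings before thresholding.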

Staircase Detection to Guide Visually Impaired People: A Hybrid Approach

Revue d'Intelligence Artificielle, 2019

Eyes and other visual organs are essential to human physiology, owing to their ability to receive subtle details and relay them to the brain. However, some individuals are unable to visually perceive their surroundings. These visually impaired people face various hindrances in daily life and cannot navigate without guidance. To help them navigate their surroundings, this paper develops a hybrid system for detecting staircases and the ground, using a pre-trained model and an ultrasonic sensor. The proposed system consists of an ultrasonic sensor, an RGB-D camera, a Raspberry Pi and a buzzer fixed on a walking stick. During the detection process, staircase images are captured by the RGB-D camera and then compared with pre-trained template images. Finally, our system was applied to detect different staircase images under various conditions (e.g. darkness and noise) and found to achieve an average accuracy of 98.73%. This research provides an effective aid for the visually impaired.
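Comparing captured images against pre-trained templates, as the abstract describes, is commonly done with normalized cross-correlation (NCC). The sketch below shows NCC matching on toy patches; the alternating-band "staircase" template, patch size and 0.8 threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def matches_template(image, template, threshold=0.8):
    """Declare a staircase when the patch correlates with the template."""
    return ncc(image, template) >= threshold

# Crude 6x6 "staircase" template: alternating bright treads / dark risers.
template = np.repeat([[1.0], [0.0], [1.0], [0.0], [1.0], [0.0]], 6, axis=1)
rng = np.random.default_rng(0)
noisy_stairs = template + rng.normal(0, 0.05, template.shape)
flat_wall = np.full((6, 6), 0.7)
print(matches_template(noisy_stairs, template))  # True
print(matches_template(flat_wall, template))     # False
```

NCC is invariant to brightness and contrast shifts, which helps under the dark conditions the abstract mentions.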

Blind Navigation Assistance for Visually Impaired Based on Local Depth Hypothesis from a Single Image

Assisting the visually impaired along their navigation path is a challenging task that has drawn the attention of several researchers. Many techniques based on RFID, GPS and computer vision modules are available for blind navigation assistance. In this paper, we propose a depth estimation technique from a single image based on a local depth hypothesis, requiring no user intervention, and its application to assisting visually impaired people. The ambient space ahead of the user is captured by a camera and the captured image is resized for computational efficiency. The obstacles in the foreground of the image are segregated using edge detection followed by morphological operations. Then depth is estimated for each obstacle based on the local depth hypothesis. The estimated depth map is compared with the reference depth map of the corresponding depth hypothesis, and the deviation between the two is used to retrieve spatial information about the obstacles ahead of the user.
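Single-image depth hypotheses of this kind rest on monocular cues; one widely used cue is that, under a ground-plane assumption, an obstacle whose base sits lower in the image is closer to the camera. The sketch below illustrates that cue in isolation; the linear mapping and the `max_depth` value are assumptions, not the paper's formulation.

```python
import numpy as np

def depth_from_base_row(obstacle_mask, max_depth=10.0):
    """Assign a relative depth to an obstacle from the row of its base.

    obstacle_mask : (H, W) boolean mask of one segmented obstacle.
    An object whose lowest pixel sits nearer the bottom of the image is
    assumed closer; depth falls off linearly with base row.
    """
    h = obstacle_mask.shape[0]
    rows = np.where(obstacle_mask.any(axis=1))[0]
    if len(rows) == 0:
        return max_depth            # empty mask: nothing in the way
    base_row = rows.max()           # lowest pixel of the obstacle
    return max_depth * (1.0 - base_row / (h - 1))

# Two 20x20 masks: one obstacle low in the frame, one high up.
near = np.zeros((20, 20), dtype=bool); near[14:19, 5:9] = True
far = np.zeros((20, 20), dtype=bool);  far[2:6, 5:9] = True
print(depth_from_base_row(near) < depth_from_base_row(far))  # True
```

Applying this per segmented obstacle yields a coarse depth map of the sort the abstract compares against a reference map.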

Visually Impaired People Navigation and Obstacle Detection Using Kinect Sensor

IRJET, 2020

People with visual impairment find it very difficult to navigate without the help of others, and previous technologies developed to solve this issue are less effective. This project provides a solution enabling people with visual impairment to navigate their environment effortlessly. The proposed project presents a novel approach for detecting and classifying three-dimensional objects using the Kinect sensor. Once an object is identified, it is communicated to the person via voice output and the obstacle can be avoided. The algorithm also differentiates human obstacles from non-human obstacles with the help of skeleton creation and face recognition techniques. In addition, two motors are placed on either side of the person's hips and rotate according to the side on which the obstacle is detected (i.e. either the right or the left side). Python is used with the Kinect sensor for the human segmentation process. The Arduino IDE is used as an interface to the Arduino board, which is programmed in embedded C to operate the ultrasonic sensors and DC motors. The system performs with an accuracy of 80 percent and can identify objects at a faster rate than other existing devices. In future, the system can be made even smaller for easy portability. The frame rate can be decreased for faster image processing, and the detection range can be further increased to help navigation in highly crowded environments.

An Obstacle Detection and Guidance System for Mobility of Visually Impaired in Unfamiliar Indoor Environments

International Journal of Computer and Electrical Engineering, 2014

This paper describes the development of a wearable navigation aid for blind and visually impaired persons to facilitate their movement in unfamiliar indoor environments. It comprises a Kinect unit, a tablet PC, a microcontroller, IMU sensors, and vibration actuators. It minimizes reliance on audio instructions for avoiding obstacles and instead guides the blind user through gentle vibrations produced by a wearable belt and a light helmet. By differentiating obstacles from the floor, it can detect even relatively small obstacles. It can also guide the user to a desired destination (office/room/elevator) within an unfamiliar building with the help of printed 2-D codes, the RGB camera of the Kinect unit, a compass sensor for orienting the user towards the next direction of movement, and synthesized audio instructions. The developed navigation system has been successfully tested by both blindfolded and blind persons.
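Belt-based vibration feedback of the kind described above amounts to mapping an obstacle's bearing to one of several motors. A minimal sketch, assuming four motors spread across a 120° field of view (both values are illustrative, not from the paper):

```python
def motor_for_bearing(bearing_deg, n_motors=4, fov_deg=120.0):
    """Map an obstacle bearing to the index of the belt motor to drive.

    bearing_deg : degrees, 0 = straight ahead, negative = to the left.
    Bearings outside the field of view are clamped to its edge motors.
    """
    half = fov_deg / 2.0
    clamped = max(-half, min(half, bearing_deg))
    # Normalise to [0, 1] across the field of view, then bucket.
    frac = (clamped + half) / fov_deg
    return min(int(frac * n_motors), n_motors - 1)

print(motor_for_bearing(-50))  # 0: leftmost motor
print(motor_for_bearing(10))   # 2: centre-right motor
```

In a full system the vibration intensity would additionally encode obstacle distance, keeping the audio channel free for route instructions as the paper intends.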