Towards Semantically Intelligent Robots

2 Towards Semantically Intelligent Robots

2018

Approaches are needed for providing advanced autonomous wheeled robots with a sense of self, immediate ambience, and mission. The following list of abilities would form the desired feature set of such approaches: self-localization, detection and correction of course-deviation errors, faster and more reliable identification of friend or foe, simultaneous localization and mapping in uncharted environments without necessarily depending on external assistance, and the ability to serve as web services. Situations where enhanced robots with such rich feature sets come into play span competitions such as line following, cooperative mini sumo fighting, and cooperative labyrinth discovery. In this chapter we look into how such features may be realized towards creating intelligent robots. Currently, through-cell localization in robots relies mainly on the availability of shaft encoders. In this regard, we first present a simple-to-implement through-cell localization approach for rob...
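The abstract above contrasts its approach with conventional encoder-based through-cell localization. As a point of reference, the sketch below shows a minimal, hypothetical version of that baseline: dead reckoning from shaft-encoder ticks on a differential-drive robot, mapped to discrete labyrinth cells. The tick, wheel, and cell parameters are illustrative assumptions, not values from the chapter.

```python
import math

# Hypothetical parameters; actual values depend on the robot and maze.
TICKS_PER_REV = 360        # encoder ticks per wheel revolution
WHEEL_RADIUS_M = 0.03      # wheel radius in metres
WHEEL_BASE_M = 0.10        # distance between the two drive wheels
CELL_SIZE_M = 0.18         # side length of one labyrinth cell

def update_pose(pose, left_ticks, right_ticks):
    """Differential-drive dead reckoning from encoder tick deltas."""
    x, y, theta = pose
    dist_per_tick = 2 * math.pi * WHEEL_RADIUS_M / TICKS_PER_REV
    d_left = left_ticks * dist_per_tick
    d_right = right_ticks * dist_per_tick
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE_M
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2 * math.pi)
    return (x, y, theta)

def pose_to_cell(pose):
    """Map a metric pose to discrete (row, col) labyrinth cell indices."""
    x, y, _ = pose
    return (int(y // CELL_SIZE_M), int(x // CELL_SIZE_M))

pose = (0.0, 0.0, 0.0)
pose = update_pose(pose, left_ticks=420, right_ticks=420)  # drive straight ahead
print(pose_to_cell(pose))
```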

Autonomous Navigation Framework for Intelligent Robots Based on a Semantic Environment Modeling

Applied Sciences, 2020

Humans have an innate ability for environment modeling, perception, and planning while simultaneously performing tasks. However, this remains a challenging problem in the study of robotic cognition. We address this issue by proposing a neuro-inspired cognitive navigation framework composed of three major components: a semantic modeling framework (SMF), a semantic information processing (SIP) module, and a semantic autonomous navigation (SAN) module, which together enable the robot to perform cognitive tasks. The SMF creates an environment database using the Triplet Ontological Semantic Model (TOSM) and builds semantic models of the environment. The environment maps derived from these semantic models are generated in an on-demand database and downloaded by the SIP and SAN modules when required by the robot. The SIP module contains active environment perception components for recognition and localization. It also feeds relevant perception information to the behavior planner for safely performing the task. The SA...
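The abstract names the Triplet Ontological Semantic Model (TOSM) as the basis of the environment database but does not spell out its schema. The following is a minimal sketch, assuming the environment knowledge reduces to subject-predicate-object triples that the robot can query on demand; the class name, predicates, and entities are illustrative, not the paper's actual ontology.

```python
class TripleStore:
    """Minimal subject-predicate-object store standing in for a TOSM-style database."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None acts as a wildcard)."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

# Illustrative environment facts the robot might download on demand.
tosm = TripleStore()
tosm.add("door_3", "is_a", "Door")
tosm.add("door_3", "connects", "corridor_1")
tosm.add("door_3", "connects", "office_205")
tosm.add("office_205", "contains", "printer_1")

# Example query: which places does door_3 connect?
print(tosm.query(subject="door_3", predicate="connects"))
```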

Semantic Mapping and Reasoning Approach for Mobile Robotics

2018

Mobile robots need semantic information about the entities in the environment in their map representation in order to reason about their surroundings. With such information, mobile robots can act intelligently in the environment and autonomously solve a variety of robotic tasks. In this study, a semantic mapping framework is established to give mobile robots the ability to perform high-level robotic tasks based on semantic information. GeoRoSS is an autonomous mobile robot equipped with a reliable and precise 3D laser scanner that digitalizes environments. High-quality geometric 3D maps with semantic information are automatically generated after exploration by the robot.

Relational Model for Robotic Semantic Navigation in Indoor Environments

2017

The emergence of service robots in our environment raises the need for systems that help robots manage information from human environments. A semantic model of the environment provides the robot with a representation closer to human perception, and it improves its human-robot communication system. In addition, a semantic model improves the capabilities of the robot to carry out high-level navigation tasks. This paper presents a semantic relational model that includes the conceptual and physical representation of objects and places, the utilities of the objects, and the semantic relations among objects and places. This model allows the robot to manage the environment and to make queries about it in order to produce plans for navigation tasks. The model also has several advantages, such as conceptual simplicity and flexibility of adaptation to different environments. To test the performance of the proposed semantic model, the output of the semantic inference system is associated with the geometric and topological information of objects and places in order to carry out the navigation tasks.
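The abstract describes a relational model combining conceptual and physical representations of objects and places, object utilities, and semantic relations, which can be queried to derive navigation goals. A minimal sketch of that idea follows, assuming a simple in-memory representation; the place footprints, utilities, and query function are hypothetical stand-ins for the paper's inference system.

```python
from dataclasses import dataclass

@dataclass
class Place:
    name: str        # conceptual label, e.g. "kitchen"
    region: tuple    # physical footprint (x_min, y_min, x_max, y_max) in metres

@dataclass
class SemanticObject:
    name: str        # conceptual label, e.g. "coffee_machine"
    utilities: set   # what the object is for, e.g. {"make_coffee"}
    located_in: str  # semantic relation to a place

places = {
    "kitchen": Place("kitchen", (0.0, 0.0, 4.0, 3.0)),
    "office":  Place("office",  (4.0, 0.0, 8.0, 3.0)),
}
objects = [
    SemanticObject("coffee_machine", {"make_coffee"}, "kitchen"),
    SemanticObject("printer", {"print_document"}, "office"),
]

def place_for_utility(utility):
    """Answer queries such as 'where can the robot make coffee?'."""
    for obj in objects:
        if utility in obj.utilities:
            return places[obj.located_in]
    return None

target = place_for_utility("make_coffee")
print(target.name, target.region)   # navigation goal: the kitchen footprint
```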

Multi-hierarchical semantic maps for mobile robotics

2005

The success of mobile robots, and particularly of those interfacing with humans in daily environments (e.g., assistant robots), relies on the ability to manipulate information beyond simple spatial relations. We are interested in semantic information, which gives meaning to spatial information like images or geometric maps. We present a multi-hierarchical approach to enable a mobile robot to acquire semantic information from its sensors, and to use it for navigation tasks. In our approach, the link between spatial and semantic information is established via anchoring. We show experiments on a real mobile robot that demonstrate its ability to use and infer new semantic information from its environment, improving its operation.
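The abstract states that spatial and semantic information are linked via anchoring. The sketch below illustrates that idea in its simplest form, assuming an anchor binds a symbolic place name to a coarse spatial region so the robot can infer which concept its metric pose falls under; the symbols and regions are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    """Binds a symbol in the conceptual hierarchy to a spatial entity."""
    symbol: str       # e.g. "kitchen-1"
    centroid: tuple   # (x, y) of the anchored spatial region
    radius: float     # coarse spatial extent in metres

anchors = [
    Anchor("kitchen-1", (2.0, 1.5), 2.5),
    Anchor("corridor-1", (6.0, 1.5), 3.0),
]

def symbol_at(pose_xy):
    """Infer which symbolic place the robot currently occupies."""
    x, y = pose_xy
    for a in anchors:
        if (x - a.centroid[0]) ** 2 + (y - a.centroid[1]) ** 2 <= a.radius ** 2:
            return a.symbol
    return "unknown"

print(symbol_at((1.2, 1.0)))   # -> "kitchen-1"
```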

A Semantic Classification Approach for Indoor Robot Navigation

Electronics

Autonomous robot navigation has become a crucial concept in industrial development for minimizing manual tasks. Most existing robot navigation systems are based on the perceived geometrical features of the environment, employing sensory devices such as laser scanners, video cameras, and microwave radars to build the environment structure. However, scene understanding is a significant issue in the development of robots that can be controlled autonomously. A semantic model of the indoor environment offers the robot a representation closer to human perception, and this enhances navigation tasks and human–robot interaction. In this paper, we propose a low-cost and low-memory framework that offers an improved representation of the environment using semantic information based on LiDAR sensory data. The output of the proposed work is a reliable classification system for indoor environments with an efficient classification accuracy of 97.21% using the collected d...
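The abstract reports a LiDAR-based classification system for indoor environments but the excerpt does not detail the features or classifier used. The sketch below shows the general shape of such a pipeline, assuming hand-crafted scan statistics and an off-the-shelf random forest trained on synthetic data; it is not the paper's actual feature set or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def scan_features(ranges):
    """Summary features of one 2D LiDAR scan (illustrative, not the paper's feature set)."""
    r = np.asarray(ranges, dtype=float)
    return np.array([r.mean(), r.std(), r.min(), r.max(), np.median(r)])

# Synthetic stand-in data: narrow corridors vs. open rooms.
rng = np.random.default_rng(0)
corridor_scans = rng.uniform(0.3, 1.5, size=(50, 360))
room_scans = rng.uniform(1.0, 6.0, size=(50, 360))
X = np.vstack([np.stack([scan_features(s) for s in corridor_scans]),
               np.stack([scan_features(s) for s in room_scans])])
y = ["corridor"] * 50 + ["room"] * 50

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Attach a semantic label to the robot's current location from its latest scan.
print(clf.predict([scan_features(rng.uniform(0.3, 1.5, 360))])[0])
```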

A semantic approach to sensor-independent vehicle localization

2014 IEEE Intelligent Vehicles Symposium Proceedings, 2014

As intelligent vehicles become more and more capable, they must learn to navigate and localize themselves in a wide variety of environments, including GPS-denied and only crudely mapped areas. We argue that since autonomous vehicles must be able to perceive, and semantically interpret, their immediate environment, they should be able to use abstract semantic information as their sole means of localization. This simplifies the level of detail and precision required from environment maps so that, for example, a rough floor plan of a parking garage suffices to navigate it autonomously. We propose a concept for semantic localization which requires only a conceptual semantic map of the environment and can be made to work with any kind of sensor data from which the required semantic information can be extracted. We present a localization algorithm which may be used as a basis for semantic navigation, e.g., in the context of automated driving, and some initial results of its application in a parking garage scenario.
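The abstract proposes localization driven purely by semantic observations matched against a conceptual semantic map, independent of the underlying sensor. A minimal sketch of that idea is given below as a discrete Bayes filter over places; the place graph, expected cues, and sensor model are illustrative assumptions, not the authors' algorithm.

```python
# Minimal discrete Bayes filter over places in a conceptual semantic map.
places = ["entrance", "ramp", "parking_row", "pillar_area", "exit"]
adjacent = {
    "entrance": ["ramp"],
    "ramp": ["entrance", "parking_row"],
    "parking_row": ["ramp", "pillar_area"],
    "pillar_area": ["parking_row", "exit"],
    "exit": ["pillar_area"],
}
# Semantic cue the robot expects to perceive at each place (assumed sensor-agnostic labels).
expected_label = {
    "entrance": "barrier", "ramp": "slope", "parking_row": "parked_cars",
    "pillar_area": "pillar", "exit": "barrier",
}

belief = {p: 1.0 / len(places) for p in places}   # uniform prior over places

def predict(belief):
    """Motion update: probability mass spreads to the current and adjacent places."""
    new = {p: 0.0 for p in belief}
    for p, b in belief.items():
        targets = [p] + adjacent[p]
        for t in targets:
            new[t] += b / len(targets)
    return new

def correct(belief, observed_label, hit=0.8, miss=0.1):
    """Measurement update: weight places whose expected cue matches the observation."""
    for p in belief:
        belief[p] *= hit if expected_label[p] == observed_label else miss
    total = sum(belief.values())
    return {p: b / total for p, b in belief.items()}

for obs in ["slope", "parked_cars", "pillar"]:   # labels extracted from any sensor
    belief = correct(predict(belief), obs)
print(max(belief, key=belief.get))               # most likely current place
```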

Semantic Navigation Maps for Mobile Robot Localization on Planetary Surfaces

2013

Exploration with autonomous mobile robots plays an important role in recent space exploration missions. One key requirement imposed is the ability to localize the robots in relation to their environment and to represent this environment in an accessible way. This paper discusses map building and localization for mobile robot exploration missions on planetary surfaces using a novel map representation we call the semantic navigation map. We define and discuss the concept of our map representation, show how the map is generated, and explain how the localization algorithm utilizes and extends the map during the exploration phase of the mission. Finally, we demonstrate the feasibility of our concept in a virtual testbed.