Learning of the way of abstraction in real robots
Related papers
Learning and Using Abstractions for Robot Planning
2020
Robot motion planning involves computing a sequence of valid robot configurations that takes the robot from its initial state to a goal state. Solving a motion planning problem optimally with analytical methods is proven to be PSPACE-hard. Sampling-based approaches have tried to approximate the optimal solution efficiently. Generally, sampling-based planners use uniform samplers to cover the entire state space. In this paper, we propose a deep-learning-based framework that identifies robot configurations in the environment that are important for solving the given motion planning problem. These states are used to bias the sampling distribution in order to reduce the planning time. Our approach works with a unified network and generates domain-dependent network parameters based on the environment and the robot. We evaluate our approach with the Learn and Link planner in three different settings. Results show significant improvement in motion planning times when compared with current sampling-based planners.
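The core idea of biasing a sampler toward learned "important" configurations can be illustrated with a minimal sketch. The function below mixes uniform sampling with local samples around network-predicted critical states; the function names, the mixing ratio, and the Gaussian perturbation are illustrative assumptions, not the actual Learn and Link implementation.

```python
# Hedged sketch: biasing a sampling-based planner with learned "critical" states.
import numpy as np

def biased_sample(critical_states, bounds, bias_ratio=0.5, sigma=0.05, rng=None):
    """Draw one configuration: with probability bias_ratio, perturb a
    predicted critical state; otherwise sample uniformly over the space."""
    rng = rng or np.random.default_rng()
    low, high = bounds
    if critical_states.size and rng.random() < bias_ratio:
        q = critical_states[rng.integers(len(critical_states))]
        sample = q + rng.normal(0.0, sigma, size=q.shape)  # local Gaussian bias
    else:
        sample = rng.uniform(low, high)                     # uniform coverage
    return np.clip(sample, low, high)

# Usage: critical_states would come from the learned, environment-conditioned model.
critical = np.array([[0.2, 0.3], [0.8, 0.5]])
q_new = biased_sample(critical, bounds=(np.zeros(2), np.ones(2)))
```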
From Abstract Task Knowledge to Executable Robot Programs
Journal of Intelligent and Robotic Systems, 2008
Robots that are capable of learning new tasks from humans need the ability to transform gathered abstract task knowledge into their own representation and dimensionality. New task knowledge that has been collected, e.g. with Programming by Demonstration approaches by observing a human, does not a priori contain any robot-specific knowledge and actions, and is defined in the workspace of the human demonstrator. This article presents a new approach for mapping abstract human-centered task knowledge to a robot execution system based on the target system's properties. To this end, the required background knowledge about the target system is examined and defined explicitly.
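A minimal sketch of this mapping step is shown below: a demonstrated pose is re-expressed in the robot frame and the abstract action name is resolved against the robot's own primitives. The transform, the capability table, and the action names are illustrative assumptions, not the article's actual system model.

```python
# Hedged sketch of mapping human-centered task knowledge onto a target robot.
from dataclasses import dataclass
import numpy as np

@dataclass
class AbstractAction:
    name: str                 # e.g. "grasp", "place"
    target_pose: np.ndarray   # position observed in the human demonstrator's workspace

ROBOT_CAPABILITIES = {"grasp": "parallel_gripper_close", "place": "parallel_gripper_open"}

def map_to_robot(action: AbstractAction, T_human_to_robot: np.ndarray):
    """Re-express the demonstrated pose in the robot frame and resolve the
    abstract action name against the robot's own primitive set."""
    if action.name not in ROBOT_CAPABILITIES:
        raise ValueError(f"Robot has no primitive for '{action.name}'")
    pose_h = np.append(action.target_pose, 1.0)      # homogeneous coordinates
    pose_r = (T_human_to_robot @ pose_h)[:3]         # robot-frame position
    return ROBOT_CAPABILITIES[action.name], pose_r
```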
ADAPT: A Cognitive Architecture for Robotics
2004
ADAPT (Adaptive Dynamics and Active Perception for Thought) is a cognitive architecture specifically designed for robotics. The architecture is in the initial stages of development. ADAPT manipulates a hierarchy of perceptual and planning schemas that include explicit temporal information and that can be executed in parallel. Perception is active, meaning that all perceptual processing is goal-directed and context-sensitive, even down to the raw sensory data. This paper describes the components of ADAPT and how it differs from a number of existing cognitive architectures.
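The notion of a schema hierarchy with explicit temporal information can be sketched as below; the fields and the selection of concurrently active schemas are illustrative assumptions, not ADAPT's actual design.

```python
# Hedged sketch of a schema hierarchy with explicit temporal information.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Schema:
    name: str
    start: float       # explicit temporal information (seconds)
    duration: float
    run: Callable[[], None]
    children: List["Schema"] = field(default_factory=list)

def executable_at(root: Schema, t: float) -> List[Schema]:
    """Collect all schemas in the hierarchy active at time t; they may run in parallel."""
    active = []
    if root.start <= t < root.start + root.duration:
        active.append(root)
    for child in root.children:
        active.extend(executable_at(child, t))
    return active
```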
Providing robots with problem awareness skills
2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012
Humanoid robots operating in the real world must exhibit very complex behaviors, such as object manipulation or interaction with people. Such capabilities pose the problem of reasoning about a huge number of different objects, places and actions to carry out, each one relevant for achieving the robot's goals. This article proposes a functional representation of objects, places and actions described in terms of affordances and capabilities. Everyday problems can be dealt with efficiently by decomposing the reasoning process into two phases, namely problem awareness (which is the focus of this article) and action selection.
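The problem-awareness phase can be illustrated as a filtering step over an affordance-based world model: only objects whose affordances match the capabilities a goal requires are kept for later action selection. The names and the affordance encoding below are illustrative assumptions, not the article's formal representation.

```python
# Hedged sketch of affordance-based filtering for problem awareness.
WORLD = {
    "mug":   {"graspable", "containable"},
    "table": {"supportable"},
    "door":  {"openable"},
}

def relevant_objects(required_capabilities, world=WORLD):
    """Keep only objects offering at least one affordance the goal needs."""
    return {name for name, affs in world.items() if affs & required_capabilities}

# A goal such as "fetch a drink" might require grasping and containing:
print(relevant_objects({"graspable", "containable"}))  # -> {"mug"}
```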
Reduction of Learning Time for Robots Using Automatic State Abstraction
Springer Tracts in Advanced Robotics, 2006
The required learning time and the curse of dimensionality restrict the applicability of Reinforcement Learning (RL) on real robots. The difficulty of including initial knowledge and of understanding the learned rules adds to these problems. In this paper we address automatic state abstraction and the creation of hierarchies in the RL agent's mind as two major approaches for reducing the number of learning trials, simplifying the inclusion of prior knowledge, and making the learned rules more abstract and understandable. We formalize automatic state abstraction and hierarchy creation as an optimization problem and derive a new algorithm that adapts decision tree learning techniques to state abstraction. The claimed performance is supported by strong evidence from simulation results in nondeterministic environments. Simulation results show encouraging improvements in the required number of learning trials, the agent's performance, the size of the learned trees, and the computation time of the algorithm.
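The general flavor of decision-tree-based state abstraction can be sketched as follows: a tree is fit to observed returns, and its leaf indices then serve as abstract states for tabular learning. This mirrors the idea only loosely; the target signal, feature layout, and hyperparameters are illustrative assumptions rather than the paper's algorithm.

```python
# Hedged sketch of decision-tree-based state abstraction for RL.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def build_abstraction(states, returns, max_leaves=16):
    """Fit a shallow regression tree on observed returns; its leaf index is
    then used as the abstract state for tabular learning."""
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves).fit(states, returns)
    abstract_ids = tree.apply(states)   # leaf index per raw state
    return tree, abstract_ids

# Usage: a small Q-table indexed by leaf id replaces the raw continuous state,
# cutting the number of learning trials needed.
states = np.random.rand(200, 4)
returns = states @ np.array([1.0, -0.5, 0.2, 0.0]) + 0.1 * np.random.randn(200)
tree, ids = build_abstraction(states, returns)
```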
A Relational Representation for Generalized Knowledge in Robotic Tasks
In this paper, a novel representation is proposed in which experience is summarized by a wealth of control and perception primitives that can be mined to learn combinations of which features are most predictive of task success. Exploiting the inherent relational structure of these primitives and the dependencies between them presents a powerful and widely-applicable new approach in the robotics community. These dependencies are represented as links in a relational dependency network (RDN), and capture information about how a robot's actions and observations affect each other when used together in the full context of the task. For example, a RDN trained as an expert to "pick up" things will represent the best way to reach to an object, knowing that it plans on grasping that object later. Such experts provide information which might not be obvious to a programmer ahead of time, and can be consulted to allow the robot to achieve higher levels of task performance. Furthermore, it seems possible that new, more complex RDNs could be trained by learning the dependencies between existing RDNs. As a result, this paper proposes a hierarchical way of organizing complex behaviors in a principled way.
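The kind of query such a dependency structure supports can be illustrated with a toy conditional table: one primitive's parameters are chosen based on another primitive planned later in the task. The structure and probabilities below are toy assumptions, not a trained RDN from the paper.

```python
# Hedged sketch of querying a dependency between primitives.
# P(reach_strategy | next_action) learned from logged pick-up attempts (toy numbers).
REACH_GIVEN_NEXT = {
    "grasp_top":  {"approach_above": 0.8, "approach_side": 0.2},
    "grasp_side": {"approach_above": 0.3, "approach_side": 0.7},
}

def best_reach(next_action: str) -> str:
    """Pick the reach strategy most predictive of success given the planned grasp."""
    dist = REACH_GIVEN_NEXT[next_action]
    return max(dist, key=dist.get)

print(best_reach("grasp_side"))  # -> "approach_side"
```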
SAILOR: Perceptual Anchoring For Robotic Cognitive Architectures
arXiv (Cornell University), 2023
Symbolic anchoring is a crucial problem in the field of robotics, as it enables robots to obtain symbolic knowledge from the perceptual information acquired through their sensors. In cognitive-based robots, this process of turning sub-symbolic data from real-world sensors into symbolic knowledge is still an open problem. To address this issue, this paper presents SAILOR, a framework for providing symbolic anchoring in the ROS 2 ecosystem. SAILOR aims to maintain the link between symbolic data and perceptual data in real robots over time, increasing the intelligent behavior of robots. It provides a semantic world modeling approach using two deep-learning-based sub-symbolic robotic skills: object recognition and a matching function. The object recognition skill allows the robot to recognize and identify objects in its environment, while the matching function enables the robot to decide whether new perceptual data corresponds to existing symbolic data. This paper describes the proposed method and the development of the framework, as well as its integration in MERLIN2 (a hybrid cognitive architecture fully functional in robots running ROS 2).
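The anchoring decision itself can be sketched as follows: a new percept either updates the closest existing anchor or creates a fresh symbol. The threshold, embedding source, and Anchor fields are illustrative assumptions, not SAILOR's actual ROS 2 interfaces.

```python
# Hedged sketch of the percept-to-symbol anchoring decision.
from dataclasses import dataclass
import numpy as np

@dataclass
class Anchor:
    symbol: str               # e.g. "cup_1"
    embedding: np.ndarray     # appearance feature from the recognition skill
    position: np.ndarray      # last observed 3D position

def anchor_percept(percept_emb, percept_pos, anchors, sim_thresh=0.85):
    """Update the closest matching anchor, or create a fresh symbol."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    if anchors:
        best = max(anchors, key=lambda a: cosine(a.embedding, percept_emb))
        if cosine(best.embedding, percept_emb) >= sim_thresh:
            best.position = percept_pos    # re-acquire: keep the symbol-percept link
            return best
    new = Anchor(f"obj_{len(anchors)}", percept_emb, percept_pos)
    anchors.append(new)
    return new
```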
Cognitive Learning for Practical Solution of the Frame Problem
The main problem for agents in real environments is how to abstract useful information from a large amount of environmental data; this is known as the frame problem. Learning how to perform abstraction is a key function in a practical solution to the frame problem. As such a learning system, we developed the Situation Transition Network System (STNS). The system extracts situations and maintains them dynamically in a continuous state space on the basis of rewards from the environment.
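A minimal sketch of reward-driven situation extraction in this spirit is given below: continuous observations are grouped into situations, and transitions between situations are tracked together with the rewards that followed. The clustering rule and data structures are illustrative assumptions, not the STNS implementation.

```python
# Hedged sketch of reward-driven situation extraction and transition tracking.
import numpy as np
from collections import defaultdict

class SituationMemory:
    def __init__(self, radius=0.3):
        self.centers = []                       # one center per extracted situation
        self.radius = radius
        self.transitions = defaultdict(float)   # (from_id, to_id) -> smoothed reward

    def situation_of(self, obs):
        """Return the id of the nearest situation, creating one if none is close."""
        for i, c in enumerate(self.centers):
            if np.linalg.norm(obs - c) < self.radius:
                return i
        self.centers.append(np.array(obs, dtype=float))
        return len(self.centers) - 1

    def record(self, prev_id, obs, reward):
        """Update the transition statistics after one step in the environment."""
        cur_id = self.situation_of(obs)
        key = (prev_id, cur_id)
        self.transitions[key] = 0.9 * self.transitions[key] + 0.1 * reward
        return cur_id
```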