KR3: An Architecture for Knowledge Representation and Reasoning in Robotics

Towards an Architecture for Knowledge Representation and Reasoning in Robotics

Lecture Notes in Computer Science, 2014

This paper describes an architecture that combines the complementary strengths of probabilistic graphical models and declarative programming to enable robots to represent and reason with qualitative and quantitative descriptions of uncertainty and domain knowledge. An action language is used for the architecture’s low-level (LL) and high-level (HL) system descriptions, and the HL definition of recorded history is expanded to allow prioritized defaults. For any given objective, tentative plans created in the HL using commonsense reasoning are implemented in the LL using probabilistic algorithms, and the corresponding observations are added to the HL history. Tight coupling between the levels helps automate the selection of relevant variables and the generation of policies in the LL for each HL action, and supports reasoning with violation of defaults, noisy observations and unreliable actions in complex domains. The architecture is evaluated in simulation and on robots moving objects in indoor domains.
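To make the notion of prioritized defaults and their violation concrete, here is a small illustrative sketch in Python; the dictionary encoding and priority scheme are invented for this summary rather than taken from the paper's action language.

```python
# Toy illustration of "prioritized defaults": a fluent takes its highest-priority
# default value unless a recorded observation overrides it. The encoding (dicts with
# numeric priorities) is an assumption for this sketch, not the paper's syntax.

def resolve(fluent, observations, defaults):
    """Return the believed value of a fluent from observations, else the best default."""
    if fluent in observations:                      # direct observations always win
        return observations[fluent]
    applicable = [d for d in defaults if d["fluent"] == fluent]
    if not applicable:
        return None
    best = min(applicable, key=lambda d: d["priority"])   # lower number = higher priority
    return best["value"]

defaults = [
    {"fluent": "loc(textbook)", "value": "main_library", "priority": 1},
    {"fluent": "loc(textbook)", "value": "aux_library",  "priority": 2},
]
print(resolve("loc(textbook)", {}, defaults))                           # main_library (default)
print(resolve("loc(textbook)", {"loc(textbook)": "office"}, defaults))  # office (default violated)
```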

A Refinement-Based Architecture for Knowledge Representation and Reasoning in Robotics

This paper describes an architecture that combines the complementary strengths of probabilistic graphical models and declarative programming to enable robots to represent and reason with logic-based and probabilistic descriptions of uncertainty and domain knowledge. An action language is extended to support non-boolean fluents and non-deterministic causal laws. This action language is used to describe tightly-coupled transition diagrams at two levels of granularity, refining a coarse-resolution transition diagram of the domain to obtain a fine-resolution transition diagram. The coarse-resolution system description, and a history that includes (prioritized) defaults, are translated into an Answer Set Prolog (ASP) program. For any given goal, inference in the ASP program provides a plan of abstract actions. To implement each such abstract action probabilistically, the part of the fine-resolution transition diagram relevant to this action is identified, and a probabilistic representation of the uncertainty in sensing and actuation is included and used to construct a partially observable Markov decision process (POMDP). The policy obtained by solving the POMDP is invoked repeatedly to implement the abstract action as a sequence of concrete actions, with the corresponding observations being recorded in the coarse-resolution history and used for subsequent reasoning. The architecture is evaluated in simulation and on a mobile robot moving objects in an indoor domain, to show that it supports reasoning with violation of defaults, noisy observations and unreliable actions, in complex domains.
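The coarse-to-fine control flow described in this abstract can be sketched as follows; every helper passed into the function (plan_with_asp, zoom, build_pomdp, solve_pomdp, execute_policy, to_coarse_observation) is a hypothetical placeholder for the corresponding step, so this shows only the shape of the loop, not the paper's implementation.

```python
# Sketch of the coarse-to-fine control loop described above. All callables passed in
# are hypothetical placeholders; only the structure of the loop follows the abstract.

def run_goal(goal, coarse_sd, fine_sd, history, *,
             plan_with_asp, zoom, build_pomdp, solve_pomdp,
             execute_policy, to_coarse_observation):
    """Plan at coarse resolution with ASP; implement each abstract action via a POMDP."""
    plan = plan_with_asp(coarse_sd, history, goal)     # answer-set inference over the coarse description
    for abstract_action in plan:
        zoomed = zoom(fine_sd, abstract_action)        # part of the fine-resolution diagram relevant here
        pomdp = build_pomdp(zoomed)                    # add sensing/actuation uncertainty models
        policy = solve_pomdp(pomdp)
        observations = execute_policy(policy)          # abstract action as a sequence of concrete actions
        # Record the outcome at coarse resolution for subsequent reasoning and replanning.
        history.append(to_coarse_observation(abstract_action, observations))
    return history
```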

Mixing Non-Monotonic Logical Reasoning and Probabilistic Planning for Robots

2015

This paper describes an architecture that combines the complementary strengths of probabilistic graphical models and declarative programming to represent and reason with qualitative and quantitative descriptions of domain knowledge and uncertainty. An action language is used for the architecture’s low-level (LL) and high-level (HL) system descriptions, and the HL definition of recorded history is expanded to allow prioritized defaults. For any given objective, each action in the plan created in the HL using non-monotonic logical reasoning is executed probabilistically in the LL, refining the HL description to identify the relevant sorts, fluents and actions, and adding the corresponding action outcomes to the HL history. The HL and LL domain representations are translated into an Answer Set Prolog (ASP) program and a partially observable Markov decision process (POMDP) respectively. ASP-based inference provides a multinomial prior for POMDP state estimation, and populates a Beta den...
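A toy sketch of the ASP-to-POMDP hand-off mentioned here, i.e., using the answer set to construct a multinomial prior for POMDP state estimation; the predicate encoding and the uniform treatment of unmentioned states are assumptions made for illustration.

```python
# Toy sketch: seed a POMDP belief state from an ASP answer set. States contradicted by
# the answer set get near-zero mass; the rest share the remaining probability uniformly,
# giving a multinomial prior. The loc/2 encoding is an assumption for this sketch.

def multinomial_prior(candidate_states, answer_set_literals, consistent):
    """Return a normalized prior over candidate POMDP states."""
    eps = 1e-6  # small residual mass so no state is ruled out irrevocably
    weights = {s: (1.0 if consistent(s, answer_set_literals) else eps)
               for s in candidate_states}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

# Example: the answer set says the book is not in the office.
states = ["book_in_kitchen", "book_in_office", "book_in_lab"]
answer_set = {"-loc(book, office)"}
prior = multinomial_prior(
    states, answer_set,
    consistent=lambda s, lits: not (s == "book_in_office" and "-loc(book, office)" in lits),
)
print(prior)  # probability mass concentrates on kitchen and lab
```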

REBA: A Refinement-Based Architecture for Knowledge Representation and Reasoning in Robotics

Journal of Artificial Intelligence Research, 2019

This article describes REBA, a knowledge representation and reasoning architecture for robots that is based on tightly-coupled transition diagrams of the domain at two different levels of granularity. An action language is extended to support non-boolean fluents and non-deterministic causal laws, and used to describe the domain's transition diagrams, with the fine-resolution transition diagram being defined as a refinement of the coarse-resolution transition diagram. The coarse-resolution system description, and a history that includes prioritized defaults, are translated into an Answer Set Prolog (ASP) program. For any given goal, inference in the ASP program provides a plan of abstract actions. To implement each such abstract action, the robot automatically zooms to the part of the fine-resolution transition diagram relevant to this action. The zoomed fine-resolution system description, and a probabilistic representation of the uncertainty in sensing and actuation, are used to...
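The zoom step can be illustrated roughly as below; the relevance test (sharing arguments with the abstract action) is an assumption chosen for this sketch, not REBA's formal definition of zooming.

```python
# Rough sketch of "zooming": restrict the fine-resolution description to the actions
# relevant to one abstract action. The relevance test used here is illustrative only.

def zoom(fine_actions, abstract_action):
    """Keep only fine-resolution actions that mention the abstract action's arguments."""
    relevant_args = set(abstract_action["args"])
    return [a for a in fine_actions if relevant_args & set(a["args"])]

fine_actions = [
    {"name": "move_to_cell", "args": ["c1"]},
    {"name": "grasp", "args": ["book", "c1"]},
    {"name": "move_to_cell", "args": ["c7"]},   # c7 is unrelated to this pickup
]
abstract = {"name": "pickup", "args": ["book", "c1"]}
print(zoom(fine_actions, abstract))  # only actions involving 'book' or cell 'c1' remain
```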

Mixed Logical and Probabilistic Reasoning for Planning and Explanation Generation in Robotics

ArXiv, 2015

Robots assisting humans in complex domains have to represent knowledge and reason at both the sensorimotor level and the social level. The architecture described in this paper couples the non-monotonic logical reasoning capabilities of a declarative language with probabilistic belief revision, enabling robots to represent and reason with qualitative and quantitative descriptions of knowledge and degrees of belief. Specifically, incomplete domain knowledge, including information that holds in all but a few exceptional situations, is represented as an Answer Set Prolog (ASP) program. The answer set obtained by solving this program is used for inference, planning, and for jointly explaining (a) unexpected action outcomes due to exogenous actions and (b) partial scene descriptions extracted from sensor input. For any given task, each action in the plan contained in the answer set is executed probabilistically. The subset of the domain relevant to the action is identified automatically, a...

Towards An Architecture for Representation, Reasoning and Learning in Human-Robot Collaboration

2016

Robots collaborating with humans need to represent knowledge, reason, and learn, at the sensorimotor level and the cognitive level. This paper summarizes the capabilities of an architecture that combines the complementary strengths of declarative programming, probabilistic graphical models, and reinforcement learning, to represent, reason with, and learn from, qualitative and quantitative descriptions of incomplete domain knowledge and uncertainty. Representation and reasoning are based on two tightly-coupled domain representations at different resolutions. For any given task, the coarse-resolution symbolic domain representation is translated to an Answer Set Prolog program, which is solved to provide a tentative plan of abstract actions, and to explain unexpected outcomes. Each abstract action is implemented by translating the relevant subset of the corresponding fine-resolution probabilistic representation to a partially observable Markov decision process (POMDP). Any high pro...

What Happened and Why? A Mixed Architecture for Planning and Explanation Generation in Robotics

2015

This paper describes a mixed architecture that couples the non-monotonic logical reasoning capabilities of a declarative language with probabilistic belief revision, enabling robots to represent and reason with qualitative and quantitative descriptions of knowledge and uncertainty. Incomplete domain knowledge, including information that holds in all but a few exceptional situations, is represented as an Answer Set Prolog (ASP) program. The answer set obtained by solving this program is used for inference, planning, and for jointly explaining (a) unexpected action outcomes; and (b) partial scene descriptions extracted from sensor input. For any given task, each action in the plan contained in the answer set is executed probabilistically. For each such action, observations extracted from sensor inputs perform incremental Bayesian updates to a probabilistic (belief) distribution over a relevant subset of the domain, committing high probability beliefs as statements to the ASP program. T...
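A minimal sketch of the execution-time loop described in this abstract: noisy observations drive incremental Bayesian updates of a categorical belief, and beliefs that cross a threshold are committed as statements for the ASP program. The 0.9 sensor accuracy, the commitment threshold, and the holds/2 encoding are illustrative assumptions.

```python
# Minimal sketch: Bayesian belief update from noisy observations, with high-probability
# beliefs committed as ASP facts. Numbers and predicate names are illustrative only.

def bayes_update(belief, observed_room, hit_prob=0.9):
    """Update a categorical belief given a noisy 'object seen in observed_room' reading."""
    miss_prob = (1.0 - hit_prob) / (len(belief) - 1)
    posterior = {room: p * (hit_prob if room == observed_room else miss_prob)
                 for room, p in belief.items()}
    total = sum(posterior.values())
    return {room: p / total for room, p in posterior.items()}

belief = {"kitchen": 1 / 3, "office": 1 / 3, "lab": 1 / 3}
for obs in ["kitchen", "kitchen"]:          # two noisy observations of the book
    belief = bayes_update(belief, obs)

committed = [f"holds(loc(book, {room}), true)." for room, p in belief.items() if p > 0.85]
print(belief)
print(committed)  # high-probability beliefs become ASP statements for subsequent reasoning
```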

Robot task planning and explanation in open and uncertain worlds

A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
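As an illustration only, the two ideas (layered knowledge and epistemic versus assumed effects) could be modeled with a data structure like the following; the class and field names are assumptions for this sketch and are not taken from the paper.

```python
# Illustrative data model: knowledge organized into instance / commonsense / diagnostic
# layers, and beliefs marked as epistemic ("believed because sensed") or assumed.
# Names and fields are assumptions made for this sketch.

from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):
    INSTANCE = 1      # facts about concrete objects and places
    COMMONSENSE = 2   # generic knowledge that can override instance knowledge
    DIAGNOSTIC = 3    # knowledge used to explain and repair failures

class EffectKind(Enum):
    EPISTEMIC = "observed"   # believed because it was sensed
    ASSUMPTION = "assumed"   # adopted so planning can proceed, may later be retracted

@dataclass
class Belief:
    statement: str
    layer: Layer
    kind: EffectKind

@dataclass
class KnowledgeBase:
    beliefs: list = field(default_factory=list)

    def assumptions(self):
        """Assumed beliefs are the first candidates to revise when a task fails."""
        return [b for b in self.beliefs if b.kind is EffectKind.ASSUMPTION]

kb = KnowledgeBase()
kb.beliefs.append(Belief("cup seen in room A", Layer.INSTANCE, EffectKind.EPISTEMIC))
kb.beliefs.append(Belief("assume the door to room B is open", Layer.INSTANCE, EffectKind.ASSUMPTION))
print([b.statement for b in kb.assumptions()])
```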