FLARE: Induction with Prior Knowledge
Related papers
An integrated framework for learning and reasoning
arXiv preprint cs/9508102, 1995
Abstract: Learning and reasoning are both aspects of what is considered to be intelligence. Their studies within AI have been separated historically, learning being the topic of machine learning and neural networks, and reasoning falling under classical (or symbolic) AI. However, learning and reasoning are in many ways interdependent. This paper discusses the nature of some of these interdependencies and proposes a general framework, called FLARE, that combines inductive learning using prior knowledge together with reasoning ...
Towards a logical model of induction from examples and communication
2010
This paper focuses on a logical model of induction, and specifically of the common machine learning task of inductive concept learning (ICL). We define an "inductive derivation" relation, which characterizes which hypotheses can be induced from sets of examples, and show its properties. We also consider the problem of communicating inductive inferences between two agents, which corresponds to the multi-agent ICL problem. Thanks to the introduced logical model of induction, we show that this communication can be modeled using computational argumentation.
A defeasible reasoning model of inductive concept learning from examples and communication
Artificial Intelligence, 2012
This paper introduces a logical model of inductive generalization, and specifically of the machine learning task of inductive concept learning (ICL). We argue that some inductive processes, like ICL, can be seen as a form of defeasible reasoning. We define a consequence relation characterizing which hypotheses can be induced from given sets of examples, and study its properties, showing they correspond to a rather well-behaved non-monotonic logic. We will also show that with the addition of a preference relation on inductive theories we can characterize the inductive bias of ICL algorithms. The second part of the paper shows how this logical characterization of inductive generalization can be integrated with another form of non-monotonic reasoning (argumentation), to define a model of multiagent ICL. This integration allows two or more agents to learn, in a consistent way, both from induction and from arguments used in the communication between them. We show that the inductive theories achieved by multiagent induction plus argumentation are sound, i.e. they are precisely the same as the inductive theories built by a single agent with all data.
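To make the inductive concept learning (ICL) setting of the two abstracts above more concrete, here is a minimal sketch of inducing a hypothesis that covers a set of positive examples while remaining consistent with the negatives. The papers work with logical (feature-term) representations and a formal inductive consequence relation; this sketch uses flat attribute-value examples and a least-general-generalization step purely for illustration, and all names in it are hypothetical.

    # Illustrative sketch only: the papers above use logical (feature-term)
    # representations and a formal inductive consequence relation; here the
    # same idea is shown for flat attribute-value examples.

    def generalize(h, example):
        """Least general generalization of a conjunctive hypothesis and an example:
        attributes whose values disagree are dropped (generalized away)."""
        if h is None:                 # first positive example: most specific hypothesis
            return dict(example)
        return {a: v for a, v in h.items() if example.get(a) == v}

    def covers(h, example):
        return all(example.get(a) == v for a, v in h.items())

    def induce(positives, negatives):
        """Induce the least general conjunctive hypothesis covering all positives,
        and report whether it is consistent with the negatives."""
        h = None
        for ex in positives:
            h = generalize(h, ex)
        consistent = not any(covers(h, ex) for ex in negatives)
        return h, consistent

    pos = [{"shape": "round", "color": "red", "size": "small"},
           {"shape": "round", "color": "red", "size": "large"}]
    neg = [{"shape": "square", "color": "red", "size": "small"}]
    print(induce(pos, neg))   # ({'shape': 'round', 'color': 'red'}, True)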
On Integrating Inductive Learning with Prior Knowledge and Reasoning
1994
ABSTRACT Learning and reasoning are both aspects of what is considered to be intelligence. Their studies within AI have been separated historically, learning being the topic of neural networks and machine learning, and reasoning falling under classical (or symbolic) AI. However, learning and reasoning share many interdependencies, and the integration of the two may lead to more powerful models.
A Practical Approach for Knowledge-Driven Constructive Induction
Citeseer
Learning problems can be difficult for many reasons; one of them is an inadequate representation space or description language. Features can be considered a representational language; when this language contains more features than necessary, subset selection helps ...
A new approach for induction: From a nonaxiomatic logical point of view
Philosophy, Logic, and Artificial Intelligence, 1999
Non-Axiomatic Reasoning System (NARS) is designed to be a general-purpose intelligent reasoning system, which is adaptive and works under insufficient knowledge and resources. This paper focuses on the components of NARS that contribute to the system's induction capacity, and shows how the traditional problems in induction are addressed by the system. The NARS approach to induction uses a term-oriented formal language with an experience-grounded semantics that consistently interprets various types of uncertainty. An induction rule generates conclusions from common instances of terms, and a revision rule combines evidence from different sources. In NARS, induction and other types of inference, such as deduction and abduction, are based on the same semantic foundation, and they cooperate in the inference activities of the system. The system's control mechanism makes knowledge-driven, context-dependent inference possible.
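The experience-grounded truth values and the revision rule mentioned in this abstract can be sketched with the standard NAL definitions: frequency f = w+/w and confidence c = w/(w + k), where w+ and w- are positive and negative evidence counts and k is the evidential horizon; revision pools evidence from independent sources. The sketch below assumes these textbook formulas and omits the induction rule's exact truth function; the helper names and example numbers are illustrative only.

    # Hedged sketch of NAL-style experience-grounded truth values, assuming the
    # standard definitions f = w+/w and c = w/(w + k) with evidential horizon k.
    # The exact truth function of the NAL induction rule is not reproduced here.

    K = 1.0  # evidential horizon (a common default in NAL descriptions)

    def truth_from_evidence(w_plus, w_minus, k=K):
        """Convert positive/negative evidence counts into (frequency, confidence)."""
        w = w_plus + w_minus
        return (w_plus / w if w > 0 else 0.5, w / (w + k))

    def evidence_from_truth(f, c, k=K):
        """Inverse mapping, so that revision can pool the underlying evidence."""
        w = k * c / (1.0 - c)
        return (f * w, (1.0 - f) * w)

    def revise(t1, t2, k=K):
        """Revision rule: combine two judgments about the same statement coming
        from independent bodies of evidence by adding their evidence counts."""
        w1p, w1m = evidence_from_truth(*t1, k)
        w2p, w2m = evidence_from_truth(*t2, k)
        return truth_from_evidence(w1p + w2p, w1m + w2m, k)

    # two independent observations supporting "ravens are black"
    t1 = truth_from_evidence(4, 1)   # f = 0.80, c = 5/6
    t2 = truth_from_evidence(9, 1)   # f = 0.90, c = 10/11
    print(revise(t1, t2))            # pooled evidence: f = 13/15, c = 15/16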
The utility of knowledge in inductive learning
Machine Learning, 1992
In this paper, we demonstrate how different forms of background knowledge can be integrated with an inductive method for generating function-free Horn clause rules. Furthermore, we evaluate, both theoretically and empirically, the effect that these forms of knowledge have on the cost and accuracy of learning. Lastly, we demonstrate that a hybrid explanation-based and inductive learning method can advantageously use an approximate domain theory, even when this theory is incorrect and incomplete.
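As an illustration of the function-free Horn clause rules referred to above, the sketch below checks how one such rule covers labelled examples given a set of ground background facts. The rule, predicates, and coverage test are assumptions chosen for illustration, not the paper's actual learner.

    # Illustrative only: one function-free Horn clause rule evaluated over ground
    # facts, in the spirit of the relational rules discussed in the abstract.
    # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    parent_facts = {("ann", "bob"), ("bob", "cid"), ("ann", "eve"), ("eve", "fay")}
    constants = {c for pair in parent_facts for c in pair}

    def rule_covers(x, z):
        """True if grandparent(x, z) follows from the rule body for some binding of Y."""
        return any((x, y) in parent_facts and (y, z) in parent_facts for y in constants)

    positives = {("ann", "cid"), ("ann", "fay")}   # labelled grandparent examples
    negatives = {("bob", "ann"), ("cid", "fay")}

    print({e for e in positives if rule_covers(*e)})   # both positives covered
    print({e for e in negatives if rule_covers(*e)})   # no negatives covered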
A variety of AI induction systems are being successfully used for knowledge acquisition. Key features for the success of these systems are the methods and the man-machine environment they employ to achieve the two key tasks of coercing prior, subjective information out of an expert (in order to augment the knowledge implicit in the training data), and of gaining the expert's acceptance of the knowledge induced. These features are in their infancy in current commercially available tools and principles are just beginning to emerge from research. We refer to the generic approach as interactive induction. In this paper we examine some case studies of the phenomenon and then discuss the software features and induction theory that are required to support it.
One-Shot Induction of Generalized Logical Concepts via Human Guidance
2019
We consider the problem of learning generalized first-order representations of concepts from a single example. To address this challenging problem, we augment an inductive logic programming learner with two novel algorithmic contributions. First, we define a distance measure between candidate concept representations that improves the efficiency of the search for the target concept and its generalization. Second, we leverage richer human inputs in the form of advice to improve the sample-efficiency of learning. We prove that the proposed distance measure is semantically valid and use that to derive a PAC bound. Our experimental analysis on diverse concept learning tasks demonstrates both the effectiveness and efficiency of the proposed approach over a first-order concept learner using only examples.
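The role the abstract gives to a distance measure, ordering candidate hypotheses so the search reaches the target concept with fewer refinements, can be sketched generically. Everything below (the hypothesis encoding, the refinement operator, and the symmetric-difference distance) is a hypothetical stand-in for the paper's first-order representations and its semantically valid distance.

    # Generic, hedged sketch: best-first search over candidate hypotheses ordered
    # by a (hypothetical) estimate of distance to the target concept. The paper's
    # system defines its distance over first-order concept representations instead.

    import heapq

    def best_first_induction(start, refine, distance, is_target, max_steps=1000):
        """Always refine the candidate whose estimated distance to the target is smallest."""
        frontier = [(distance(start), 0, start)]
        seen, tiebreak = {start}, 0
        for _ in range(max_steps):
            if not frontier:
                break
            _, _, h = heapq.heappop(frontier)
            if is_target(h):
                return h
            for h2 in refine(h):
                if h2 not in seen:
                    seen.add(h2)
                    tiebreak += 1
                    heapq.heappush(frontier, (distance(h2), tiebreak, h2))
        return None

    # Toy stand-in: hypotheses are sets of literals; the target is a fixed set.
    target = frozenset({"red", "round", "small"})
    refine = lambda h: [h | {lit} for lit in ("red", "round", "small", "heavy")]
    distance = lambda h: len(target ^ h)              # symmetric-difference size
    print(best_first_induction(frozenset(), refine, distance, lambda h: h == target))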
Inductive reasoning: From Carnap to cognitive science
The dominating models of inductive processes have been based on symbolic representations of knowledge. This was the explicit assumption of the Vienna school and most of Carnap's work on induction follows this principle. However, it is becoming increasingly clear that most cognitive phenomena in humans and animals are based on non-symbolic representations. As an alternative to symbolic representations of information and knowledge, this paper investigates a theory of "conceptual spaces." Such spaces consist of a number of "quality dimensions" which often are derived from perceptual mechanisms. They can be used to describe cognitive processes like concept formation and induction. A 'geometric' model of concept formation is proposed and its relation to prototype theory is discussed. It is shown that Carnap in his later writings was moving towards this approach to induction. It is also be argued that conceptual spaces are suitable for representing the results of information processing in connectionist systems.