Integrating Symbolic Reasoning with Neurally Represented Background Knowledge
Related papers
Minds and Machines, 2017
This article aims to achieve two goals: to show that probability is not the only way of dealing with uncertainty (and, moreover, that there are kinds of uncertainty which, for principled reasons, cannot be addressed with probabilistic means); and to provide evidence that logic-based methods can support reasoning with uncertainty well. For the latter claim, two paradigmatic examples are presented: Logic Programming with Kleene semantics, for modelling reasoning from the information in a discourse to an interpretation of the state of affairs of the intended model, and a neural-symbolic implementation of Input/Output logic for dealing with uncertainty in dynamic normative contexts.
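The first of these examples rests on strong Kleene three-valued semantics, in which a statement can be true, false, or unknown. As a rough, self-contained illustration of the connectives involved (the representation and function names below are ours, not the article's), the following sketch shows how unknowns propagate through negation, conjunction, and disjunction:

```python
# A minimal sketch of strong Kleene three-valued connectives, as used in
# logic programming semantics for reasoning from incomplete discourse
# information. Truth values: True, False, and None (unknown).
# All names here are illustrative; the article does not prescribe this API.

def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False          # a definite falsehood dominates
    if a is True and b is True:
        return True
    return None               # otherwise the value stays unknown

def k_or(a, b):
    if a is True or b is True:
        return True           # a definite truth dominates
    if a is False and b is False:
        return False
    return None

if __name__ == "__main__":
    print(k_and(None, False))  # -> False: "unknown and false" is false
    print(k_or(None, False))   # -> None: "unknown or false" stays unknown
```

The point is that a definite False short-circuits conjunction and a definite True short-circuits disjunction, while everything else remains unknown, which is what lets an interpretation stay noncommittal wherever the discourse is silent.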
BRAINN: A Connectionist Approach to Symbolic Reasoning
1998
Hybrid connectionist-symbolic systems have been the subject of much recent research in AI. By focusing on the implementation of high-level human cognitive processes (e.g., rule-based inference) on low-level, brain-like structures (e.g., neural networks), hybrid systems inherit both the efficiency of connectionism and the comprehensibility of symbolism. This paper presents the Basic Reasoning Applicator Implemented as a Neural Network (BRAINN).
Towards generalizable neuro-symbolic reasoners
Doctor of Philosophy, Department of Computer Science. Symbolic knowledge representation and reasoning and deep learning are fundamentally different approaches to artificial intelligence with complementary capabilities. The former are transparent and data-efficient, but they are sensitive to noise and cannot be applied to non-symbolic domains where the data is ambiguous. The latter can learn complex tasks from examples and are robust to noise, but they are black boxes, require large amounts of data that are not necessarily easy to obtain, learn slowly, and are prone to adversarial examples. Each paradigm excels at certain types of problems where the other performs poorly. In order to develop stronger AI systems, integrated neuro-symbolic systems that combine artificial neural networks and symbolic reasoning are being sought. In this context, one of the fundamental open problems is how to perform logic-based deductive reasoning over knowledge bases by means of...
Abductive reasoning in neural-symbolic systems
Topoi, 2007
Abduction is or subsumes a process of inference. It entertains possible hypotheses and chooses some of them for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to derive a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. In contrast to previous work in the area, our aim is to promote the integration of reasoning and learning in such a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the systems with rigour and explanation capability, facilitating the interaction with the outside world. Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
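At the symbolic level, the task both architectures address is the computation of alternative abductive explanations over Horn clause background knowledge. The sketch below is a naive, purely symbolic stand-in for that task, enumerating subset-minimal sets of abducibles that entail an observation; it does not reproduce the paper's Connectionist Modal Logic translation or its bottom-up neural architecture, and the example rules are our own:

```python
# Naive propositional abduction over Horn clauses: find subset-minimal sets
# of abducible atoms that, added to the background rules, entail the
# observation. This only illustrates the logical task the paper's neural
# architectures compute, not the network-based procedures themselves.
from itertools import combinations

def closure(facts, rules):
    """Forward-chain Horn rules (head, body) until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def abduce(observation, rules, abducibles):
    """Return the subset-minimal sets of abducibles explaining the observation."""
    explanations = []
    for size in range(len(abducibles) + 1):
        for hypo in combinations(sorted(abducibles), size):
            if observation in closure(hypo, rules):
                # keep only subset-minimal explanations
                if not any(set(e) <= set(hypo) for e in explanations):
                    explanations.append(hypo)
    return explanations

if __name__ == "__main__":
    # wet_grass <- rain ; wet_grass <- sprinkler
    rules = [("wet_grass", ("rain",)), ("wet_grass", ("sprinkler",))]
    print(abduce("wet_grass", rules, {"rain", "sprinkler"}))
    # -> [('rain',), ('sprinkler',)]  two alternative minimal explanations
```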
Neural-Symbolic Learning and Reasoning: Contributions and Challenges
Neural-symbolic computation aims at integrating robust connectionist learning algorithms with sound symbolic reasoning. The recent impact of neural learning, in particular of deep networks, has led to the creation of new representations that have, so far, not really been used for reasoning. Results on neural-symbolic computation have been shown to offer powerful alternatives for knowledge representation, learning and inference in neural computation. This paper presents key challenges and contributions of neural-symbolic computation to this area.
Neurules: Integrating Symbolic Rules and Neurocomputing
Advances in Informatics, 2000
In this paper, a hybrid knowledge representation formalism that integrates neurocomputing into the symbolic framework of production rules is presented. This is achieved by introducing neurules, a type of integrated rules. Each neurule is treated as an adaline unit whose weights are interpreted as significance factors. Each significance factor represents the significance of the associated condition in drawing the conclusion. A rule is fired when the corresponding adaline output becomes active. In this way, the naturalness and modularity of production rules are retained, and imprecise relations between the conditions and the conclusion of a rule can be represented. Additionally, a number of heuristics used in the inference procedure increase its efficiency.
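As a minimal sketch of the idea (the activation rule, the encoding of truth values, and the factor values below are illustrative assumptions, not taken from the paper), a neurule can be viewed as an adaline-like unit that fires when the weighted sum of its condition truth values plus a bias is positive:

```python
# Minimal sketch of a neurule viewed as an adaline-like unit: each condition
# carries a significance factor (its weight), conditions are encoded as
# +1 (true), -1 (false) or 0 (unknown), and the rule "fires" when the
# weighted sum plus the bias is positive. Factor values are illustrative only.
from dataclasses import dataclass

@dataclass
class Neurule:
    conclusion: str
    bias: float
    factors: dict  # condition name -> significance factor

    def fires(self, truth: dict) -> bool:
        """truth maps condition names to +1 (true), -1 (false) or 0 (unknown)."""
        s = self.bias + sum(w * truth.get(cond, 0)
                            for cond, w in self.factors.items())
        return s > 0

rule = Neurule(conclusion="grant_loan",
               bias=-2.0,
               factors={"stable_income": 3.0, "good_history": 2.5,
                        "existing_debt": -4.0})

print(rule.fires({"stable_income": 1, "good_history": 1, "existing_debt": -1}))  # True
print(rule.fires({"stable_income": 1, "good_history": 0, "existing_debt": 1}))   # False
```

The significance factors let one condition matter more than another without abandoning the readable "if conditions then conclusion" shape of a production rule.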
Bayesian case-based reasoning with neural networks
IEEE International Conference on Neural Networks, 1993
Given a problem, a case-based reasoning (CBR) system will search its case memory and use the stored cases to find the solution, possibly modifying retrieved cases to adapt to the required input specifications. In this paper we introduce a neural network architecture for efficient case-based reasoning. We show how Pearl's probability propagation algorithm [12] can be implemented as a feedforward neural network and adapted for CBR. In our approach the efficient indexing problem of CBR is naturally handled by the parallel architecture, and heuristic matching is replaced by a probability metric. This allows our CBR system to perform theoretically sound Bayesian reasoning. We also show how the probability propagation actually offers a solution to the adaptation problem in a very natural way.
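The following sketch is a simplified stand-in for the matching step: stored cases are scored by a naive Bayes posterior given the observed features, illustrating the replacement of heuristic similarity by a probability metric. It does not reproduce the paper's feedforward implementation of Pearl's propagation algorithm, and all case and feature names are hypothetical:

```python
# Simplified probabilistic case matching: score stored cases by a naive
# Bayes posterior over the observed features rather than an ad-hoc
# similarity heuristic. This is an illustration only; the paper itself
# implements Pearl's propagation algorithm as a feedforward network.
import math

def posterior_over_cases(observation, cases, prior=None, smoothing=1e-3):
    """cases: {case_name: {feature: P(feature=True | case)}}.
    observation: {feature: bool}. Returns normalized P(case | observation)."""
    scores = {}
    for name, model in cases.items():
        logp = math.log(prior[name] if prior else 1.0 / len(cases))
        for feat, value in observation.items():
            p_true = model.get(feat, 0.5)          # 0.5 if the case says nothing
            p = p_true if value else 1.0 - p_true
            logp += math.log(max(p, smoothing))    # smoothing avoids log(0)
        scores[name] = logp
    z = max(scores.values())
    unnorm = {k: math.exp(v - z) for k, v in scores.items()}
    total = sum(unnorm.values())
    return {k: v / total for k, v in unnorm.items()}

cases = {"pump_failure": {"vibration": 0.9, "overheating": 0.7},
         "sensor_fault": {"vibration": 0.2, "overheating": 0.1}}
print(posterior_over_cases({"vibration": True, "overheating": False}, cases))
```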
A neural-symbolic cognitive agent for online learning and reasoning
2011
In real-world applications, the effective integration of learning and reasoning in a cognitive agent model is a difficult task. However, such integration may lead to a better understanding, use and construction of more realistic models. Unfortunately, existing models are either oversimplified or require much processing time, which makes them unsuitable for online learning and reasoning. Currently, controlled environments like training simulators do not effectively integrate learning and reasoning. In particular, higher-order concepts and cognitive abilities have many unknown temporal relations with the data, making it impossible to represent such relationships by hand. We introduce a novel cognitive agent model and architecture for online learning and reasoning that seeks to effectively represent, learn and reason in complex training environments. The agent architecture of the model combines neural learning with symbolic knowledge representation. It is capable of learning new hypotheses from observed data and of inferring new beliefs based on these hypotheses. Furthermore, it deals with uncertainty and errors in the data using a Bayesian inference model. The validation of the model on real-time simulations and the results presented here indicate the promise of the approach when performing online learning and reasoning in real-world scenarios, with possible applications in a range of areas.
Text-based Reasoning with Symbolic Memory Model
In this paper, a sentence-based reasoning model is introduced for the prediction of new individual activities by means of memory reconsolidation, which enables the integration of incoming evidence with related past experience. Both the evidence and previous experience are stored as memory in extended semantic networks (ESN). They are then processed in Bayesian networks for inferring new and unified memory. Symbolic approaches, which focus on the structural aspect of language, ensure that the key information of words is extracted correctly according to the context. Effective mechanisms for information propagation and for Bayesian network (BN) construction and combination are adopted to make inference reasonable and adaptive to different scenarios within the topic domain. Our model is compared to other reasoning systems through experiments. The results show that our model can both deduce more implicit information from texts and avoid some incorrect reasoning caused by confusing data in the knowledge base.
Natlog: a Lightweight Logic Programming Language with a Neuro-symbolic Touch
Electronic Proceedings in Theoretical Computer Science, 2021
We introduce Natlog, a lightweight Logic Programming language sharing Prolog's unification-driven execution model, but with a simplified syntax and semantics. Our proof-of-concept Natlog implementation is tightly embedded in the Python-based deep-learning ecosystem, with a focus on content-driven indexing of ground term datasets. By overriding our symbolic indexing algorithm, the same function can be delegated to a neural network that serves ground facts to Natlog's resolution engine. Our open-source implementation is available as a Python package at https://pypi.org/project/natlog/.
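To make the indexing idea concrete, the sketch below shows one plausible shape of a pluggable fact indexer: a symbolic index keyed on predicate and first argument, and a drop-in replacement whose candidate facts come from a learned scorer. This interface is hypothetical and is not Natlog's actual API; the real implementation lives in the linked package.

```python
# Hypothetical sketch of a pluggable ground-fact indexer: a symbolic index
# over (predicate, first-argument) pairs, and a same-interface replacement
# whose candidates come from a learned scorer. NOT Natlog's actual API;
# see https://pypi.org/project/natlog/ for the real implementation.

class SymbolicIndex:
    """Index ground facts by (predicate, first-argument) pairs."""
    def __init__(self, facts):
        self.buckets = {}
        for fact in facts:                      # fact = (pred, arg1, arg2, ...)
            self.buckets.setdefault(fact[:2], []).append(fact)

    def candidates(self, goal):
        """Return facts whose predicate and first argument match the goal."""
        return self.buckets.get(goal[:2], [])

class NeuralIndex:
    """Same interface, but candidate facts are ranked by a learned scorer."""
    def __init__(self, facts, scorer, k=3):
        self.facts, self.scorer, self.k = facts, scorer, k

    def candidates(self, goal):
        ranked = sorted(self.facts, key=lambda f: self.scorer(goal, f), reverse=True)
        return ranked[: self.k]

facts = [("parent", "ann", "bob"), ("parent", "bob", "cat"), ("likes", "ann", "tea")]
print(SymbolicIndex(facts).candidates(("parent", "ann", None)))

# The scorer could be a neural similarity model; here a trivial overlap count
# stands in for it, just to show that the resolution engine never needs to
# know which kind of index is serving it facts.
overlap = lambda g, f: sum(a == b for a, b in zip(g, f) if a is not None)
print(NeuralIndex(facts, overlap, k=2).candidates(("parent", "ann", None)))
```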