From Logic to Neural Networks and Back
Neural networks and logical reasoning systems. A translation table
1998
A correspondence is established between the elements of logical reasoning systems (knowledge bases, rules, inference and queries) and the hardware and dynamical operations of neural networks. The correspondence is framed as a general translation dictionary which, hopefully, will make it possible to go back and forth between symbolic and network formulations, a desirable step in learning-oriented systems and multicomputer networks.
Making Logic Learnable With Neural Networks
2020
While neural networks are good at learning unspecified functions from training samples, they cannot be directly implemented in hardware and are often neither interpretable nor formally verifiable. On the other hand, logic circuits are implementable, verifiable, and interpretable, but are not able to learn from training data in a generalizable way. We propose a novel logic learning pipeline that combines the advantages of neural networks and logic circuits. Our pipeline first trains a neural network on a classification task, then translates it first to random forests and then to AND-Inverter logic. We show that our pipeline maintains greater accuracy than naive translations to logic, and that it minimizes the logic so that it is more interpretable and has a lower hardware cost. We show the utility of our pipeline on a network trained on biomedical data. This approach could be applied to patient care to provide risk stratification and guide clinical decision-making.
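To make the translation step concrete, here is a minimal sketch assuming binary input features and a single scikit-learn decision tree as a stand-in for the distilled random forest; the function names and the 0.5-threshold convention are illustrative assumptions, not the paper's code. It flattens a tree into a disjunction of conjunctions of literals, the kind of two-level form an AND-Inverter-graph minimizer could then compress.

```python
# Illustrative sketch only: flatten a decision tree over binary features into
# a DNF (one conjunction of literals per root-to-leaf path predicting class 1).
from sklearn.tree import DecisionTreeClassifier
import numpy as np

def tree_to_dnf(tree, feature_names):
    """Return a list of conjunctions (lists of literals), one per positive leaf."""
    t = tree.tree_
    clauses = []

    def walk(node, path):
        if t.children_left[node] == -1:          # leaf node
            if np.argmax(t.value[node]) == 1:    # majority class is 1
                clauses.append(list(path))
            return
        name = feature_names[t.feature[node]]
        # Binary features split at 0.5: the left branch means "feature is 0".
        walk(t.children_left[node], path + [f"NOT {name}"])
        walk(t.children_right[node], path + [name])

    walk(0, [])
    return clauses

# Toy example: learn XOR over two binary features, then print its DNF.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
clf = DecisionTreeClassifier().fit(X, y)
for clause in tree_to_dnf(clf, ["a", "b"]):
    print(" AND ".join(clause))       # e.g. "NOT a AND b" and "a AND NOT b"
```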
Learning Algorithms via Neural Logic Networks
2019
We propose a novel learning paradigm for Deep Neural Networks (DNNs) based on Boolean logic algebra. We first present the basic differentiable operators of a Boolean system, such as conjunction, disjunction and exclusive-OR, and show how these elementary operators can be combined in a simple and meaningful way to form Neural Logic Networks (NLNs). We examine the effectiveness of the proposed NLN framework in learning Boolean functions and discrete-algorithmic tasks. We demonstrate that, in contrast to the implicit learning of the MLP approach, the proposed neural logic networks learn logical functions explicitly, in a form that can be verified and interpreted by humans. In particular, we propose a new framework for learning inductive logic programming (ILP) problems by exploiting the explicit representational power of NLNs. We show that the proposed neural ILP solver is capable of feats such as predicate invention and recursion and can outperform the current state-of-the-art neural ILP solvers...
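As a rough illustration of what differentiable Boolean operators can look like, the sketch below uses fixed product/co-product surrogates for AND, OR and NOT and composes them into XOR; the actual NLN operators are parameterized and trainable, so this shows only the underlying idea.

```python
# Minimal differentiable surrogates for Boolean connectives: exact on {0,1},
# smooth in between, so gradients can flow through them.
import numpy as np

def soft_and(x, y):   # product t-norm
    return x * y

def soft_or(x, y):    # probabilistic sum: 1 - (1-x)(1-y)
    return x + y - x * y

def soft_not(x):
    return 1.0 - x

def soft_xor(x, y):   # (x OR y) AND NOT (x AND y)
    return soft_and(soft_or(x, y), soft_not(soft_and(x, y)))

a = np.array([0.0, 0.0, 1.0, 1.0])
b = np.array([0.0, 1.0, 0.0, 1.0])
print(soft_xor(a, b))   # -> [0. 1. 1. 0.] on crisp inputs
```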
Artificial Neural Networks that Learn to Satisfy Logic Constraints
2017
Logic-based problems such as planning, theorem proving, or puzzles typically involve combinatorial search and structured knowledge representation. Artificial neural networks are very successful statistical learners; however, they have long been criticized for their weaknesses in representing and processing the complex structured knowledge that is crucial for combinatorial search and symbol manipulation. Two neural architectures are presented which can encode structured relational knowledge in neural activation and store bounded First Order Logic constraints in connection weights. Both architectures learn to search for a solution that satisfies the constraints. Learning is done by unsupervised practicing on problem instances from the same domain, in a way that improves the network's solving speed. No teacher exists to provide answers for the problem instances of the training and test sets. However, the domain constraints are provided as prior knowledge to a loss function th...
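A hedged sketch of how a logic constraint can enter a loss function as prior knowledge: the rule (A AND B) -> C is softened with a product t-norm and a soft implication, and its degree of violation becomes a differentiable penalty. The operator choices here are assumptions made for illustration, not the architectures described in the paper.

```python
# Turn a propositional rule into a differentiable penalty term.
def implication(p, q):          # Reichenbach-style soft implication: 1 - p + p*q
    return 1.0 - p + p * q

def constraint_loss(a, b, c):   # a, b, c are network outputs in [0, 1]
    rule_truth = implication(a * b, c)   # soft truth of (A AND B) -> C
    return 1.0 - rule_truth              # zero when the rule is fully satisfied

print(constraint_loss(0.9, 0.8, 0.10))   # strong violation  -> large penalty
print(constraint_loss(0.9, 0.8, 0.95))   # nearly satisfied  -> small penalty
```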
A Neural Approach to Extended Logic Programs
International Work-Conference on Artificial and Natural Neural Networks, 2003
A neural net based development of multi-adjoint logic programming is presented. Transformation rules carry programs into neural networks, where truth-values of rules relate to the outputs of neurons, truth-values of facts represent the input, and network functions are determined by a set of general operators; the output of the net gives the values of the propositional variables under its minimal model. Some experimental...
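The fixed-point flavour of this construction can be illustrated with a toy sketch in which each rule behaves like a neuron aggregating the truth values of its body atoms; the program, weights and product aggregation below are assumptions chosen for illustration, not the paper's operators.

```python
# Iterate an "immediate consequences" pass to a fixed point; the resulting
# values play the role of the net's outputs under the minimal model.
# Program: p <- with weight 0.8 (a fact);  q <- p with weight 0.9.
rules = {
    "p": [([], 0.8)],          # (body atoms, rule weight)
    "q": [(["p"], 0.9)],
}

values = {atom: 0.0 for atom in rules}
for _ in range(20):
    new = {}
    for head, bodies in rules.items():
        candidates = []
        for body, weight in bodies:
            body_val = 1.0
            for atom in body:
                body_val *= values[atom]   # product t-norm over the body
            candidates.append(weight * body_val)
        new[head] = max(candidates)        # best supporting rule
    if new == values:                      # fixed point reached
        break
    values = new

print(values)   # -> p: 0.8, q: ~0.72
```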
Rewriting Logic Using Strategies for Neural Networks: An Implementation in Maude
Advances in Soft Computing, 2009
A general neural network model for rewriting logic is proposed. This model, in the form of a feedforward multilayer net, is represented in rewriting logic along the lines of several models of parallelism and concurrency that have already been mapped into it. By combining both a right choice for the representation operations and the availability of strategies to guide the application of our rules, a new approach for the classical backpropagation learning algorithm is obtained. An example, the diagnosis of glaucoma by using campimetric fields and nerve fibres of the retina, is presented to illustrate the performance and applicability of the proposed model.
First-order logic learning in Artificial Neural Networks
The 2010 International Joint Conference on Neural Networks (IJCNN), 2010
Artificial Neural Networks have previously been applied in neuro-symbolic learning to learn ground logic program rules. However, there are few results on learning relations using neuro-symbolic learning. This paper presents the system PAN, which can learn relations defined by a logic program clause. The inputs to PAN are one or more atoms, representing the conditions of a logic rule, and the output is the conclusion of the rule. The symbolic inputs may include functional terms of arbitrary depth and arity, and the output may include terms constructed from the input functors. Each symbolic input is encoded as an integer using an invertible encoding function, which is applied in reverse to extract the output terms. The main advance of this system is a convention that allows the construction of Artificial Neural Networks able to learn rules with the same power of expression as first-order definite clauses. The learning process is insensitive to noisy data thanks to the use of Artificial Neural Networks. The system is tested on two domains.
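The idea of an invertible term-to-integer encoding can be made concrete with a Cantor pairing function; PAN's actual encoding is not reproduced here, and the functor id used below is an arbitrary assumption.

```python
# Cantor pairing: a bijection N x N -> N, so encoding is exactly invertible.
import math

def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

# Encode the term f(2, 3) as pair(functor_id, pair(arg1, arg2)) and decode it.
code = pair(7, pair(2, 3))            # functor_id 7 is an arbitrary choice
functor_id, args = unpair(code)
print(functor_id, unpair(args))       # -> 7 (2, 3)
```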
Neural networks and rational Lukasiewicz logic
2002
We describe a correspondence between rational Lukasiewicz formulas and neural networks in which the activation function is the truncated identity and synaptic weights are rational numbers. On one hand, having a logical representation (in a given logic) of neural networks could widen the interpretability, amalgamability and reuse of these objects. On the other hand, neural networks could be used to learn formulas from data and as circuital counterparts of (functions represented by) formulas.
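The correspondence is easy to see in code: with the truncated identity as activation, single units with small rational weights compute the Lukasiewicz connectives exactly on [0, 1]. The sketch below shows only this textbook identity, not the paper's general translation.

```python
# Lukasiewicz connectives realized with the truncated identity activation.
def rho(x):                       # truncated identity: min(1, max(0, x))
    return min(1.0, max(0.0, x))

def luk_and(x, y):                # strong conjunction: max(0, x + y - 1)
    return rho(x + y - 1.0)

def luk_or(x, y):                 # strong disjunction: min(1, x + y)
    return rho(x + y)

def luk_not(x):
    return rho(1.0 - x)

def luk_implies(x, y):            # min(1, 1 - x + y)
    return rho(1.0 - x + y)

print(luk_and(0.7, 0.6), luk_or(0.7, 0.6), luk_implies(0.7, 0.6))
# -> approximately 0.3, 1.0, 0.9 (up to floating-point rounding)
```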
A Neural Network Performing Boolean Logic Operations
2003
A neural network, composed of neurons of two types, able to perform Boolean operations is presented. Based on a recursive definition of "basic Boolean operations", the proposed model can, for any fixed number of input variables, either realize all of them in parallel or compute any chosen one of them alone. Moreover, possibilities of combining a few such networks into more complex structures in order to perform superpositions of the basic operations are discussed. The general concept of a neural implementation of First Order logic (FO) based on the presented network is also introduced.
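For reference, the familiar threshold-gate construction underlying such networks looks as follows; the paper's two neuron types and recursive scheme are not reproduced, and the weights and biases below are the standard textbook choices.

```python
# Single Heaviside threshold units computing AND, OR and NOT.
def step(x):
    return 1 if x >= 0 else 0

def gate(weights, bias):
    return lambda *inputs: step(sum(w * i for w, i in zip(weights, inputs)) + bias)

AND = gate([1, 1], -1.5)   # fires only when both inputs are 1
OR  = gate([1, 1], -0.5)   # fires when at least one input is 1
NOT = gate([-1], 0.5)      # inverts its single input

print([AND(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 0, 0, 1]
print([OR(a, b) for a in (0, 1) for b in (0, 1)])    # [0, 1, 1, 1]
print([NOT(a) for a in (0, 1)])                      # [1, 0]
```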
Semantic Interpretation of Deep Neural Networks Based on Continuous Logic
ArXiv, 2019
Combining deep neural networks with the concepts of continuous logic is desirable for reducing the uninterpretability of neural models. Nilpotent logical systems offer an appropriate mathematical framework for obtaining continuous-logic-based neural networks (CL neural networks). We suggest using a differentiable approximation of the cutting function in the nodes of the input layer as well as in the logical operators in the hidden layers. The first experimental results point towards a promising new approach to machine learning.
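A small sketch of the idea, assuming the hard cutting function [x] = min(1, max(0, x)) and a softplus-based smooth surrogate; the exact differentiable approximation used by the authors may differ.

```python
# Hard cutting function vs. a smooth surrogate suitable for gradient descent.
import numpy as np

def cut(x):                                   # hard cutting function [x]
    return np.clip(x, 0.0, 1.0)

def soft_cut(x, beta=8.0):                    # smooth surrogate; sharper as beta grows
    return (np.log1p(np.exp(beta * x)) - np.log1p(np.exp(beta * (x - 1.0)))) / beta

xs = np.linspace(-0.5, 1.5, 5)
print(cut(xs))                                # [0.  0.  0.5 1.  1. ]
print(np.round(soft_cut(xs), 3))              # close to the hard cut away from 0 and 1
```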
Developments in deep learning have seen the use of layer-wise unsupervised learning combined with supervised learning for fine-tuning. With this layer-wise approach, a deep network can be seen as a more modular system which lends itself well to learning representations. In this paper we investigate whether such modularity can be useful for the insertion of background knowledge into deep networks, improving learning performance when such knowledge is available, and for the extraction of knowledge from trained deep networks, offering a better understanding of the representations learned by such networks. To this end we use a simple symbolic language, a set of logical rules which we call confidence rules, and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layer-wise networks (or restricted Boltzmann machines). We also show that layer-wise extraction can produce an improvement in the accuracy of Deep Belief Networks. Furthermore, the proposed symbolic characterisation of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. With the use of this method, a deep neural-symbolic system is proposed and evaluated, with experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.
Computability of Logical Neural Networks
Journal of Intelligent Systems, 1992
The performance of a learning algorithm is measured by looking at the structure achieved through such learning processes and comparing the desired function f to the function computed by the network acting as a classical automaton. It is important to characterise the functions which can be computed by the network in this fixed structure, since if there is no configuration which allows the computation of f, then the network cannot learn to compute f. We studied the computability of networks of PLNs (Probabilistic Logic Nodes (Aleksander, 1988)). We suggested a new method of recognition based on stored probabilities with PLN networks. This new method increases the computability power of such networks beyond that of finite state acceptors. We proved that the computability of a PLN network is identical to the computability of a probabilistic automaton (Rabin, 1963). This implies that it is possible to recognise more than finite state languages with such machines.
Neural Network Models of Conditionals: An Introduction
LogKCA-07, Proceedings of the First ILCLI International Workshop on Logic and Philosophy of Knowledge, Communication and Action (ed. by X. Arrazola, J. M. Larrazabal et al.), University of the Basque Country Press
This "lecture notes style" article gives a brief survey of neural network models of conditionals. After short introductions into the studies of neural networks and conditionals, we turn to the notion of an interpreted dynamical system as a unifying concept in the logical investigation of dynamic systems in general, and of neural networks in particular. We explain how conditionals get represented by interpreted dynamical systems, which logical systems these conditionals obey, and what the main open problems in this area are.
Towards generalizable neuro-symbolic reasoners
Doctor of Philosophy dissertation, Department of Computer Science. Symbolic knowledge representation and reasoning and deep learning are fundamentally different approaches to artificial intelligence with complementary capabilities. The former are transparent and data-efficient, but they are sensitive to noise and cannot be applied to non-symbolic domains where the data is ambiguous. The latter can learn complex tasks from examples and are robust to noise, but they are black boxes, require large amounts of (not necessarily easily obtained) data, and are slow to learn and prone to adversarial examples. Either paradigm excels at certain types of problems where the other paradigm performs poorly. In order to develop stronger AI systems, integrated neuro-symbolic systems that combine artificial neural networks and symbolic reasoning are being sought. In this context, one of the fundamental open problems is how to perform logic-based deductive reasoning over knowledge bases by means of...
First-Order Logical Neural Networks
Fourth International Conference on Hybrid Intelligent Systems (HIS'04)
Inductive Logic Programming (ILP) is a well known machine learning technique for learning concepts from relational data. Nevertheless, ILP systems are not robust enough to noisy or unseen data in real world domains. Furthermore, in multi-class problems, if an example does not match any learned rule, it cannot be classified. This paper presents a novel hybrid learning method that alleviates this restriction by enabling Neural Networks to handle first-order logic programs directly. The proposed method, called First-Order Logical Neural Network (FOLNN), is based on feedforward neural networks and integrates inductive learning from examples and background knowledge. We also propose a method for determining the appropriate variable substitution in FOLNN learning by using Multiple-Instance Learning (MIL). In the experiments, the proposed method has been evaluated on two first-order learning problems, Finite Element Mesh Design and Mutagenesis, and compared with the state-of-the-art PROGOL system. The experimental results show that the proposed method performs better than PROGOL.
In this work we study the representation of the computational model of artificial neural networks in rewriting logic, along the lines of several models of parallelism and concurrency that have already been mapped into it. We show how crucial the right choice of representation operations is, together with the availability of strategies to guide the application of our rules. Finally, we also apply our specification to data used in the diagnosis of glaucoma.
Artificial Intelligence, 1995
The paper presents a connectionist framework that is capable of representing and learning propositional knowledge. An extended version of propositional calculus is developed and is demonstrated to be useful for nonmonotonic reasoning, dealing with conflicting beliefs and for coping with inconsistency generated by unreliable knowledge sources. Formulas of the extended calculus are proved to be equivalent in a very strong sense to symmetric networks (like Hopfield networks and Boltzmann machines), and efficient algorithms are given for translating back and forth between the two forms of knowledge representation. A fast learning procedure is presented that allows symmetric networks to learn representations of unknown logic formulas by looking at examples. A connectionist inference engine is then sketched whose knowledge is either compiled from a symbolic representation or learned inductively from training examples. Experiments with large scale randomly generated formulas suggest that the parallel local search that is executed by the networks is extremely fast on average. Finally, it is shown that the extended logic can be used as a high-level specification language for connectionist networks, into which several recent symbolic systems may be mapped. The paper demonstrates how a rigorous bridge can be constructed that ties together the (sometimes opposing) connectionist and symbolic approaches.
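A minimal sketch of the clause-to-energy view underlying such symmetric networks, assuming a penalty of 1 per violated clause and a greedy bit-flip search standing in for the network's parallel local search; the encoding and search below are illustrative assumptions, not the paper's algorithms.

```python
# Each clause becomes a penalty that is zero exactly when it is satisfied;
# the total energy is the sum of penalties, and local search seeks energy 0.
import random

# (x1 OR NOT x2) AND (x2 OR x3), as lists of (variable index, sign) literals.
clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]

def energy(assign):
    return sum(
        all(assign[i] != s for i, s in clause)   # 1 if every literal is false
        for clause in clauses
    )

assign = [random.choice([True, False]) for _ in range(3)]
for _ in range(50):                               # greedy bit-flip local search
    if energy(assign) == 0:
        break
    i = random.randrange(3)
    flipped = assign[:]
    flipped[i] = not flipped[i]
    if energy(flipped) <= energy(assign):         # accept non-worsening flips
        assign = flipped

print(assign, "energy =", energy(assign))
```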