Applications of inductive logic programming
Related papers
Springer eBooks, 2011
Knowing the biological processes in which each gene or protein participates is essential for designing disease treatments, yet these annotations remain unknown for many genes and proteins. Since producing annotations from in-vivo experiments is costly, computational predictors are needed for different kinds of annotation, such as metabolic pathway, interaction network, protein family, tissue and disease. Biological data have an intrinsic relational structure: genes and proteins can be grouped by many criteria, which hinders the search for good hypotheses when an attribute-value representation is used. Hence, we propose the generic Modular Multi-Relational Framework (MMRF) to predict different kinds of gene and protein annotation using Relational Data Mining (RDM). The specific MMRF application annotating human proteins with diseases verifies that group knowledge (mainly protein-protein interaction pairs) improves prediction, notably doubling the area under the precision-recall curve.
Preface to special issue on Inductive Logic Programming, ILP 2017 and 2018
Machine Learning
Inductive Logic Programming (ILP) is a field at the intersection of Machine Learning and Logic Programming, based on logic as a uniform representation language for expressing examples, background knowledge and hypotheses. Thanks to the expressiveness of first-order logic, ILP has provided an excellent means for knowledge representation and learning in fields such as graph mining, multi-relational data mining and statistical relational learning, as well as in other logic-based, non-propositional knowledge representation frameworks.
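To make the uniform representation concrete, here is a minimal toy sketch using the classic family-relations example (all predicate names are illustrative, not drawn from any paper in this list): background knowledge, examples and a candidate hypothesis are all ordinary logic program clauses.

    % Background knowledge B: facts about a hypothetical family.
    parent(ann, mary).
    parent(ann, tom).
    female(ann).
    female(mary).

    % Examples, expressed in the same logical language:
    %   E+ : daughter(mary, ann).    (positive example)
    %   E- : daughter(tom, ann).     (negative example)

    % A hypothesis H that an ILP system could induce from B and E;
    % together with B it entails E+ but not E-:
    daughter(X, Y) :- parent(Y, X), female(X).

    % ?- daughter(mary, ann).   % succeeds
    % ?- daughter(tom, ann).    % fails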
Logic programs as a basis for machine learning
First-order predicate logic appears frequently in Artificial Intelligence. In learning programs, it is often the language used to describe concepts, rules, examples, events, and so on. This paper presents an overview of research on logic-related learning systems and describes those features of first-order logic that have made it such a useful tool. Two developments are of particular interest to us: the use of logic in what is now called "constructive induction", and the benefits that logic programming has contributed to machine learning.
An inductive logic programming framework to learn a concept from ambiguous examples
Lecture Notes in Computer Science, 1998
We address a learning problem with the following peculiarity: we search for characteristic features common to a learning set of objects related to a target concept. In particular, we address cases where object descriptions are ambiguous: each represents several incompatible realities. Ambiguity arises because each description contains only indirect information from which assumptions about the object can be derived. We suppose here that a set of constraints allows the identification of "coherent" sub-descriptions inside each object. We formally study this problem using an Inductive Logic Programming framework close to characteristic induction from interpretations. In particular, we exhibit conditions that allow a pruned search of the space of concepts. Additionally, we propose a method in which a set of hypothetical examples is explicitly calculated for each object prior to learning. The method is used, with promising results, to search for secondary substructures common to a set of RNA sequences.
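A hedged sketch of the setting (all predicate and object names invented here for illustration): each object carries several candidate interpretations, a coherence constraint filters them, and the surviving interpretations become the hypothetical examples computed before learning.

    % Ambiguous description of one object: three candidate interpretations,
    % here toy base-pairing assignments for a sequence.
    interpretation(obj1, [paired(1,8), paired(2,7)]).
    interpretation(obj1, [paired(1,6), paired(2,7)]).
    interpretation(obj1, [paired(1,8), paired(1,6)]).  % incoherent: position 1 reused

    % Coherence constraint: no position may occur in two pairs.
    coherent([]).
    coherent([paired(I,J)|Rest]) :-
        \+ ( member(paired(K,L), Rest),
             (K == I ; L == J ; K == J ; L == I) ),
        coherent(Rest).

    % Hypothetical examples for an object: its coherent interpretations.
    hypothetical_examples(Obj, Es) :-
        findall(I, (interpretation(Obj, I), coherent(I)), Es).

    % ?- hypothetical_examples(obj1, Es).
    % Es = [[paired(1,8),paired(2,7)], [paired(1,6),paired(2,7)]]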
The origins of inductive logic programming: A prehistoric tale
Proceedings of the 3rd International Workshop on …, 1993
This paper traces the development of the main ideas that have led to the present state of knowledge in Inductive Logic Programming. The story begins with research in psychology on the subject of human concept learning. Results from this research influenced early efforts in Artificial Intelligence, which, combined with the formal methods of inductive inference, evolved into the present discipline of Inductive Logic Programming.
An extended transformation approach to inductive logic programming
ACM Transactions on Computational Logic, 2001
Inductive logic programming (ILP) is concerned with learning relational descriptions that typically have the form of logic programs. In a transformation approach, an ILP task is transformed into an equivalent learning task in a different representation formalism. Propositionalization is a particular transformation method, in which the ILP task is compiled to an attribute-value learning task. The main restriction of propositionalization methods such as LINUS is that they are unable to deal with nondeterminate local variables in the body of hypothesis clauses. In this paper we show how this limitation can be overcome by systematic first-order feature construction using a particular individual-centered feature bias. The approach can be applied in any domain where there is a clear notion of individual. We also show how to improve upon exhaustive first-order feature construction by using a relevancy filter. The proposed approach is illustrated on the "trains" and "mutagenesis" ILP domains.
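A minimal sketch of the idea, with invented trains-style predicates: each first-order feature is a clause over the individual variable, and a feature like f1 below shares a nondeterminate local variable C between body literals, which is precisely the case a plain LINUS-style propositionalization cannot express.

    % Toy data: individuals are trains; cars are local structure.
    has_car(t1, c11). has_car(t1, c12).
    has_car(t2, c21).
    short(c11). short(c21).
    closed(c11).

    % First-order features over the individual variable T.
    f1(T) :- has_car(T, C), short(C), closed(C).  % C shared by two literals
    f2(T) :- has_car(T, C), short(C).

    % Compiling each individual to an attribute-value row:
    row(T, [F1, F2]) :-
        ( f1(T) -> F1 = 1 ; F1 = 0 ),
        ( f2(T) -> F2 = 1 ; F2 = 0 ).

    % ?- row(t1, Fs).   % Fs = [1, 1]
    % ?- row(t2, Fs).   % Fs = [0, 1]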
Logic programs as declarative and procedural bias in inductive logic programming
2013
Machine Learning is necessary for the development of Artificial Intelligence, as pointed out by Turing in his 1950 article "Computing Machinery and Intelligence". It is in the same article that Turing suggested the use of computational logic and background knowledge for learning. This thesis follows a logic-based machine learning approach called Inductive Logic Programming (ILP), which has advantages over other machine learning approaches in relational learning and in utilising background knowledge. ILP uses logic programs as a uniform representation for hypotheses, background knowledge and examples, but its declarative bias is usually encoded using metalogical statements. This thesis advocates the use of logic programs to represent declarative and procedural bias, which results in a framework of single-language representation. We show in this thesis that using a logic program called the top theory as declarative bias leads to a sound and complete multi-clause learning system...
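For flavour, here is a hedged sketch in the style of a top theory (following TopLog-like systems; the predicate names and the '$body' nonterminal are illustrative): an ordinary logic program whose derivations, obtained by unfolding '$body', enumerate the clauses of the hypothesis space.

    % A hypothetical top theory: declarative bias written as a logic program.
    nice(X) :- '$body'(X).               % head schema of learnable clauses
    '$body'(X) :- pet(X), '$body'(X).    % one candidate body literal
    '$body'(X) :- small(X), '$body'(X).  % another candidate body literal
    '$body'(_).                          % terminate the body

    % Unfolding '$body' in all possible ways yields hypothesis clauses such as:
    %   nice(X).
    %   nice(X) :- pet(X).
    %   nice(X) :- small(X).
    %   nice(X) :- pet(X), small(X).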
An empirical study of the use of relevance information in inductive logic programming
2003
Inductive Logic Programming (ILP) systems construct models for data using domain-specific background information. When using these systems, it is typically assumed that sufficient human expertise is at hand to rule out irrelevant background information. Such irrelevant information can, and typically does, hinder an ILP system's search for good models. Here, we provide evidence that if expertise is available that can provide a partial ordering on sets of background predicates in terms of relevance to the analysis task, then this can be used to good effect by an ILP system. In particular, using data from biochemical domains, we investigate an incremental strategy of including sets of predicates in decreasing order of relevance. Results obtained suggest that: (a) the incremental approach identifies, in substantially less time, a model that is comparable in predictive accuracy to that obtained with all background information in place; and (b) the incremental approach using the relevance ordering performs better than one that does not (that is, one that adds sets of predicates randomly). For a practitioner concerned with use of ILP, the implications of these findings are twofold: (1) when not all background information can be used at once (either due to limitations of the ILP system, or the nature of the domain), expert assessment of the relevance of background predicates can assist substantially in the construction of good models; and (2) good "first-cut" results can be obtained quickly by a simple exclusion of information known to be less relevant.
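The incremental strategy itself is simple; below is a hedged sketch, assuming a relevance-ordered list of predicate sets and an external call into some ILP system (relevance_ordered/1, learn/2 and good_enough/1 are invented helpers, not from the paper).

    % relevance_ordered(Sets): sets of background predicates, most relevant
    % first. learn(B, M): run an ILP system with background B, yielding
    % model M. good_enough(M): stopping test, e.g. estimated accuracy has
    % stopped improving.
    incremental_learn(Model) :-
        relevance_ordered(Sets),
        add_sets(Sets, [], Model).

    add_sets([], Background, Model) :-       % all sets used: final run
        learn(Background, Model).
    add_sets([S|Rest], SoFar, Model) :-
        append(SoFar, S, Background),        % include next-most-relevant set
        learn(Background, M0),
        (   good_enough(M0)
        ->  Model = M0                       % stop early: a good "first cut"
        ;   add_sets(Rest, Background, Model)
        ).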
Learning with extended logic programs
1998
We discuss the adoption of a three-valued setting for inductive concept learning. Distinguishing between what is true, what is false and what is unknown can be useful in situations where decisions have to be taken on the basis of scarce information. In a three-valued setting, we want to learn a definition for both the target concept and its opposite, considering positive and negative examples as instances of two disjoint classes.
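A hedged sketch of the resulting three-valued classification, assuming separately learned definitions for the concept and for its opposite (all predicate names invented for illustration):

    % Learned definition of the target concept and of its opposite:
    flies_pos(X) :- bird(X).
    flies_neg(X) :- penguin(X).

    % Three-valued answer: true, false, or unknown when neither covers X.
    classify(X, true)    :- flies_pos(X), \+ flies_neg(X).
    classify(X, false)   :- flies_neg(X), \+ flies_pos(X).
    classify(X, unknown) :- \+ flies_pos(X), \+ flies_neg(X).

    bird(tweety).
    penguin(pingu).

    % ?- classify(tweety, V).   % V = true
    % ?- classify(pingu, V).    % V = false
    % ?- classify(stone, V).    % V = unknown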