Victor Marek - Academia.edu
Papers by Victor Marek
Artificial Intelligence, 1999
Transactions on Rough Sets IX, 2008
Approximate reasoning is used in a variety of reasoning tasks in logic-based artificial intelligence. In this paper we present several such reasoning schemes and show how they relate to, and differ from, the approach of Pawlak's rough sets.
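As a minimal sketch of the rough-set machinery the abstract alludes to (the encoding and names below are ours, not the paper's): a target set is approximated from below and above by the indiscernibility classes of an information system.

```python
# Minimal sketch of Pawlak-style lower/upper approximations.
# The universe is partitioned into indiscernibility classes; a target
# set x is bracketed by the classes fully inside it and those meeting it.

def approximations(partition, x):
    """Return (lower, upper) approximations of set x w.r.t. a partition."""
    lower, upper = set(), set()
    for block in partition:
        if block <= x:      # block entirely inside x
            lower |= block
        if block & x:       # block overlaps x
            upper |= block
    return lower, upper

# Universe {1..6} split into three indiscernibility classes.
partition = [{1, 2}, {3, 4}, {5, 6}]
x = {1, 2, 3}               # not a union of blocks, hence "rough"
low, up = approximations(partition, x)
# low == {1, 2}; up == {1, 2, 3, 4}
```

A set is exactly definable precisely when its two approximations coincide.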
Lecture Notes in Computer Science, 1995
We investigate revision programming, a logic-based mechanism for describing changes in databases and enforcing certain types of integrity constraints. We show that revisions justified by an initial database and a revision program can be computed by a sequential execution of the rules of the program (with a subsequent check of the applicability of the rules). In general, a program may determine none, exactly one, or many justified revisions of a given initial database. We exhibit two classes of programs, safe and stratified, with the property that for every initial database a unique justified revision exists. We study the complexity of basic problems associated with justified revisions. Although the existence problems are NP-complete, for safe and stratified programs justified revisions can be computed in polynomial time.
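The "sequential execution with a subsequent applicability check" idea can be illustrated by a much-simplified sketch. This is not the paper's full justified-revision semantics; the rule encoding and all names here are ours.

```python
# Illustrative only: execute revision rules in a fixed order, then check
# that every fired rule's body still holds in the resulting database.
# A rule is (action, atom, body); action is "in" (insert) or "out" (delete);
# body is a list of (kind, atom) conditions, kind being "in" or "out".

def body_holds(db, body):
    return all((atom in db) if kind == "in" else (atom not in db)
               for kind, atom in body)

def revise(initial_db, rules):
    db = set(initial_db)
    fired = []
    for action, atom, body in rules:        # sequential execution
        if body_holds(db, body):
            if action == "in":
                db.add(atom)
            else:
                db.discard(atom)
            fired.append(body)
    # subsequent check: each fired rule must remain applicable at the end
    ok = all(body_holds(db, body) for body in fired)
    return db, ok

db, ok = revise({"a"}, [("in", "b", [("in", "a")])])
# db == {"a", "b"}, ok == True
```

Reordering the rules can change the outcome, which is why the paper studies when a unique justified revision is guaranteed.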
Artificial Intelligence, 1993
Default logic can be regarded as a mechanism to represent families of belief sets of a reasoning agent. As such, it is inherently second-order. In this paper, we study the problem of representability of a family of theories as the set of extensions of a default theory. We give a complete solution to representability by means of normal default theories. We obtain partial results on representability by arbitrary default theories. In particular, we construct examples of denumerable families of nonincluding theories that are not representable. We also study the concept of equivalence between default theories. We show that for every normal default theory there exists a normal prerequisite-free theory with the same set of extensions. We derive a representation result connecting normal default logic with a version of CWA.
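For readers outside the area, the objects the abstract refers to have the following standard forms (standard notation, not quoted from this paper):

```latex
% A normal default: if \alpha is provable and \beta is consistent
% with the extension, conclude \beta.
\frac{\alpha : \beta}{\beta}
% Its prerequisite-free counterpart, with the same consequent:
\frac{\;\, : \beta}{\beta}
```

The paper's equivalence result says that, extension-wise, the prerequisite can always be eliminated from normal theories.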
Journal of Logic and Computation, 1992
The Pure Logic of Necessitation. Melvin C. Fitting, V. Wiktor Marek and Mirosław Truszczyński. Departments of Mathematics, Computer Science and Philosophy, Graduate Center, City University of ...
Artificial Intelligence, 1999
Default logic was proposed by Reiter as a knowledge representation tool. In this paper, we present our work on the Default Reasoning System, DeReS, the first comprehensive and optimized implementation of default logic. While knowledge representation remains the main application area for default logic, as a source of large-scale problems needed for experimentation, and as a source of intuitions needed for a systematic methodology of encoding problems as default theories, we use here the domain of combinatorial problems. To experimentally study the performance of DeReS we developed a benchmarking system, the TheoryBase. The TheoryBase is designed to support experimental investigations of nonmonotonic reasoning systems based on the language of default logic or logic programming. It allows the user to create parameterized collections of default theories having similar properties and growing sizes and, consequently, to study the asymptotic performance of the nonmonotonic systems under investigation. Each theory generated by the TheoryBase has a unique identifier, which allows for concise descriptions of test cases used in experiments and thus facilitates comparative studies. We describe the TheoryBase in this paper and report on our experimental studies of DeReS performance based on test cases generated by the TheoryBase.
Artificial Intelligence, 2010
We provide new perspectives on the semantics of logic programs with constraints. To this end we introduce several notions of computation and propose to use the results of computations as answer sets of programs with constraints. We discuss the rationale behind different classes of computations and study the relationships among them and among the corresponding concepts of answer sets. The proposed semantics generalize the answer set semantics for programs with monotone, convex and/or arbitrary constraints described in the literature.
Cornell University - arXiv, Aug 16, 2011
A fact apparently not observed earlier in the literature of nonmonotonic reasoning is that Reiter, in his default logic paper, did not directly formalize informal defaults. Instead, he translated a default into a certain natural language proposition and provided a formalization of the latter. A few years later, Moore noted that propositions like the one used by Reiter are fundamentally different from defaults and exhibit a certain autoepistemic nature. Thus, Reiter had developed his default logic as a formalization of autoepistemic propositions rather than of defaults. The first goal of this paper is to show that some problems of Reiter's default logic as a formal way to reason about informal defaults are directly attributable to the autoepistemic nature of default logic and to the mismatch between informal defaults and Reiter's formal defaults, the latter being a formal expression of the autoepistemic propositions Reiter used as a representation of informal defaults. The second goal of our paper is to compare the work of Reiter and Moore. While each of them attempted to formalize autoepistemic propositions, the modes of reasoning in their respective logics were different. We revisit Moore's and Reiter's intuitions and present them from the perspective of autotheoremhood, where theories can include propositions referring to the theory's own theorems. We then discuss the formalization of this perspective in the logics of Moore and Reiter, respectively, using the unifying semantic framework for default and autoepistemic logics that we developed earlier. We argue that Reiter's default logic is a better formalization of Moore's intuitions about autoepistemic propositions than Moore's own autoepistemic logic.
IOS Press, 2008
... Professor Mostowski spent the summer of 1975 in Berkeley and then visiting Małgorzata Dubiel-Lachlan and Alistair Lachlan in Vancouver. ... He just heard from Małgorzata Dubiel about Mostowski's death. Our world ended. ...
Logic Programming, 2008
We discuss the development, current state and the future of Answer Set Programming, making predictions that are not necessarily accurate.
In this note we will investigate a form of logic programming with constraints. The constraints that we consider will not be restricted to statements on real numbers as in CLP(R), see [15]. Instead, our constraints will be arbitrary global constraints. The basic idea is that the applicability of a given rule is not predicated on the fact that individual variables satisfy certain constraints, but rather on the fact that the least model of the set of rules that are ultimately applicable satisfies the constraint of the rule. Thus the role of clauses will be slightly different than in the usual logic programming with constraints. In fact, the paradigm we present is closely related to the stable model semantics of general logic programming [13]. We will define the notion of a constraint model of our constraint logic program and show that stable models of logic programs, as well as the supported models of logic programs, are just special cases of constraint models of constraint logic programs.
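The stable model semantics that the note builds on can be sketched compactly. The Gelfond-Lifschitz reduct construction below is the standard one; the encoding of rules as tuples is our own, not taken from this paper.

```python
# Sketch: check whether a set M is a stable model of a normal logic program
# via the Gelfond-Lifschitz reduct (standard construction; encoding is ours).
# A rule is (head, pos_body, neg_body); all atoms are plain strings.

def least_model(rules):
    """Least model of a negation-free program, by naive fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in rules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(program, m):
    # Reduct w.r.t. m: drop rules whose negative body meets m; strip negation.
    reduct = [(h, pos) for h, pos, neg in program if not (neg & m)]
    return least_model(reduct) == m

prog = [("p", set(), {"q"}),   # p :- not q
        ("q", set(), {"p"})]   # q :- not p
# {"p"} and {"q"} are the stable models of prog; {} and {"p","q"} are not.
```

Constraint models, as the note describes them, generalize this test: a rule contributes only if the least model of the ultimately applicable rules satisfies its constraint.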
Journal of Symbolic Logic, 1998
Progress in satisfiability (SAT) solving has enabled answering long-standing open questions in mathematics completely automatically, resulting in clever, though potentially gigantic, proofs. We illustrate the success of this approach by presenting the solution of the Boolean Pythagorean triples problem. We also produced and validated a proof of the solution, which has been called the "largest math proof ever". The enormous size of the proof is not important. In fact, a shorter proof would have been preferable. However, the size shows that automated tools combined with supercomputing facilitate solving bigger problems. Moreover, the proof of 200 terabytes can now be validated using highly trusted systems, demonstrating that we can check the correctness of proofs no matter their size.
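The underlying combinatorial question can be shown at toy scale. The paper settles it by SAT solving at the true threshold (the partition exists up to 7824 and fails at 7825); the brute-force sketch below, with names of our choosing, merely illustrates what is being asked for a tiny n.

```python
# Toy illustration of the Boolean Pythagorean triples problem:
# can {1..n} be 2-colored with no monochromatic triple a^2 + b^2 = c^2?
from itertools import product

def pythagorean_triples(n):
    return [(a, b, c)
            for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1)
            if a * a + b * b == c * c]

def two_colorable(n):
    triples = pythagorean_triples(n)
    for coloring in product((0, 1), repeat=n):  # color of i is coloring[i-1]
        if all(len({coloring[a - 1], coloring[b - 1], coloring[c - 1]}) > 1
               for a, b, c in triples):
            return True
    return False

# pythagorean_triples(5) == [(3, 4, 5)]
# two_colorable(17) is True: a valid 2-coloring exists for small n.
```

Exhaustive search over 2^n colorings is hopeless near the real threshold, which is exactly why a SAT solver and a checkable proof trace were needed.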
Fundamenta Informaticae, 1999
We study properties of rough sets, that is, approximations to sets of records in a database or, more formally, to subsets of the universe of an information system. A rough set is a pair ⟨L, U⟩ such that L, U are definable in the information system and L ⊆ U. In the paper, we introduce a language, called the language of inclusion-exclusion, to describe incomplete specifications of (unknown) sets. We use rough sets in order to define a semantics for theories in the inclusion-exclusion language. We argue that our concept of a rough set is closely related to that introduced by Pawlak. We show that rough sets can be ordered by the knowledge ordering (denoted kn). We prove that Pawlak's rough sets are characterized as kn-greatest approximations. We show that for any consistent (that is, satisfiable) theory T in the language of inclusion-exclusion there exists a kn-greatest rough set approximating all sets X that satisfy T. For some classes of theories in the language of inclusion-exclusion, we provide algorithmic ways to find this best approximation. We also state a number of miscellaneous results and discuss some open problems.
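The knowledge ordering on pairs ⟨L, U⟩ admits a one-line sketch. We assume here the natural reading, which may differ in detail from the paper's definition: a pair is kn-greater when it carries at least as much information, i.e., a larger lower bound and a smaller upper bound.

```python
# Illustrative sketch of a knowledge ordering on rough-set pairs (L, U).
# Assumed reading (ours): (L1, U1) <=kn (L2, U2) iff L1 ⊆ L2 and U2 ⊆ U1,
# so the kn-greater pair brackets the unknown set more tightly.

def kn_leq(pair1, pair2):
    l1, u1 = pair1
    l2, u2 = pair2
    return l1 <= l2 and u2 <= u1

vague = (set(), {1, 2, 3, 4})        # knows nothing: any subset fits
tight = ({1}, {1, 2})                # knows 1 is in, 3 and 4 are out
# kn_leq(vague, tight) is True; kn_leq(tight, vague) is False
```

Under this reading, Pawlak's approximations being kn-greatest says they are the tightest definable brackets of the target set.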
Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, 2016
This paper accompanies the panel contribution of the author to the session devoted to personal reminiscences of Professor Zdzisław Pawlak, a computer scientist and engineer. In particular, we discuss some aspects of the work of Pawlak and the researchers in his circle of collaborators in the 1960s and, especially, the 1970s. Given the lack of archival materials, the author bases this writing on personal recollections, which may in places be imprecise.
Introduction. Nonmonotonic logics were introduced in the late 1970s as knowledge representation formalisms. Default logic [Rei80], circumscription [McC80], autoepistemic logic [Moo85, Lev90] and logic programming with negation [ABW88, GL88, Apt90] turned out over the years to be the most widely studied and most influential in the development of the area. To serve as a knowledge representation tool, a formal system must offer an expressive language with well-understood semantics, and a computational mechanism supporting effective reasoning. In this tutorial we will present theoretical foundations for default logic and logic programming. We will emphasize results that have a bearing on algorithms, computational complexity and implementations. We will discuss the methodology of programming with nonmonotonic knowledge representation systems and demonstrate an implementation of default logic, the Default Reasoning System (DeReS), developed at the University of Kentucky. This reader contains all ...
ACM SIGART Bulletin, 1993
Why would one read the papers of masters? There are at least two reasons for doing so. First, by tracing the development of the ideas of someone with true insight, it is possible to see the roots of various developments, concepts, and even false starts. In short, we see how the domain, as we know it today, emerged in the historical process. Second, it is dangerous if one does not know the past. One is prone to fall into the same potholes as the previous generations.
Transactions on Rough Sets VII
We investigate the operators associated with approximations in the rough set theory introduced by Pawlak in [Paw82, MP84] and extensively studied by the rough set community [RS06]. We use universal algebra techniques to establish a natural characterization of the operators associated with rough sets.