Shaul Markovitch | Technion - Israel Institute of Technology
Uploads
Papers by Shaul Markovitch
arXiv (Cornell University), Jan 31, 2018
Open Shortest Path First (OSPF) is one of the most widely deployed interior gateway routing protocols on the Internet. It allows routers to calculate their routing tables within a cluster of networks - an autonomous system (AS). An attacker that leverages OSPF to attack an AS may have a catastrophic effect on it. A single malicious router within an AS can poison the routing tables of all other routers of that AS by sending false routing messages, thereby subverting the entire routing process. Finding attacks on the routing protocol is a demanding task, as the exact nature of the attack may be unknown. In this work we present a machine learning-based attack detection scheme based on topology-agnostic features. These features allow the trained algorithm to find attacks on a broad range of topologies, as well as on networks with dynamic topologies. We validate the algorithm both on synthetic AS topologies and on a real-world ISP topology and traffic. We show that the presented algorithm achieves high detection accuracy within a very short detection time, while the false error rate remains low.
arXiv (Cornell University), May 31, 1998
arXiv (Cornell University), Aug 22, 2022
arXiv (Cornell University), May 3, 2020
arXiv (Cornell University), Feb 12, 2018
I provide a personal memoir on work that I have done and describe research that influenced my work, leading to results in the foundation of disjunctive logic programming. I discuss my beginning experiences preceding Kowalski's development of logic programming; my early experiences in logic programming; theoretical developments that culminated in a theory of disjunctive logic programming; and note some researchers whose work especially influenced me.
International Joint Conference on Artificial Intelligence, Jan 6, 2007
arXiv (Cornell University), Jun 28, 2017
arXiv (Cornell University), Nov 1, 2022
Given a class of students, and a pool of questions in the domain of study, what subset will constitute a "good" exam? Millions of educators are dealing with this difficult problem worldwide, yet exams are still composed manually in non-systematic ways. In this work we present a novel algorithmic framework for exam composition. Our framework requires two input components: a student population represented by a distribution over overlay models, each consisting of a set of mastered abilities, or actions; and a target model ordering that, given any two student models, defines which should be given the higher grade. To determine the performance of a student model on a potential question, we test whether it satisfies a disjunctive action landmark, i.e., whether its abilities are sufficient to follow at least one solution path. We present a novel utility function for evaluating exams, using the described components. An exam is highly evaluated if it is expected to order the studen...
Most existing learning algorithms generate classifiers that take as input a single untagged instance and return its classification. When given a set of instances to classify, the classifier treats each member of the set independently. In this work we introduce a new setup we call batch classification. In this setup the induced classifier receives the testing instances as
Deep Blue (University of Michigan), 1989
Human labeling of data can be very time-consuming and expensive, yet, in many cases it is critical for the success of the learning process. In order to minimize human labeling efforts, we propose a novel active learning solution that does not rely on existing sources of unlabeled data. It uses a small amount of labeled data as the core set for the synthesis of useful membership queries (MQs) - unlabeled instances generated by an algorithm for human labeling. Our solution uses modification operators, functions that modify instances to some extent. We apply the operators on a small set of instances (core set), creating a set of new membership queries. Using this framework, we look at the instance space as a search space and apply search algorithms in order to generate new examples highly relevant to the learner. We implement this framework in the textual domain and test it on several text classification tasks and show improved classifier performance as more MQs are labeled and incorpo...
Computing semantic relatedness of natural language texts requires access to vast amounts of common-sense and domain-specific world knowledge. We propose Explicit Semantic Analysis (ESA), a novel method that represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia. We use machine learning techniques to explicitly represent the meaning of any text as a weighted vector of Wikipedia-based concepts. Assessing the relatedness of texts in this space amounts to comparing the corresponding vectors using conventional metrics (e.g., cosine). Compared with the previous state of the art, using ESA results in substantial improvements in correlation of computed relatedness scores with human judgments: from r = 0.56 to 0.75 for individual words and from r = 0.60 to 0.72 for texts. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.
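The comparison step described in the ESA abstract can be sketched in a few lines: once two texts are mapped to weighted vectors of Wikipedia-based concepts, their relatedness is the cosine of those vectors. The concept names and weights below are illustrative placeholders, not values from the paper; real ESA weights are derived from Wikipedia article statistics.

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse concept vectors
    # represented as {concept: weight} dicts.
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical ESA vectors: each text as weighted Wikipedia concepts.
text_a = {"Computer science": 0.8, "Machine learning": 0.6, "Statistics": 0.2}
text_b = {"Machine learning": 0.7, "Artificial intelligence": 0.5, "Statistics": 0.3}

print(round(cosine(text_a, text_b), 3))
```

Because the vectors are indexed by human-readable Wikipedia concepts, the overlap that drives the score (here, "Machine learning" and "Statistics") can be shown directly to a user, which is the explainability the abstract points to.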
The performance of anytime algorithms can be improved by simultaneously solving several instances of algorithm-problem pairs. These pairs may include different instances of a problem (such as starting from a different initial state), different algorithms (if several alternatives exist), or several runs of the same algorithm (for non-deterministic algorithms). In this paper we present a methodology for designing an optimal scheduling policy based on the statistical characteristics of the algorithms involved. We formally analyze the case where the processes share resources (a single-processor model), and provide an algorithm for optimal scheduling. We analyze, theoretically and empirically, the behavior of our scheduling algorithm for various distribution types. Finally, we present empirical results of applying our scheduling algorithm to the Latin Square problem.
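The single-processor setting described above can be illustrated with a minimal sketch: several non-deterministic solver instances share one processor by interleaving their steps, and the run stops as soon as any instance succeeds. This shows only the simplest uniform round-robin baseline, not the paper's statistically optimized policy; the toy solver and its success probability are assumptions for illustration.

```python
import random

def make_solver(seed):
    # Toy non-deterministic anytime solver: each step has a small
    # chance of finding a solution; returns None until it succeeds.
    rng = random.Random(seed)
    def step():
        return "solution" if rng.random() < 0.02 else None
    return step

def round_robin(solvers, max_steps=10_000):
    # Share a single processor among several solver instances by
    # interleaving their steps; stop as soon as any instance succeeds.
    for t in range(max_steps):
        result = solvers[t % len(solvers)]()
        if result is not None:
            return result, t + 1  # solution and total steps consumed
    return None, max_steps

solvers = [make_solver(seed) for seed in range(4)]
result, steps = round_robin(solvers)
print(result, steps)
```

An optimal policy, as studied in the paper, would instead allocate steps unevenly across instances according to the statistical distribution of their solution times.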
The size and complexity of software and hardware systems have significantly increased in the past years. As a result, it is harder to guarantee their correct behavior. One of the most successful methods for automated verification of finite-state systems is model checking. Most of the current model-checking systems use binary decision diagrams (BDDs) for the representation of the tested model and in the verification process of its properties. Generally, BDDs allow a canonical compact representation of a boolean function (given an order of its variables). The more compact the BDD is, the better performance one gets from the verifier. However, finding an optimal order for a BDD is an NP-complete problem. Therefore, several heuristic methods based on expert knowledge have been developed for variable ordering. We propose an alternative approach in which the variable ordering algorithm gains "ordering experience" from training models and uses the learned knowledge for finding good...