Bayesian Games for Adversarial Regression Problems
Related papers
A Game Theoretical Framework for Adversarial Learning
Many data mining applications, ranging from spam filtering to intrusion detection, face active adversaries. In all of these applications, initially successful classifiers degrade quickly. This becomes a game between the adversary and the data miner: the adversary modifies its strategy to avoid detection by the current classifier, and the data miner then updates its classifier in response to the new threats. In this paper, we investigate the possibility of an equilibrium in this seemingly never-ending game, a point at which neither party has an incentive to change: modifying the classifier would cause too many false positives for too little gain in true positives, while further changes by the adversary would decrease the utility of the false-negative items that go undetected. We develop a game-theoretic framework in which the equilibrium behavior of adversarial learning applications can be analyzed, and provide a method for finding the equilibrium point. A classifier's equilibrium performance indicates its eventual success or failure. The data miner can then select attributes based on their equilibrium performance and construct an effective classifier.
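To make the equilibrium idea concrete, the sketch below approximates a mixed equilibrium of a toy zero-sum game between a spammer and a filter via fictitious play. The payoff matrix and the solver are illustrative assumptions, not the framework developed in the paper.

```python
# A minimal sketch (not the paper's framework): fictitious play on a toy
# zero-sum game between a spammer (rows) and a filter (columns).
# Payoffs are hypothetical; rows = spammer strategies ("plain", "obfuscated"),
# columns = filter settings ("loose", "strict").
import numpy as np

# Spammer's payoff (filter's loss): higher = more spam gets through.
A = np.array([[0.8, 0.1],
              [0.3, 0.6]])

row_counts = np.zeros(2)   # how often each spammer strategy was a best response
col_counts = np.zeros(2)   # how often each filter strategy was a best response

for t in range(10000):
    col_mix = col_counts / col_counts.sum() if col_counts.sum() else np.ones(2) / 2
    row_mix = row_counts / row_counts.sum() if row_counts.sum() else np.ones(2) / 2
    row_counts[np.argmax(A @ col_mix)] += 1   # spammer best-responds
    col_counts[np.argmin(row_mix @ A)] += 1   # filter best-responds

print("approx. spammer equilibrium mix:", row_counts / row_counts.sum())
print("approx. filter equilibrium mix:", col_counts / col_counts.sum())
```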
Bayesian Optimization for Black-Box Evasion of Machine Learning Systems
2017
Machine learning and big data algorithms have seen widespread adoption in recent years, with extensive use in industries such as advertising, e-commerce, finance, and healthcare. Despite this increased reliance on machine learning, general understanding of its vulnerabilities is still at an early stage. A better grasp of the security implications is therefore needed to prevent attacks that could undermine the integrity of machine learning systems. Currently, attackers can use adversarial samples to fool even state-of-the-art machine learning algorithms. Adversarial samples are legitimate inputs altered by adding a specially crafted perturbation; these samples retain their true class while forcing the target algorithm to misclassify them. However, current attack methods require knowledge of the target's underlying model or training data, and there is no general technique for detecting adversarial samples. I introduce a novel bla...
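The sketch below illustrates the query-only (black-box) evasion setting the abstract describes, with plain random search standing in for the Bayesian-optimization acquisition loop; the target model, features, and query budget are all hypothetical.

```python
# Minimal sketch of a query-only (black-box) evasion loop. Random search
# stands in for the Bayesian-optimization acquisition step; the classifier
# and budget below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(x):
    # Hypothetical target: returns the model's confidence that x is malicious.
    w = np.array([1.0, -2.0, 0.5])
    return 1.0 / (1.0 + np.exp(-(w @ x)))

x_orig = np.array([2.0, 0.5, 1.0])           # a sample currently flagged as malicious
best_x, best_score = x_orig, black_box_score(x_orig)

for _ in range(200):                          # query budget
    candidate = x_orig + rng.normal(scale=0.3, size=x_orig.shape)  # small perturbation
    score = black_box_score(candidate)        # only query access, no gradients
    if score < best_score:                    # lower confidence = closer to evading
        best_x, best_score = candidate, score

print("original score:", black_box_score(x_orig), "evasion score:", best_score)
```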
Adversarial Decision Making: Choosing between Models Constructed by Interested Parties
The Journal of Law and Economics, 2016
In this paper, we characterize adversarial decision-making as a choice between competing interpretations of evidence ("models") constructed by interested parties. We show that if a court cannot perfectly determine which party's model is more likely to have generated the evidence, then adversaries face a tradeoff: a model further away from the best (most likely) interpretation has a lower probability of winning, but also a higher payoff following a win. We characterize equilibrium when both adversaries construct optimal models, and use the characterization to compare adversarial decision-making to an inquisitorial benchmark. We find that adversarial decisions are biased, and the bias favors the party with the less-likely, and more extreme, interpretation of the evidence.
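The central tradeoff, that a more extreme model wins less often but pays more when it wins, can be illustrated numerically. The functional forms below are assumptions for illustration only, not the paper's model.

```python
# Toy illustration (assumed functional forms) of the adversary's tradeoff:
# a model further from the most likely interpretation wins less often but
# yields a larger payoff when it does win.
import numpy as np

d = np.linspace(0.0, 2.0, 201)     # distance from the most likely interpretation
win_prob = np.exp(-d)              # assumed: win probability falls with distance
payoff = (1.0 + d) ** 2            # assumed: payoff after a win grows with distance
expected = win_prob * payoff

print("payoff-maximizing distance:", d[np.argmax(expected)])   # interior optimum, not d = 0
```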
Opponent Modeling in Interesting Adversarial Environments
We advance research on modeling opponents in interesting adversarial environments: environments in which equilibrium strategies are intractable to calculate or undesirable to use. We motivate the need for opponent models in such environments by showing how successful opponent-modeling agents can exploit non-equilibrium strategies and strategies that use equilibrium approximations. We develop a new, flexible measurement that quantifies how well our model can predict the opponent's behavior, independently of the performance of the agent in which it resides. We show how this metric can be used to find areas of model improvement that would otherwise have remained undiscovered, and demonstrate the technique for evaluating opponent model quality in the poker domain. We introduce the idea of performance bounds for classes of opponent models, present a method for calculating them, and show how these bounds are a function of only the environment and thus inv...
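One simple way to score an opponent model independently of the agent's own winnings is the average log-likelihood the model assigns to the opponent's observed actions. The sketch below uses that metric as an assumed stand-in for the paper's measurement.

```python
# A minimal sketch (assumed metric, not the paper's exact measurement):
# score an opponent model by the average log-likelihood it assigns to the
# opponent's observed actions, independent of how well our own agent performs.
import numpy as np

def model_quality(predicted_dists, observed_actions):
    """Mean log-likelihood of observed opponent actions under the model."""
    probs = [dist[a] for dist, a in zip(predicted_dists, observed_actions)]
    return float(np.mean(np.log(np.clip(probs, 1e-12, 1.0))))

# Hypothetical data: per-round predicted distributions over {fold, call, raise}.
preds = [np.array([0.2, 0.5, 0.3]), np.array([0.1, 0.3, 0.6])]
actions = [1, 2]                      # opponent actually called, then raised
print("mean log-likelihood:", model_quality(preds, actions))
```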
Optimal adversarial strategies in learning with expert advice
52nd IEEE Conference on Decision and Control, 2013
We propose an adversarial setting for the framework of learning with expert advice in which one of the experts has the intention to compromise the recommendation system by providing wrong recommendations. The problem is formulated as a Markov Decision Process (MDP) and solved by dynamic programming. Somewhat surprisingly, we prove that, in the case of logarithmic loss, the optimal strategy for the malicious expert is the greedy policy of lying at every step. Furthermore, a sufficient condition on the loss function is provided that guarantees the optimality of the greedy policy. Our experimental results, however, show that the condition is not necessary since the greedy policy is also optimal when the square loss is used, even though the square loss does not satisfy the condition. Moreover, the experimental results suggest that, for absolute loss, the optimal policy is a threshold one.
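A minimal sketch of the expert-advice setting with a greedily lying expert is shown below: an exponentially weighted forecaster under log loss, with one honest and one malicious expert. The experts' behavior and the learning rate are assumptions, and this is not the paper's MDP formulation.

```python
# Sketch of an exponentially weighted forecaster with one malicious expert
# who always reports the wrong probability (the "greedy lying" policy the
# abstract discusses for log loss). Outcomes and expert behavior are assumed.
import numpy as np

rng = np.random.default_rng(1)
T, eta = 50, 1.0
weights = np.ones(2)                      # expert 0 honest, expert 1 malicious

for t in range(T):
    outcome = rng.integers(2)             # true binary outcome
    honest = 0.9 if outcome == 1 else 0.1 # a well-informed honest expert (assumed)
    malicious = 1.0 - honest              # greedy lying: always the opposite
    preds = np.array([honest, malicious])
    # Log loss of each expert on this round (clipped for numerical safety).
    p = np.clip(preds if outcome == 1 else 1.0 - preds, 1e-6, 1.0)
    losses = -np.log(p)
    weights *= np.exp(-eta * losses)      # multiplicative-weights update
    weights /= weights.sum()

print("final weight on the malicious expert:", weights[1])
```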
Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings
When learning, such as classification, is used in adversarial settings, such as intrusion detection, intelligent adversaries will attempt to evade the resulting policies. The literature on adversarial machine learning aims to develop learning algorithms that are robust to such adversarial evasion, but exhibits two significant limitations: (a) failure to account for operational constraints and (b) a restriction to deterministic decisions. To overcome these limitations, we introduce a conceptual separation between learning, used to infer attacker preferences, and operational decisions, which account for adversarial evasion, enforce operational constraints, and naturally admit randomization. Our approach gives rise to an intractably large linear program. To overcome scalability limitations, we introduce a novel method for estimating a compact parity basis representation for the operational decision function. Additionally, we develop an iterative constraint generation approach, which embeds the adversary's best-response calculation, to arrive at a scalable algorithm for computing near-optimal randomized operational decisions. Extensive experiments demonstrate the efficacy of our approach.
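The sketch below illustrates the iterative constraint-generation idea on a toy problem: randomized inspection probabilities are chosen under a budget, and the attacker's best response is added as a new constraint each round. The problem instance is assumed, and the parity-basis representation is omitted.

```python
# Toy sketch of iterative constraint generation (assumed problem, not the
# paper's formulation): choose randomized inspection probabilities q under a
# budget, maximizing the worst-case (attacker best-response) detection
# probability, adding one attacker response per round.
import numpy as np
from scipy.optimize import linprog

costs = np.array([1.0, 2.0, 3.0, 1.5])   # hypothetical per-target inspection costs
budget = 3.0
n = len(costs)

active = {0}                              # start with one attacker response
while True:
    # Variables: q_1..q_n, z.  Maximize z  <=>  minimize -z.
    c = np.zeros(n + 1); c[-1] = -1.0
    A_ub, b_ub = [], []
    for i in active:                      # z <= q_i for each generated constraint
        row = np.zeros(n + 1); row[i] = -1.0; row[-1] = 1.0
        A_ub.append(row); b_ub.append(0.0)
    A_ub.append(np.append(costs, 0.0)); b_ub.append(budget)   # budget constraint
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n + [(0, 1)])
    q, z = res.x[:n], res.x[-1]
    best_response = int(np.argmin(q))     # attacker hits the least-covered target
    if q[best_response] >= z - 1e-9:      # no violated constraint: done
        break
    active.add(best_response)

print("inspection probabilities:", np.round(q, 3), "worst-case detection:", round(z, 3))
```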
Large-scale strategic games and adversarial machine learning
2016 IEEE 55th Conference on Decision and Control (CDC), 2016
Decision making in modern large-scale and complex systems such as communication networks, smart electricity grids, and cyber-physical systems motivates novel game-theoretic approaches. This paper investigates big strategic (noncooperative) games in which a finite number of individual players each have a large number of continuous decision variables and input data points. Such high-dimensional decision spaces and big data sets lead to computational challenges, related to scaling non-linear optimization up to large systems of variables. In addition to these computational challenges, real-world players often have limited information about their preference parameters, due to the prohibitive cost of identifying them or to operating in dynamic online settings. The challenge of limited information is exacerbated in high dimensions and with big data sets. Motivated by both the computational and the informational limitations that constrain the direct solution of big strategic games, our investigation centers on reductions using linear transformations, such as random projection methods, and their effect on Nash equilibrium solutions. Specific analytical results are presented for quadratic games and approximations. In addition, an adversarial learning game is presented in which random projection and sampling schemes are investigated.
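A minimal sketch of the random-projection idea, applied here to a single player's high-dimensional quadratic cost rather than a full game, is given below; the cost matrix and projection dimension are assumptions.

```python
# Minimal sketch (assumed setup, not the paper's game model) of dimension
# reduction via Gaussian random projection: a player's high-dimensional
# quadratic cost is optimized in a low-dimensional projected space.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 50                               # original and projected dimensions

A = rng.normal(size=(n, n)); Q = A @ A.T + n * np.eye(n)   # positive definite cost matrix
b = rng.normal(size=n)
cost = lambda x: 0.5 * x @ Q @ x + b @ x

x_full = np.linalg.solve(Q, -b)              # exact minimizer in R^n

P = rng.normal(size=(n, k)) / np.sqrt(k)     # random projection, x = P y with y in R^k
y = np.linalg.solve(P.T @ Q @ P, -(P.T @ b)) # minimize the projected cost
x_proj = P @ y

print("full cost:", cost(x_full), "projected cost:", cost(x_proj))
```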
Partial Adversarial Behavior Deception in Security Games
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020
Learning attacker behavior is an important research topic in security games, as security agencies are often uncertain about attackers' decision making. Previous work has focused on developing various behavioral models of attackers from historical attack data. However, a clever attacker can manipulate its attacks to defeat such attack-driven learning, leading to ineffective defense strategies. We study attacker behavior deception with three main contributions. First, we propose a new model, named the partial behavior deception model, in which a deceptive attacker (among multiple attackers) controls a portion of the attacks. Our model captures real-world security scenarios such as wildlife protection, in which multiple poachers are present. Second, we introduce a new scalable algorithm, GAMBO, to compute an optimal deception strategy for the deceptive attacker. Our algorithm employs projected gradient descent and uses the implicit function theorem for the computation of gr...
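The sketch below shows a projected-gradient-descent loop over the probability simplex, the kind of update the abstract attributes to GAMBO; the objective is a toy quadratic stand-in rather than the paper's deception utility, and the simplex projection uses the standard sorting method.

```python
# A minimal sketch of projected gradient ascent over the probability simplex.
# The objective is a toy quadratic stand-in, not the paper's deception utility.
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {x : x >= 0, sum(x) = 1} (sorting method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / (np.arange(len(v)) + 1))[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Hypothetical concave objective over attack allocations x (to be maximized).
target = np.array([0.5, 0.3, 0.2])
grad = lambda x: -(x - target)                # gradient of -0.5 * ||x - target||^2

x = np.ones(3) / 3                            # start from the uniform allocation
for _ in range(100):
    x = project_simplex(x + 0.1 * grad(x))    # ascent step, then project back

print("allocation after projected gradient ascent:", np.round(x, 3))
```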
Detecting Adversarial Attacks in the Context of Bayesian Networks
2019
In this research, we study data poisoning attacks against Bayesian network structure learning algorithms. We propose to use the distance between Bayesian network models and the value of data conflict to detect data poisoning attacks. We propose a 2-layered framework that detects both one-step and long-duration data poisoning attacks. Layer 1 enforces “reject on negative impacts” detection; i.e., input that changes the Bayesian network model is labeled potentially malicious. Layer 2 aims to detect long-duration attacks; i.e., observations in the incoming data that conflict with the original Bayesian model. We show that for a typical small Bayesian network, only a few contaminated cases are needed to corrupt the learned structure. Our detection methods are effective against not only one-step attacks but also sophisticated long-duration attacks. We also present our empirical results.
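A minimal sketch of the layer-1 "reject on negative impacts" check is shown below: the structure learned before and after an incoming batch is compared with a simple edge-difference count. The adjacency matrices and threshold are hypothetical, and the paper's actual distance and conflict measures may differ.

```python
# Sketch of a "reject on negative impacts" check (layer 1 in the abstract):
# compare the structure learned before and after a new batch arrives using a
# simple edge-difference count.
import numpy as np

def structure_distance(adj_a, adj_b):
    """Number of directed edges present in one DAG but not the other."""
    return int(np.sum(adj_a != adj_b))

# Hypothetical 4-node network learned from trusted data ...
adj_before = np.array([[0, 1, 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1],
                       [0, 0, 0, 0]])
# ... and after incorporating an incoming (possibly poisoned) batch.
adj_after = np.array([[0, 1, 0, 1],
                      [0, 0, 0, 0],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]])

THRESHOLD = 1   # assumed tolerance for benign structural drift
dist = structure_distance(adj_before, adj_after)
print("edge changes:", dist, "-> flag batch as potentially malicious:", dist > THRESHOLD)
```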