Kevin Leyton-Brown | University of British Columbia

Papers by Kevin Leyton-Brown

Research paper thumbnail of SATenstein: Automatically Building Local Search SAT Solvers from Components

International Joint Conference on Artificial Intelligence, 2009

Designing high-performance algorithms for computationally hard problems is a difficult and often time-consuming task. In this work, we demonstrate that this task can be automated in the context of stochastic local search (SLS) solvers for the propositional satisfiability problem (SAT). We first introduce a generalised, highly parameterised solver framework, dubbed SATenstein, that includes components gleaned from or...
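
A minimal WalkSAT-style sketch of the kind of stochastic local search solver such a framework composes; the single `noise` parameter stands in for the many design choices a framework like SATenstein exposes and tunes. This is a generic illustration of SLS for SAT, not the paper's framework.

```python
import random

def walksat(clauses, n_vars, noise=0.5, max_flips=100_000, seed=0):
    """WalkSAT-style SLS sketch. `clauses` is a list of lists of nonzero ints;
    literal v means variable |v| must be True if v > 0, False otherwise."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]   # 1-indexed variables
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]                     # model found
        clause = rng.choice(unsat)
        if rng.random() < noise:                  # random-walk move
            var = abs(rng.choice(clause))
        else:                                     # greedy move: fewest resulting unsat clauses
            def cost(v):
                assign[v] = not assign[v]
                c = sum(not any(sat(l) for l in cl) for cl in clauses)
                assign[v] = not assign[v]
                return c
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None                                   # no model found within the flip budget
```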

Research paper thumbnail of Bidding Clubs in First-Price Auctions

Computing Research Repository, 2002

We introduce a class of mechanisms, called bidding clubs, that allow agents to coordinate their bidding in auctions. Bidding clubs invite a set of agents to join, and each invited agent freely chooses whether to accept the invitation or to participate independently in the auction. Agents who join a bidding club first conduct a "knockout auction" within the club; depending...
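
A toy sketch of the protocol's flow, under strong simplifying assumptions: the `0.9` shading rule and the function itself are hypothetical, and the paper's mechanism specifies payments and incentive details omitted here.

```python
def bidding_club_round(club_values, outside_bids):
    """Hypothetical simplification of a bidding-club round: club members first
    hold an internal first-price "knockout" auction; only the knockout winner
    proceeds to the main first-price auction, so members of the same club no
    longer bid against one another there."""
    knockout_winner = max(range(len(club_values)), key=club_values.__getitem__)
    club_bid = 0.9 * club_values[knockout_winner]     # illustrative bid-shading rule
    winning_bid = max(outside_bids + [club_bid])
    return knockout_winner, club_bid, club_bid >= winning_bid
```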

Research paper thumbnail of A Test Suite for Combinatorial Auctions

Research paper thumbnail of Computing Nash Equilibria of Action-Graph Games

Uncertainty in Artificial Intelligence, 2004

Action-graph games (AGGs) are a fully expressive game representation which can compactly express both strict and context-specific independence between players' utility functions. Actions are represented as nodes in a graph G, and the payoff to an agent who chose the action s depends only on the numbers of other agents who chose actions connected to s. We...
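
A minimal sketch of the payoff rule just described, assuming a hypothetical `utility` lookup table; real AGG algorithms compute expected payoffs under mixed strategies rather than pure-profile lookups.

```python
from collections import Counter

def agg_payoff(action, profile, neighbors, utility):
    """Payoff for playing `action` depends only on how many *other* agents
    chose each action in `neighbors[action]`. `profile` lists every agent's
    action (including this agent's); `utility`, mapping (action, config) to a
    payoff, is a hypothetical table used purely for illustration."""
    counts = Counter(profile)
    counts[action] -= 1                                  # exclude this agent itself
    config = tuple(counts[a] for a in neighbors[action])
    return utility[(action, config)]

# Two actions whose payoffs each depend only on the count at the same node:
neighbors = {"A": ("A",), "B": ("B",)}
utility = {("A", (0,)): 5, ("A", (1,)): 2, ("B", (0,)): 4, ("B", (1,)): 1}
print(agg_payoff("A", ["A", "B"], neighbors, utility))   # -> 5
```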

Research paper thumbnail of Taming the computational complexity of combinatorial auctions

International Joint Conference on Artificial Intelligence, 1999

In combinatorial auctions, multiple goods are sold simultaneously and bidders may bid for arbitrary combinations of goods. Determining the outcome of such an auction is an optimization problem that is NP-complete in the general case. We propose two methods of overcoming this ...
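
To make the optimization problem concrete, here is a brute-force sketch of winner determination; its exponential enumeration is exactly what practical methods, such as those the paper proposes, aim to avoid.

```python
from itertools import combinations

def winner_determination(bids):
    """Brute-force sketch: choose a set of non-overlapping bids maximizing
    revenue. Exponential in the number of bids, hence illustrative only.
    `bids` is a list of (bundle frozenset, price) pairs."""
    best_value, best_set = 0, ()
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            bundles = [b for b, _ in subset]
            if all(b1.isdisjoint(b2) for b1, b2 in combinations(bundles, 2)):
                value = sum(p for _, p in subset)
                if value > best_value:
                    best_value, best_set = value, subset
    return best_value, best_set

bids = [(frozenset({"a", "b"}), 10), (frozenset({"b", "c"}), 8), (frozenset({"c"}), 5)]
print(winner_determination(bids)[0])   # -> 15 (sell {a,b} and {c})
```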

Research paper thumbnail of Learning the Empirical Hardness of Optimization Problems: The Case of Combinatorial Auctions

Lecture Notes in Computer Science, 2002

We propose a new approach for understanding the algorithm-specific empirical hardness of NP-hard problems. In this work we focus on the empirical hardness of the winner determination problem, an optimization problem arising in combinatorial auctions, when solved by ILOG's CPLEX software. We consider nine widely-used problem distributions and sample randomly from a continuum of parameter settings for each distribution. We identify a large number of distribution-nonspecific features of data instances and use statistical regression techniques to learn, evaluate and interpret a function from these features to the predicted hardness of an instance.
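
A minimal sketch of the modeling step, assuming ordinary least squares on log runtimes; the paper's feature set and regression techniques are richer, and all names here are illustrative.

```python
import numpy as np

def fit_hardness_model(features, runtimes):
    """Fit a linear empirical hardness model: instance features -> log runtime.
    `features` is an (n_instances, n_features) array; `runtimes` is a length-n
    array of measured runtimes, modeled on a log scale since they span orders
    of magnitude."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add intercept column
    y = np.log10(runtimes)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_log_runtime(w, instance_features):
    return np.append(instance_features, 1.0) @ w
```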

Research paper thumbnail of Level-0 meta-models for predicting human behavior in games

Proceedings of the fifteenth ACM conference on Economics and computation - EC '14, 2014

Behavioral game theory seeks to describe the way actual people (as compared to idealized, "rational" agents) act in strategic situations. Our own recent work has identified iterative models (such as quantal cognitive hierarchy) as the state of the art for predicting human play in unrepeated, simultaneous-move games [Wright and Leyton-Brown 2012]. Iterative models predict that agents reason iteratively about their opponents, building up from a specification of nonstrategic behavior called level-0. The modeler is in principle free to choose any description of level-0 behavior that makes sense for the given setting; however, in practice almost all existing work specifies this behavior as a uniform distribution over actions. In most games it is not plausible that even nonstrategic agents would choose an action uniformly at random, nor that other agents would expect them to do so. A more accurate model for level-0 behavior has the potential to dramatically improve predictions of human behavior, since a substantial fraction of agents may play level-0 strategies directly, and furthermore since iterative models ground all higher-level strategies in responses to the level-0 strategy. Our work considers "meta-models" of level-0 behavior: models of the way in which level-0 agents construct a probability distribution over actions, given an arbitrary game. We evaluated many such meta-models, each of which makes its prediction based only on general features that can be computed from any normal form game. We evaluated the effects of combining each new level-0 meta-model with various iterative models, and in many cases observed large improvements in the models' predictive accuracies. In the end, we recommend a meta-model that achieved excellent performance across the board: a linear weighting of features that requires the estimation of five weights.
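
A sketch of the shape of such a linear meta-model, under assumed names and an illustrative normalization; the paper's five specific features and estimated weights are not reproduced here.

```python
import numpy as np

def level0_distribution(action_features, weights):
    """Linear level-0 meta-model sketch: each action gets a score that is a
    weighted sum of game-derived features (one row of `action_features` per
    action), and scores are normalized into a probability distribution. The
    clipping-and-renormalizing scheme is an illustrative choice."""
    scores = action_features @ weights            # (n_actions, n_features) @ (n_features,)
    scores = np.clip(scores, 0.0, None)           # keep probabilities nonnegative
    if scores.sum() == 0:
        return np.full(len(scores), 1.0 / len(scores))  # degenerate case: fall back to uniform
    return scores / scores.sum()
```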

Research paper thumbnail of ACM Conference on Electronic Commerce, EC '12, Valencia, Spain, June 4-8, 2012

The papers in these proceedings were presented at the 13th ACM Conference on Electronic Commerce (EC'12), held June 4-8 in Valencia, Spain. Since 1999 the ACM Special Interest Group on Electronic Commerce (SIGecom) has sponsored the leading scientific conference on advances in theory, systems, and applications for electronic commerce. The natural focus of the conference is on computer science issues, but the conference is interdisciplinary in nature, including research in economics and research related to (but not limited to) the following three non-exclusive focus areas:

TF: Theory and Foundations (Computer Science Theory; Economic Theory)
AI: Artificial Intelligence (AI, Agents, Machine Learning, Data Mining)
EA: Experimental and Applications (Empirical Research, Experience with E-Commerce Applications)

In addition to the main technical program, EC'12 featured four workshops and five tutorials. EC'12 was also co-located with the Autonomous Agents and Multiagent Systems...

Research paper thumbnail of Hierarchical Hardness Models for SAT

Lecture Notes in Computer Science, 2007

Empirical hardness models predict a solver's runtime for a given instance of an NP-hard problem based on efficiently computable features. Previous research in the SAT domain has shown that better prediction accuracy and simpler models can be obtained when models are trained separately on satisfiable and unsatisfiable instances. We extend this work by training separate hardness models for each class, predicting the probability that a novel instance belongs to each class, and using these predictions to build a hierarchical hardness model using a mixture-of-experts approach. We describe and analyze classifiers and hardness models for four well-known distributions of SAT instances and nine high-performance solvers. We show that surprisingly accurate classifications can be achieved very efficiently. Our experiments show that hierarchical hardness models achieve higher prediction accuracy than the previous state of the art. Furthermore, the classifier's confidence correlates strongly with prediction error, giving a useful per-instance estimate of prediction error.
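
A minimal sketch of the mixture-of-experts combination, assuming three pre-trained callables (a satisfiability classifier and the two class-conditional hardness models); the actual models in the paper are richer.

```python
def hierarchical_prediction(instance_features, classifier, model_sat, model_unsat):
    """Weight the two conditional hardness models by the classifier's predicted
    probability that the instance is satisfiable, yielding a single runtime
    prediction without knowing the instance's true class."""
    p_sat = classifier(instance_features)     # P(instance is satisfiable | features)
    return p_sat * model_sat(instance_features) + (1.0 - p_sat) * model_unsat(instance_features)
```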

Research paper thumbnail of Essentials of Game Theory: A Concise Multidisciplinary Introduction

Synthesis Lectures on Artificial Intelligence and Machine Learning, 2008

Research paper thumbnail of A General Framework for Computing Optimal Correlated Equilibria in Compact Games

Lecture Notes in Computer Science, 2011

We analyze the problem of computing a correlated equilibrium that optimizes some objective (e.g., social welfare). Papadimitriou and Roughgarden [2008] gave a sufficient condition for the tractability of this problem; however, this condition only applies to a subset of existing representations. We propose a different algorithmic approach for the optimal CE problem that applies to all compact representations, and give a sufficient condition that generalizes that of Papadimitriou and Roughgarden [2008]. In particular, we reduce the optimal CE problem to the deviation-adjusted social welfare problem, a combinatorial optimization problem closely related to the optimal social welfare problem. This framework allows us to identify new classes of games for which the optimal CE problem is tractable; we show that graphical polymatrix games on tree graphs are one example. We also study the problem of computing the optimal coarse correlated equilibrium, a solution concept closely related to CE. Using a similar approach we derive a sufficient condition for this problem, and use it to prove that the problem is tractable for singleton congestion games.
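
For context, the optimal correlated equilibrium problem can be written directly as a linear program over a distribution x on joint action profiles a (this is the standard textbook formulation, not the paper's reduction), where w(a) is the objective value of profile a and u_i is player i's utility:

```latex
\max_{x \ge 0} \; \sum_{a} x(a)\, w(a)
\quad \text{s.t.} \quad
\sum_{a} x(a) = 1, \qquad
\sum_{a_{-i}} x(s_i, a_{-i}) \bigl[ u_i(s_i, a_{-i}) - u_i(s'_i, a_{-i}) \bigr] \ge 0
\quad \forall i,\ \forall s_i, s'_i .
```

This direct LP has one variable per joint action profile, so it is exponentially large in the number of players; that blow-up is what algorithms for compact representations must avoid.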

Research paper thumbnail of Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms

Lecture Notes in Computer Science, 2006

Machine learning can be utilized to build models that predict the run-time of search algorithms for hard combinatorial problems. Such empirical hardness models have previously been studied for complete, deterministic search algorithms. In this work, we demonstrate that such models can also make surprisingly accurate run-time predictions for incomplete, randomized search methods, such as stochastic local search algorithms. We also show for the first time how information about an algorithm's parameter settings can be incorporated into a model, and how such models can be used to automatically adjust the algorithm's parameters on a per-instance basis in order to optimize its performance. Empirical results for Novelty+ and SAPS on random and structured SAT instances show good predictive performance and significant speedups using our automatically determined parameter settings when compared to the default and best fixed parameter settings.
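
A sketch of the per-instance adjustment step, assuming a pre-trained runtime model and a finite set of candidate settings; both names are hypothetical.

```python
def tune_per_instance(instance_features, param_grid, predicted_runtime):
    """Evaluate a learned runtime model over candidate parameter settings and
    return the predicted-best setting for this instance.
    `predicted_runtime(features, params)` is an assumed pre-trained model;
    `param_grid` is a finite list of candidate parameter settings."""
    return min(param_grid, key=lambda params: predicted_runtime(instance_features, params))
```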

Research paper thumbnail of Identifying Key Algorithm Parameters and Instance Features Using Forward Selection

Lecture Notes in Computer Science, 2013

Research paper thumbnail of Automated Configuration of Mixed Integer Programming Solvers

Lecture Notes in Computer Science, 2010

State-of-the-art solvers for mixed integer programming (MIP) problems are highly parameterized, and finding parameter settings that achieve high performance for specific types of MIP instances is challenging. We study the application of an automated algorithm configuration procedure to different MIP solvers, instance types and optimization objectives. We show that this fully-automated process yields substantial improvements to the performance of three MIP solvers: CPLEX, GUROBI, and LPSOLVE. Although our method can be used "out of the box" without any domain knowledge specific to MIP, we show that it outperforms the CPLEX special-purpose automated tuning tool.
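
A plain random-search sketch of the configuration interface, with all names illustrative; the paper uses a far more sophisticated configurator than random search, but the input/output contract is similar.

```python
import random

def configure(run_solver, param_space, instances, budget=200, seed=1):
    """Random-search algorithm configuration sketch: sample parameter settings,
    measure mean runtime on training instances, keep the best incumbent.
    `run_solver(instance, config) -> runtime` and `param_space` (a dict of
    parameter names to candidate value lists) are assumed interfaces."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = {name: rng.choice(values) for name, values in param_space.items()}
        cost = sum(run_solver(inst, cfg) for inst in instances) / len(instances)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost
```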

Research paper thumbnail of Evaluating Component Solver Contributions to Portfolio-Based Algorithm Selectors

Lecture Notes in Computer Science, 2012

Portfolio-based methods exploit the complementary strengths of a set of algorithms and, as evidenced in recent competitions, represent the state of the art for solving many NP-hard problems, including SAT. In this work, we argue that a state-of-the-art method for constructing portfolio-based algorithm selectors, SATzilla, also gives rise to an automated method for quantifying the importance of each of a set of available solvers. We entered a substantially improved version of SATzilla in the inaugural "analysis track" of the 2011 SAT competition, and draw two main conclusions from the results that we obtained. First, automatically constructed portfolios of sequential, non-portfolio competition entries perform substantially better than the winners of all three sequential categories. Second, and more importantly, a detailed analysis of these portfolios yields valuable insights into the nature of successful solver designs in the different categories. For example, we show that the solvers contributing most to SATzilla were often not the overall best-performing solvers, but instead solvers that exploit novel solution strategies to solve instances that would remain unsolved without them.
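
One simple way to quantify a solver's contribution, in the spirit of the analysis described (SATzilla's own analysis is more involved), is its marginal effect on an oracle portfolio:

```python
def marginal_contributions(runtimes):
    """For each solver, compute how much the oracle ("virtual best") portfolio
    slows down when that solver is removed; larger values mean the solver
    contributes more. `runtimes[solver][i]` is that solver's runtime on
    instance i."""
    solvers = list(runtimes)
    n = len(next(iter(runtimes.values())))
    def vbs(pool):                               # oracle portfolio: best solver per instance
        return sum(min(runtimes[s][i] for s in pool) for i in range(n))
    full = vbs(solvers)
    return {s: vbs([t for t in solvers if t != s]) - full for s in solvers}
```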

Research paper thumbnail of Sequential Model-Based Optimization for General Algorithm Configuration

Lecture Notes in Computer Science, 2011

Research paper thumbnail of Parallel Algorithm Configuration

Lecture Notes in Computer Science, 2012

State-of-the-art algorithms for solving hard computational problems often expose many parameters whose settings critically affect empirical performance. Manually exploring the resulting combinatorial space of parameter settings is often tedious and unsatisfactory. Automated approaches for finding good parameter settings are becoming increasingly prominent and have recently led to substantial improvements in the state of the art for solving a variety of computationally challenging problems. However, running such automated algorithm configuration procedures is typically very costly, involving many thousands of invocations of the algorithm to be configured. Here, we study the extent to which parallel computing can come to the rescue. We compare straightforward parallelization by multiple independent runs with a more sophisticated method of parallelizing the model-based configuration procedure SMAC. Empirical results for configuring the MIP solver CPLEX demonstrate that near-optimal speedups can be obtained with up to 16 parallel workers, and that 64 workers can still accomplish challenging configuration tasks that previously took 2 days in 1-2 hours. Overall, we show that our methods make effective use of large-scale parallel resources and thus substantially expand the practical applicability of algorithm configuration methods.
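
A sketch of the independent-runs baseline discussed above, assuming a picklable single-run wrapper `configure_once(seed) -> (config, cost)`; the model-based parallelization of SMAC is considerably more involved.

```python
from concurrent.futures import ProcessPoolExecutor

def parallel_independent_runs(configure_once, seeds, workers=16):
    """Launch independent configuration runs in parallel and keep the best
    resulting configuration. Because the runs share nothing, speedup comes
    only from taking the minimum over more independent attempts."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(configure_once, seeds))
    return min(results, key=lambda r: r[1])      # (config, cost) with lowest cost
```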

Research paper thumbnail of Boosting as a Metaphor for Algorithm Design

Lecture Notes in Computer Science, 2003

Although some algorithms are better than others on average, there is rarely a best algorithm for a given problem. Instead, different algorithms often perform well on different problem instances. Not surprisingly, this phenomenon is most pronounced among algorithms for ...
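
This observation motivates per-instance algorithm selection; a minimal sketch, assuming pre-trained per-algorithm runtime models:

```python
def select_algorithm(instance_features, models):
    """Pick the algorithm with the lowest predicted runtime for this instance.
    `models` is an assumed dict mapping algorithm name to a pre-trained
    callable from instance features to predicted runtime."""
    return min(models, key=lambda alg: models[alg](instance_features))
```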

Research paper thumbnail of Towards a universal test suite for combinatorial auction algorithms

Proceedings of the 2nd ACM conference on Electronic commerce - EC '00, 2000

General combinatorial auctions, auctions in which bidders place unrestricted bids for bundles of goods, are the subject of increasing study. Much of this work has focused on algorithms for finding an optimal or approximately optimal set of winning bids. Comparatively little attention has been paid to methodical evaluation and comparison of these algorithms. In particular, there has not been a systematic discussion of appropriate data sets that can serve as universally accepted and well motivated benchmarks. In this paper we present a suite of distribution families for generating realistic, economically motivated combinatorial bids in five broad real-world domains. We hope that this work will yield many comments, criticisms and extensions, bringing the community closer to a universal combinatorial auction test suite.
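
A toy sketch of what a parameterized bid-generating distribution looks like, far simpler than the suite's economically motivated families; all constants here are arbitrary.

```python
import random

def toy_bid_distribution(n_goods, n_bids, seed=42):
    """Generate (bundle, price) bids: bundle sizes follow a truncated
    exponential, and a bundle's price grows superlinearly in its size as a
    crude stand-in for complementarity between goods."""
    rng = random.Random(seed)
    bids = []
    for _ in range(n_bids):
        size = min(n_goods, 1 + int(rng.expovariate(0.7)))
        bundle = frozenset(rng.sample(range(n_goods), size))
        price = round(len(bundle) ** 1.3 * rng.uniform(0.8, 1.2), 2)
        bids.append((bundle, price))
    return bids
```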

Research paper thumbnail of An evaluation of sequential model-based optimization for expensive blackbox functions

Proceeding of the fifteenth annual conference companion on Genetic and evolutionary computation conference companion - GECCO '13 Companion, 2013

We benchmark a sequential model-based optimization procedure, SMAC-BBOB, on the BBOB set of blackbox functions. We demonstrate that with a small budget of 10×D evaluations of D-dimensional functions, SMAC-BBOB in most cases outperforms the state-of-the-art blackbox optimizer CMA-ES. However, CMA-ES benefits more from growing the budget to 100×D, and for larger numbers of function evaluations SMAC-BBOB also requires increasingly large computational resources for building and using its models.
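
A generic sketch of the sequential model-based optimization loop that procedures like SMAC-BBOB instantiate, with the surrogate model supplied through assumed `fit` and `suggest` hooks:

```python
import random

def smbo_minimize(f, dim, budget, fit, suggest, seed=0):
    """Minimize blackbox f over [-5, 5]^dim: alternate between fitting a
    surrogate to all observations and evaluating the point it proposes.
    `fit(X, y) -> model` and `suggest(model, rng) -> point` are assumed hooks
    (e.g. a random forest plus an expected-improvement maximizer)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(2 * dim)]  # initial design
    y = [f(x) for x in X]
    while len(y) < budget:                     # e.g. budget = 10 * dim, as in the abstract
        model = fit(X, y)
        x = suggest(model, rng)
        X.append(x)
        y.append(f(x))
    best = min(range(len(y)), key=y.__getitem__)
    return X[best], y[best]
```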
