Konstantinos Daskalakis - Academia.edu


Papers by Konstantinos Daskalakis

The Complexity of Hex and the Jordan Curve Theorem

The Jordan curve theorem and Brouwer's fixed-point theorem are fundamental results in topology. We study their computational relationship, showing that a stylized computational version of Jordan's theorem is PPAD-complete, and therefore in a sense computationally equivalent to Brouwer's theorem. As a corollary, our computational result implies that these two theorems directly imply each other mathematically, complementing Maehara's proof that Brouwer implies Jordan [Maehara, 1984]. We then turn to the combinatorial game of Hex, which is related to Jordan's theorem and where the existence of a winner can be used to show Brouwer's theorem [Gale, 1979]. We establish that determining who won an (implicitly encoded) play of Hex is PSPACE-complete by adapting a reduction (due to Goldberg [Goldberg, 2015]) from Quantified Boolean Formula (QBF). As this problem is analogous to evaluating the output of a canonical path-following algorithm for finding a Brouwer fixed point - an…
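For an explicitly given, fully played board, detecting the winner of Hex is just a connectivity check; the hardness in the result above kicks in only when the play is implicitly encoded. A minimal sketch, with an illustrative rhombic-grid encoding that is not taken from the paper:

```python
from collections import deque

def hex_winner(board):
    """Return 'R' if Red connects top to bottom, 'B' if Blue connects
    left to right, else None.  board is a list of strings over {'R','B'}
    on an n x n rhombic Hex grid (illustrative encoding)."""
    n = len(board)
    # The six hexagonal neighbors of cell (r, c) on a rhombic grid.
    neigh = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

    def connected(player, sources, is_goal):
        seen = set(sources)
        queue = deque(sources)
        while queue:
            r, c = queue.popleft()
            if is_goal(r, c):
                return True
            for dr, dc in neigh:
                nr, nc = r + dr, c + dc
                if (0 <= nr < n and 0 <= nc < n
                        and board[nr][nc] == player and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return False

    red_sources = [(0, c) for c in range(n) if board[0][c] == 'R']
    if connected('R', red_sources, lambda r, c: r == n - 1):
        return 'R'
    blue_sources = [(r, 0) for r in range(n) if board[r][0] == 'B']
    if connected('B', blue_sources, lambda r, c: c == n - 1):
        return 'B'
    return None
```

On a fully played board exactly one side wins (the Hex theorem, which is the combinatorial content connecting the game to Brouwer and Jordan above).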

A size-free CLT for poisson multinomials and its applications

Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, 2016

An (n, k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, ..., e_k} of standard basis vectors in R^k. We show that any (n, k)-PMD is poly(k/σ)-close in total variation distance to the (appropriately discretized) multi-dimensional Gaussian with the same first two moments, removing the dependence on n from the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is obtained by bootstrapping the Valiant-Valiant CLT itself through the structural characterization of PMDs shown in recent work by Daskalakis, Kamath and Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS for approximate Nash equilibria in anonymous games, significantly improving the state of the art, and matching qualitatively the running time dependence on n and 1/ε of the best known algorithm for two-strategy anonymous games. Our new CLT also enables the construction of covers for the set of (n, k)-PMDs, which are proper and whose size is shown to be essentially optimal. Our cover construction combines our CLT with the Shapley-Folkman theorem and recent sparsification results for Laplacian matrices by Batson, Spielman, and Srivastava.
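For intuition in one dimension (a projection of the k = 2 case), one can compute exactly how close a Binomial(n, p) law is, in total variation, to the Gaussian with matching first two moments rounded to the integers. A minimal sketch with illustrative parameter values:

```python
import math

def binomial_pmf(n, p):
    """Exact pmf of Binomial(n, p) on {0, ..., n}."""
    return [math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]

def discretized_gaussian_pmf(n, mu, sigma):
    """pmf of a N(mu, sigma^2) variable rounded to the nearest integer,
    with the two tails folded into the endpoints 0 and n."""
    def Phi(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    pmf = []
    for i in range(n + 1):
        lo = -math.inf if i == 0 else i - 0.5
        hi = math.inf if i == n else i + 0.5
        pmf.append(Phi(hi) - Phi(lo))
    return pmf

def tv_distance(p, q):
    """Total variation distance between two pmfs on the same support."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

n, p = 400, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
d = tv_distance(binomial_pmf(n, p), discretized_gaussian_pmf(n, mu, sigma))
```

The computed distance d shrinks as the standard deviation grows while n plays no separate role, which is the qualitative phenomenon the size-free CLT makes precise in arbitrary dimension.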

On the Structure, Covering, and Learning of Poisson Multinomial Distributions

2015 IEEE 56th Annual Symposium on Foundations of Computer Science, 2015

An (n, k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, ..., e_k} of standard basis vectors in R^k. We prove a structural characterization of these distributions, showing that, for all ε > 0, any (n, k)-Poisson multinomial random vector is ε-close, in total variation distance, to the sum of a discretized multidimensional Gaussian and an independent (poly(k/ε), k)-Poisson multinomial random vector. Our structural characterization extends the multi-dimensional CLT of [VV11], by simultaneously applying to all approximation requirements ε. In particular, it overcomes factors depending on log n and, importantly, the minimum eigenvalue of the PMD's covariance matrix. We use our structural characterization to obtain an ε-cover, in total variation distance, of the set of all (n, k)-PMDs, significantly improving the cover size of [DP08, DP15], and obtaining the same qualitative dependence of the cover size on n and ε as the k = 2 cover of [DP09, DP14]. We further exploit this structure to show that (n, k)-PMDs can be learned to within ε in total variation distance from Õ_k(1/ε^2) samples, which is near-optimal in terms of dependence on ε and independent of n. In particular, our result generalizes the single-dimensional result of [DDS12] for Poisson binomials to arbitrary dimension. Finally, as a corollary of our results on PMDs, we give an Õ_k(1/ε^2)-sample algorithm for learning (n, k)-sums of independent integer random variables (SIIRVs), which is near-optimal for constant k.
Theorem 2 (PMD Covers). For all n, k ∈ N and ε > 0, there exists an ε-cover, in total variation distance, of the set of all (n, k)-PMDs whose size is n^(k^2) · min{ 2^poly(k/ε), 2^O(k^(5k) · log^(k+2)(1/ε)) }. (An ε-cover F_ε of a set of distributions F is called proper iff F_ε ⊆ F.) We make a few remarks about our cover. First, the cover is non-proper, containing distributions of the form specified in Theorem 1, i.e. convolutions of a discretized Gaussian and a PMD. Moreover, it is straightforward to see that any cover has size at least n^Ω(k) and at least (1/ε)^Ω(k). For the first lower bound, count the number of (n, k)-PMDs whose summands are deterministic. For the second, count the number of (1, k)-PMDs whose probabilities are integer multiples of ε. So, for fixed k, our bound has the right qualitative dependence on n (namely polynomial), and a near-right dependence on 1/ε (namely quasi-polynomial rather than polynomial). Moreover, it obtains the same qualitative dependence on n and ε as the k = 2 cover of [DP09, DP14], namely polynomial in n and quasi-polynomial in 1/ε.

Learning PMDs. In view of tools for hypothesis selection from a cover (see, e.g., Theorem 7), our cover theorem directly implies that (n, k)-PMDs can be learned from O(k^(5k) · log n · log^(k+2)(1/ε)/ε^2) samples. This is near-optimal in terms of ε, as Ω(k/ε^2) samples are necessary even for learning a (1, k)-PMD. We show that the dependence on n can be completely removed from the learner, generalizing the results on Poisson Binomial Distributions [DDS12].

Theorem 3 (PMD Learning). For all n, k ∈ N and ε > 0, there is a learning algorithm for (n, k)-PMDs with the following properties: Let X = Σ_{i=1}^n X_i be any (n, k)-Poisson multinomial random vector.
The algorithm uses min{ O(k^(5k) · log^(k+2)(1/ε)/ε^2), poly(k/ε) } samples from X, runs in time min{ 2^O(k^(5k) · log^(k+2)(1/ε)), 2^poly(k/ε) }, and with probability at least 9/10 outputs a (succinct description of a) random vector X̂ such that d_TV(X, X̂) ≤ ε.

Additional Results: Learning k-SIIRVs. An (n, k)-SIIRV is the sum of n independent (single-dimensional) random variables supported on {0, ..., k − 1}. SIIRVs generalize Poisson Binomial distributions, which correspond to the case k = 2. At the same time, SIIRVs can be viewed as projections of PMDs onto the vector (0, 1, ..., k − 1). In particular, if X is an (n, k)-SIIRV, there exists an (n, k)-Poisson multinomial random vector Y such that X = (0, 1, ..., k − 1)^T · Y. Recent work has established that (n, k)-SIIRVs can be learned from poly(k/ε) samples, independent of n, while even learning a (1, k)-SIIRV already requires Ω(k/ε^2) samples [DDO+13]. A question arising from this work is finding the optimal dependence of the sample complexity on ε. Demonstrating the expressive power of PMDs, as a corollary of our cover result, we show that the optimal dependence is actually Õ_k(1/ε^2).

Theorem 4 (SIIRV Learning). For all n, k ∈ N and ε > 0, there is a learning algorithm for (n, k)-SIIRVs with the following properties: Let X = Σ_{i=1}^n X_i be any (n, k)-SIIRV. The algorithm uses k^(5k) · O(log^(k+2)(1/ε)/ε^2) samples from X, runs in time 2^O(k^(5k) · log^(k+2)(1/ε)), and with probability at least 9/10 outputs a random variable X̂ such that d_TV(X, X̂) ≤ ε. Simultaneous work by Diakonikolas, Kane and Stewart [DKS15] takes a direct approach to solving this problem. Using Fourier-based methods, they give a polynomial-time algorithm which requires Õ(k/ε^2) samples, obtaining near-optimal dependence on both k and ε.
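The projection relating SIIRVs to PMDs is concrete enough to code directly: sample an (n, k)-Poisson multinomial vector and dot it with (0, 1, ..., k−1). A minimal sketch with illustrative parameter choices:

```python
import random

def sample_pmd(probs, rng):
    """Sample an (n, k)-Poisson multinomial random vector: the sum of n
    independent standard basis vectors e_j, where summand i picks
    coordinate j with probability probs[i][j]."""
    k = len(probs[0])
    total = [0] * k
    for row in probs:
        j = rng.choices(range(k), weights=row)[0]
        total[j] += 1
    return total

def sample_siirv(probs, rng):
    """Project a PMD sample onto (0, 1, ..., k-1) to get an (n, k)-SIIRV."""
    y = sample_pmd(probs, rng)
    return sum(j * y[j] for j in range(len(y)))

rng = random.Random(1)
probs = [[0.2, 0.5, 0.3]] * 50   # n = 50 i.i.d. summands, k = 3 (illustrative)
x = sample_siirv(probs, rng)
```

Every PMD sample has coordinates summing to n, and the projected SIIRV value lies in {0, ..., n(k−1)}, matching the definitions above.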

Zero-Sum Polymatrix Games: A Generalization of Minmax

Mathematics of Operations Research, 2016

We show that in zero-sum polymatrix games, a multiplayer generalization of two-person zero-sum games, Nash equilibria can be found efficiently with linear programming. We also show that the set of coarse correlated equilibria collapses to the set of Nash equilibria. In contrast, other important properties of two-person zero-sum games are not preserved: Nash equilibrium payoffs need not be unique, and Nash equilibrium strategies need not be exchangeable or max-min.
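In the two-player special case, the linear program behind this kind of result is the classical max-min LP: the row player maximizes the value v it can guarantee against every pure column. A minimal sketch using scipy.optimize.linprog (the variable layout and solver choice are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Max-min strategy for the row player of the zero-sum game with
    row-payoff matrix A.  Variables: mixed strategy x_1..x_m plus the
    guaranteed value v; maximize v s.t. (A^T x)_j >= v for every column j."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                               # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])  # encodes v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                          # sum_i x_i = 1 (v unconstrained)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Rock-paper-scissors: the unique max-min strategy is uniform, value 0.
rps = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])
x, v = solve_zero_sum(rps)
```

The polymatrix generalization in the paper keeps this LP flavor: one polynomial-size program over all players' strategies certifies the equilibrium.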

Learning in Auctions: Regret is Hard, Envy is Easy

2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), 2016

A large line of recent work studies the welfare guarantees of simple and prevalent combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) [CKS08, BR11, FFGL13]. These guarantees hold even when the auctions are repeatedly executed and the players use no-regret learning algorithms to choose their actions. Unfortunately, off-the-shelf no-regret learning algorithms for these auctions are computationally inefficient as the number of actions available to each player is exponential. We show that this obstacle is insurmountable: there are no polynomial-time no-regret learning algorithms for SiSPAs, unless RP ⊇ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good outcomes polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed "no-envy learning." This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have valuations from the broad class of fractionally subadditive (XOS) valuations (assuming demand oracle access to the valuations) or coverage valuations (even without demand oracles). No-envy learning outcomes are a relaxation of no-regret learning outcomes, which maintain their approximate welfare optimality while endowing them with computational tractability. Our result for XOS valuations can be viewed as the first instantiation of approximate welfare maximization in combinatorial auctions with XOS valuations, where both the designer and the agents are computationally bounded and the agents are strategic. Our positive and negative results extend to many other simple auction formats that have been studied in the literature via the smoothness paradigm.

Our positive results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts and the states of nature are both infinite, and the payoff function of the learner is non-linear. We show that this algorithm has applications outside of auction settings, establishing big gains in a recent application of no-regret learning in security games. Our efficient learning result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.
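The classical finite-expert Follow-The-Perturbed-Leader rule that the paper generalizes (to infinitely many experts and non-linear payoffs) fits in a few lines. A minimal sketch with exponential perturbations and an illustrative learning-rate parameter eta:

```python
import random

def ftpl(payoffs, eta, rng=None):
    """Follow-The-Perturbed-Leader over finitely many experts.
    payoffs[t][i] is the payoff of expert i in round t (revealed after
    the round).  Each round, play the expert maximizing cumulative past
    payoff plus an independent Exp(eta) perturbation."""
    rng = rng or random.Random(0)
    n_experts = len(payoffs[0])
    cumulative = [0.0] * n_experts
    total = 0.0
    for round_payoffs in payoffs:
        noise = [rng.expovariate(eta) for _ in range(n_experts)]
        choice = max(range(n_experts), key=lambda i: cumulative[i] + noise[i])
        total += round_payoffs[choice]
        for i in range(n_experts):
            cumulative[i] += round_payoffs[i]
    # Learner's realized payoff vs. the best fixed expert in hindsight.
    return total, max(cumulative)
```

Against any fixed payoff sequence the gap between the two returned quantities (the regret) grows sublinearly; the paper's contribution is making this work when the experts cannot even be enumerated.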

The Complexity of Optimal Mechanism Design

Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, 2013

Myerson's seminal work provides a computationally efficient revenue-optimal auction for selling one item to multiple bidders [18]. Generalizing this work to selling multiple items at once has been a central question in economics and algorithmic game theory, but its complexity has remained poorly understood. We answer this question by showing that a revenue-optimal auction in multi-item settings cannot be found and implemented computationally efficiently, unless ZPP ⊇ P^#P. This is true even for a single additive bidder whose values for the items are independently distributed on two rational numbers with rational probabilities. Our result is very general: we show that it is hard to compute any encoding of an optimal auction of any format (direct or indirect, truthful or non-truthful) that can be implemented in expected polynomial time. In particular, under widely believed complexity-theoretic assumptions, revenue optimization in very simple multi-item settings can only be tractably approximated. We note that our hardness result applies to randomized mechanisms in a very simple setting, and is not an artifact of introducing combinatorial structure to the problem by allowing correlation among item values, introducing combinatorial valuations, or requiring the mechanism to be deterministic (whose structure is readily combinatorial). Our proof is enabled by a flow interpretation of the solutions of an exponential-size linear program for revenue maximization with an additional supermodularity constraint.

Strong Duality for a Multiple-Good Monopolist

Proceedings of the Sixteenth ACM Conference on Economics and Computation - EC '15, 2015

We provide a duality-based framework for revenue maximization in a multiple-good monopoly. Our framework shows that every optimal mechanism has a certificate of optimality, taking the form of an optimal transportation map between measures. Using our framework, we prove that grand-bundling mechanisms are optimal if and only if two stochastic dominance conditions hold between specific measures induced by the buyer's type distribution. This result strengthens several results in the literature, where only sufficient conditions for grand-bundling optimality have been provided. As a corollary of our tight characterization of grand-bundling optimality, we show that the optimal mechanism for n independent uniform items each supported on [c, c + 1] is a grand-bundling mechanism, as long as c is sufficiently large, extending Pavlov's result for 2 items [Pav11]. In contrast, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c, c + 1] is not a grand-bundling mechanism. The necessary and sufficient condition for grand-bundling optimality is a special case of our more general characterization result that provides necessary and sufficient conditions for the optimality of an arbitrary mechanism (with a finite menu size) for an arbitrary type distribution.

Revenue Maximization and Ex-Post Budget Constraints

Proceedings of the Sixteenth ACM Conference on Economics and Computation, 2015

We consider the problem of a revenue-maximizing seller with m items for sale to n additive bidders with hard budget constraints, assuming that the seller has some prior distribution over bidder values and budgets. The prior may be correlated across items and budgets of the same bidder, but is assumed independent across bidders. We target mechanisms that are Bayesian Incentive Compatible, but that are ex-post Individually Rational and ex-post budget respecting. Virtually no such mechanisms are known that satisfy all these conditions and guarantee any revenue approximation, even with just a single item. We provide a computationally efficient mechanism that is a 3-approximation with respect to all BIC, ex-post IR, and ex-post budget respecting mechanisms. Note that the problem is NP-hard to approximate better than a factor of 16/15, even in the case where the prior is a point mass [Chakrabarty and Goel 2010]. We further characterize the optimal mechanism in this setting, showing that it can be interpreted as a distribution over virtual welfare maximizers. We prove our results by making use of a black-box reduction from mechanism to algorithm design developed by [Cai et al. 2013]. Our main technical contribution is a computationally efficient 3-approximation algorithm for the algorithmic problem that results from applying their framework to this problem. The algorithmic problem has a mixed-sign objective and is NP-hard to optimize exactly, so it is surprising that a computationally efficient approximation is possible at all. In the case of a single item (m = 1), the algorithmic problem can be solved exactly via exhaustive search, leading to a computationally efficient exact algorithm and a stronger characterization of the optimal mechanism as a distribution over virtual value maximizers.

Optimal Pricing Is Hard

Lecture Notes in Computer Science, 2012

We show that computing the revenue-optimal deterministic auction in unit-demand single-buyer Bayesian settings, i.e. the optimal item-pricing, is computationally hard even in single-item settings where the buyer's value distribution is a sum of independently distributed attributes, or multi-item settings where the buyer's values for the items are independent. We also show that it is intractable to optimally price the grand bundle of multiple items for an additive bidder whose values for the items are independent. These difficulties stem from implicit definitions of a value distribution. We provide three instances of how different properties of implicit distributions can lead to intractability: the first is a #P-hardness proof, while the remaining two are reductions from the SQRT-SUM problem of Garey, Graham, and Johnson [14]. While simple pricing schemes can oftentimes approximate the best scheme in revenue, they can have a drastically different underlying structure. We argue therefore that either the specification of the input distribution must be highly restricted in format, or it is necessary for the goal to be mere approximation of the optimal scheme's revenue instead of computing properties of the scheme itself.

Testing Poisson Binomial Distributions

Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2014

A Poisson Binomial distribution over n variables is the distribution of the sum of n independent Bernoullis. We provide a sample near-optimal algorithm for testing whether a distribution P supported on {0, ..., n}, to which we have sample access, is a Poisson Binomial distribution, or far from all Poisson Binomial distributions. The sample complexity of our algorithm is O(n^(1/4)), to which we provide a matching lower bound. We note that our sample complexity improves quadratically upon that of the naive "learn followed by tolerant-test" approach, while instance-optimal identity testing [VV14] is not applicable since we are looking to simultaneously test against a whole family of distributions.

Sparse covers for sums of indicators

Probability Theory and Related Fields, 2014

For all n, ε > 0, we show that the set of Poisson Binomial distributions on n variables admits a proper ε-cover in total variation distance of size n^2 + n · (1/ε)^O(log^2(1/ε)), which can also be computed in polynomial time. We discuss the implications of our construction for approximation algorithms and the computation of approximate Nash equilibria in anonymous games.

The Complexity of Games on Highly Regular Graphs

Lecture Notes in Computer Science, 2005

We present algorithms and complexity results for the problem of finding equilibria (mixed Nash equilibria, pure Nash equilibria and correlated equilibria) in games with extremely succinct description that are defined on highly regular graphs such as the d-dimensional grid; we argue that such games are of interest in the modelling of large systems of interacting agents. We show that mixed Nash equilibria can be found in time exponential in the succinct representation by quantifier elimination, while correlated equilibria can be found in polynomial time by taking advantage of the game's symmetries. Finally, the complexity of determining whether such a game on the d-dimensional grid has a pure Nash equilibrium depends on d, and the dichotomy is remarkably sharp: it is solvable in polynomial time (in fact, NL-complete) when d = 1, but it is NEXP-complete for d ≥ 2.

A Counter-example to Karlin's Strong Conjecture for Fictitious Play

2014 IEEE 55th Annual Symposium on Foundations of Computer Science, 2014

Fictitious play is a natural dynamic for equilibrium play in zero-sum games, proposed by Brown [6] and shown to converge by Robinson [33]. Samuel Karlin conjectured in 1959 that fictitious play converges at rate O(t^(−1/2)) with respect to the number of steps t. We disprove this conjecture by showing that, when the payoff matrix of the row player is the n × n identity matrix, fictitious play may converge (for some tie-breaking) at a rate as slow as Ω(t^(−1/n)).
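Fictitious play on the identity game is easy to simulate. The sketch below uses lowest-index tie-breaking, which is one admissible choice and not necessarily the tie-breaking that achieves the lower bound above; the update order is likewise an illustrative convention:

```python
def fictitious_play(A, steps):
    """Fictitious play in a zero-sum game with row-payoff matrix A
    (a list of lists).  Each round, both players best-respond to the
    opponent's empirical mixture so far; ties are broken toward the
    lowest index.  Returns the empirical mixed strategies."""
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    i, j = 0, 0  # arbitrary initial actions
    for _ in range(steps):
        row_counts[i] += 1
        col_counts[j] += 1
        # Row best-responds (maximizes payoff) to the column's history.
        i = max(range(m),
                key=lambda r: sum(A[r][c] * col_counts[c] for c in range(n)))
        # Column best-responds (minimizes payoff) to the row's history.
        j = min(range(n),
                key=lambda c: sum(A[r][c] * row_counts[r] for r in range(m)))
    x = [c / steps for c in row_counts]
    y = [c / steps for c in col_counts]
    return x, y
```

On the 3 × 3 identity game the play cycles through ever-longer blocks of repeated actions, which is exactly the mechanism behind the slow convergence rate.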

Bayesian Truthful Mechanisms for Job Scheduling from Bi-criterion Approximation Algorithms

Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2014

We provide polynomial-time approximately optimal Bayesian mechanisms for makespan minimization on unrelated machines as well as for max-min fair allocations of indivisible goods, with approximation factors of 2 and min{m − k + 1, Õ(√k)} respectively, matching the approximation ratios of the best known polynomial-time algorithms (for max-min fairness, the latter claim is true for certain ratios of the number of goods m to the number of people k). Our mechanisms are obtained by establishing a polynomial-time approximation-sensitive reduction from the problem of designing approximately optimal mechanisms for some arbitrary objective O to that of designing bi-criterion approximation algorithms for the same objective O plus a linear allocation cost term. Our reduction is itself enabled by extending the celebrated "equivalence of separation and optimization" [27, 32] to also accommodate bi-criterion approximations. Moreover, to apply the reduction to the specific problems of makespan and max-min fairness, we develop polynomial-time bi-criterion approximation algorithms for makespan minimization with costs and max-min fairness with costs, adapting the algorithms of [45], [10] and [4] to the type of bi-criterion approximation that is required by the reduction.

Reducing Revenue to Welfare Maximization: Approximation Algorithms and other Generalizations

Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, 2013

It was recently shown in [12] that revenue optimization can be computationally efficiently reduced to welfare optimization in all multi-dimensional Bayesian auction problems with arbitrary (possibly combinatorial) feasibility constraints and independent additive bidders with arbitrary (possibly combinatorial) demand constraints. This reduction provides a poly-time solution to the optimal mechanism design problem in all auction settings where welfare optimization can be solved efficiently, but it is fragile to approximation and cannot provide solutions to settings where welfare maximization can only be tractably approximated. In this paper, we extend the reduction to accommodate approximation algorithms, providing an approximation-preserving reduction from (truthful) revenue maximization to (not necessarily truthful) welfare maximization. The mechanisms output by our reduction choose allocations via black-box calls to welfare approximation on randomly selected inputs, thereby also generalizing our earlier structural results on optimal multi-dimensional mechanisms to approximately optimal mechanisms. Unlike [12], our results here are obtained through novel uses of the Ellipsoid algorithm and other optimization techniques over non-convex regions.

Understanding Incentives: Mechanism Design Becomes Algorithm Design

2013 IEEE 54th Annual Symposium on Foundations of Computer Science, 2013

We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing any objective under arbitrary feasibility constraints with arbitrary bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism's behavior indirectly, only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds both ways. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone submodular bidder. We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness. This is the first instance of a truthful mechanism that optimizes a non-linear objective.

Optimal Multi-dimensional Mechanism Design: Reducing Revenue to Welfare Maximization

2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, 2012

We provide a reduction from revenue maximization to welfare maximization in multi-dimensional Bayesian auctions with arbitrary (possibly combinatorial) feasibility constraints and independent bidders with arbitrary (possibly combinatorial) demand constraints, appropriately extending Myerson's single-dimensional result [24] to this setting. We also show that every feasible Bayesian auction can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: every bidder's type t_i is transformed into a virtual type f_i(t_i) via a bidder-specific function; then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black-box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule. We obtain our reduction from revenue to welfare optimization via two algorithmic results on reduced-form auctions in settings with arbitrary feasibility and demand constraints. First, we provide a separation oracle for determining feasibility of a reduced-form auction. Second, we provide a geometric algorithm to decompose any feasible reduced form into a distribution over virtual VCG allocation rules. In addition, we show how to execute both algorithms given only black-box access to an implementation of the VCG allocation rule. Our results are computationally efficient for all multi-dimensional settings where the bidders are additive (or can be efficiently mapped to be additive). In this case, our mechanisms run in time polynomial in the number of items and the total number of bidder types, but not type profiles.
This running time is polynomial in the number of items, the number of bidders, and the cardinality of the support of each bidder's value distribution. For generic correlated distributions, this is the natural description complexity of the problem. The running time can be further improved to polynomial in only the number of items and the number of bidders in item-symmetric settings by making use of techniques from [15]. … has oracle access to a bidder's valuation, or impose some structure on the bidders' valuations allowing them to be succinctly described. Indeed, virtually every recent result in the revenue-maximization literature [2, 4, 7, 8, 9, 10, 11, 15, 20] assumes that bidders are capacitated-additive. In fact, most results are for unit-demand bidders. It is easy to see that, if we are allowed to incorporate arbitrary demand constraints into the definition of F, such bidders can be described in our model as simply additive. In fact, far more complex bidders can be modeled as well, as demand constraints could instead be some arbitrary set system. Because F is already an arbitrary set system, we may model bidders as simply additive and still capture virtually every bidder model studied in recent results, and more general ones as well. In fact, we note that every multi-dimensional setting can be mapped to an additive one, albeit not necessarily computationally efficiently. So while we focus our discussion on additive bidders throughout this paper, our results apply to every auction setting, without need for any additivity assumption.
In particular, our characterization result (Informal Theorem 2) of feasible allocation rules holds for any multi-dimensional setting, and our reduction from revenue to welfare optimization (Informal Theorem 1) also holds for any setting; we show that it can be carried out computationally efficiently in any additive setting.

Optimal Multi-dimensional Mechanism Design. With the above motivation in mind, we formally state the revenue optimization problem we solve. We remark that virtually every known result in the multi-dimensional mechanism design literature (see references above) tackles a special case of this problem, possibly with budget constraints on the bidders (which can be easily incorporated in all results presented in this paper, as discussed in Appendix H), and possibly replacing BIC with IC. We explicitly assume in the definition of the problem that the bidders are additive, recalling that this is not a restriction if computational considerations are not in place.

Revenue-Maximizing Multi-Dimensional Mechanism Design Problem (MDMDP): Given as input m distributions (possibly correlated across items) D_1, ..., D_m over valuation vectors for n heterogeneous items and feasibility constraints F, output a BIC mechanism M whose allocation is in F with probability 1 and whose expected revenue is optimal relative to any other, possibly randomized, BIC mechanism when played by m additive bidders whose valuation vectors are sampled from D = ×_i D_i.

Research paper thumbnail of An algorithmic characterization of multi-dimensional mechanisms

Proceedings of the 44th symposium on Theory of Computing - STOC '12, 2012

We obtain a characterization of feasible, Bayesian, multi-item multi-bidder mechanisms with independent, additive bidders as distributions over hierarchical mechanisms. Combined with cyclic monotonicity, our results provide a complete characterization of feasible, Bayesian Incentive Compatible mechanisms for this setting. Our characterization is enabled by a novel, constructive proof of Border's theorem [5], and a new generalization of this theorem to independent (but not necessarily identically distributed) bidders, improving upon the results of [6, 12]. For a single item and independent (but not necessarily identically distributed) bidders, we show that any feasible reduced form auction can be implemented as a distribution over hierarchical mechanisms. We also give a polynomial-time algorithm for determining feasibility of a reduced form auction, or providing a separating hyperplane from the set of feasible reduced forms. To complete the picture, we provide polynomial-time algorithms to find and exactly sample from a distribution over hierarchical mechanisms consistent with a given feasible reduced form. All these results generalize to multi-item reduced form auctions for independent, additive bidders. Finally, for multiple items, additive bidders with hard demand constraints, and arbitrary value correlation across items or bidders, we give a proper generalization of Border's Theorem, and characterize feasible reduced form auctions as multi-commodity flows in related multi-commodity flow instances. We also show that our generalization holds for a broader class of feasibility constraints, including the intersection of any two matroids.
As a corollary of our results we obtain revenue-optimal, Bayesian Incentive Compatible (BIC) mechanisms in multi-item multi-bidder settings, when each bidder has arbitrarily correlated values over the items and additive valuations over bundles of items, and the bidders are independent. Our mechanisms run in time polynomial in the total number of bidder types (and not type profiles). This running time is polynomial in the number of bidders, but potentially exponential in the number of items. We improve the running time to polynomial in both the number of items and the number of bidders by using recent structural results on optimal BIC auctions in item-symmetric settings [14].

Research paper thumbnail of Extreme value theorems for optimal multidimensional pricing

Games and Economic Behavior, 2015

We provide a Polynomial Time Approximation Scheme for the multi-dimensional unit-demand pricing problem, when the buyer's values are independent (but not necessarily identically distributed). For all ε > 0, we obtain a (1 + ε)-factor approximation to the optimal revenue in time: polynomial, when the values are sampled from Monotone Hazard Rate (MHR) distributions; quasi-polynomial, when sampled from regular distributions; and polynomial in n^poly(log r), when sampled from general distributions supported on a set [u_min, r·u_min]. We also provide an additive PTAS for all bounded distributions. Our algorithms are based on novel extreme value theorems for MHR and regular distributions, and apply probabilistic techniques to understand the statistical properties of revenue distributions, as well as to reduce the size of the search space of the algorithm. As a byproduct of our techniques, we establish structural properties of optimal solutions. We show that, for all ε > 0, g(1/ε) distinct prices suffice to obtain a (1 + ε)-factor approximation to the optimal revenue for MHR distributions, where g(1/ε) is a quasi-linear function of 1/ε that does not depend on the number of items. Similarly, for all ε > 0 and n > 0, g(1/ε · log n) distinct prices suffice for regular distributions, where n is the number of items and g(·) is a polynomial function. Finally, in the i.i.d. MHR case, we show that, as long as the number of items is a sufficiently large function of 1/ε, a single price suffices to achieve a (1 + ε)-factor approximation.
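The single-price structural result above can be illustrated with a small sketch (not from the paper): for a unit-demand buyer with n i.i.d. Uniform[0, 1] values, posting one price p on every item yields expected revenue p · (1 − p^n), which a grid search optimizes directly. The function names are hypothetical.

```python
# Sketch (illustrative, not the paper's algorithm): revenue of a single
# take-it-or-leave-it price for a unit-demand buyer with n i.i.d. values.
# Assumes values ~ Uniform[0, 1], so P(max value >= p) = 1 - p**n.

def single_price_revenue(p, n):
    """Expected revenue from posting price p on every item: the buyer
    purchases (one item) iff her maximum value is at least p."""
    return p * (1 - p ** n)

def best_single_price(n, grid=10_000):
    prices = [i / grid for i in range(grid + 1)]
    return max(prices, key=lambda p: single_price_revenue(p, n))

# For n = 1, the optimal price of a Uniform[0,1] buyer is 1/2, with revenue 1/4.
p1 = best_single_price(1)
# As n grows, the buyer's maximum value concentrates and the optimal single
# price rises, matching the flavor of the i.i.d. result in the abstract.
p10 = best_single_price(10)
```

For n = 10 the first-order condition 1 − 11p^10 = 0 places the optimal price near 0.79, well above the single-item price of 0.5.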

Research paper thumbnail of How good is the Chord algorithm?

Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, 2010

The Chord algorithm is a popular, simple method for the succinct approximation of curves, which is widely used, under different names, in a variety of areas, such as multiobjective and parametric optimization, computational geometry, and graphics. We analyze the performance of the chord algorithm, as compared to the optimal approximation that achieves a desired accuracy with the minimum number of points. We prove sharp upper and lower bounds, both in the worst case and average case setting.
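A minimal sketch of the chord idea (assuming a concave curve given by point evaluation; the sampling-based deviation search and all names here are illustrative, not the paper's pseudocode):

```python
# Sketch of the Chord algorithm for approximating a concave curve y = f(x)
# on [a, b] with few points: find the curve point farthest (vertically) from
# the current chord; if the gap exceeds eps, keep the point and refine both
# halves. Here the farthest point is located by grid sampling for simplicity.

def chord_approx(f, a, b, eps, samples=200):
    def farthest(lo, hi):
        slope = (f(hi) - f(lo)) / (hi - lo)   # chord through the endpoints
        best_x, best_d = lo, 0.0
        for i in range(1, samples):
            x = lo + (hi - lo) * i / samples
            d = f(x) - (f(lo) + slope * (x - lo))  # vertical gap to the chord
            if d > best_d:
                best_x, best_d = x, d
        return best_x, best_d

    def recurse(lo, hi, out):
        x, d = farthest(lo, hi)
        if d > eps:
            recurse(lo, x, out)
            out.append(x)
            recurse(x, hi, out)

    pts = [a]
    recurse(a, b, pts)
    pts.append(b)
    return pts  # x-coordinates of the selected approximation points

points = chord_approx(lambda x: x ** 0.5, 0.0, 1.0, eps=0.01)
```

The paper's question is exactly how the number of points this greedy refinement selects compares to the minimum number achieving the same accuracy.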

Research paper thumbnail of The Complexity of Hex and the Jordan Curve Theorem

The Jordan curve theorem and Brouwer's fixed-point theorem are fundamental problems in topology. We study their computational relationship, showing that a stylized computational version of Jordan’s theorem is PPAD-complete, and therefore in a sense computationally equivalent to Brouwer’s theorem. As a corollary, our computational result implies that these two theorems directly imply each other mathematically, complementing Maehara's proof that Brouwer implies Jordan [Maehara, 1984]. We then turn to the combinatorial game of Hex which is related to Jordan's theorem, and where the existence of a winner can be used to show Brouwer's theorem [Gale, 1979]. We establish that determining who won an (implicitly encoded) play of Hex is PSPACE-complete by adapting a reduction (due to Goldberg [Goldberg, 2015]) from Quantified Boolean Formula (QBF). As this problem is analogous to evaluating the output of a canonical path-following algorithm for finding a Brouwer fixed point - an...

Research paper thumbnail of A size-free CLT for poisson multinomials and its applications

Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, 2016

An (n, k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, ..., e_k} of standard basis vectors in R^k. We show that any (n, k)-PMD is poly(k/σ)-close in total variation distance to the (appropriately discretized) multi-dimensional Gaussian with the same first two moments, removing the dependence on n from the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is obtained by bootstrapping the Valiant-Valiant CLT itself through the structural characterization of PMDs shown in recent work by Daskalakis, Kamath and Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS for approximate Nash equilibria in anonymous games, significantly improving the state of the art, and matching qualitatively the running time dependence on n and 1/ε of the best known algorithm for two-strategy anonymous games. Our new CLT also enables the construction of covers for the set of (n, k)-PMDs, which are proper and whose size is shown to be essentially optimal. Our cover construction combines our CLT with the Shapley-Folkman theorem and recent sparsification results for Laplacian matrices by Batson, Spielman,

Research paper thumbnail of On the Structure, Covering, and Learning of Poisson Multinomial Distributions

2015 IEEE 56th Annual Symposium on Foundations of Computer Science, 2015

An (n, k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, ..., e_k} of standard basis vectors in R^k. We prove a structural characterization of these distributions, showing that, for all ε > 0, any (n, k)-Poisson multinomial random vector is ε-close, in total variation distance, to the sum of a discretized multidimensional Gaussian and an independent (poly(k/ε), k)-Poisson multinomial random vector. Our structural characterization extends the multi-dimensional CLT of [VV11], by simultaneously applying to all approximation requirements ε. In particular, it overcomes factors depending on log n and, importantly, the minimum eigenvalue of the PMD's covariance matrix. We use our structural characterization to obtain an ε-cover, in total variation distance, of the set of all (n, k)-PMDs, significantly improving the cover size of [DP08, DP15], and obtaining the same qualitative dependence of the cover size on n and ε as the k = 2 cover of [DP09, DP14]. We further exploit this structure to show that (n, k)-PMDs can be learned to within ε in total variation distance from Õ_k(1/ε^2) samples, which is near-optimal in terms of dependence on ε and independent of n. In particular, our result generalizes the single-dimensional result of [DDS12] for Poisson binomials to arbitrary dimension. Finally, as a corollary of our results on PMDs, we give an Õ_k(1/ε^2)-sample algorithm for learning (n, k)-sums of independent integer random variables (SIIRVs), which is near-optimal for constant k.
Theorem 2 (PMD Covers). For all n, k ∈ N and ε > 0, there exists an ε-cover, in total variation distance, of the set of all (n, k)-PMDs whose size is n^(k^2) · min{2^poly(k/ε), 2^O(k^(5k) · log^(k+2)(1/ε))}. (An ε-cover F_ε of a set of distributions F is called proper iff F_ε ⊆ F.) We make a few remarks about our cover. First, the cover is non-proper, containing distributions that are of the form specified in Theorem 1, i.e. convolutions of a discretized Gaussian and a PMD. Moreover, it is straightforward to see that any cover has size at least n^Ω(k) and at least (1/ε)^Ω(k). For the first lower bound, count the number of (n, k)-PMDs whose summands are deterministic. For the second, count the number of (1, k)-PMDs whose probabilities are integer multiples of ε. So, for fixed k, our bound has the right qualitative dependence on n (namely polynomial), and a near-right dependence on 1/ε (namely quasi-polynomial rather than polynomial). Moreover, it obtains the same qualitative dependence on n and ε as the k = 2 cover of [DP09, DP14], namely polynomial in n and quasi-polynomial in 1/ε. Learning PMDs. In view of tools for hypothesis selection from a cover (see, e.g., Theorem 7), our cover theorem directly implies that (n, k)-PMDs can be learned from O(k^(5k) · log n · log^(k+2)(1/ε)/ε^2) samples. This is near-optimal in terms of ε, as Ω(k/ε^2) samples are necessary even for learning a (1, k)-PMD. We show that the dependence on n can be completely removed from the learner, generalizing the results on Poisson Binomial Distributions [DDS12]. Theorem 3 (PMD Learning). For all n, k ∈ N and ε > 0, there is a learning algorithm for (n, k)-PMDs with the following properties: Let X = Σ_{i=1}^n X_i be any (n, k)-Poisson multinomial random vector.
The algorithm uses min{O(k^(5k) · log^(k+2)(1/ε)/ε^2), poly(k/ε)} samples from X, runs in time min{2^O(k^(5k) · log^(k+2)(1/ε)), 2^poly(k/ε)}, and with probability at least 9/10 outputs a (succinct description of a) random vector X̂ such that d_TV(X, X̂) ≤ ε. Additional Results: Learning k-SIIRVs. An (n, k)-SIIRV is the sum of n independent (single-dimensional) random variables supported on {0, ..., k − 1}. SIIRVs generalize Poisson Binomial distributions, which correspond to the case k = 2. At the same time, SIIRVs can be viewed as projections of PMDs onto the vector (0, 1, ..., k − 1). In particular, if X is an (n, k)-SIIRV, there exists an (n, k)-Poisson multinomial random vector Y such that X = (0, 1, ..., k − 1)^T · Y. Recent work has established that (n, k)-SIIRVs can be learned from poly(k/ε) samples, independent of n, while even learning a (1, k)-SIIRV already requires Ω(k/ε^2) samples [DDO+13]. A question arising from this work is finding the optimal dependence of the sample complexity on ε. Demonstrating the expressive power of PMDs, as a corollary of our cover result, we show that the optimal dependence is actually Õ_k(1/ε^2). Theorem 4 (SIIRV Learning). For all n, k ∈ N and ε > 0, there is a learning algorithm for (n, k)-SIIRVs with the following properties: Let X = Σ_{i=1}^n X_i be any (n, k)-SIIRV. The algorithm uses k^(5k) · O(log^(k+2)(1/ε)/ε^2) samples from X, runs in time 2^O(k^(5k) · log^(k+2)(1/ε)), and with probability at least 9/10 outputs a random variable X̂ such that d_TV(X, X̂) ≤ ε. Simultaneous work by Diakonikolas, Kane and Stewart [DKS15] takes a direct approach to solving this problem. Using Fourier-based methods, they give a polynomial-time algorithm which requires Õ(k/ε^2) samples, obtaining near-optimal dependence on both k and ε.
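The definition of a PMD can be made concrete with a small sampler (illustrative code, not part of the paper): the i-th summand picks one of the k basis vectors according to its own probability vector, and the PMD value is the resulting histogram.

```python
# Sketch illustrating an (n, k)-PMD: the sum of n independent random vectors,
# the i-th supported on the standard basis vectors e_1, ..., e_k of R^k with
# its own probability vector p[i]. The sum is just a histogram of choices.
import random

def sample_pmd(p, rng):
    """p: list of n probability vectors, each of length k. Returns the
    length-k histogram (summing to n) of the chosen basis vectors."""
    k = len(p[0])
    counts = [0] * k
    for probs in p:
        counts[rng.choices(range(k), weights=probs)[0]] += 1
    return counts

rng = random.Random(0)
n, k = 50, 3
p = [[0.2, 0.3, 0.5]] * n  # i.i.d. summands, chosen for simplicity
x = sample_pmd(p, rng)
```

The structural theorem above says such a histogram is distributed ε-close to a discretized Gaussian plus a small residual PMD with only poly(k/ε) summands.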

Research paper thumbnail of Zero-Sum Polymatrix Games: A Generalization of Minmax

Mathematics of Operations Research, 2016

We show that in zero-sum polymatrix games, a multiplayer generalization of two-person zero-sum games, Nash equilibria can be found efficiently with linear programming. We also show that the set of coarse correlated equilibria collapses to the set of Nash equilibria. In contrast, other important properties of two-person zero-sum games are not preserved: Nash equilibrium payoffs need not be unique, and Nash equilibrium strategies need not be exchangeable or max-min.
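The two-player special case of this LP approach is the classic minmax program, sketched below with SciPy (illustrative; the paper's contribution is the generalization to the multiplayer polymatrix setting, not this building block).

```python
# Sketch: computing a maximin (Nash) strategy of a two-player zero-sum game
# by linear programming. Variables (x_1, ..., x_n, v); maximize v subject to
# (A^T x)_j >= v for every column j, sum(x) = 1, x >= 0.
import numpy as np
from scipy.optimize import linprog

def minmax_strategy(A):
    """Row player's maximin strategy for payoff matrix A (row maximizes)."""
    n, m = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                 # maximize v == minimize -v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])    # v - (A^T x)_j <= 0
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = np.ones(1)
    bounds = [(0, 1)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]                  # strategy and game value

# Rock-paper-scissors: value 0, uniform equilibrium strategy.
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
x, v = minmax_strategy(rps)
```

In the polymatrix generalization, one LP jointly computes strategies for all players along the edges of the game graph; the sketch shows only the single-edge case.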

Research paper thumbnail of Learning in Auctions: Regret is Hard, Envy is Easy

2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), 2016

A large line of recent work studies the welfare guarantees of simple and prevalent combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) [CKS08, BR11, FFGL13]. These guarantees hold even when the auctions are repeatedly executed and the players use no-regret learning algorithms to choose their actions. Unfortunately, off-the-shelf no-regret learning algorithms for these auctions are computationally inefficient as the number of actions available to each player is exponential. We show that this obstacle is insurmountable: there are no polynomial-time no-regret learning algorithms for SiSPAs, unless RP ⊇ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good an outcome polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed "no-envy learning." This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have valuations from the broad class of fractionally subadditive (XOS) valuations (assuming demand oracle access to the valuations) or coverage valuations (even without demand oracles). No-envy learning outcomes are a relaxation of no-regret learning outcomes, which maintain their approximate welfare optimality while endowing them with computational tractability. Our result for XOS valuations can be viewed as the first instantiation of approximate welfare maximization in combinatorial auctions with XOS valuations, where both the designer and the agents are computationally bounded and agents are strategic. Our positive and negative results extend to many other simple auction formats that have been studied in the literature via the smoothness paradigm.
Our positive results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts and states of nature are both infinite, and the payoff function of the learner is non-linear. We show that this algorithm has applications outside of auction settings, establishing big gains in a recent application of no-regret learning in security games. Our efficient learning result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.
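The finite-expert template that Follow-The-Perturbed-Leader builds on can be sketched as follows (a toy version with hypothetical names; the paper's algorithm handles infinitely many experts and non-linear payoffs, which this sketch does not attempt):

```python
# Sketch of classic Follow-The-Perturbed-Leader (FTPL) with finitely many
# experts: each round, play the expert maximizing cumulative reward plus
# fresh exponential noise, then observe the full reward vector.
import random

def ftpl(reward_rows, eta, rng):
    """reward_rows: T x K rewards revealed online; eta: perturbation scale.
    Returns the learner's total reward."""
    k = len(reward_rows[0])
    cum = [0.0] * k
    total = 0.0
    for row in reward_rows:
        noise = [rng.expovariate(1.0 / eta) for _ in range(k)]  # mean eta
        choice = max(range(k), key=lambda i: cum[i] + noise[i])
        total += row[choice]          # payoff of the perturbed leader
        for i in range(k):            # full-information feedback
            cum[i] += row[i]
    return total

rng = random.Random(1)
T = 1000
rows = [[1.0, 0.0] for _ in range(T)]  # expert 0 is always better
got = ftpl(rows, eta=T ** 0.5, rng=rng)
```

With the standard perturbation scale of order √T, the learner quickly locks onto the better expert and its total reward stays close to the best expert's.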

Research paper thumbnail of The Complexity of Optimal Mechanism Design

Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, 2013

Myerson's seminal work provides a computationally efficient revenue-optimal auction for selling one item to multiple bidders [18]. Generalizing this work to selling multiple items at once has been a central question in economics and algorithmic game theory, but its complexity has remained poorly understood. We answer this question by showing that a revenue-optimal auction in multi-item settings cannot be found and implemented computationally efficiently, unless ZPP ⊇ P^#P. This is true even for a single additive bidder whose values for the items are independently distributed on two rational numbers with rational probabilities. Our result is very general: we show that it is hard to compute any encoding of an optimal auction of any format (direct or indirect, truthful or non-truthful) that can be implemented in expected polynomial time. In particular, under well-believed complexity-theoretic assumptions, revenue-optimization in very simple multi-item settings can only be tractably approximated. We note that our hardness result applies to randomized mechanisms in a very simple setting, and is not an artifact of introducing combinatorial structure to the problem by allowing correlation among item values, introducing combinatorial valuations, or requiring the mechanism to be deterministic (whose structure is readily combinatorial). Our proof is enabled by a flow-interpretation of the solutions of an exponential-size linear program for revenue maximization with an additional supermodularity constraint.

Research paper thumbnail of Strong Duality for a Multiple-Good Monopolist

Proceedings of the Sixteenth ACM Conference on Economics and Computation - EC '15, 2015

We provide a duality-based framework for revenue maximization in a multiple-good monopoly. Our framework shows that every optimal mechanism has a certificate of optimality, taking the form of an optimal transportation map between measures. Using our framework, we prove that grand-bundling mechanisms are optimal if and only if two stochastic dominance conditions hold between specific measures induced by the buyer's type distribution. This result strengthens several results in the literature, where only sufficient conditions for grand-bundling optimality have been provided. As a corollary of our tight characterization of grand-bundling optimality, we show that the optimal mechanism for n independent uniform items each supported on [c, c + 1] is a grand-bundling mechanism, as long as c is sufficiently large, extending Pavlov's result for 2 items [Pav11]. In contrast, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c, c + 1] is not a grand bundling mechanism. The necessary and sufficient condition for grand bundling optimality is a special case of our more general characterization result that provides necessary and sufficient conditions for the optimality of an arbitrary mechanism (with a finite menu size) for an arbitrary type distribution.

Research paper thumbnail of Revenue Maximization and Ex-Post Budget Constraints

Proceedings of the Sixteenth ACM Conference on Economics and Computation, 2015

We consider the problem of a revenue-maximizing seller with m items for sale to n additive bidders with hard budget constraints, assuming that the seller has some prior distribution over bidder values and budgets. The prior may be correlated across items and budgets of the same bidder, but is assumed independent across bidders. We target mechanisms that are Bayesian Incentive Compatible, but that are ex-post Individually Rational and ex-post budget respecting. Virtually no such mechanisms are known that satisfy all these conditions and guarantee any revenue approximation, even with just a single item. We provide a computationally efficient mechanism that is a 3-approximation with respect to all BIC, ex-post IR, and ex-post budget respecting mechanisms. Note that the problem is NP-hard to approximate better than a factor of 16/15, even in the case where the prior is a point mass [Chakrabarty and Goel 2010]. We further characterize the optimal mechanism in this setting, showing that it can be interpreted as a distribution over virtual welfare maximizers. We prove our results by making use of a black-box reduction from mechanism to algorithm design developed by [Cai et al. 2013]. Our main technical contribution is a computationally efficient 3-approximation algorithm for the algorithmic problem that results by an application of their framework to this problem. The algorithmic problem has a mixed-sign objective and is NP-hard to optimize exactly, so it is surprising that a computationally efficient approximation is possible at all. In the case of a single item (m = 1), the algorithmic problem can be solved exactly via exhaustive search, leading to a computationally efficient exact algorithm and a stronger characterization of the optimal mechanism as a distribution over virtual value maximizers.

Research paper thumbnail of Optimal Pricing Is Hard

Lecture Notes in Computer Science, 2012

We show that computing the revenue-optimal deterministic auction in unit-demand single-buyer Bayesian settings, i.e. the optimal item-pricing, is computationally hard even in single-item settings where the buyer's value distribution is a sum of independently distributed attributes, or multi-item settings where the buyer's values for the items are independent. We also show that it is intractable to optimally price the grand bundle of multiple items for an additive bidder whose values for the items are independent. These difficulties stem from implicit definitions of a value distribution. We provide three instances of how different properties of implicit distributions can lead to intractability: the first is a #P-hardness proof, while the remaining two are reductions from the SQRT-SUM problem of Garey, Graham, and Johnson [14]. While simple pricing schemes can oftentimes approximate the best scheme in revenue, they can have drastically different underlying structure. We argue therefore that either the specification of the input distribution must be highly restricted in format, or it is necessary for the goal to be mere approximation to the optimal scheme's revenue instead of computing properties of the scheme itself.

Research paper thumbnail of Testing Poisson Binomial Distributions

Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2014

A Poisson Binomial distribution over n variables is the distribution of the sum of n independent Bernoullis. We provide a sample near-optimal algorithm for testing whether a distribution P supported on {0, ..., n} to which we have sample access is a Poisson Binomial distribution, or far from all Poisson Binomial distributions. The sample complexity of our algorithm is O(n^(1/4)), for which we provide a matching lower bound. We note that our sample complexity improves quadratically upon that of the naive "learn followed by tolerant-test" approach, while instance-optimal identity testing [VV14] is not applicable since we are looking to simultaneously test against a whole family of distributions.
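The object being tested is easy to compute exactly for small n (an illustrative sketch, not the paper's tester): the pmf of a sum of independent Bernoullis follows from a simple convolution, and total variation distance quantifies the "far from all Poisson Binomial distributions" alternative.

```python
# Sketch: exact pmf of a Poisson Binomial distribution -- the law of
# X = sum of independent Bernoulli(p_i) -- by dynamic programming.

def pbd_pmf(ps):
    """Exact pmf over {0, ..., n} for independent Bernoulli(p), p in ps."""
    pmf = [1.0]
    for p in ps:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1 - p)      # this Bernoulli came up 0
            nxt[k + 1] += mass * p        # this Bernoulli came up 1
        pmf = nxt
    return pmf

def total_variation(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Two fair coins: pmf (1/4, 1/2, 1/4).
pmf = pbd_pmf([0.5, 0.5])
# Distance to a slightly biased pair -- the kind of gap a tester must detect.
d = total_variation(pmf, pbd_pmf([0.6, 0.6]))
```

The tester in the paper, of course, never sees the parameters p_i; it must decide membership in this family from O(n^(1/4)) samples alone.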

Research paper thumbnail of Sparse covers for sums of indicators

Probability Theory and Related Fields, 2014

For all n and ε > 0, we show that the set of Poisson Binomial distributions on n variables admits a proper ε-cover in total variation distance of size n^2 + n · (1/ε)^O(log^2(1/ε)), which can also be computed in polynomial time. We discuss the implications of our construction for approximation algorithms and the computation of approximate Nash equilibria in anonymous games.

Research paper thumbnail of The Complexity of Games on Highly Regular Graphs

Lecture Notes in Computer Science, 2005

We present algorithms and complexity results for the problem of finding equilibria (mixed Nash equilibria, pure Nash equilibria and correlated equilibria) in games with extremely succinct description that are defined on highly regular graphs such as the d-dimensional grid; we argue that such games are of interest in the modelling of large systems of interacting agents. We show that mixed Nash equilibria can be found in time exponential in the succinct representation by quantifier elimination, while correlated equilibria can be found in polynomial time by taking advantage of the game's symmetries. Finally, the complexity of determining whether such a game on the d-dimensional grid has a pure Nash equilibrium depends on d and the dichotomy is remarkably sharp: it is solvable in polynomial time (in fact NL-complete) when d = 1, but it is NEXP-complete for d ≥ 2.

Research paper thumbnail of A Counter-example to Karlin's Strong Conjecture for Fictitious Play

2014 IEEE 55th Annual Symposium on Foundations of Computer Science, 2014

Fictitious play is a natural dynamic for equilibrium play in zero-sum games, proposed by Brown [6], and shown to converge by Robinson [33]. Samuel Karlin conjectured in 1959 that fictitious play converges at rate O(t^(−1/2)) with respect to the number of steps t. We disprove this conjecture by showing that, when the payoff matrix of the row player is the n × n identity matrix, fictitious play may converge (for some tie-breaking) at rate as slow as Ω(t^(−1/n)).
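The dynamic itself is simple to state in code. Below is an illustrative sketch on the identity-matrix instance from the abstract, with one fixed tie-breaking rule (lowest index); the paper's slow-convergence construction uses a carefully chosen tie-breaking, which this sketch does not reproduce.

```python
# Sketch: fictitious play on the n x n identity payoff matrix (zero-sum,
# row maximizes x^T I y). Each player best-responds to the opponent's
# empirical mixture; ties are broken toward the lowest index.

def fictitious_play(n, steps):
    row_counts, col_counts = [0] * n, [0] * n
    row_counts[0] += 1
    col_counts[0] += 1                     # both start with action 0
    for t in range(1, steps):
        # row's best response: the column player's most frequent action
        r = max(range(n), key=lambda i: (col_counts[i], -i))
        # column's best response: the row player's least frequent action
        c = min(range(n), key=lambda j: (row_counts[j], j))
        row_counts[r] += 1
        col_counts[c] += 1
    x = [v / steps for v in row_counts]
    y = [v / steps for v in col_counts]
    # duality gap of the empirical mixtures (0 at the uniform equilibrium):
    return max(y) - min(x)

gap = fictitious_play(2, 10_000)
```

Karlin's conjecture concerned how fast this gap shrinks in t; the paper shows that for unlucky tie-breaking it can shrink as slowly as t^(−1/n).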

Research paper thumbnail of Bayesian Truthful Mechanisms for Job Scheduling from Bi-criterion Approximation Algorithms

Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2014

We provide polynomial-time approximately optimal Bayesian mechanisms for makespan minimization on unrelated machines as well as for max-min fair allocations of indivisible goods, with approximation factors of 2 and min{m − k + 1, Õ(√k)} respectively, matching the approximation ratios of the best known polynomial-time algorithms (for max-min fairness, the latter claim is true for certain ratios of the number of goods m to the number of people k). Our mechanisms are obtained by establishing a polynomial-time approximation-sensitive reduction from the problem of designing approximately optimal mechanisms for some arbitrary objective O to that of designing bi-criterion approximation algorithms for the same objective O plus a linear allocation cost term. Our reduction is itself enabled by extending the celebrated "equivalence of separation and optimization" [27, 32] to also accommodate bi-criterion approximations. Moreover, to apply the reduction to the specific problems of makespan and max-min fairness we develop polynomial-time bi-criterion approximation algorithms for makespan minimization with costs and max-min fairness with costs, adapting the algorithms of [45], [10] and [4] to the type of bi-criterion approximation that is required by the reduction.

Research paper thumbnail of Reducing Revenue to Welfare Maximization: Approximation Algorithms and other Generalizations

Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, 2013

It was recently shown in [12] that revenue optimization can be computationally efficiently reduced to welfare optimization in all multi-dimensional Bayesian auction problems with arbitrary (possibly combinatorial) feasibility constraints and independent additive bidders with arbitrary (possibly combinatorial) demand constraints. This reduction provides a poly-time solution to the optimal mechanism design problem in all auction settings where welfare optimization can be solved efficiently, but it is fragile to approximation and cannot provide solutions to settings where welfare maximization can only be tractably approximated. In this paper, we extend the reduction to accommodate approximation algorithms, providing an approximation-preserving reduction from (truthful) revenue maximization to (not necessarily truthful) welfare maximization. The mechanisms output by our reduction choose allocations via black-box calls to welfare approximation on randomly selected inputs, thereby also generalizing our earlier structural results on optimal multi-dimensional mechanisms to approximately optimal mechanisms. Unlike [12], our results here are obtained through novel uses of the Ellipsoid algorithm and other optimization techniques over non-convex regions.

Research paper thumbnail of Understanding Incentives: Mechanism Design Becomes Algorithm Design

2013 IEEE 54th Annual Symposium on Foundations of Computer Science, 2013

We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing any objective under arbitrary feasibility constraints with arbitrary bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism's behavior indirectly only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds both ways. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone submodular bidder. We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness. This is the first instance of a truthful mechanism that optimizes a non-linear objective.

Research paper thumbnail of Optimal Multi-dimensional Mechanism Design: Reducing Revenue to Welfare Maximization

2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, 2012

We provide a reduction from revenue maximization to welfare maximization in multi-dimensional Bayesian auctions with arbitrary (possibly combinatorial) feasibility constraints and independent bidders with arbitrary (possibly combinatorial) demand constraints, appropriately extending Myerson's single-dimensional result [24] to this setting. We also show that every feasible Bayesian auction can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: every bidder's type t_i is transformed into a virtual type f_i(t_i), via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule. We obtain our reduction from revenue to welfare optimization via two algorithmic results on reduced form auctions in settings with arbitrary feasibility and demand constraints. First, we provide a separation oracle for determining feasibility of a reduced form auction. Second, we provide a geometric algorithm to decompose any feasible reduced form into a distribution over virtual VCG allocation rules. In addition, we show how to execute both algorithms given only black box access to an implementation of the VCG allocation rule. Our results are computationally efficient for all multi-dimensional settings where the bidders are additive (or can be efficiently mapped to be additive). In this case, our mechanisms run in time polynomial in the number of items and the total number of bidder types, but not type profiles.
This running time is polynomial in the number of items, the number of bidders, and the cardinality of the support of each bidder's value distribution. For generic correlated distributions, this is the natural description complexity of the problem. The running time can be further improved to polynomial in only the number of items and the number of bidders in item-symmetric settings by making use of techniques from [15]. Such results either assume oracle access to a bidder's valuation, or impose some structure on the bidders' valuations allowing them to be succinctly described. Indeed, virtually every recent result in the revenue-maximization literature [2, 4, 7, 8, 9, 10, 11, 15, 20] assumes that bidders are capacitated-additive. In fact, most results are for unit-demand bidders. It is easy to see that, if we are allowed to incorporate arbitrary demand constraints into the definition of F, such bidders can be described in our model as simply additive. Far more complex bidders can be modeled as well, as demand constraints could instead be some arbitrary set system. Because F is already an arbitrary set system, we may model bidders as simply additive and still capture virtually every bidder model studied in recent results, and more general ones as well. We also note that every multi-dimensional setting can be mapped to an additive one, albeit not necessarily computationally efficiently. So while we focus our discussion on additive bidders throughout this paper, our results apply to every auction setting, without need for any additivity assumption.
In particular, our characterization result (Informal Theorem 2) of feasible allocation rules holds for any multi-dimensional setting, and our reduction from revenue to welfare optimization (Informal Theorem 1) also holds for any setting; we show that it can be carried out computationally efficiently in any additive setting. Optimal Multi-dimensional Mechanism Design. With the above motivation in mind, we formally state the revenue optimization problem we solve. We remark that virtually every known result in the multi-dimensional mechanism design literature (see references above) tackles a special case of this problem, possibly with budget constraints on the bidders (which can be easily incorporated in all results presented in this paper, as discussed in Appendix H), and possibly replacing BIC with IC. We explicitly assume in the definition of the problem that the bidders are additive, recalling that this is not a restriction if computational considerations are not in place. Revenue-Maximizing Multi-Dimensional Mechanism Design Problem (MDMDP): Given as input m distributions (possibly correlated across items) D_1, ..., D_m over valuation vectors for n heterogeneous items and feasibility constraints F, output a BIC mechanism M whose allocation is in F with probability 1 and whose expected revenue is optimal relative to any other, possibly randomized, BIC mechanism when played by m additive bidders whose valuation vectors are sampled from D = ×_i D_i.
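To make the virtual VCG allocation rule concrete, here is a minimal Python sketch for additive bidders with no cross-item feasibility constraints. The function name, the per-bidder virtual transformations passed as plain callables, and the item-by-item allocation are simplifying assumptions for illustration, not the paper's general construction:

```python
def virtual_vcg_allocate(types, virtual_fns):
    """Allocate items by maximizing virtual welfare.

    types: list of per-bidder value vectors; types[i][j] is bidder i's value for item j.
    virtual_fns: bidder-specific maps from a value vector to a virtual value vector.
    Returns a dict mapping each item to its winner, or None if every
    virtual value for that item is non-positive (the item stays unsold).
    """
    virtual = [virtual_fns[i](t) for i, t in enumerate(types)]
    n_items = len(types[0])
    allocation = {}
    for j in range(n_items):
        # With additive bidders and no cross-item constraints, virtual welfare
        # decomposes item by item: give item j to the highest positive virtual value.
        best_i = max(range(len(types)), key=lambda i: virtual[i][j])
        allocation[j] = best_i if virtual[best_i][j] > 0 else None
    return allocation
```

For example, with bidder 0's virtual type equal to her real type and bidder 1's values shifted down by 0.5, types [3, 1] and [2, 2] yield virtual values [3, 1] and [1.5, 1.5], so item 0 goes to bidder 0 and item 1 to bidder 1.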

Research paper thumbnail of An algorithmic characterization of multi-dimensional mechanisms

Proceedings of the 44th symposium on Theory of Computing - STOC '12, 2012

We obtain a characterization of feasible, Bayesian, multi-item multi-bidder mechanisms with independent, additive bidders as distributions over hierarchical mechanisms. Combined with cyclic monotonicity, our results provide a complete characterization of feasible, Bayesian Incentive Compatible mechanisms for this setting. Our characterization is enabled by a novel, constructive proof of Border's theorem [5], and a new generalization of this theorem to independent (but not necessarily identically distributed) bidders, improving upon the results of [6, 12]. For a single item and independent (but not necessarily identically distributed) bidders, we show that any feasible reduced form auction can be implemented as a distribution over hierarchical mechanisms. We also give a polynomial-time algorithm for determining feasibility of a reduced form auction, or providing a separation hyperplane from the set of feasible reduced forms. To complete the picture, we provide polynomial-time algorithms to find and exactly sample from a distribution over hierarchical mechanisms consistent with a given feasible reduced form. All these results generalize to multi-item reduced form auctions for independent, additive bidders. Finally, for multiple items, additive bidders with hard demand constraints, and arbitrary value correlation across items or bidders, we give a proper generalization of Border's Theorem, and characterize feasible reduced form auctions as multi-commodity flows in related multi-commodity flow instances. We also show that our generalization holds for a broader class of feasibility constraints, including the intersection of any two matroids.
As a corollary of our results we obtain revenue-optimal, Bayesian Incentive Compatible (BIC) mechanisms in multi-item multi-bidder settings, when each bidder has arbitrarily correlated values over the items and additive valuations over bundles of items, and the bidders are independent. Our mechanisms run in time polynomial in the total number of bidder types (and not type profiles). This running time is polynomial in the number of bidders, but potentially exponential in the number of items. We improve the running time to polynomial in both the number of items and the number of bidders by using recent structural results on optimal BIC auctions in item-symmetric settings [14].
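For intuition about reduced-form feasibility in the simplest setting, the classical symmetric version of Border's theorem (a single item, n i.i.d. bidders) can be checked directly: a reduced form π is feasible iff, for every set S of types, n·Σ_{t∈S} f(t)·π(t) ≤ 1 − (1 − f(S))^n, and it suffices to test the sets containing the types with the highest π values. The sketch below is illustrative only; the paper's algorithms handle the far more general asymmetric, multi-item case:

```python
def border_feasible(pi, f, n):
    """Check the symmetric Border condition for a single-item auction.

    pi: interim allocation probability of each type (symmetric across bidders).
    f: probability mass of each type (i.i.d. across the n bidders).
    Checks only the upper level sets of pi, which are the binding constraints.
    """
    order = sorted(range(len(pi)), key=lambda t: -pi[t])  # types by pi, descending
    lhs = 0.0   # n * sum_{t in S} f(t) * pi(t): expected winners with type in S
    mass = 0.0  # f(S): probability a single bidder's type lies in S
    for t in order:
        lhs += n * f[t] * pi[t]
        mass += f[t]
        rhs = 1.0 - (1.0 - mass) ** n  # Pr[some bidder has a type in S]
        if lhs > rhs + 1e-9:           # small slack for floating point
            return False
    return True
```

For example, with two equally likely types and n = 2 bidders, awarding the item to the higher type (ties split) gives π = (0.75, 0.25), which satisfies the condition with equality on {high}; inflating the high type's probability to 0.9 violates it.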

Research paper thumbnail of Extreme value theorems for optimal multidimensional pricing

Games and Economic Behavior, 2015

We provide a Polynomial Time Approximation Scheme for the multi-dimensional unit-demand pricing problem, when the buyer's values are independent (but not necessarily identically distributed). For all ε > 0, we obtain a (1+ε)-factor approximation to the optimal revenue in time polynomial, when the values are sampled from Monotone Hazard Rate (MHR) distributions; quasi-polynomial, when sampled from regular distributions; and polynomial in n^{poly(log r)}, when sampled from general distributions supported on a set [u_min, r·u_min]. We also provide an additive PTAS for all bounded distributions. Our algorithms are based on novel extreme value theorems for MHR and regular distributions, and apply probabilistic techniques to understand the statistical properties of revenue distributions, as well as to reduce the size of the search space of the algorithm. As a byproduct of our techniques, we establish structural properties of optimal solutions. We show that, for all ε > 0, g(1/ε) distinct prices suffice to obtain a (1+ε)-factor approximation to the optimal revenue for MHR distributions, where g(1/ε) is a quasi-linear function of 1/ε that does not depend on the number of items. Similarly, for all ε > 0 and n > 0, g(1/ε · log n) distinct prices suffice for regular distributions, where n is the number of items and g(·) is a polynomial function. Finally, in the i.i.d. MHR case, we show that, as long as the number of items is a sufficiently large function of 1/ε, a single price suffices to achieve a (1+ε)-factor approximation.
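As a toy illustration of pricing under an MHR distribution, consider a single buyer with value drawn from the exponential distribution with rate 1 (which is MHR): posting price p yields expected revenue p·Pr[value ≥ p] = p·e^{−p}, maximized at p = 1. The grid search below is a sketch for building intuition, not the paper's algorithm:

```python
import math

def exp_revenue(p, rate=1.0):
    # Expected revenue from posting price p to one buyer with Exp(rate) value:
    # p * Pr[value >= p] = p * exp(-rate * p).
    return p * math.exp(-rate * p)

def best_price(prices):
    # Pick the grid price with maximum expected revenue.
    return max(prices, key=exp_revenue)

prices = [i / 1000 for i in range(1, 5001)]  # grid over (0, 5]
p_star = best_price(prices)
```

The grid search recovers the monopoly price p* = 1 with revenue 1/e ≈ 0.368, matching the closed-form optimum obtained by setting the derivative of p·e^{−p} to zero.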

Research paper thumbnail of How good is the Chord algorithm?

Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, 2010

The Chord algorithm is a popular, simple method for the succinct approximation of curves, which is widely used, under different names, in a variety of areas, such as multiobjective and parametric optimization, computational geometry, and graphics. We analyze the performance of the Chord algorithm, as compared to the optimal approximation that achieves a desired accuracy with the minimum number of points. We prove sharp upper and lower bounds, in both the worst-case and average-case settings.