Robert Kleinberg | Cornell University
Papers by Robert Kleinberg
IEEE Symposium on Foundations of Computer Science, 2000
We consider the revenue-maximization problem for a seller with an unlimited supply of identical goods, interacting sequentially with a population of n buyers through an on-line posted-price auction mechanism, a paradigm which is frequently available to vendors selling goods over the Internet. For each buyer, the seller names a price between 0 and 1; the buyer decides whether or
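The snippet below is only an illustrative sketch of the kind of posted-price loop this abstract describes, not the algorithm from the paper: it discretizes the price range [0, 1] into a grid and runs a standard EXP3-style multiplicative-weights update over the grid, using the observed revenue as bandit feedback. The grid size K, the exploration rate gamma, and the simulated i.i.d. buyer valuations are all assumptions made for the demo.

```python
import math
import random

def run_posted_price_seller(valuations, K=10, gamma=0.1):
    """Post a price from the grid {1/K, 2/K, ..., 1} to each arriving buyer and
    update EXP3-style weights from the observed revenue (price if the buyer
    accepts, 0 otherwise).  Returns the total revenue collected."""
    prices = [(i + 1) / K for i in range(K)]
    weights = [1.0] * K
    revenue = 0.0
    for v in valuations:                                   # buyers arrive sequentially
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / K for w in weights]
        j = random.choices(range(K), weights=probs)[0]
        reward = prices[j] if v >= prices[j] else 0.0      # buyer accepts iff value >= price
        revenue += reward
        # Importance-weighted update for the price that was actually posted.
        weights[j] *= math.exp(gamma * (reward / probs[j]) / K)
    return revenue

if __name__ == "__main__":
    random.seed(0)
    buyers = [random.random() for _ in range(10_000)]      # hypothetical i.i.d. valuations in [0, 1]
    print(f"revenue: {run_posted_price_seller(buyers):.1f}")
```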
Proceedings of the 43rd annual ACM symposium on Theory of computing - STOC '11, 2011
We consider the problem of designing a revenue-maximizing auction for a single item when the values of the bidders are drawn from a correlated distribution. We observe that the optimal randomized mechanism can be computed by an algorithm that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial-time truthful-in-expectation mechanism that provides a (3/2 + ε)-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial-time deterministic truthful mechanism that guarantees a 5/3-approximation to the revenue achievable by an optimal deterministic truthful mechanism.
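One standard way to make the "polynomial in the size of the support" observation concrete is to write the optimal randomized mechanism as a linear program whose variables are the allocation and payment rules on the support. The formulation below is a sketch of that idea, not necessarily the exact program used in the paper: V is the (finite) support of the joint distribution f, and x_i(v), p_i(v) are bidder i's allocation probability and expected payment on profile v; the constraints are Bayesian incentive compatibility, interim individual rationality, and feasibility.

```latex
\begin{align*}
\max_{x,\,p}\quad & \sum_{v \in V} f(v) \sum_i p_i(v) \\
\text{s.t.}\quad
& \sum_{v_{-i}} f(v_{-i}\mid v_i)\bigl[v_i\, x_i(v_i,v_{-i}) - p_i(v_i,v_{-i})\bigr]
  \;\ge\; \sum_{v_{-i}} f(v_{-i}\mid v_i)\bigl[v_i\, x_i(v_i',v_{-i}) - p_i(v_i',v_{-i})\bigr]
  && \forall i,\ v_i,\ v_i' \\
& \sum_{v_{-i}} f(v_{-i}\mid v_i)\bigl[v_i\, x_i(v_i,v_{-i}) - p_i(v_i,v_{-i})\bigr] \;\ge\; 0
  && \forall i,\ v_i \\
& \sum_i x_i(v) \le 1, \qquad x_i(v) \ge 0 && \forall i,\ v \in V.
\end{align*}
```

The number of variables and constraints is polynomial in |V|, which is why support-sized inputs admit a polynomial-time solution.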
Contemporary Mathematics, 2009
Proceedings of the fourteenth ACM conference on Electronic commerce - EC '13, 2013
What fraction of the potential social surplus in an environment can be extracted by a revenue-maximizing monopolist? We investigate this problem in Bayesian single-parameter environments with independent private values. The precise answer to the question obviously depends on the particulars of the environment: the feasibility constraint and the distributions from which the bidders' private values are sampled. Rather than solving the problem in particular special cases, our work aims to provide universal lower bounds on the revenue-to-welfare ratio that hold under the most general hypotheses that allow for non-trivial such bounds.
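As a concrete data point for the ratio this abstract studies, consider the textbook environment of a single item and a single bidder whose value is uniform on [0, 1]; this worked example is ours, not taken from the paper.

```latex
% Optimal revenue: post a price p, earning p \cdot \Pr[v \ge p] = p(1-p),
% which is maximized at p = 1/2:
\max_{p \in [0,1]} p(1-p) = \tfrac14 .
% Maximum social welfare is the bidder's expected value:
\mathbb{E}[v] = \tfrac12 .
% Hence the revenue-to-welfare ratio in this environment is (1/4)/(1/2) = 1/2.
```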
Lecture Notes in Computer Science, 2010
We consider two partial-information generalizations of the metric traveling salesman problem (TSP) in which the task is to produce a total ordering of a given metric space that performs well for a subset of the space that is not known in advance. In the universal TSP, the subset is chosen adversarially, and in the a priori TSP it is chosen probabilistically. Both
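To make the "total ordering" formulation concrete, the following sketch (our illustration, on a made-up 4-point metric) evaluates a fixed master ordering on a revealed subset: the induced tour simply visits the subset's points in the order they appear in the master ordering, and we compare it to the optimal tour on that subset.

```python
from itertools import permutations

def tour_length(order, dist):
    """Length of the closed tour visiting `order` and returning to its start."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))

def induced_tour(master_order, subset):
    """Universal-TSP rule: visit the revealed subset in master-order sequence."""
    return [p for p in master_order if p in subset]

# Toy symmetric metric satisfying the triangle inequality (an assumption for the demo).
dist = {
    'a': {'a': 0, 'b': 1, 'c': 2, 'd': 2},
    'b': {'a': 1, 'b': 0, 'c': 1, 'd': 2},
    'c': {'a': 2, 'b': 1, 'c': 0, 'd': 1},
    'd': {'a': 2, 'b': 2, 'c': 1, 'd': 0},
}
master = ['a', 'b', 'c', 'd']
subset = {'a', 'c', 'd'}
induced = induced_tour(master, subset)
best = min(tour_length(list(p), dist) for p in permutations(subset))
print(induced, tour_length(induced, dist), best)   # compare induced cost to the optimum on the subset
```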
We study the extent to which combinatorial cut conditions determine the maximum network coding rate of k-pairs communication problems. We seek a combinatorial parameter of directed networks which constitutes a valid upper bound on the network coding rate but exceeds this rate by only a small factor in the worst case. (This worst-case ratio is called the gap
Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing - STOC '15, 2015
Lecture Notes in Computer Science, 2014
ACM-SIAM Symposium on Discrete Algorithms, 2010
The existence of incentive-compatible, computationally-efficient mechanisms for combinatorial auctions with good approximation ratios is the paradigmatic problem in algorithmic mechanism design. It is believed that, in many cases, good approximations for combinatorial auctions may be unattainable due to an inherent clash between truthfulness and computational efficiency. In this paper, we prove the first computational-complexity inapproximability results
Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, 2010
The existence of incentive-compatible, computationally-efficient mechanisms for combinatorial auctions with good approximation ratios is the paradigmatic problem in algorithmic mechanism design. It is believed that, in many cases, good approximations for combinatorial auctions may be unattainable due to an inherent clash between truthfulness and computational efficiency. In this paper, we prove the first computational-complexity inapproximability results for incentive-compatible mechanisms for combinatorial auctions. Our results are tight, hold for the important class of VCG-based mechanisms, and are based on the complexity assumption that NP has no polynomial-size circuits. We show two different techniques to obtain such lower bounds: one for deterministic mechanisms that attains optimal dependence on the number of players and number of items, and one that also applies to a class of randomized mechanisms and attains optimal dependence on the number of players. Both techniques are based on novel VC dimension machinery.
Computing Research Repository, 2010
Index Coding has received considerable attention recently, motivated in part by applications such as fast video-on-demand and efficient communication in wireless networks, and in part by its connection to Network Coding. The basic setting of Index Coding encodes the side-information relation, the problem input, as an undirected graph, and the fundamental parameter is the broadcast rate β, the average communication
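For intuition about the broadcast rate β, here is the textbook extreme case (our illustration, not an example from the paper): when every receiver already knows every message except its own, a single XOR of all messages suffices, so β = 1 even though there are n messages to deliver.

```python
def broadcast_xor(messages):
    """Encoder: XOR of all one-bit messages, the classic index code for the
    complete side-information graph (receiver i wants bit i, knows all others)."""
    acc = 0
    for m in messages:
        acc ^= m
    return acc

def decode(i, broadcast, side_info):
    """Receiver i recovers its own bit by XOR-ing out everything it already knows."""
    acc = broadcast
    for j, m in side_info.items():
        if j != i:
            acc ^= m
    return acc

msgs = [1, 0, 1]                                  # three one-bit messages
b = broadcast_xor(msgs)                           # one broadcast bit instead of three
for i in range(3):
    side = {j: msgs[j] for j in range(3) if j != i}
    assert decode(i, b, side) == msgs[i]
print("all receivers decoded correctly")
```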
Proceedings of the thirty-sixth annual ACM symposium on Theory of computing - STOC '04, 2004
A flow of a commodity is said to be confluent if at any node all the flow of the commodity leaves along a single edge. In this paper we study single-commodity confluent flow problems, where we need to route given node demands to a single destination using a confluent flow. Single- and multi-commodity confluent flows arise in a variety of application areas, most notably in networking; in fact, most flows in the Internet are (multi-commodity) confluent flows since Internet routing is destination based.
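The confluence condition itself is easy to state in code; the checker below is just our illustration of the definition (it does not implement any algorithm from the paper).

```python
def is_confluent(flow, eps=1e-9):
    """Check the confluence condition: at every node, all outgoing flow leaves
    along at most one edge.  `flow` maps directed edges (u, v) to values."""
    out_edges_used = {}
    for (u, v), f in flow.items():
        if f > eps:
            out_edges_used.setdefault(u, set()).add(v)
    return all(len(vs) <= 1 for vs in out_edges_used.values())

# Confluent: every node forwards along a single edge toward the sink t.
print(is_confluent({('a', 'c'): 2.0, ('b', 'c'): 1.0, ('c', 't'): 3.0}))   # True
# Not confluent: node c splits its outgoing flow across two edges.
print(is_confluent({('a', 'c'): 2.0, ('c', 't'): 1.5, ('c', 'd'): 0.5}))   # False
```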
IEEE INFOCOM 2009 - The 28th Conference on Computer Communications, 2009
In this paper, we study an online bipartite matching problem, motivated by applications in wireless communication, content delivery, and job scheduling. In our problem, we have a bipartite graph G between n clients and n servers, which represents the servers to which each client can connect. Although the edges of G are unknown at the start, we learn the graph over time, as each client arrives and requests to be matched to a server. As each client arrives, she reveals the servers to which she can connect, and the goal of the algorithm is to maintain a matching between the clients who have arrived and the servers. Assuming that G has a perfect matching which allows all clients to be matched to servers, the goal of the online algorithm is to minimize the switching cost, the total number of times a client needs to switch servers in order to maintain a matching at all times.
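The sketch below illustrates the setup under an assumed naive policy — re-match along a shortest augmenting path each time a client arrives — and counts the resulting switches; it is our illustration of the problem, not the algorithm analyzed in the paper.

```python
from collections import deque

def online_min_switch_matching(client_neighbors):
    """Maintain a matching as clients arrive one by one, re-matching along a
    shortest augmenting path (BFS) whenever the new client cannot grab a free
    server directly.  Returns the total number of server switches incurred by
    previously matched clients."""
    match_client, match_server = {}, {}        # client -> server, server -> client
    switches = 0
    for c in range(len(client_neighbors)):
        parent = {c: None}                     # BFS tree over clients
        queue, end_client, free_server = deque([c]), None, None
        while queue and free_server is None:
            u = queue.popleft()
            for s in client_neighbors[u]:
                if s not in match_server:      # free server found: augmenting path ends here
                    end_client, free_server = u, s
                    break
                v = match_server[s]            # follow the matched edge to another client
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        if free_server is None:
            continue                           # no augmenting path; client stays unmatched
        u, s = end_client, free_server         # flip the path back toward the new client
        while True:
            old = match_client.get(u)          # server u held before this re-matching
            if old is not None:
                switches += 1
            match_client[u], match_server[s] = s, u
            if parent[u] is None:
                break
            u, s = parent[u], old              # u's old server is handed to its BFS parent
    return switches

# Toy instance: client 2 can only use server 0, so clients 0 and 1 cascade to new servers.
print(online_min_switch_matching([{0, 1}, {1, 2}, {0}]))   # prints 2
```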
Proceedings of the 41st annual ACM symposium on Symposium on theory of computing - STOC '09, 2009
We study the outcome of natural learning algorithms in atomic congestion games. Atomic congestion games have a wide variety of equilibria, often with vastly differing social costs. We show that in almost all such games, the well-known multiplicative-weights learning algorithm results in convergence to pure equilibria. Our results show that natural learning behavior can avoid bad outcomes predicted by the price of anarchy in atomic congestion games such as the load-balancing game introduced by Koutsoupias and Papadimitriou, which has super-constant price of anarchy and has correlated equilibria that are exponentially worse than any mixed Nash equilibrium.
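To show the kind of dynamics this abstract refers to, here is a toy simulation of multiplicative-weights play in a load-balancing game with identical machines; the player counts, learning rate, and bandit-style update are our assumptions for the demo, and this is not the paper's analysis.

```python
import math
import random

def mw_load_balancing(n_players=20, n_machines=5, rounds=2000, eta=0.05, seed=0):
    """Each player repeatedly picks a machine using a multiplicative-weights
    rule, penalizing the chosen machine in proportion to the congestion (load)
    experienced there.  Returns the final machine loads for inspection."""
    rng = random.Random(seed)
    weights = [[1.0] * n_machines for _ in range(n_players)]
    loads = [0] * n_machines
    for _ in range(rounds):
        choices = [rng.choices(range(n_machines), weights=w)[0] for w in weights]
        loads = [0] * n_machines
        for m in choices:
            loads[m] += 1
        for i, m in enumerate(choices):
            cost = loads[m] / n_players              # normalized congestion cost in [0, 1]
            weights[i][m] *= math.exp(-eta * cost)   # penalize the machine actually used
    return loads                                     # near-balanced loads indicate a pure equilibrium

if __name__ == "__main__":
    print(mw_load_balancing())
```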
Lecture Notes in Computer Science, 2010
Network coding theory studies the transmission of information in networks whose vertices may perform nontrivial encoding and decoding operations on data as it passes through the network. The main approach to deciding the feasibility of network coding problems aims to reduce the problem to optimization over a polytope of "entropic vectors" subject to constraints imposed by the network structure. In the case of directed acyclic graphs, these constraints are completely understood, but for general graphs the problem of enumerating them remains open: it is not known how to classify the constraints implied by a property that we call serializability, which refers to the absence of paradoxical circular dependencies in a network code.
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011
We introduce a technique for establishing and amplifying gaps between parameters of network coding and index coding problems. The technique uses linear programs to establish separations between combinatorial and coding-theoretic parameters and applies hypergraph lexicographic products to amplify these separations. This entails combining the dual solutions of the lexicographic multiplicands and proving that this is a valid dual solution of the product. Our result is general enough to apply to a large family of linear programs. This blend of linear programs and lexicographic products gives a recipe for constructing hard instances in which the gap between combinatorial or coding-theoretic parameters is polynomially large. We find polynomial gaps in cases in which the largest previously known gaps were only small constant factors or entirely unknown. Most notably, we show a polynomial separation between linear and non-linear network coding rates. This involves exploiting a connection between matroids and index coding to establish a previously unknown separation between linear and non-linear index coding rates. We also construct index coding problems with a polynomial gap between the broadcast rate and the trivial lower bound for which no gap was previously known.
Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '13, 2013
The network inference problem consists of reconstructing the edge set of a network given traces representing the chronology of infection times as epidemics spread through the network. This problem is a paradigmatic representative of prediction tasks in machine learning that require deducing a latent structure from observed patterns of activity in a network, which often require an unrealistically large number of resources (e.g., amount of available data, or computational time). A fundamental question is to understand which properties we can predict with a reasonable degree of accuracy with the available resources, and which we cannot. We define the trace complexity as the number of distinct traces required to achieve high fidelity in reconstructing the topology of the unobserved network or, more generally, some of its properties. We give algorithms that are competitive with, while being simpler and more efficient than, existing network inference approaches. Moreover, we prove that our algorithms are nearly optimal, by proving an information-theoretic lower bound on the number of traces that an optimal inference algorithm requires for performing this task in the general case. Given these strong lower bounds, we turn our attention to special cases, such as trees and bounded-degree graphs, and to property recovery tasks, such as reconstructing the degree distribution without inferring the network. We show that these problems require a much smaller (and more realistic) number of traces, making them potentially solvable in practice.
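To illustrate what a "trace" gives you, here is one simple inference rule (a sketch of a natural approach, not necessarily the estimator from the paper): in a cascade that starts from a single seed, the second node to be infected must have been infected by the first, so the first two nodes of every trace can be reported as an edge.

```python
def infer_edges_from_traces(traces):
    """Each trace is a list of node ids in order of infection time.  Report the
    edge between the first two infected nodes of every trace: in a single-seed
    cascade, the second infection can only have come from the seed."""
    edges = set()
    for trace in traces:
        if len(trace) >= 2:
            u, v = trace[0], trace[1]
            edges.add((min(u, v), max(u, v)))   # undirected edge in canonical order
    return edges

traces = [
    [0, 2, 1, 3],   # epidemic seeded at node 0, reaching node 2 first
    [3, 1, 0],      # epidemic seeded at node 3, reaching node 1 first
    [2, 0],
]
print(infer_edges_from_traces(traces))   # {(0, 2), (1, 3)}
```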
Proceedings of the Twelfth ACM Workshop on Hot Topics in Networks - HotNets-XII, 2013
This paper presents the Merlin network management framework. With Merlin, administrators express network policy using programs in a declarative language based on logical predicates and regular expressions. The Merlin compiler automatically partitions these programs into components that can be placed on a variety of devices including switches, middleboxes, and end hosts. It uses a constraint solver and parameterizable heuristics to allocate resources such as paths and bandwidth. To ease the administration of federated networks, Merlin provides mechanisms for delegating management of sub-policies to tenants, along with tools for verifying that delegated sub-policies do not violate global constraints. Overall, Merlin simplifies the task of network administration by providing high-level abstractions for directly specifying network policy.
Proceedings of the 13th ACM Conference on Electronic Commerce - EC '12, 2012
We initiate the study of online pricing problems in markets with "buyback," i.e., markets in which prior allocation decisions can be revoked, but at a cost. In our model, a seller receives requests online and chooses which requests to accept, subject to constraints on the subsets of requests which may be accepted simultaneously. A request, once accepted, can be canceled
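As a minimal illustration of the buyback model for a single slot (our toy policy, not the algorithm or threshold from the paper): hold at most one accepted request, and cancel it in favor of a new one only when the new value beats the old one by a trade factor r > 1, paying a penalty of f times the canceled value. Both r and f are assumed parameters here.

```python
def single_slot_buyback(values, r=2.0, f=0.5):
    """Greedy policy for one slot with buyback: replace the held request when a
    new request is worth at least r times more, paying a penalty of f times the
    canceled request's value.  Returns (final accepted value, total penalties)."""
    held, penalties = None, 0.0
    for v in values:                      # requests arrive online
        if held is None:
            held = v                      # accept the first request for free
        elif v >= r * held:
            penalties += f * held         # cancel the old request at a cost
            held = v
    return held, penalties

print(single_slot_buyback([1.0, 1.5, 4.0, 3.0, 10.0]))   # (10.0, 2.5)
```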