Friedhelm Meyer auf der Heide - Profile on Academia.edu
Papers by Friedhelm Meyer auf der Heide
The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes O(1) worst-case time for lookups and O(1) amortized expected time for insertions and deletions; it uses space proportional to the size of the set stored. Furthermore, lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved. This class encompasses realistic hashing-based schemes that use linear space. Such algorithms have amortized worst-case time complexity Ω(log n) for a sequence of n insertions and lookups.
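As an illustration of the two-level hashing idea behind perfect hashing schemes of this kind, here is a minimal static FKS-style sketch in Python. The hash family h(x) = ((a·x + b) mod P) mod m, the prime P, and all names are illustrative assumptions; the paper's dynamic rebuilding strategy for insertions and deletions is not reproduced.

```python
import random

P = (1 << 61) - 1  # a large prime used by the (assumed) universal hash family

def make_hash(m):
    """Draw h(x) = ((a*x + b) mod P) mod m from a universal family."""
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: ((a * x + b) % P) % m

def build_two_level(keys):
    """Static FKS-style two-level table: O(1) lookups, O(n) expected space."""
    n = max(len(keys), 1)
    top = make_hash(n)                        # first level: spread keys into n buckets
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[top(k)].append(k)
    tables = []
    for bucket in buckets:
        size = max(len(bucket) ** 2, 1)       # quadratic size makes collisions unlikely
        while True:
            h, slots, ok = make_hash(size), [None] * size, True
            for k in bucket:
                i = h(k)
                if slots[i] is not None:      # collision: redraw this bucket's hash
                    ok = False
                    break
                slots[i] = k
            if ok:
                tables.append((h, slots))
                break
    return top, tables

def lookup(structure, key):
    """Two hash evaluations and one comparison: O(1) worst-case time."""
    top, tables = structure
    h, slots = tables[top(key)]
    return slots[h(key)] == key

keys = [12, 7, 99, 1024, 31337]
table = build_two_level(keys)
print(all(lookup(table, k) for k in keys), lookup(table, 5))  # True False
```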
Algorithmic Aspects of Large and Complex Networks (Dagstuhl Seminar 03361)
... Keywords: Distortion, Mapping Point Sets, Inapproximability, Approximation Algorithm. Joint work of: Hall, Alexander; Papadimitriou, Christos ... Phase Transitions in Satisfiability and Coloring. Lefteris M. Kirousis (University of Patras, GR) ...
Theory of Computing Systems: Guest Editors' Foreword
Theory of Computing Systems / Mathematical Systems Theory, Sep 1, 2003
Visibility‐Aware Progressive Farthest Point Sampling on the GPU
Computer Graphics Forum, Oct 1, 2019
In this paper, we present the first algorithm for progressive sampling of 3D surfaces with blue noise characteristics that runs entirely on the GPU. The performance of our algorithm is comparable to state‐of‐the‐art GPU Poisson‐disk sampling methods, while additionally producing ordered sequences of samples where every prefix exhibits good blue noise properties. The basic idea is to reduce the 3D sampling domain to a set of 2.5D images which we sample in parallel, utilizing the rasterization hardware of current GPUs. This allows for simple visibility‐aware sampling that only captures the surface as seen from outside the sampled object, which is especially useful for point‐based level‐of‐detail rendering methods. However, our method can easily be extended to sample the entire surface without changing the basic algorithm. We provide a statistical analysis of our algorithm, show that it produces good blue noise characteristics for every prefix of the resulting sample sequence, and compare the performance of our method to related state‐of‐the‐art sampling methods.
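As a point of reference for the farthest-point idea, here is a minimal CPU-side sketch of greedy farthest-point sampling on a point cloud (NumPy assumed). It illustrates why every prefix of such a sequence is well spread, but it is not the paper's visibility-aware GPU/rasterization pipeline.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy farthest-point sampling: each new sample maximizes its distance
    to the already chosen prefix, so every prefix is well spread."""
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [int(rng.integers(n))]                  # arbitrary start point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                   # point farthest from the prefix
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

pts = np.random.default_rng(1).random((5000, 3))     # stand-in for surface samples
print(farthest_point_sampling(pts, 16).shape)        # (16, 3)
```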
Springer eBooks, 2001
Papers were solicited in all areas of algorithmic research, including approximation algorithms, combinatorial optimization, computational biology, computational geometry, databases and information retrieval, external-memory algorithms, graph and network algorithms, machine learning, online algorithms, parallel and distributed computing, pattern matching and data compression, randomized algorithms, and symbolic computation. Algorithms could be sequential, distributed, or parallel, and should be analyzed either mathematically or by rigorous computational experiments. Experimental and applied research were especially encouraged. Each extended abstract submitted was read by at least three referees and evaluated on its quality, originality, and relevance to the symposium. The entire Program Committee met at Paderborn University on 12-13 May 2001 and selected 41 papers for presentation from the 102 submissions. These, together with three invited papers by Susanne Albers, Lars Arge, and Uri Zwick, are included in this volume.
arXiv (Cornell University), Jul 23, 2019
We extend the Mobile Server Problem introduced in [8] to a model where k identical mobile resources, here named servers, answer requests appearing at points in the Euclidean space. In order to reduce communication costs, the positions of the servers can be adapted by a limited distance m_s per round for each server. The costs are measured similarly to the classical Page Migration Problem, i.e., answering a request induces costs proportional to the distance to the nearest server, and moving a server induces costs proportional to the distance moved multiplied with a weight D. We show that, in our model, no online algorithm can have a constant competitive ratio, i.e., one which is independent of the input length n, even if an augmented moving distance of (1+δ)m_s is allowed for the online algorithm. Therefore we investigate a restriction of the power of the adversary dictating the sequence of requests: we demand locality of requests, i.e., that consecutive requests come from points in the Euclidean space with distance bounded by some constant m_c. We show constant lower bounds on the competitiveness in this setting (independent of n, but dependent on k, m_s and m_c). On the positive side, we present a deterministic online algorithm with bounded competitiveness when both augmented moving distance and locality of requests are assumed. Our algorithm simulates any given algorithm for the classical k-Page Migration problem as guidance for its servers and extends it by a greedy move of one server in every round. The resulting competitive ratio is polynomial in the number of servers k, the ratio between m_c and m_s, the inverse of the augmentation factor 1/δ, and the competitive ratio of the simulated k-Page Migration algorithm.
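A minimal sketch of the round structure this abstract describes, in Python. The guidance oracle, the parameter names, and the way the augmentation budget δ·m_s is spent on the greedy move are illustrative assumptions, not the paper's exact algorithm or cost accounting.

```python
import math

def cap_move(pos, target, limit):
    """Move pos toward target, but by at most `limit` (the per-round distance cap)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy)
    if d <= limit:
        return target
    return (pos[0] + dx / d * limit, pos[1] + dy / d * limit)

def serve_round(servers, request, guidance_targets, m_s, delta):
    """One round of the scheme sketched in the abstract: follow the simulated
    k-Page Migration guidance (each server capped at m_s), then spend the
    augmentation budget delta*m_s on a greedy move of the server closest to
    the request.  `guidance_targets` is a hypothetical oracle, one per server."""
    servers = [cap_move(s, t, m_s) for s, t in zip(servers, guidance_targets)]
    i = min(range(len(servers)), key=lambda j: math.dist(servers[j], request))
    servers[i] = cap_move(servers[i], request, delta * m_s)   # greedy extra step
    return servers, math.dist(servers[i], request)            # cost of serving

servers = [(0.0, 0.0), (5.0, 0.0)]
servers, cost = serve_round(servers, request=(2.0, 2.0),
                            guidance_targets=[(1.0, 0.0), (5.0, 1.0)],
                            m_s=1.0, delta=0.5)
print(servers, round(cost, 3))
```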
Wissenschaftsforum Intelligente Technische Systeme (WInTeSys)
Proceedings of the 23rd International Colloquium on Automata, Languages and Programming
Proceedings of the 9th Annual European Symposium on Algorithms
ACM Transactions on Parallel Computing, Jun 28, 2016
This special issue contains revised and extended versions of selected papers presented at the 26th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA'14), held June 23 to 25, 2014, in Prague, Czech Republic. After the conference, the authors of several papers that received excellent reviews were invited to submit extended versions of their papers for consideration for this special issue. After a rigorous peer review process, the following seven papers were selected for publication in this special issue. In "Executing Dynamic Data-Graph Computations Deterministically Using Chromatic Scheduling," Kaler, Hasenplaugh, Schardl, and Leiserson extend chromatic scheduling from static to dynamic data-graph computations. Their technique can be adopted to support deterministic execution of work-efficient, dynamic data-graph computations on existing frameworks such as GraphLab, Pregel, Galois, PowerGraph, Ligra, or GraphChi. Computation, communication, and synchronization are three complexity metrics that typically affect the overall running time of distributed processing the most. In "Trade-Offs Between Synchronization, Communication, and Computation in Parallel Linear Algebra Computations," Solomonik, Carson, Knight, and Demmel derive tradeoff bounds between these metrics for several numerical linear algebra problems. In "Competitively Scheduling Tasks with Intermediate Parallelizability," Im, Moseley, Pruhs, and Torng study scheduling a batch of jobs consisting of a mix of both fully parallelizable and inherently sequential jobs. They show a lower bound for the competitive ratio of scheduling such a mix of jobs and present an algorithm that achieves this competitive ratio. Finding a maximal independent set in hypergraphs is a fundamental problem in parallel computing. In "On Computing Maximal Independent Sets of Hypergraphs in Parallel," Bercea, Goyal, Harris, and Srinivasan present a randomized EREW PRAM algorithm for finding a maximal independent set on hypergraphs with a restricted number of edges in n^{o(1)} time and a near-linear number of processors. The goal of network creation games is to generate a network by having each node of the network activate links to other nodes, subject to the cost of activating the links and the cost of routing on the final network. In "Locality-Based Network Creation Games," Bilò, Gualà, Leucci, and Proietti present upper and lower bounds for the price of anarchy in network creation games where each node has knowledge of the network limited to a fixed radius from that node rather than global knowledge of the whole network. "Parallel Peeling Algorithms" by Jiang, Mitzenmacher, and Thaler received the Best Paper Award at SPAA 2014. In this article, the authors study the number of rounds required when peeling random regular hypergraphs. They show a tight bound of O(log log n) rounds for hypergraphs that are highly likely to have an empty k-core. In contrast, they also present an Ω(log n) lower bound for hypergraphs that are highly likely to contain a nonempty k-core. Scheduling threads in a way that takes into account the effects of memory hierarchies on modern multicore processors has been the subject of several recent papers. In "Experimental Analysis of Space-Bounded Schedulers," Simhadri, Blelloch, Fineman, Gibbons,
Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures
Lecture Notes in Computer Science, 2017
Current theoretical attempts towards understanding real-life leasing scenarios assume the following leasing model. Demands arrive over time and need to be served by leased resources. Different types of leases are available, each with a fixed duration and price, respecting economy of scale (longer leases cost less per unit time). An online algorithm has to serve each arriving demand while minimizing the total leasing costs and without knowing future demands. In this paper, we generalize this model to one in which lease prices fluctuate over time and are not known to the algorithm in advance. Hence, an online algorithm has to perform under uncertainty about both demands and lease prices. We consider different adversarial models and provide online algorithms, evaluated using standard competitive analysis. For each of these models, we give matching deterministic upper and lower bounds.
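To illustrate the kind of competitive analysis used for leasing problems, here is the classic deterministic ski-rental rule in Python: keep taking the short lease until the money spent would reach the long-lease price, then buy. This is a textbook baseline for the fixed-price setting, not the paper's algorithm for fluctuating prices; all names and numbers are illustrative.

```python
def ski_rental(day_cost, buy_cost, demand_days):
    """Classic deterministic rule: take the short lease until the money spent
    would reach the long-lease (buy) price, then buy.  The total cost is always
    less than twice the offline optimum, i.e., the rule is 2-competitive."""
    spent, bought = 0.0, False
    for _ in range(demand_days):
        if bought:
            break
        if spent + day_cost >= buy_cost:
            spent += buy_cost            # switch to the long lease / purchase
            bought = True
        else:
            spent += day_cost            # one more short lease
    return spent

opt = min(10 * 1.0, 6.0)                 # offline optimum: 10 demand days, buy price 6
print(ski_rental(1.0, 6.0, 10), opt)     # 11.0 vs 6.0  (ratio 11/6 < 2)
```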
Theoretical Computer Science, May 1, 2020
We consider a swarm of n autonomous mobile robots, distributed on a 2-dimensional grid. A basic task for such a swarm is the gathering process: all robots have to gather at one (not predefined) place. The work in this paper is motivated by the following insight: on the one hand, for swarms of robots distributed in the 2-dimensional Euclidean space, several gathering algorithms are known for extremely simple robots that are oblivious, have a bounded viewing radius, no compass, and no "flags" to communicate a status to others. On the other hand, in the case of the 2-dimensional grid, the only known gathering algorithms for robots with a bounded viewing radius and without a compass need to memorize a constant number of rounds and need flags. In this paper we contribute, to the best of our knowledge, the first gathering algorithm on the grid that works for anonymous, oblivious robots with a bounded viewing range, without any further means of communication and without any memory. We prove its correctness and an O(n²) time bound. This time bound matches those of the best known algorithms for the Euclidean plane mentioned above.
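For readers unfamiliar with this robot model, the sketch below shows the synchronous look-compute-move round structure in Python. The local rule and the use of a common coordinate frame are simplifications for illustration only; the paper's model has no compass, and its actual gathering rule is not reproduced here.

```python
def run_rounds(positions, local_rule, rounds, R=2):
    """Schematic synchronous look-compute-move loop for oblivious robots on a
    2D grid with viewing radius R (Manhattan metric assumed).  `local_rule`
    maps a robot's relative view to one grid step; it is a placeholder, not
    the paper's gathering rule, and a global frame is assumed for simplicity."""
    for _ in range(rounds):
        moves = []
        for (x, y) in positions:                                  # LOOK
            view = sorted((a - x, b - y) for (a, b) in positions
                          if abs(a - x) + abs(b - y) <= R)        # relative, no IDs
            moves.append(local_rule(view))                        # COMPUTE (no memory)
        positions = [(x + dx, y + dy)                             # MOVE (synchronous)
                     for (x, y), (dx, dy) in zip(positions, moves)]
    return positions

def toward_centroid(view):
    """Toy rule: one grid step toward the average of the visible robots."""
    cx = sum(dx for dx, dy in view) / len(view)
    cy = sum(dy for dx, dy in view) / len(view)
    sx = 0 if cx == 0 else (1 if cx > 0 else -1)
    sy = 0 if cy == 0 else (1 if cy > 0 else -1)
    return (sx, 0) if sx != 0 else (0, sy)        # move along at most one axis

print(run_rounds([(0, 0), (2, 0), (2, 2), (4, 2)], toward_centroid, rounds=3))
```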
Journal of Combinatorial Optimization, Jun 14, 2015
We consider online optimization problems in which certain goods have to be acquired in order to provide a service or infrastructure. Classically, decisions for such problems are considered final: one buys the goods. However, in many real-world applications, there is a shift away from the idea of buying goods. Instead, leasing is often a more flexible and lucrative business model. Research has recognized this shift and recently initiated the theoretical study of leasing models (Anthony and Gupta in Proceedings of the integer programming and combinatorial optimization:
Gathering a Euclidean closed chain of robots in linear time and improved algorithms for chain-formation
Theoretical Computer Science, 2023
arXiv (Cornell University), Nov 28, 2013
We study the complexity theory for the local distributed setting introduced by Korman, Peleg and Fraigniaud in their seminal paper. They have defined three complexity classes LD (Local Decision), NLD (Nondeterministic Local Decision) and NLD^{#n}. The class LD consists of all languages which can be decided with a constant number of communication rounds. The class NLD consists of all languages which can be verified by a nondeterministic algorithm with a constant number of communication rounds. In order to define the nondeterministic classes, they have transferred the notion of nondeterminism into the distributed setting by the use of certificates and verifiers. The class NLD^{#n} consists of all languages which can be verified by a nondeterministic algorithm where each node has access to an oracle for the number of nodes. They have shown the hierarchy LD ⊊ NLD ⊊ NLD^{#n}. Our main contributions are strict hierarchies within the classes defined by Korman, Peleg and Fraigniaud. We define additional complexity classes: the class LD(t) consists of all languages which can be decided with at most t communication rounds. The class NLD(O(f)) consists of all languages which can be verified by a local verifier such that the size of the certificates needed to verify the language is bounded by a function in O(f). Our main result is a strict hierarchy within the nondeterministic classes. In order to prove this hierarchy, we give several lower bounds on the sizes of certificates that are needed to verify some languages from NLD. For the deterministic classes we prove the following hierarchy: LD(1) ⊊ LD(2) ⊊ LD(3) ⊊ ... ⊊ LD.
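To make the certificate/verifier idea concrete, here is a toy proof-labeling-style example in Python, not taken from the paper: verifying a rooted parent-pointer structure using distance-to-root certificates, where each node only inspects its own and its parent's certificate (one "round" of local information).

```python
def local_verify(parent, cert):
    """parent[v] = parent of v (v itself for the root); cert[v] = claimed
    distance to the root.  Each node checks only local information: if the
    parent pointers contain a cycle, distances cannot strictly decrease around
    it, so at least one node rejects; on a correct tree all nodes accept."""
    decisions = {}
    for v, p in parent.items():
        if p == v:                             # node claims to be the root
            decisions[v] = (cert[v] == 0)
        else:                                  # read the parent's certificate
            decisions[v] = (cert[v] == cert[p] + 1)
    return decisions

tree  = {0: 0, 1: 0, 2: 1, 3: 1}               # a valid tree rooted at 0
good  = {0: 0, 1: 1, 2: 2, 3: 2}               # correct distance certificates
cycle = {0: 0, 1: 2, 2: 1, 3: 1}               # nodes 1 and 2 point at each other
print(all(local_verify(tree, good).values()))   # True: every node accepts
print(all(local_verify(cycle, good).values()))  # False: some node rejects
```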
Online facility location with mobile facilities
Theoretical Computer Science, Mar 1, 2022