Faith Ellen | University of Toronto

Papers by Faith Ellen

Short Headers Suffice for Communication in a DAG with Link Failures

Lecture Notes in Computer Science, 2000

This paper considers a number of communication problems, including end-to-end communication and multicasts, in networks whose underlying graphs are directed and acyclic and whose links are subject to permanent failures. In the case that each processor has separate input queues for each in-edge, we present protocols for these problems that use single-bit headers. In the case that each processor …

Lower Bounds in Distributed Computing

Lecture Notes in Computer Science, 2000

Distributed computing is the study of achieving cooperative behavior between independent computing processes with possibly conflicting goals. Distributed computing is ubiquitous in the Internet, wireless networks, multi-core and multi-processor computers, teams of mobile robots, etc. In this thesis, we study two fundamental distributed computing problems, clock synchronization and mutual exclusion.

Tight Size Bounds for Packet Headers in Narrow Meshes

Lecture Notes in Computer Science, 2000

On the space complexity of randomized synchronization

Journal of the ACM, 1998

The "wait-free hierarchy" provides a classification of multiprocessor synchronization primitives ... more The "wait-free hierarchy" provides a classification of multiprocessor synchronization primitives based on the values of n for which there are deterministic wait-free implementations of n-process consensus using instances of these objects and read-write registers. In a randomized wait-free setting, this classification is degenerate, since n-process consensus can be solved using only O(n) read-write registers.

Estimating the maximum

Journal of Algorithms, 2005

Estimating the maximum of a sampled dataset is an important and daunting task. We give a sampling algorithm for general datasets which gives estimates strictly better than the largest sample for an infinite family of datasets. Our algorithm overshoots the true maximum of the worst case dataset with probability at most (1/e)+O(1/k), where k is the size of our sample, which is much smaller than the size of the dataset. Our proof is the result of a new extremal graph coloring theorem: given any red/green coloring of the edges of a complete graph of n vertices, the probability that the edges among k randomly sampled vertices have a certain property is at most (1/e)+O(1/k). In addition, we show that if an algorithm gives an estimate strictly better than the largest sample for some dataset, then the algorithm overshoots the maximum on some other dataset with probability at least (1/e)−O(1/k).
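The sketch below is a minimal simulation of the setup described in this abstract, not the paper's algorithm: it draws a sample of size k, applies a hypothetical estimator that inflates the sample maximum by a made-up factor, and measures empirically how often the estimate overshoots the true maximum. The dataset, the inflation factor, and all names here are illustrative assumptions; the (1/e)+O(1/k) bound applies to the paper's estimator, not to this toy.

# Toy simulation of estimating the maximum from a sample of size k.
# The "inflate" factor is an invented illustration, not the paper's method.
import random

def estimate_max(data, k, rng, inflate=1.05):
    """Hypothetical estimator: return the sample maximum scaled up slightly."""
    sample = rng.sample(data, k)          # k values drawn without replacement
    return max(sample) * inflate

def overshoot_rate(data, k, trials=10000, seed=0):
    """Fraction of trials in which the estimate exceeds the true maximum."""
    rng = random.Random(seed)
    true_max = max(data)
    hits = sum(estimate_max(data, k, rng) > true_max for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    dataset = list(range(1, 10001))       # toy dataset; the paper handles general datasets
    for k in (10, 50, 200):
        print(f"k={k:4d}  overshoot rate ~ {overshoot_rate(dataset, k):.3f}")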

Hundreds of impossibility results for distributed computing

Distributed Computing, 2003

We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing. There is a strong emphasis in our presentation on explaining the wide variety of techniques that are used to obtain the results described.

Relationships between broadcast and shared memory in reliable anonymous distributed systems

Distributed Computing, 2006

We study the power of reliable anonymous distributed systems, where processes do not fail, do not have identifiers, and run identical programs. We are interested specifically in the relative powers of systems with different communication mechanisms: anonymous broadcast, read-write registers, or registers supplemented with additional shared-memory objects. We show that a system with anonymous broadcast can simulate a system of shared-memory objects if and only if the objects satisfy a property we call idemdicence; this result holds regardless of whether either system is synchronous or asynchronous. Conversely, the key to simulating anonymous broadcast in anonymous shared memory is the ability to count: broadcast can be simulated by an asynchronous shared-memory system that uses only counters, but registers by themselves are not enough. We further examine the relative power of different types and sizes of bounded counters and conclude with a non-robustness result.
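As a toy illustration of the counting point above (a sketch under assumed object semantics, not the paper's construction): n anonymous processes running identical code leave a read-write register in a state indistinguishable from that produced by a single writer, whereas a counter object lets them observe how many of them acted.

# Toy sketch: anonymous processes using a register versus a counter.
# Register and Counter are illustrative shared objects; a lock models the
# atomicity of each individual operation.
import threading

class Register:
    """Multi-writer register: a write overwrites, a read returns the last value."""
    def __init__(self):
        self._value = None
        self._lock = threading.Lock()
    def write(self, value):
        with self._lock:
            self._value = value
    def read(self):
        with self._lock:
            return self._value

class Counter:
    """Counter object: atomic increment and read."""
    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()
    def increment(self):
        with self._lock:
            self._count += 1
    def read(self):
        with self._lock:
            return self._count

def run_anonymous(n, step):
    """Run n identical threads with no identifiers, modelling anonymous processes."""
    threads = [threading.Thread(target=step) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    n = 8
    reg, ctr = Register(), Counter()
    run_anonymous(n, lambda: reg.write("msg"))   # n identical anonymous writes...
    run_anonymous(n, ctr.increment)              # ...versus n anonymous increments
    print("register after n anonymous writes:", reg.read())      # "msg": could be from 1 writer or n
    print("counter after n anonymous increments:", ctr.read())   # n: the count is observable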

Retrieval of scattered information by EREW, CREW, and CRCW PRAMs

Computational Complexity, 1995

The k-compaction problem arises when k out of n cells in an array are non-empty and the contents of these cells must be moved to the first k locations in the array. Parallel algorithms for k-compaction have obvious applications in processor allocation and load balancing; k-compaction is also an important subroutine in many recently developed parallel algorithms. We show that any EREW PRAM that solves the k-compaction problem requires Ω(√log n) time, even if the number of processors is arbitrarily large and k = 2. On the CREW PRAM, we show that every n-processor algorithm for the k-compaction problem requires Ω(log log n) time, even if k = 2. Finally, we show that O(log k) time can be achieved on the ROBUST PRAM, a very weak CRCW PRAM model.
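For concreteness, here is a sequential sketch of the k-compaction task itself; the paper's results concern its parallel complexity on PRAM models, which this plain loop does not capture. Representing empty cells by None is an assumption made for illustration.

# Sequential illustration of k-compaction: move the contents of the non-empty
# cells (at most k of them) to the first positions of the array.
from typing import List, Optional

def k_compact(cells: List[Optional[int]]) -> List[Optional[int]]:
    """Return a copy of `cells` with all non-empty entries moved to the front."""
    non_empty = [x for x in cells if x is not None]
    return non_empty + [None] * (len(cells) - len(non_empty))

if __name__ == "__main__":
    array = [None, 7, None, None, 3, None, None, 9]   # n = 8, k = 3 non-empty cells
    print(k_compact(array))   # [7, 3, 9, None, None, None, None, None]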

Infrastructure issues related to theory of computing research

ACM Computing Surveys, 1996

Theory of Computing is a field of Computer Science that uses mathematical techniques to understand the nature of computation and to design and analyze algorithms for important and fundamental problems. It provides important new ways of thinking about computation and lasting insights that are applicable to a wide variety of systems.

Strategic directions in research in theory of computing

ACM Computing Surveys, 1996

... Research in theory of computing began in the 1930s, when investigations by logicians produced computability theory. With the emergence of higher-level languages in the 1950s, attention turned to the parsing and compiling of programs, leading to studies of automata and ...

Corporate Sponsors

Martin Abadi, Bill Aiello, Noga Alon, Rajeev Alur, Andris Ambainis, Arne Andersson, Dana Angluin, Esther Arkin, Yossi Azar, Richard Beigel, Mihir Bellare, Josh Benaloh, Shai Ben-David, Michael Bender, Michael Ben-Or, Johannes Blömer, Graham Brightwell, Adam Buchsbaum, Harry Buhrman, Jonathan Buss, Jin-Yi Cai, William Chan, Paul Chew, Fan Chung, Richard Cleve, Edith Cohen, Claude Crepeau, Philipe Cucker, Drew Dean, Tamal Dey, Wayne Eberly, Alon Efrat, ... Nadav Eiron, Ran El-Yaniv, Jeff Erickson, Sandor Fekete, Paolo Ferragina, Amos Fiat, Faith Fich, James Fill, ...

The papers included in this proceedings are preliminary reports on recent research and it is expected that most of these papers will appear in a more complete and polished form in scientific journals. The proceedings also contains one invited paper by Pavel Pevzner and Michael Waterman. The progr...

This volume contains the papers presented at the Third Israel Symposium on the Theory of Computing and Systems (ISTCS), held in Tel Aviv, Israel, on January 4-6, 1995. Fifty-five papers were submitted in response to the Call for Papers, and twenty-seven of them were selected for presentation. The selection was based on originality, quality and relevance to the field. The program committee is pleased with the overall quality of the accepted papers and, furthermore, feels that many of the papers not used were also of fine quality. ... The papers included in this proceedings ...

Pointers versus Arithmetic in PRAMs

Journal of Computer and System Sciences, 1996

Manipulation of pointers in shared data structures is an important communication mechanism used in many parallel algorithms. Indeed, many fundamental algorithms do essentially nothing else. A parallel pointer machine (or PPM) is a parallel model having pointers as its principal data type. PPMs have been characterized as PRAMs obeying two restrictions: first, restricted arithmetic capabilities and, second, the CROW memory access restriction (concurrent read, owner write, a commonly occurring special case of CREW). We present results concerning the relative power of PPMs (and other arithmetically restricted PRAMs) versus CROW PRAMs having ordinary arithmetic capabilities. First, we prove lower bounds separating PPMs from CROW PRAMs: any step-by-step simulation of an n-processor CROW PRAM by a PPM requires Ω(log log n) time per step. Second (to our surprise), this lower bound is tight: we give such a step-by-step simulation using O(log log n) time per step. The lower bound holds even for strong, nonuniform PPMs, while the upper bound proof yields a simple uniform PPM algorithm. As a corollary, any problem solvable by a CROW PRAM in time O(log n) is also solvable by a PPM in time O(log n log log n) with a polynomial number of processors, which gives sharply improved PPM algorithms for a variety of problems, including deterministic context-free language recognition; previously, this problem was not known to be solvable by a PPM in less than O(log² n) time.
