Joan Feigenbaum | Yale University

Papers by Joan Feigenbaum

Towards an Economic Analysis of Trusted Systems

Cryptographic protection of databases and software

We describe experimental work on cryptographic protection of databases and software. The database in our experiment is a natural-language dictionary of over 4,000 Spanish verbs. Our tentative conclusion is that the overhead cost of computing with encrypted data is fairly small.

Using Intel Software Guard Extensions for Efficient Two-Party Secure Function Evaluation

Recent developments have made two-party secure function evaluation (2P-SFE) vastly more efficient. However, due to extensive use of cryptographic operations, these protocols remain too slow for practical use by most applications. The introduction of Intel's Software Guard Extensions (SGX), which provide an environment for the isolated execution of code and handling of data, offers an opportunity to overcome such performance concerns. In this paper, we explore the challenges of achieving security guarantees similar to those found in traditional 2P-SFE systems. After demonstrating a number of critical concerns, we develop two protocols for secure computation in the semi-honest model on this platform: one in which both parties are SGX-enabled and a second in which only one party has direct access to this hardware. We then show how these protocols can be made secure in the malicious model. We conclude that implementing 2P-SFE on SGX-enabled devices can render it more practical for a wide range of applications.

A New Approach to Interdomain Routing Based on Secure Multi-Party Computation

11th ACM Hot Topics in Networks, Oct 2012

Interdomain routing involves coordination among mutually distrustful parties, leading to the requirements that BGP provide policy autonomy, flexibility, and privacy. BGP provides these properties via the distributed execution of policy-based decisions during the iterative route computation process. This approach has poor convergence properties, makes planning and failover difficult, and is extremely difficult to change. To rectify these and other problems, we propose a radically different approach to interdomain-route computation, based on secure multi-party computation (SMPC). Our approach provides stronger privacy guarantees than BGP and enables the deployment of new policy paradigms. We report on an initial exploration of this idea and outline future directions for research.

Reuse It Or Lose It: More Efficient Secure Computation Through Reuse of Encrypted Values

21st ACM Conference on Computer and Communications Security, Nov 2014

Two-party secure-function evaluation (SFE) has become significantly more feasible, even on resource-constrained devices, because of advances in server-aided computation systems. However, there are still bottlenecks, particularly in the input-validation stage of a computation. Moreover, SFE research has not yet devoted sufficient attention to the important problem of retaining state after a computation has been performed so that expensive processing does not have to be repeated if a similar computation is done again. This paper presents PartialGC, an SFE system that allows the reuse of encrypted values generated during a garbled-circuit computation. We show that using PartialGC can reduce computation time by as much as 96% and bandwidth by as much as 98% in comparison with previous outsourcing schemes for secure computation. We demonstrate the feasibility of our approach with two sets of experiments, one in which the garbled circuit is evaluated on a mobile device and one in which it is evaluated on a server. We also use PartialGC to build a privacy-preserving "friend-finder" application for Android. The reuse of previous inputs to allow stateful evaluation represents a new way of looking at SFE and further reduces computational barriers.
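PartialGC's reuse machinery is beyond a short snippet, but the garbled-circuit primitive it builds on can be illustrated with a toy, hash-based garbling of a single AND gate. This is a sketch, not the paper's construction: the label and tag lengths, the SHA-512 "PRF", and the zero-tag row check are illustrative choices, and real schemes use point-and-permute and proper oblivious transfer of input labels.

```python
import hashlib
import os
import random

LABEL = 16   # wire-label length in bytes (illustrative)
TAG = 16     # zero tag appended so the evaluator recognizes the right row

def pad(k1, k2):
    # SHA-512 of the two input labels, used as a one-time pad (toy PRF).
    return hashlib.sha512(k1 + k2).digest()[:LABEL + TAG]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    """Garbler's side: two random labels per wire, one per truth value."""
    labels = {w: (os.urandom(LABEL), os.urandom(LABEL)) for w in ("a", "b", "out")}
    rows = []
    for va in (0, 1):
        for vb in (0, 1):
            plaintext = labels["out"][va & vb] + b"\x00" * TAG
            rows.append(xor(pad(labels["a"][va], labels["b"][vb]), plaintext))
    random.shuffle(rows)  # hide which row corresponds to which truth values
    return labels, rows

def evaluate(rows, la, lb):
    """Evaluator's side: holds one label per input wire, learns one output label."""
    for row in rows:
        plain = xor(pad(la, lb), row)
        if plain.endswith(b"\x00" * TAG):  # only the matching row decrypts cleanly
            return plain[:LABEL]
    raise ValueError("no row decrypted")

# The evaluator learns only a label for (a AND b), never the input bits themselves.
labels, rows = garble_and()
out = evaluate(rows, labels["a"][1], labels["b"][1])
assert out == labels["out"][1]
```

PartialGC's contribution, in these terms, is letting output labels like `out` carry over as saved state into a later garbled computation instead of being discarded.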

Economics and Computation

Secure circuit evaluation - A protocol based on hiding information from an oracle

Journal of Cryptology, 1990

We present a simple protocol for two-player secure circuit evaluation. The protocol enables players C and D to cooperate in the computation of f(x) while D conceals her data x from C and C conceals his circuit for f from D. The protocol is based on the technique of hiding information from an oracle [Abadi, Feigenbaum, and Kilian, J. Comput. System Sci. 39(1):21-50, August 1989].

Nonmonotonicity, User Interfaces, and Risk Assessment in Certificate Revocation

Lecture Notes in Computer Science, 2002

– Revocation makes certification nonmonotonic. More precisely, in a PKI that has revocation, the validity of a certificate is nonmonotonic with respect to time; i.e., a certificate may go from valid to invalid as time passes. – A PKI has a user interface and internal entities and ...

Sharing the Cost of Multicast Transmission

Distributed trust management

IEEE Symposium on Security and Privacy, 1996

Incentive-compatible interdomain routing

The routing of traffic between Internet domains, or Autonomous Systems (ASs), a task known as interdomain routing, is currently handled by the Border Gateway Protocol (BGP). In this paper, we address the problem of interdomain routing from a mechanism-design point of view. We assume that each AS incurs a per-packet cost for carrying transit traffic and, in turn, is ...
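The mechanism-design framing can be made concrete with a small worked example that is not taken from the paper: VCG-style payments for lowest-cost transit routing on a hypothetical four-AS topology, computed by brute force. The node names, costs, and exhaustive path search are illustrative; the sketch assumes removing any one transit node leaves the endpoints connected.

```python
def all_paths(graph, src, dst, avoid=frozenset()):
    """Enumerate simple src -> dst paths, skipping nodes in `avoid`."""
    found = []
    def dfs(node, seen, path):
        if node == dst:
            found.append(path)
            return
        for nxt in graph.get(node, ()):
            if nxt not in seen and nxt not in avoid:
                dfs(nxt, seen | {nxt}, path + [nxt])
    dfs(src, {src}, [src])
    return found

def transit_cost(path, cost, src, dst):
    # Only transit nodes (neither endpoint) incur a per-packet cost.
    return sum(cost[v] for v in path if v not in (src, dst))

def vcg_route(graph, cost, src, dst):
    paths = all_paths(graph, src, dst)
    best = min(paths, key=lambda p: transit_cost(p, cost, src, dst))
    best_cost = transit_cost(best, cost, src, dst)
    payments = {}
    for v in best:
        if v in (src, dst):
            continue
        # Cheapest route available if v declined to carry transit traffic.
        alt = min(transit_cost(p, cost, src, dst)
                  for p in all_paths(graph, src, dst, avoid=frozenset({v})))
        # VCG payment: v's own cost plus the increase its absence would cause.
        payments[v] = cost[v] + (alt - best_cost)
    return best, payments

# Hypothetical topology: A -> B -> D (transit cost 3) vs. A -> C -> D (transit cost 5).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
cost = {"A": 0, "B": 3, "C": 5, "D": 0}
route, pay = vcg_route(graph, cost, "A", "D")
# B carries the traffic and is paid 3 + (5 - 3) = 5, strictly more than its cost,
# which is what makes truthful cost declaration a dominant strategy.
```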

The Role of Trust Management in Distributed Systems Security

Lecture Notes in Computer Science, 1999

Existing authorization mechanisms fail to provide powerful and robust tools for handling security at the scale necessary for today's Internet. These mechanisms are coming under increasing strain from the development and deployment of systems that increase the programmability of the Internet. Moreover, this "increased flexibility through programmability" trend seems to be accelerating with the advent of proposals such as Active Networking and Mobile Agents. The trust-management approach to distributed-system security was developed as an answer to the inadequacy of traditional authorization mechanisms. Trust-management engines avoid the need to resolve "identities" in an authorization decision. Instead, they express privileges and restrictions in a programming language. This allows for increased flexibility and expressibility, as well as standardization of modern, scalable security mechanisms. Further advantages of the trust-management approach include proofs that requested transactions comply with local policies and system architectures that encourage developers and administrators to consider an application's security policy carefully and specify it explicitly. In this paper, we examine existing authorization mechanisms and their inadequacies. We introduce the concept of trust management, explain its basic principles, and describe some existing trust-management engines, including PolicyMaker and KeyNote. We also report on our experience using trust-management engines in several distributed-system applications.

Distributed Algorithmic Mechanism Design

Algorithmic Game Theory, 2007

Distributed Algorithmic Mechanism Design (DAMD) combines theoretical computer science's traditional focus on computational tractability with its more recent interest in incentive compatibility and distributed computing. The Internet's decentralized nature, in which distributed computation and autonomous agents prevail, makes DAMD a very natural approach for many Internet problems. This paper first outlines the basics of DAMD and then reviews previous DAMD results on multicast cost sharing and interdomain routing. The remainder of the paper describes several promising research directions and poses some specific open problems.

Hiding Instances in Zero-Knowledge Proof Systems

Lecture Notes in Computer Science, 1991

(CRYPTO 1990 Proceedings) Donald Beaver, Joan Feigenbaum, and Victor Shoup. Informally speaking, an instance-hiding proof system for the function f is a protocol in which a polynomial-time verifier is convinced of the value of f(x) but does not reveal the input x to the provers. We show here that a boolean function f has an instance-hiding proof system if and only if it is the characteristic function of a language in NEXP ∩ coNEXP. We formalize the notion of zero-knowledge for ...

The KeyNote Trust-Management System Version 2

Network Working Group, Request for Comments 2704 (Informational). M. Blaze, J. Feigenbaum, J. Ioannidis (AT&T Labs - Research), and A. Keromytis (U. of Pennsylvania), September 1999. The KeyNote Trust ...

Streaming Algorithms for Distributed, Massive Data Sets

Massive data sets are increasingly important in a wide range of applications, including observational sciences, product marketing, and monitoring and operations of large systems. In network operations, raw data typically arrive in streams, and decisions must be made by algorithms that make one pass over each stream, throw much of the raw data away, and produce "synopses" or "sketches" for further processing. Moreover, network-generated massive data sets are often distributed: Several different, physically separated network elements may receive or generate data streams that, together, comprise one logical data set; to be of use in operations, the streams must be analyzed locally and their synopses sent to a central operations facility. The enormous scale, distributed nature, and one-pass processing requirement on the data sets of interest must be addressed with new algorithmic techniques. We present one fundamental new technique here: a space-efficient, one-pass algorithm ...

An Approximate L¹-Difference Algorithm for Massive Data Streams

Massive data sets are increasingly important in a wide range of applications, including observational sciences, product marketing, and monitoring and operations of large systems. In network operations, raw data typically arrive in streams, and decisions must be made by algorithms that make one pass over each stream, throw much of the raw data away, and produce "synopses" or "sketches" for further processing. Moreover, network-generated massive data sets are often distributed: Several different, physically separated network elements may receive or generate data streams that, together, comprise one logical data set; to be of use in operations, the streams must be analyzed locally and their synopses sent to a central operations facility. The enormous scale, distributed nature, and one-pass processing requirement on the data sets of interest must be addressed with new algorithmic techniques. We present one fundamental new technique here: a space-efficient, one-pass algorithm ...
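The one-pass "sketch" idea can be illustrated with the closely related stable-distribution technique (due to Indyk), rather than the paper's own construction based on range-summable pseudorandom variables: project each stream onto shared Cauchy (1-stable) random directions, and estimate the L¹ difference as the median of the coordinate-wise differences of the two sketches. The row count, per-coordinate seeding scheme, and toy streams below are illustrative choices.

```python
import math
import random
import statistics

def cauchy(j, i):
    """Shared pseudorandom 1-stable (Cauchy) variate for sketch row j, item i.
    Seeding a PRG per coordinate stands in for the paper's limited-independence
    constructions; it keeps two separated parties' randomness in sync without
    storing the random vectors."""
    u = random.Random(j * 1_000_003 + i).random()
    return math.tan(math.pi * (u - 0.5))

def sketch(stream, rows=101):
    """One pass over (item, count-increment) pairs; space is O(rows),
    independent of the stream length and the item universe."""
    s = [0.0] * rows
    for i, delta in stream:
        for j in range(rows):
            s[j] += delta * cauchy(j, i)
    return s

def l1_estimate(sa, sb):
    # Each |sa[j] - sb[j]| is the L1 distance times |standard Cauchy|,
    # whose median is 1, so the median of the rows estimates the distance.
    return statistics.median(abs(x - y) for x, y in zip(sa, sb))

a = [(0, 5), (1, 2), (2, 1)]  # frequency vector (5, 2, 1, 0)
b = [(0, 3), (2, 1), (3, 4)]  # frequency vector (3, 0, 1, 4)
est = l1_estimate(sketch(a), sketch(b))  # true L1 difference is 2 + 2 + 0 + 4 = 8
```

Because the sketches are linear and use shared randomness, each network element can sketch its own stream locally and ship only the `rows` numbers to the central facility, exactly the distributed setting the abstract describes.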
