Zulfikar Ramzan - Academia.edu
Papers by Zulfikar Ramzan
IEEE Transactions on Mobile Computing, 2004
There has been a surge of interest in the delivery of personalized information to users (e.g., personalized stock or travel information), particularly as mobile users with limited terminal device capabilities increasingly desire updated, targeted information in real time. When the number of information recipients is large and there is sufficient commonality in their interests, as is often the case, IP multicast is an efficient way of delivering the information. However, IP multicast services do not consider the structure and semantics of the information in the multicast process. We propose the use of Content-Based Multicast (CBM), where extra content filtering is performed at the interior nodes of the IP multicast tree; this reduces network bandwidth usage and delivery delay, as well as the computation required at the sources and sinks.
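The filtering idea can be illustrated with a toy sketch: a multicast tree node forwards a message down a branch only if some subscriber in that subtree matches the message's attributes. The attribute model, class names, and pruning rule below are our own illustration, not the paper's protocol.

```python
# Hypothetical sketch of interior-node filtering for Content-Based
# Multicast: prune a branch when no subscriber below it matches.

def matches(interests, message):
    # A subscriber matches if every attribute it cares about agrees.
    return all(message.get(k) == v for k, v in interests.items())

class TreeNode:
    def __init__(self, subscribers=None, children=None):
        self.subscribers = subscribers or []   # interest dicts at this node
        self.children = children or []

    def subtree_interests(self):
        out = list(self.subscribers)
        for child in self.children:
            out += child.subtree_interests()
        return out

    def deliver(self, message):
        # Filter here instead of flooding the whole multicast tree.
        delivered = [s for s in self.subscribers if matches(s, message)]
        for child in self.children:
            # Prune the branch when no subscriber below matches.
            if any(matches(s, message) for s in child.subtree_interests()):
                delivered += child.deliver(message)
        return delivered
```

A real deployment would cache subtree interest summaries at each node rather than recomputing them per message; the sketch only shows where the filtering decision is made.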
This paper describes an attack concept termed Drive-by Pharming where an attacker sets up a web page that, when simply viewed by the victim (on a JavaScript-enabled browser), attempts to change the DNS server settings on the victim’s home broadband router. As a result, future DNS queries are resolved by a DNS server of the attacker’s choice. The attacker can direct the victim’s Internet traffic and point the victim to the attacker’s own web sites regardless of what domain the victim thinks he is actually going to, potentially leading to the compromise of the victim’s credentials. The same attack methodology can be used to make other changes to the router, like replacing its firmware. Routers could then host malicious web pages or engage in click fraud. Since the attack is mounted through viewing a web page, it does not require the attacker to have any physical proximity to the victim nor does it require the explicit download of traditional malicious software. The attack works under the reasonable assumption that the victim has not changed the default management password on their broadband router.
This work initiates a study of Luby-Rackoff ciphers when the bitwise exclusive-or (XOR) operation in the underlying Feistel network is replaced by a binary operation in an arbitrary finite group. We obtain various interesting results in this context: First, we analyze the security of three-round Feistel ladders over arbitrary groups. We examine various Luby-Rackoff ciphers known to be insecure when XOR is used. In some cases we can break these ciphers over arbitrary Abelian groups; in other cases, the security remains an open problem. Next, we construct a four-round Luby-Rackoff cipher, operating over finite groups of characteristic greater than 2, that is not only completely secure against adaptive chosen plaintext and ciphertext attacks, but also has better time/space complexity and uses fewer random bits than all previously considered Luby-Rackoff ciphers of equivalent security in the literature. Surprisingly, when the group is of characteristic 2 (i.e., the underlying operation on strings is bitwise exclusive-or), the cipher can be completely broken in a constant number of queries. Notably, for the former set of results dealing with three rounds (where we report no difference) we need new techniques, whereas for the latter set of results dealing with four rounds (where we prove a new theorem) we rely on a generalization of known techniques, albeit one that requires a new type of hash function family, called a monosymmetric hash function family, which we introduce in this work. We also discuss the existence (and construction) of this function family over various groups, and argue the necessity of this family in our construction. Moreover, these functions can be easily and efficiently implemented on most current microprocessors, rendering the four-round construction very practical.
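To make the group-operation substitution concrete, here is a minimal sketch of a Feistel ladder where XOR is replaced by addition modulo 2^n (a group of characteristic greater than 2). The round function `f` is a hypothetical stand-in, not the keyed function families analyzed in the paper; the point is only that decryption uses group subtraction where XOR's self-inverse property no longer applies.

```python
# Feistel rounds over the group (Z_{2^n}, +) instead of (Z_2^n, XOR).

N_BITS = 32
MOD = 1 << N_BITS

def f(key, half):
    # Placeholder round function for illustration; a real construction
    # would use a pseudorandom (or monosymmetric hash) function family.
    return (half * key + 0x9E3779B9) % MOD

def feistel_encrypt(left, right, round_keys):
    # Each round maps (L, R) -> (R, L + f(k, R) mod 2^n).
    for k in round_keys:
        left, right = right, (left + f(k, right)) % MOD
    return left, right

def feistel_decrypt(left, right, round_keys):
    # Invert each round: subtraction in the group undoes the addition.
    for k in reversed(round_keys):
        left, right = (right - f(k, left)) % MOD, left
    return left, right
```

With XOR, encryption and decryption use the same combining operation; over a general group the inverse operation must be used explicitly, which is exactly what makes the characteristic-2 case special.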
This paper introduces two new ideas in the construction of fast universal hash functions geared towards the task of message authentication. First, we describe a simple but novel family of universal hash functions that is more efficient than many standard constructions. We compare our hash functions to the MMH family studied by Halevi and Krawczyk [12]. All the main techniques used to optimize MMH work on our hash functions as well. Second, we introduce additional techniques for speeding up our constructions; these techniques apply to MMH and may apply to other hash functions. The techniques involve ignoring certain parts of the computation, while still retaining the necessary statistical properties for secure message authentication. Finally, we give implementation results on an ARM processor. Our constructions are general and can be used in any setting where universal hash functions are needed; therefore they may be of independent interest.
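For readers unfamiliar with the MMH family the paper builds on, the core of such a construction is an inner product of message words with secret key words, reduced modulo a prime. The sketch below shows that shape only; the prime and word sizes are illustrative choices of ours, not the paper's parameters.

```python
# MMH-style multilinear universal hash (after Halevi-Krawczyk):
# h_k(m) = (sum_i m_i * k_i) mod p.

P = (1 << 61) - 1  # a Mersenne prime, chosen here for fast reduction

def universal_hash(key_words, msg_words):
    # Accumulate word products, then apply a single final reduction.
    assert len(key_words) == len(msg_words)
    acc = 0
    for k, m in zip(key_words, msg_words):
        acc += k * m
    return acc % P
```

The speedups the paper describes come from carefully dropping parts of this computation (e.g., deferring or truncating reductions) while preserving the collision bounds needed for authentication.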
We describe a block cipher which is both practical and provably secure. The cipher uses the Secure Hash Algorithm (SHA-1) as an underlying primitive, and we show that any successful attack on the cipher results in a successful attack against one or more of the hallowed properties of SHA-1. Moreover, our block cipher is still as fast as the Data
Existing block ciphers operate on a fixed-input-length (FIL) block size (e.g., 64 bits for DES). Often, one needs a variable-input-length (VIL) primitive that can operate on a different size input; it is, however, undesirable to construct this primitive from “scratch.” This paper contains two constructions that start with a fixed-input-length block cipher and show how to securely convert it to a variable-input-length block cipher without making any additional cryptographic assumptions. Both constructions model the FIL block cipher as a pseudorandom permutation (PRP) – that is, indistinguishable from a random permutation against adaptive chosen plaintext attack. The first construction converts it to a VIL PRP and is an efficiency improvement over the scheme of Bellare and Rogaway [4]. The second construction converts it to a VIL super pseudorandom permutation (SPRP) – that is, the resulting VIL block cipher is indistinguishable from a random permutation against adaptive chosen plaintext and ciphertext attack.
We provide new constructions for Luby-Rackoff block ciphers which are efficient in terms of computations and key material used. Next, we show that we can make some security guarantees for Luby-Rackoff block ciphers under much weaker and more practical assumptions about the underlying function; namely, that the underlying function is a secure Message Authentication Code. Finally, we provide a SHA-1 based example block cipher called Sha-zam.
Energy is a limiting resource for mobile devices. Over the past decade, significant research has gone into reducing energy consumption at the hardware level, network protocol level, operating system level, and compiler level. In almost all algorithm analysis, however, a single resource such as time or communication is taken as a proxy for energy. We address this problem by defining an algorithmic model for energy, designing algorithm variants that reduce energy cost in this model, and then performing preliminary experiments to test the model.
Service composition recently emerged as a cost-effective way to quickly create new services within a network. Some research has been done to support user-perceived end-to-end QoS for service composition. However, not much work has been done to improve a network operator's performance when deploying composite services. In this paper we develop a service composition architecture that optimizes the aggregate bandwidth utilization within an operator's network; this metric is what operators care about most. A general service composition graph is proposed to model the loosely coupled interaction among service components as well as the estimated traffic that flows among them. Then an optimization problem is formalized and proved to be NP-hard, even to approximate. Next, two polynomial-time heuristic algorithms are developed together with several local search algorithms that further improve the performance of these two algorithms. Our simulations demonstrate the effectiveness of both approximation algorithms and show that they are suitable for service graphs with varying topologies.
An aggregate signature is a single short string that convinces any verifier that, for all 1 ≤ i ≤ n, signer S_i signed message M_i, where the n signers and n messages may all be distinct. The main motivation of aggregate signatures is compactness. However, while the aggregate signature itself may be compact, aggregate signature verification might require potentially lengthy additional information – namely, the (at most) n distinct signer public keys and the (at most) n distinct messages being signed. If the verifier must obtain and/or store this additional information, the primary benefit of aggregate signatures is largely negated. This paper initiates a line of research whose ultimate objective is to find a signature scheme in which the total information needed to verify is minimized. In particular, the verification information should preferably be as close as possible to the theoretical minimum: the complexity of describing which signer(s) signed what message(s). We move toward this objective by developing identity-based aggregate signature schemes. In our schemes, the verifier does not need to obtain and/or store various signer public keys to verify; instead, the verifier only needs a description of who signed what, along with two constant-length “tags”: the short aggregate signature and the single public key of a Private Key Generator. Our scheme is secure in the random oracle model under the computational Diffie-Hellman assumption over pairing-friendly groups against an adversary that chooses its messages and its target identities adaptively.
IEEE Journal on Selected Areas in Communications, 2005
We consider the problem of maintaining end-to-end security in the presence of intelligent proxies that may adaptively modify data being transmitted across a network. The video coding community considers this problem in the context of transcoding media streams, but their approaches either fail to address authentication or fail to provide meaningful security guarantees. We present two provably-secure schemes, LISSA and
Most prior designated confirmer signature schemes either prove security in the random oracle model (ROM) or use general zero-knowledge proofs for NP statements (making them impractical). By slightly modifying the definition of designated confirmer signatures, Goldwasser and Waisbard presented an approach in which the Confirm and ConfirmedSign protocols could be implemented without appealing to general zero-knowledge proofs for NP statements (their “Disavow” protocol still requires them). The Goldwasser-Waisbard approach could be instantiated using Cramer-Shoup, GMR, or Gennaro-Halevi-Rabin signatures. In this paper, we provide an alternate generic transformation to convert any signature scheme into a designated confirmer signature scheme, without adding random oracles. Our key technique involves the use of a signature on a commitment and a separate encryption of the random string used for commitment. By adding this “layer of indirection,” the underlying protocols in our schemes admit efficient instantiations (i.e., we can avoid appealing to general zero-knowledge proofs for NP statements) and furthermore the performance of these protocols is not tied to the choice of underlying signature scheme. We illustrate this using the Camenisch-Shoup variation on Paillier’s cryptosystem and Pedersen commitments. The confirm protocol in our resulting scheme requires 10 modular exponentiations (compared to 320 for Goldwasser-Waisbard) and our disavow protocol requires 41 modular exponentiations (compared to using a general zero-knowledge proof for Goldwasser-Waisbard). Previous schemes use the “encryption of a signature” paradigm, and thus run into problems when trying to implement the “confirm” and “disavow” protocols efficiently.
Energy is a fundamental resource limitation in mobile and wireless devices. A great deal of research in mobile and wireless networking over the past decade has examined ways of reducing energy usage, including specific techniques such as energy-aware protocols for routing and communication. However, to our knowledge, no systematic way has been developed for reasoning generally about the energy consumption of algorithms. Techniques to understand and reason about the time and space complexity of algorithms, in particular asymptotic analysis and the big-Oh notation, have helped place computer programming as well as system design on a firm theoretical and practical footing. Clearly a method for analyzing energy complexity at the same abstract algorithmic level would be invaluable. However, it is not clear that a uniform abstract model of energy complexity can be developed that is both theoretically tractable and has practical predictive ability. Minimizing energy consumption requires making tradeoffs between many resources, including computation, communication, and memory accesses; taking any single resource as a proxy for energy cost neglects these tradeoffs and may lead to a poor model.
Address proxying is a process by which one IP node acts as an endpoint intermediary for an IP address that actually belongs to another IP node. Address proxying serves many useful functions in IP networks. In IPv6, the Secure Neighbor Discovery Protocol (SEND) provides powerful tools for securing the mapping between the IP address and the link address, which is the basis of local link address proxying; however, these tools don’t work for address proxies. In this paper, we present an extension to SEND for secure proxying. As an example of how secure address proxying can be used, we propose a minor extension of the Mobile IPv6 protocol to allow secure proxying by the home agent. We then present measurements comparing SEND with and without the address proxying extensions.
Broadcast encryption schemes allow a center to transmit encrypted data over a broadcast channel to a large number of users such that only a select subset of privileged users can decrypt it. In this paper, we analyze how RSA accumulators can be used as a tool in this area. First, we describe a technique for achieving full key derivability given any broadcast encryption scheme in the general subset-cover framework [16]. Second, we show that Asano’s Broadcast Encryption scheme [5] can be viewed as a special-case instantiation of our general technique. Third, we use our technique to develop a new stateless-receiver broadcast encryption scheme that is a direct improvement on Asano’s scheme with respect to communication complexity, amount of tamper-resistant storage needed, and key derivation costs. Fourth, we derive a new lower bound that characterizes the tradeoffs inherent in broadcast encryption schemes which use our key derivability technique.
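For background, the RSA accumulator primitive the paper builds on commits to a set of primes in a single group element, with short membership witnesses. The toy sketch below uses a deliberately tiny modulus for illustration; it shows the accumulator mechanics only, not the paper's key-derivation construction.

```python
# Toy RSA accumulator: acc = g^(prod of primes) mod N; the witness
# for the j-th prime omits that prime from the exponent.

N = 3233   # toy RSA modulus (61 * 53); real schemes use >= 2048-bit moduli
g = 2      # public base

def accumulate(primes):
    acc = g
    for e in primes:
        acc = pow(acc, e, N)
    return acc

def witness(primes, j):
    # Exponentiate by every prime except the j-th.
    w = g
    for i, e in enumerate(primes):
        if i != j:
            w = pow(w, e, N)
    return w

def verify(acc, w, e):
    # Membership check: w^e must equal the accumulator mod N.
    return pow(w, e, N) == acc
```

The relevance to broadcast encryption is that a receiver holding a compact witness can derive keys for the subsets it belongs to without the center shipping per-subset material.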
We present a multimedia content delivery system that preserves the end-to-end authenticity of original content while allowing content adaptation by intermediaries. Our system utilizes a novel multi-hop signature scheme using Merkle trees that permits selective element removal and insertion. To permit secure element insertion we introduce the notion of a placeholder. We propose a computationally efficient scheme to instantiate placeholders based on the hash-sign-switch paradigm using trapdoor hash functions. We developed a system prototype in which the proposed signature scheme is implemented as an extension of the W3C XML signature standard and is applied to content meta-data written in XML. Evaluation results show that the proposed scheme improves scalability and response time of protected adaptive content delivery systems by reducing computational overhead for intermediaries to commit to the inserted element by 95% compared to schemes that use conventional digital signatures.
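The Merkle-tree backbone of such multi-hop signatures is simple to sketch: the signer signs only the root hash, and an intermediary that adapts content reveals just the sibling hashes needed to re-derive that root. The sketch below shows root computation only; the placeholder and trapdoor-hash machinery from the paper is not modeled here.

```python
# Minimal Merkle root computation: hash each leaf, then pair-and-hash
# upward; an odd node at any level is promoted unchanged.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(h(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])   # odd node promotes
        level = nxt
    return level[0]
```

Because a signature on the root covers every leaf, removing an element only requires transmitting O(log n) sibling hashes rather than re-signing the adapted content.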
We present a single-database private information retrieval (PIR) scheme with communication complexity O(k + d), where k ≥ log n is a security parameter that depends on the database size n and d is the bit-length of the retrieved database block. This communication complexity is better asymptotically than previous single-database PIR schemes. The scheme also gives improved performance for practical parameter settings whether the user is retrieving a single bit or very large blocks. For large blocks, our scheme achieves a constant "rate" (e.g., 0.2), even when the user-side communication is very low (e.g., two 1024-bit numbers). Our scheme and security analysis is presented using general groups with hidden smooth subgroups; the scheme can be instantiated using composite moduli, in which case the security of our scheme is based on a simple variant of the "Φ-hiding" assumption by Cachin, Micali and Stadler [2].
Existing techniques for designing efficient password authenticated key exchange (PAKE) protocols all can be viewed as variations of a small number of fundamental paradigms, and all are based on either the Diffie-Hellman or RSA assumptions. In this paper we propose a new technique for the design of PAKE protocols that does not fall into any of those paradigms, and which is based on a different assumption. In our technique, the server uses the password to construct a multiplicative group with a (hidden) smooth order subgroup, where the group order depends on the password. The client uses its knowledge of the password to generate a root extraction problem instance in the server's group and a discrete logarithm problem instance in the (smooth order) subgroup. If the server constructed its group correctly based on the password, the server can use its knowledge of the group order to solve the root extraction problem, and can solve the discrete logarithm problem by leveraging the smoothness of the hidden subgroup.
This paper considers the problem of password-authenticated key exchange (PAKE) in a client-server setting, where the server authenticates using a stored password file, and it is desirable to maintain some degree of security even if the server is compromised. A PAKE scheme is said to be resilient to server compromise if an adversary who compromises the server must at least perform an offline dictionary attack to gain any advantage in impersonating a client. (Of course, offline dictionary attacks should be infeasible in the absence of server compromise.) One can see that this is the best security possible, since by definition the password file has enough information to allow one to play the role of the server, and thus to verify passwords in an offline dictionary attack. While some previous PAKE schemes have been proven resilient to server compromise, there was no known general technique to take an arbitrary PAKE scheme and make it provably resilient to server compromise. This paper presents a practical technique for doing so which requires essentially one extra round of communication and one signature computation/verification. We prove security in the universal composability framework by (1) defining a new functionality for PAKE with resilience to server compromise, (2) specifying a protocol combining this technique with a (basic) PAKE functionality, and (3) proving (in the random oracle model) that this protocol securely realizes the new functionality.
IEEE Transactions on Mobile Computing, 2004
There has been a surge of interest in the delivery of personalized information to users (e.g. per... more There has been a surge of interest in the delivery of personalized information to users (e.g. personalized stocks or travel information), particularly as mobile users with limited terminal device capabilities increasingly desire updated, targeted information in real time. When the number of information recipients is large and there is sufficient commonality in their interests, as is often the case, IP multicast is an efficient way of delivering the information. However, IP multicast services do not consider the structure and semantics of the information in the multicast process. We propose the use of Content-Based Multicast (CBM) where extra content filtering is performed at the interior nodes of the IP multicast tree; this will reduce network bandwidth usage and delivery delay, as well as the computation required at the sources and sinks.
This paper describes an attack concept termed Drive-by Pharming where an attacker sets up a web p... more This paper describes an attack concept termed Drive-by Pharming where an attacker sets up a web page that, when simply viewed by the victim (on a JavaScript-enabled browser), attempts to change the DNS server settings on the victim’s home broadband router. As a result, future DNS queries are resolved by a DNS server of the attacker’s choice. The attacker can direct the victim’s Internet traffic and point the victim to the attacker’s own web sites regardless of what domain the victim thinks he is actually going to, potentially leading to the compromise of the victim’s credentials. The same attack methodology can be used to make other changes to the router, like replacing its firmware. Routers could then host malicious web pages or engage in click fraud. Since the attack is mounted through viewing a web page, it does not require the attacker to have any physical proximity to the victim nor does it require the explicit download of traditional malicious software. The attack works under the reasonable assumption that the victim has not changed the default management password on their broadband router.
This work initiates a study of Luby-Racko. ciphers when the bitwise exclusive-or (XOR) operation ... more This work initiates a study of Luby-Racko. ciphers when the bitwise exclusive-or (XOR) operation in the underlying Feistel network is replaced by a binary operation in an arbitrary finite group. We obtain various interesting results in this context: - First, we analyze the security of three-round Feistel ladders over arbitrary groups. We examine various Luby-Racko. ciphers known to be insecure when XOR is used. In some cases, we can break these ciphers over arbitrary Abelian groups and in other cases, however, the security remains an open problem. - Next, we construct a four round Luby-Racko. cipher, operating over finite groups of characteristic greater than 2, that is not only completely secure against adaptive chosen plaintext and ciphertext attacks, but has better time / space complexity and uses fewer random bits than all previously considered Luby-Racko. ciphers of equivalent security in the literature. Surprisingly, when the group is of characteristic 2 (i.e., the underlying operation on strings is bitwise exclusive-or), the cipher can be completely broken in a constant number of queries. Notably, for the former set of results dealing with three rounds (where we report no difference) we need new techniques. However for the latter set of results dealing with four rounds (where we prove a new theorem) we rely on a generalization of known techniques albeit requires a new type of hash function family, called a monosymmetric hash function family, which we introduce in this work. We also discuss the existence (and construction) of this function family over various groups, and argue the necessity of this family in our construction. Moreover, these functions can be very easily and efficiently implemented on most current microprocessors thereby rendering the four round construction very practical.
This paper introduces two new ideas in the construction of fast universal hash functions geared t... more This paper introduces two new ideas in the construction of fast universal hash functions geared towards the task of message authentication. First, we describe a simple but novel family of universal hash functions that is more efficient than many standard constructions. We compare our hash functions to the MMH family studied by Halevi and Krawczyk [12]. All the main techniques used to optimize MMH work on our hash functions as well. Second, we introduce additional techniques for speeding up our constructions; these techniques apply to MMH and may apply to other hash functions. The techniques involve ignoring certain parts of the computation, while still retaining the necessary statistical properties for secure message authentication. Finally, we give implementation results on an ARM processor. Our constructions are general and can be used in any setting where universal hash functions are needed; therefore they may be of independent interest.
We describe a block cipher which is both practical and provably secure. The cipher uses the Secur... more We describe a block cipher which is both practical and provably secure. The cipher uses the Secure Hash Algorithm (SHA-1) as an underlying primitive, and we show that any succesful attack on the cipher results in a succesful attack against one or more of the hallowed properties of SHA-1. Moreover, our block cipher is still as fast as the Data
Existing block ciphers operate on a fixed-input-length (FIL) block size (e.g., 64-bits for DES). ... more Existing block ciphers operate on a fixed-input-length (FIL) block size (e.g., 64-bits for DES). Often, one needs a variable-input-length (VIL) primitive that can operate on a different size input; it is, however, undesirable to construct this primitive from “scratch.” This paper contains two constructions that start with a fixed-input-length block cipher and show how to securely convert it to a variable-input-length block cipher without making any additional cryptographic assumptions. Both constructions model the FIL block cipher as a pseudorandom permutation (PRP) – that is, indistinguishable from a random permutation against adaptive chosen plaintext attack. The first construction converts it to a VIL PRP and is an efficiency improvement over the scheme of Bellare and Rogaway [4]. The second construction converts it to a VIL super pseudorandom permutation (SPRP) – that is, the resulting VIL block cipher is indistinguishable from a random permutation against adaptive chosen plaintext and ciphertext attack.
We provide new constructions for Luby-Rackoff block ciphers which are efficient in terms of compu... more We provide new constructions for Luby-Rackoff block ciphers which are efficient in terms of computations and key material used. Next, we show that we can make some security guarantees for Luby-Rackoff block ciphers under much weaker and more practical assumptions about the underlying function; namely, that the underlying function is a secure Message Authentication Code. Finally, we provide a SHA-1 based example block cipher called Sha-zam.
Mobile devices consider energy to be a limiting resource. Over the past decade significant resear... more Mobile devices consider energy to be a limiting resource. Over the past decade significant research has gone into how one can reduce energy consumption at the hardware level, network protocol level, operating system level, and compiler level. In almost all algorithm analysis, a single resource such as time or communication is often taken as a proxy for energy. We address this problem by defining an algorithmic model for energy, designing algorithm variants that reduce energy cost in this model, and then performing preliminary experiments to test the model.
Service composition recently emerged as a costeffective way to quickly create new services within... more Service composition recently emerged as a costeffective way to quickly create new services within a network. Some research has been done to support user perceived end-toend QoS for service composition. However, not much work has been done to improve a network operator's performance when deploying composite services. In this paper we develop a service composition architecture that optimizes the aggregate bandwidth utilization within a operator's network; this metric is what operators care about most. A general service composition graph is proposed to model the loosely coupled interaction among service components as well as the estimated traffic that flows among them. Then an optimization problem is formalized and proved to be NP-hard, even to approximate. Next, two polynomial-time heuristic algorithms are developed together with several local search algorithms that further improve the performance of these two algorithms. Our simulations demonstrate the effectiveness of both approximation algorithms and show that they are suitable for service graphs with varying topologies.
An aggregate signature is a single short string that convinces any verifier that, for all 1 ≤ i ≤... more An aggregate signature is a single short string that convinces any verifier that, for all 1 ≤ i ≤ n, signer S i signed message M i , where the n signers and n messages may all be distinct. The main motivation of aggregate signatures is compactness. However, while the aggregate signature itself may be compact, aggregate signature verification might require potentially lengthy additional information – namely, the (at most) n distinct signer public keys and the (at most) n distinct messages being signed. If the verifier must obtain and/or store this additional information, the primary benefit of aggregate signatures is largely negated. This paper initiates a line of research whose ultimate objective is to find a signature scheme in which the total information needed to verify is minimized. In particular, the verification information should preferably be as close as possible to the theoretical minimum: the complexity of describing which signer(s) signed what message(s). We move toward this objective by developing identity-based aggregate signature schemes. In our schemes, the verifier does not need to obtain and/or store various signer public keys to verify; instead, the verifier only needs a description of who signed what, along with two constant-length “tags”: the short aggregate signature and the single public key of a Private Key Generator. Our scheme is secure in the random oracle model under the computational Diffie-Hellman assumption over pairing-friendly groups against an adversary that chooses its messages and its target identities adaptively.
IEEE Journal on Selected Areas in Communications, 2005
We consider the problem of maintaining end-to-end security in the presence of intelligent proxies that may adaptively modify data being transmitted across a network. The video coding community considers this problem in the context of transcoding media streams, but their approaches either fail to address authentication or fail to provide meaningful security guarantees. We present two provably-secure schemes, LISSA and
Most prior designated confirmer signature schemes either prove security in the random oracle model (ROM) or use general zero-knowledge proofs for NP statements (making them impractical). By slightly modifying the definition of designated confirmer signatures, Goldwasser and Waisbard presented an approach in which the Confirm and ConfirmedSign protocols could be implemented without appealing to general zero-knowledge proofs for NP statements (their “Disavow” protocol still requires them). The Goldwasser-Waisbard approach could be instantiated using Cramer-Shoup, GMR, or Gennaro-Halevi-Rabin signatures. In this paper, we provide an alternate generic transformation to convert any signature scheme into a designated confirmer signature scheme, without adding random oracles. Our key technique involves the use of a signature on a commitment and a separate encryption of the random string used for commitment. By adding this “layer of indirection,” the underlying protocols in our schemes admit efficient instantiations (i.e., we can avoid appealing to general zero-knowledge proofs for NP statements) and furthermore the performance of these protocols is not tied to the choice of underlying signature scheme. We illustrate this using the Camenisch-Shoup variation on Paillier’s cryptosystem and Pedersen commitments. The confirm protocol in our resulting scheme requires 10 modular exponentiations (compared to 320 for Goldwasser-Waisbard) and our disavow protocol requires 41 modular exponentiations (compared to using a general zero-knowledge proof for Goldwasser-Waisbard). Previous schemes use the “encryption of a signature” paradigm, and thus run into problems when trying to implement the “confirm” and “disavow” protocols efficiently.
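The "signature on a commitment plus a separate encryption of the commitment randomness" layer of indirection can be sketched in miniature. The following toy uses Pedersen commitments as in the abstract, but the signature and encryption routines below (`toy_sign`, `toy_encrypt`) are insecure stand-ins of our own, not the paper's Camenisch-Shoup instantiation, and all parameters are far too small for real use.

```python
# Toy sketch of the "layer of indirection": sign a commitment to the
# message, and separately encrypt the commitment randomness for the
# confirmer.  Illustrative only; parameters and helpers are insecure.
import hashlib
import secrets

# Small safe-prime group for Pedersen commitments: p = 2q + 1.
p = 1019           # prime; q = 509 is also prime
q = (p - 1) // 2
g = 4              # generator of the order-q subgroup (4 = 2^2)
h = pow(g, 17, p)  # second generator; its dlog must be unknown in practice

def pedersen_commit(m: int, r: int) -> int:
    """Commit to message m with randomness r: C = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def toy_sign(sk: int, data: bytes) -> int:
    """Stand-in for any signature scheme (NOT secure)."""
    e = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return (sk * e) % q

def toy_encrypt(pk: int, r: int) -> int:
    """Stand-in for the confirmer's encryption of the randomness r."""
    return (r + pk) % q  # placeholder for Camenisch-Shoup encryption

m = 42                      # message being designated-confirmer signed
r = secrets.randbelow(q)    # commitment randomness
C = pedersen_commit(m, r)               # 1. commit to the message
sigma = toy_sign(7, C.to_bytes(2, "big"))  # 2. sign the commitment
ct = toy_encrypt(11, r)                 # 3. encrypt r for the confirmer

# Anyone holding (m, r) can open the commitment and verify the signature;
# only the confirmer, who can decrypt ct to recover r, can confirm or
# disavow on the signer's behalf without revealing r itself.
assert pedersen_commit(m, r) == C
```

Because the efficient proofs in the scheme talk about the commitment and the ciphertext rather than the inner signature, the protocol costs do not depend on which signature scheme fills the `toy_sign` slot.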
Energy is a fundamental resource limitation in mobile and wireless devices. A great deal of research in mobile and wireless networking over the past decade has examined ways of reducing energy usage, including specific techniques such as energy-aware protocols for routing and communication. However, to our knowledge, no systematic way has been developed for reasoning generally about the energy consumption of algorithms. Techniques to understand and reason about the time and space complexity of algorithms, in particular asymptotic analysis and the big-Oh notation, have helped place computer programming as well as system design on a firm theoretical and practical footing. Clearly a method for analyzing energy complexity at the same abstract algorithmic level would be invaluable. However, it is not clear that a uniform abstract model of energy complexity can be developed that is both theoretically tractable and has practical predictive ability. Minimizing energy consumption requires making tradeoffs between many resources, including computation, communication, and memory accesses; taking any single resource as a proxy for energy cost neglects these tradeoffs and may lead to a poor model.
Address proxying is a process by which one IP node acts as an endpoint intermediary for an IP address that actually belongs to another IP node. Address proxying serves many useful functions in IP networks. In IPv6, the Secure Neighbor Discovery Protocol (SEND) provides powerful tools for securing the mapping between the IP address and the link address, which is the basis of local-link address proxying; however, these tools don’t work for address proxies. In this paper, we present an extension to SEND for secure proxying. As an example of how secure address proxying can be used, we propose a minor extension of the Mobile IPv6 protocol to allow secure proxying by the home agent. We then present measurements comparing SEND with and without the address proxying extensions.
Broadcast encryption schemes allow a center to transmit encrypted data over a broadcast channel to a large number of users such that only a select subset of privileged users can decrypt it. In this paper, we analyze how RSA accumulators can be used as a tool in this area. First, we describe a technique for achieving full key derivability given any broadcast encryption scheme in the general subset-cover framework [16]. Second, we show that Asano’s broadcast encryption scheme [5] can be viewed as a special-case instantiation of our general technique. Third, we use our technique to develop a new stateless-receiver broadcast encryption scheme that is a direct improvement on Asano’s scheme with respect to communication complexity, amount of tamper-resistant storage needed, and key derivation costs. Fourth, we derive a new lower bound that characterizes the tradeoffs inherent in broadcast encryption schemes which use our key derivability technique.
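The one-way key derivability that RSA accumulators provide can be illustrated with a toy example. In the sketch below (our own illustration, with an insecurely small modulus and made-up per-user primes), the key for a subset S of users is K_S = g^(∏ of the primes of users not in S) mod N; holding K_S lets a party derive K_S' for any S' ⊆ S by further exponentiation, while going the other direction would require extracting RSA roots.

```python
# Toy sketch of RSA-accumulator key derivation in the subset-cover
# setting.  Parameters are deliberately tiny and insecure.
from math import prod

N = 3233            # toy RSA modulus 61 * 53 (far too small in practice)
g = 5               # public base
user_primes = {1: 3, 2: 7, 3: 11, 4: 13}  # one small prime per user id

def subset_key(S: set) -> int:
    """K_S = g^(prod of primes of users NOT in S) mod N.  Computed here
    with global knowledge of all primes; real parties only derive."""
    e = prod(p for i, p in user_primes.items() if i not in S)
    return pow(g, e, N)

def derive(K_S: int, S: set, S_prime: set) -> int:
    """Derive K_{S'} from K_S for S' a subset of S, by raising K_S to
    the primes of the users removed from S."""
    assert S_prime <= S
    e = prod(user_primes[i] for i in S - S_prime)
    return pow(K_S, e, N)

S = {1, 2, 3}
K_S = subset_key(S)
# Deriving downward (shrinking the subset) is a cheap exponentiation...
assert derive(K_S, S, {1, 2}) == subset_key({1, 2})
# ...while recovering K_{1,2,3} from K_{1,2} would require a p_3-th root
# mod N, which is hard without the factorization of N.
```

This is the structural reason a receiver can store few keys yet derive many: each stored key sits at the top of a cone of subsets whose keys follow by exponentiation alone.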
We present a multimedia content delivery system that preserves the end-to-end authenticity of original content while allowing content adaptation by intermediaries. Our system utilizes a novel multi-hop signature scheme using Merkle trees that permits selective element removal and insertion. To permit secure element insertion we introduce the notion of a placeholder. We propose a computationally efficient scheme to instantiate placeholders based on the hash-sign-switch paradigm using trapdoor hash functions. We developed a system prototype in which the proposed signature scheme is implemented as an extension of the W3C XML signature standard and is applied to content meta-data written in XML. Evaluation results show that the proposed scheme improves scalability and response time of protected adaptive content delivery systems by reducing computational overhead for intermediaries to commit to the inserted element by 95% compared to schemes that use conventional digital signatures.
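The Merkle-tree idea behind selective element removal can be sketched briefly: the origin signs only the tree root, and an intermediary that drops an element forwards that element's leaf hash instead of its content, so a receiver can still recompute the signed root. The sketch below is a generic illustration of that principle (the element names are made up, and the placeholders and trapdoor hashes used for secure insertion are omitted).

```python
# Minimal sketch of Merkle-tree-based selective element removal.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(elem: bytes) -> bytes:
    return H(b"leaf:" + elem)   # domain-separated leaf hashing

def merkle_root(leaf_hashes) -> bytes:
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

elements = [b"intro", b"hi-res video", b"captions", b"credits"]
root = merkle_root(leaf_hash(e) for e in elements)  # origin signs this root

# An intermediary adapts the content for a small device: it removes the
# hi-res video, forwarding only that element's leaf hash in its place.
adapted = [(b"intro", None),
           (None, leaf_hash(b"hi-res video")),   # content withheld
           (b"captions", None),
           (b"credits", None)]

# The receiver recomputes the root from received elements plus the
# disclosed hashes; the origin's signature on `root` still verifies.
recomputed = merkle_root(leaf_hash(e) if e is not None else h
                         for e, h in adapted)
assert recomputed == root
```

Note that removal reveals nothing beyond the leaf hash, and any tampering with a forwarded element changes its leaf hash and thus breaks the root.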
We present a single-database private information retrieval (PIR) scheme with communication complexity O(k + d), where k ≥ log n is a security parameter that depends on the database size n and d is the bit-length of the retrieved database block. This communication complexity is better asymptotically than previous single-database PIR schemes. The scheme also gives improved performance for practical parameter settings whether the user is retrieving a single bit or very large blocks. For large blocks, our scheme achieves a constant "rate" (e.g., 0.2), even when the user-side communication is very low (e.g., two 1024-bit numbers). Our scheme and security analysis are presented using general groups with hidden smooth subgroups; the scheme can be instantiated using composite moduli, in which case the security of our scheme is based on a simple variant of the "Φ-hiding" assumption of Cachin, Micali, and Stadler [2].
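One ingredient of the scheme can be shown in isolation: the database is encoded as a single integer via the Chinese Remainder Theorem over pairwise-coprime prime powers, so that retrieving block i amounts to reducing that integer modulo the i-th prime power. The toy below (with made-up blocks and moduli) demonstrates only this encoding step; the privacy comes from the hidden-smooth-subgroup machinery, which is omitted here.

```python
# Sketch of CRT encoding of a database into one integer.  Block i is
# recovered as e mod m_i; all cryptographic hiding is omitted.
from math import prod

blocks = [5, 12, 200, 3]            # toy database blocks
moduli = [2**4, 3**5, 5**4, 7**3]   # pairwise-coprime prime powers;
                                    # each block must fit below its modulus

def crt_encode(blocks, moduli) -> int:
    """Return the unique e mod prod(moduli) with e = blocks[i] mod moduli[i]."""
    M = prod(moduli)
    e = 0
    for b, m in zip(blocks, moduli):
        Mi = M // m
        e += b * Mi * pow(Mi, -1, m)   # standard CRT reconstruction
    return e % M

e = crt_encode(blocks, moduli)
# Retrieving block i is just reduction modulo the i-th prime power:
assert [e % m for m in moduli] == blocks
```

In the full scheme the server never sends e itself; it sends a group element that lets the user recover e only modulo the one prime power hidden in her query.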
Existing techniques for designing efficient password-authenticated key exchange (PAKE) protocols can all be viewed as variations of a small number of fundamental paradigms, and all are based on either the Diffie-Hellman or RSA assumptions. In this paper we propose a new technique for the design of PAKE protocols that does not fall into any of those paradigms, and which is based on a different assumption. In our technique, the server uses the password to construct a multiplicative group with a (hidden) smooth order subgroup, where the group order depends on the password. The client uses its knowledge of the password to generate a root extraction problem instance in the server's group and a discrete logarithm problem instance in the (smooth order) subgroup. If the server constructed its group correctly based on the password, the server can use its knowledge of the group order to solve the root extraction problem, and can solve the discrete logarithm problem by leveraging the smoothness of the hidden subgroup.
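The number-theoretic fact the server exploits, that discrete logarithms are easy in a group of smooth order, can be demonstrated with a small Pohlig-Hellman style solver. The sketch below is our own illustration over a toy prime whose group order is fully smooth and square-free; it is not the PAKE protocol itself, and the parameters are far too small for security.

```python
# Why smooth order makes discrete logs easy: project into each small
# prime-order subgroup, brute-force the tiny dlog there, and recombine
# with the Chinese Remainder Theorem (Pohlig-Hellman for square-free order).
p = 2311                 # prime; p - 1 = 2 * 3 * 5 * 7 * 11 (smooth)
factors = [2, 3, 5, 7, 11]
n = p - 1                # group order, square-free and smooth

def find_generator() -> int:
    """A generator g has g^(n/q) != 1 for every prime factor q of n."""
    return next(g for g in range(2, p)
                if all(pow(g, n // q, p) != 1 for q in factors))

def dlog_smooth(h: int, g: int) -> int:
    """Solve g^x = h (mod p) for x, exploiting the smooth group order."""
    residues = []
    for q in factors:
        gq, hq = pow(g, n // q, p), pow(h, n // q, p)  # order-q subgroup
        x_q = next(i for i in range(q) if pow(gq, i, p) == hq)
        residues.append((x_q, q))
    x, mod = 0, 1                                      # CRT combination
    for r, q in residues:
        t = ((r - x) * pow(mod, -1, q)) % q
        x, mod = x + mod * t, mod * q
    return x

g = find_generator()
secret = 1234
h = pow(g, secret, p)
assert dlog_smooth(h, g) == secret   # recovered in about sum(factors) steps
```

A party who does not know the (password-dependent) group order cannot find the smooth subgroup and gains no such shortcut, which is the asymmetry the protocol builds on.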
This paper considers the problem of password-authenticated key exchange (PAKE) in a client-server setting, where the server authenticates using a stored password file, and it is desirable to maintain some degree of security even if the server is compromised. A PAKE scheme is said to be resilient to server compromise if an adversary who compromises the server must at least perform an offline dictionary attack to gain any advantage in impersonating a client. (Of course, offline dictionary attacks should be infeasible in the absence of server compromise.) One can see that this is the best security possible, since by definition the password file has enough information to allow one to play the role of the server, and thus to verify passwords in an offline dictionary attack. While some previous PAKE schemes have been proven resilient to server compromise, there was no known general technique to take an arbitrary PAKE scheme and make it provably resilient to server compromise. This paper presents a practical technique for doing so which requires essentially one extra round of communication and one signature computation/verification. We prove security in the universal composability framework by (1) defining a new functionality for PAKE with resilience to server compromise, (2) specifying a protocol combining this technique with a (basic) PAKE functionality, and (3) proving (in the random oracle model) that this protocol securely realizes the new functionality.