Correcting errors without leaking partial information
Related papers
On the Power of Correlated Randomness in Secure Computation
Lecture Notes in Computer Science, 2013
We investigate the extent to which correlated secret randomness can help in secure computation with no honest majority. It is known that correlated randomness can be used to evaluate any circuit of size s with perfect security against semi-honest parties or statistical security against malicious parties, where the communication complexity grows linearly with s. This leaves open two natural questions: (1) Can the communication complexity be made independent of the circuit size? (2) Is it possible to obtain perfect security against malicious parties? We settle the above questions, obtaining both positive and negative results on unconditionally secure computation with correlated randomness. Concretely, we obtain the following results.
Minimizing communication. Any multiparty functionality can be realized, with perfect security against semi-honest parties or statistical security against malicious parties, by a protocol in which the number of bits communicated by each party is linear in its input length. Our protocol uses an exponential number of correlated random bits. We give evidence that super-polynomial randomness complexity may be inherent.
Perfect security against malicious parties. Any finite "sender-receiver" functionality, which takes inputs from a sender and a receiver and delivers an output only to the receiver, can be perfectly realized given correlated randomness. In contrast, perfect security is generally impossible for functionalities which deliver outputs to both parties. We also show useful functionalities (such as string equality) for which there are efficient perfectly secure protocols in the correlated randomness model.
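As a concrete illustration of secure computation in the correlated-randomness model (a standard Beaver-triple sketch under assumed additive secret sharing, not the paper's construction), the code below shows how a dealer-supplied multiplication triple lets two parties multiply secret-shared values in the online phase:

```python
# Toy illustration (not the paper's protocol): Beaver-triple multiplication in the
# correlated-randomness / preprocessing model. A trusted dealer hands out additive
# shares of a random triple (a, b, c) with c = a*b; the online phase then multiplies
# secret-shared inputs by exchanging only two opened field elements.
import random

P = 2**61 - 1  # prime modulus for additive secret sharing (arbitrary choice)

def share(v):
    r = random.randrange(P)
    return [r, (v - r) % P]          # two additive shares of v

def reconstruct(shares):
    return sum(shares) % P

def dealer_triple():
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share(a * b % P)

def beaver_multiply(x_sh, y_sh, triple):
    a_sh, b_sh, c_sh = triple
    # Each party locally masks its input shares; the masked values d, e are opened.
    d = reconstruct([(x_sh[i] - a_sh[i]) % P for i in range(2)])  # d = x - a
    e = reconstruct([(y_sh[i] - b_sh[i]) % P for i in range(2)])  # e = y - b
    z_sh = []
    for i in range(2):
        zi = (c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P
        if i == 0:                    # the public term d*e is added by one party only
            zi = (zi + d * e) % P
        z_sh.append(zi)
    return z_sh

x_sh, y_sh = share(7), share(6)
z_sh = beaver_multiply(x_sh, y_sh, dealer_triple())
assert reconstruct(z_sh) == 42
```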
Robust Fuzzy Extractors and Authenticated Key Agreement From Close Secrets
IEEE Transactions on Information Theory, 2000
Consider two parties holding correlated random variables W and W′, respectively, that are within distance t of each other in some metric space. These parties wish to agree on a uniformly distributed secret key R by sending a single message over an insecure channel controlled by an all-powerful adversary. We consider both the keyless case, where the parties share no additional secret information, and the keyed case, where the parties share a long-term secret SK that they can use to generate a sequence of session keys {R_j} using multiple pairs {(W_j, W′_j)}. The former has applications to, e.g., biometric authentication, while the latter arises in, e.g., the bounded storage model with errors. Our results improve upon previous work in several respects:
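A minimal sketch of the single-message reconciliation idea, assuming a toy repetition code over the Hamming metric (the paper's robust fuzzy extractors, which additionally resist tampering with the message, are far more involved):

```python
# Toy code-offset "secure sketch": Alice publishes the offset between her reading W and
# a random codeword, Bob corrects up to T discrepancies per block, and both hash the
# common string. This only illustrates reconciliation, not the robust construction.
import hashlib
import random

T = 2                # tolerate up to T flipped bits per block
BLOCK = 2 * T + 1    # repetition-code block length

def encode(bits):    # repetition code: repeat each bit BLOCK times
    return [b for b in bits for _ in range(BLOCK)]

def decode(codeword):  # majority vote per block
    return [int(sum(codeword[i:i + BLOCK]) > T) for i in range(0, len(codeword), BLOCK)]

def sketch(w):
    """Alice: pick a random codeword c and publish s = w XOR c."""
    k = [random.randrange(2) for _ in range(len(w) // BLOCK)]
    return [wi ^ ci for wi, ci in zip(w, encode(k))]

def recover(w_prime, s):
    """Bob: w' XOR s is a noisy codeword; decode, re-encode, XOR the offset back."""
    noisy = [wi ^ si for wi, si in zip(w_prime, s)]
    return [ci ^ si for ci, si in zip(encode(decode(noisy)), s)]

# Correlated readings: w' differs from w in at most T positions per block.
w = [random.randrange(2) for _ in range(4 * BLOCK)]
w_prime = list(w)
w_prime[0] ^= 1
w_prime[3] ^= 1

s = sketch(w)                         # the single public message
assert recover(w_prime, s) == w
# Both sides hash the common string to get key material (stand-in for a strong extractor).
key = hashlib.sha256(bytes(w)).hexdigest()
```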
Probabilistic Confidentiality Properties based on Indistinguishability
Schutz und Zuverlässigkeit, 2005
This paper motivates three confidentiality properties based on the notion of indistinguishable behavior induced by adversary observations of nondeterministic and probabilistic systems. Concealed behavior is a possibilistic property, whereas ensured entropy and bounded risk are probabilistic properties. In contrast to noninterference-like information flow properties, these properties do not primarily aim at restricting information flow, but on ...
Concurrent Secrets with Quantified Suspicion
2018 18th International Conference on Application of Concurrency to System Design (ACSD), 2018
A system satisfies opacity if its secret behaviors cannot be detected by any user of the system. Opacity of distributed systems was originally set as a boolean predicate before being quantified as measures in a probabilistic setting. This paper considers a different quantitative approach that measures the effort a malicious user has to make to detect a secret. This effort is measured as a distance w.r.t. a regular profile specifying normal behavior. This leads to several notions of quantitative opacity. When attackers are passive, that is, when they just observe the system, quantitative opacity reduces to a language inclusion problem and is PSPACE-complete. When attackers are active, that is, when they interact with the system in order to detect secret behaviors within a finite-depth observation, quantitative opacity turns out to be a two-player finite-state quantitative game of partial observation. A winning strategy for an attacker is a sequence of interactions with the system le...
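For the passive-attacker case, the reduction to language inclusion can be pictured on a small finite-state example; the bounded-depth check below uses a made-up transition system and is only illustrative, not the paper's quantitative measures or its PSPACE-complete procedure:

```python
# Toy check of possibilistic opacity against a passive attacker: the system is deemed
# opaque up to depth k if every observation produced by a run visiting a secret state
# is also produced by a run that avoids secret states. "tau" is unobservable.
SYSTEM = {                     # state -> list of (action, next_state)
    0: [("tau", 1), ("a", 2)],
    1: [("a", 3)],             # runs through state 1 are the secret ones
    2: [("b", 0)],
    3: [("b", 0)],
}
SECRET_STATES = {1}
START = 0

def observations(depth, via_secret):
    """Observation projections of runs of length <= depth that do (via_secret=True)
    or do not (False) visit a secret state."""
    results = set()
    frontier = [(START, (), START in SECRET_STATES)]
    if (START in SECRET_STATES) == via_secret:
        results.add(())
    for _ in range(depth):
        new_frontier = []
        for state, obs, hit in frontier:
            for action, nxt in SYSTEM.get(state, []):
                new_obs = obs if action == "tau" else obs + (action,)
                new_hit = hit or nxt in SECRET_STATES
                new_frontier.append((nxt, new_obs, new_hit))
                if new_hit == via_secret:
                    results.add(new_obs)
        frontier = new_frontier
    return results

DEPTH = 6
secret_obs = observations(DEPTH, via_secret=True)
cover_obs = observations(DEPTH, via_secret=False)
print("opaque up to depth", DEPTH, ":", secret_obs <= cover_obs)
```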
Information-theoretical Secret-key agreement and Bound information
2009
A central problem of communication between two parties is secrecy: how much information can a third party obtain by intercepting the messages transmitted from one honest party to the other? Cryptography offers a wide range of protocols that ensure security under assumptions about the eavesdropper, which motivated the search for an information-theoretic description of the scenario that yields unconditionally secure communication. In this scenario, two honest parties want to communicate over an authenticated channel that an eavesdropper is wiretapping. This setting led to the definitions of the intrinsic information and the secret-key rate, which measure the secrecy available in it. Later, because of strong analogies to quantum mechanics, it turned out that this description was missing a phenomenon called bound information: the inability of a probability distribution to yield a secret key even though it contains intrinsic secrecy. Nearly ten years of research have established the existence of bound information in the multipartite case, where several parties communicate, but not yet in the bipartite case. The approach via non-distillability therefore appears very promising for settling this conjecture. Motivated by this approach, we implemented a tool based on it and simulated several distributions that are conjectured to have bound information. In the process, we improved the tool to reduce its computation time and to get closer to this goal.
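The quantities involved can be made concrete with a small worked computation; the sketch below evaluates I(X;Y) and I(X;Y|Z) for a hypothetical joint distribution, while the intrinsic information would further minimize I(X;Y|Z̄) over all channels applied to Z, an optimization omitted here:

```python
# Toy computation of I(X;Y) and I(X;Y|Z) for a small joint distribution P(X,Y,Z).
# The intrinsic information would additionally minimize I(X;Y|Z_bar) over channels
# Z -> Z_bar; that optimization is not implemented in this sketch.
from collections import defaultdict
from math import log2

# P[(x, y, z)] = probability; X and Y are correlated bits, Z is a noisy observation.
P = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.10,
    (0, 1, 0): 0.05, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.05,
    (1, 1, 0): 0.10, (1, 1, 1): 0.30,
}

def marginal(dist, keep):
    m = defaultdict(float)
    for outcome, p in dist.items():
        m[tuple(outcome[i] for i in keep)] += p
    return m

def mutual_information(dist):                      # I(X;Y) for a joint over (x, y)
    px, py = marginal(dist, (0,)), marginal(dist, (1,))
    return sum(p * log2(p / (px[(x,)] * py[(y,)]))
               for (x, y), p in dist.items() if p > 0)

def conditional_mutual_information(dist):          # I(X;Y|Z) for a joint over (x, y, z)
    pz = marginal(dist, (2,))
    total = 0.0
    for (z,), pzv in pz.items():
        cond = {(x, y): p / pzv for (x, y, zz), p in dist.items() if zz == z}
        total += pzv * mutual_information(cond)
    return total

print("I(X;Y)   =", round(mutual_information(marginal(P, (0, 1))), 4))
print("I(X;Y|Z) =", round(conditional_mutual_information(P), 4))
```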
Secure Computation Using Leaky Correlations (Asymptotically Optimal Constructions)
Theory of Cryptography, 2018
Most secure computation protocols can be effortlessly adapted to offload a significant fraction of their computationally and cryptographically expensive components to an offline phase so that the parties can run a fast online phase and perform their intended computation securely. During this offline phase, parties generate private shares of a sample generated from a particular joint distribution, referred to as the correlation. These shares, however, are susceptible to leakage attacks by adversarial parties, which can compromise the security of the secure computation protocol. The objective, therefore, is to preserve the security of the honest party despite the leakage performed by the adversary on her share. Prior solutions, starting with n-bit leaky shares, either used 4 messages or enabled the secure computation of only sub-linear size circuits. Our work presents the first 2-message secure computation protocol for 2-party functionalities of circuit size Θ(n) despite Θ(n) bits of leakage, a qualitatively optimal result. We compose a suitable 2-message secure computation protocol in parallel with our new 2-message correlation extractor. Correlation extractors, introduced by Ishai, Kushilevitz, Ostrovsky, and Sahai (FOCS 2009) as a natural generalization of privacy amplification and randomness extraction, recover "fresh" correlations from the leaky ones, which are subsequently used by other cryptographic protocols. We construct the first 2-message correlation extractor that produces Θ(n)-bit fresh correlations even after Θ(n)-bit leakage. Our principal technical contribution, which is of potential independent interest, is the construction of a family of multiplication-friendly linear secret sharing schemes that is simultaneously a family of small-bias distributions. We construct this family by randomly "twisting then permuting" appropriate Algebraic Geometry codes over constant-size fields.
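Since correlation extractors generalize privacy amplification, a minimal privacy-amplification sketch conveys the flavor; it uses a random GF(2) hashing matrix with assumed leftover-hash-lemma style parameters and is not the paper's twist-then-permute construction:

```python
# Toy privacy amplification: a public random GF(2) matrix hashes a shared n-bit string
# down to m bits, where m is chosen below the string's entropy minus the leakage.
# The paper's correlation extractor over Algebraic Geometry codes is far more structured.
import random

def random_matrix(m, n, rng):
    return [[rng.randrange(2) for _ in range(n)] for _ in range(m)]

def gf2_hash(matrix, bits):
    return [sum(r * b for r, b in zip(row, bits)) % 2 for row in matrix]

n = 256          # length of the shared leaky string
t = 64           # bits of leakage the adversary obtained
security = 40    # slack, roughly 2*log2(1/epsilon)
m = n - t - security   # leftover-hash-lemma style output length

rng = random.SystemRandom()
shared = [rng.randrange(2) for _ in range(n)]   # string both parties hold
seed_matrix = random_matrix(m, n, rng)          # public seed, may be sent in the clear

key_alice = gf2_hash(seed_matrix, shared)
key_bob = gf2_hash(seed_matrix, shared)         # Bob hashes his identical copy
assert key_alice == key_bob and len(key_alice) == m
```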
Bounds on the Leakage of the Input’s Distribution in Information-Hiding Protocols
Lecture Notes in Computer Science, 2009
In information hiding, an adversary that tries to infer the secret information has a higher probability of success if it knows the distribution on the secrets. We show that if the system leaks some information about the secrets probabilistically (that is, if there is a probabilistic correlation between the secrets and some observables), then the adversary can approximate that distribution by repeating the observations. More precisely, it can approximate the distribution on the observables by computing their frequencies, and then derive the distribution on the secrets by using the correlation in the inverse direction. We illustrate this method, and then we study the bounds on the approximation error associated with it, for various natural notions of error. As a case study, we apply our results to Crowds, a protocol for anonymous communication.
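A small numerical sketch of this inversion (with a hypothetical channel matrix, not the paper's Crowds case study): estimate the observable distribution by frequencies, then solve the linear system given by the correlation to approximate the distribution on the secrets.

```python
# Toy version of the inference described above: approximate the observable distribution
# by frequencies, then invert the (known) correlation P(o|s) to estimate the prior on
# the secrets. The channel matrix and prior here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# channel[o, s] = P(observable o | secret s); columns sum to 1
channel = np.array([[0.7, 0.2, 0.1],
                    [0.2, 0.6, 0.3],
                    [0.1, 0.2, 0.6]])
true_prior = np.array([0.5, 0.3, 0.2])          # unknown to the adversary

# The adversary only sees observables drawn from P(o) = channel @ prior.
p_obs = channel @ true_prior
samples = rng.choice(len(p_obs), size=20000, p=p_obs)
freq = np.bincount(samples, minlength=len(p_obs)) / len(samples)

# Invert the correlation: solve channel @ x = freq (least squares), then renormalize.
est, *_ = np.linalg.lstsq(channel, freq, rcond=None)
est = np.clip(est, 0, None)
est /= est.sum()
print("estimated prior:", np.round(est, 3), " true prior:", true_prior)
```

The approximation error shrinks as the number of observations grows, which is exactly the quantity the paper's bounds control.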
Efficient Cryptographic Protocols Based on Noisy Channels
Lecture Notes in Computer Science, 1997
The Wire-Tap Channel of Wyner [19] shows that a Binary Symmetric Channel may be used as a basis for exchanging a secret key, in a cryptographic scenario of two honest people facing an eavesdropper. Later Crépeau and Kilian [9] showed how a BSC may be used to implement Oblivious Transfer in a cryptographic scenario of two possibly dishonest people facing each other. Unfortunately this result is rather impractical as it requires Ω(n^11) bits to be transmitted through the BSC to accomplish a single OT. The current paper provides efficient protocols to achieve the cryptographic primitives of Bit Commitment and Oblivious Transfer based on the existence of a Binary Symmetric Channel. Our protocols respectively require sending O(n) and O(n^3) bits through the BSC. These results are based on a technique known as Generalized Privacy Amplification [1] that allows two people to extract secret information from partially compromised data. * Supported in part by Québec's FCAR and Canada's NSERC.
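A quick simulation conveys why a BSC gives the honest parties an edge (a toy experiment under the degraded wiretap assumption, not the paper's Bit Commitment or OT protocols):

```python
# Toy wiretap-channel experiment: Alice's random bits pass through a BSC with crossover
# p to Bob and a noisier BSC with crossover q to Eve. For this degraded binary wiretap
# channel the secrecy capacity is h(q) - h(p) bits per use, the budget that
# privacy-amplification-based key agreement over a BSC can hope to distill.
import random
from math import log2

def bsc(bits, crossover, rng):
    return [b ^ (rng.random() < crossover) for b in bits]

def h(x):                        # binary entropy function
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

p, q, n = 0.05, 0.20, 100_000    # Bob's noise, Eve's noise, number of channel uses
rng = random.Random(1)

alice = [rng.randrange(2) for _ in range(n)]
bob = bsc(alice, p, rng)
eve = bsc(alice, q, rng)

bob_err = sum(a != b for a, b in zip(alice, bob)) / n
eve_err = sum(a != e for a, e in zip(alice, eve)) / n
print(f"Bob error rate ~ {bob_err:.3f}  (expected {p})")
print(f"Eve error rate ~ {eve_err:.3f}  (expected {q})")
print(f"secrecy capacity h(q) - h(p) = {h(q) - h(p):.3f} bits/use")
```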