J. Blocki - Academia.edu
Papers by J. Blocki
Cornell University - arXiv, May 15, 2017
Memory-hard functions (MHFs) are functions whose evaluation cost is dominated by memory cost. MHFs are egalitarian, in the sense that evaluating them on dedicated hardware (like FPGAs or ASICs) is not much cheaper than on off-the-shelf hardware (like x86 CPUs). MHFs have interesting cryptographic applications, most notably to password hashing and securing blockchains. Alwen and Serbinenko [STOC'15] define the cumulative memory complexity (cmc) of a function as the sum (over all time-steps) of the amount of memory required to compute the function. They advocate that a good MHF must have high cmc. Unlike previous notions, cmc takes into account that dedicated hardware might exploit amortization and parallelism. Still, cmc has been criticized as insufficient because it fails to capture possible time-memory trade-offs: since memory cost does not scale linearly, functions with the same cmc could still have very different actual hardware costs. In this work we address this problem and introduce the notion of sustained-memory complexity, which requires that any algorithm evaluating the function must use a large amount of memory for many steps. We construct functions (in the parallel random oracle model) whose sustained-memory complexity is almost optimal: our function can be evaluated using n steps and O(n/log n) memory, in each step making one query to the (fixed-input-length) random oracle, while any algorithm that can make arbitrarily many parallel queries to the random oracle still needs Ω(n/log n) memory for Ω(n) steps. As has been done for various notions (including cmc) before, we reduce the task of constructing an MHF with high sustained-memory complexity to proving pebbling lower bounds on DAGs. Our main technical contribution is the construction of a family of DAGs on n nodes with constant indegree and high "sustained-space complexity", meaning that any parallel black-pebbling strategy requires Ω(n/log n) pebbles for at least Ω(n) steps. Along the way we construct a family of maximally "depth-robust" DAGs with maximum indegree O(log n), improving upon the construction of Mahmoody et al. [ITCS'13], which had maximum indegree O(log² n · polylog(log n)).
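The reduction from MHFs to pebbling has a concrete flavor that a small sketch can illustrate: in the generic hash-labeling construction, each node of a DAG gets a label equal to the hash of its parents' labels, so honestly evaluating the function mirrors a sequential black pebbling of the graph. The Python sketch below uses an arbitrary toy graph and is only an illustration of the labeling idea, not the paper's construction; a real MHF would use a depth-robust DAG such as those the paper builds.

```python
import hashlib

def mhf_label(x: bytes, parents: dict, n: int) -> bytes:
    """Generic hash-labeling MHF: label(v) = H(x, v, labels of v's
    parents). Returns the label of the sink node n-1. Honest evaluation
    makes one hash query per node and mirrors a sequential black
    pebbling of the DAG; memory-hardness depends on the graph used."""
    labels = {}
    for v in range(n):  # nodes are assumed to be in topological order
        h = hashlib.sha256()
        h.update(x)
        h.update(v.to_bytes(8, "big"))
        for u in parents.get(v, []):
            h.update(labels[u])
        labels[v] = h.digest()
    return labels[n - 1]

# Toy graph: a path 0 -> 1 -> 2 -> 3 (constant indegree, but NOT
# depth-robust, so this particular graph gives no memory-hardness).
print(mhf_label(b"password", {1: [0], 2: [1], 3: [2]}, 4).hex())
```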
2011 IEEE 24th Computer Security Foundations Symposium, 2011
Audit mechanisms are essential for privacy protection in permissive access control regimes, such as in hospitals where denying legitimate access requests can adversely affect patient care. Recognizing this need, we develop the first principled learning-theoretic foundation for audits. Our first contribution is a game-theoretic model that captures the interaction between the defender (e.g., hospital auditors) and the adversary (e.g., hospital employees). The model takes pragmatic considerations into account, in particular the periodic nature of audits, a budget that constrains the number of actions that the defender can inspect, and a loss function that captures the economic impact of detected and missed violations on the organization. We assume that the adversary is worst-case, as is standard in other areas of computer security. We also formulate a desirable property of the audit mechanism in this model based on the concept of regret in learning theory. Our second contribution is an efficient audit mechanism that provably minimizes regret for the defender. This mechanism learns from experience to guide the defender's auditing efforts. The regret bound is significantly better than prior results in the learning literature. The stronger bound is important from a practical standpoint because it implies that the mechanism's recommendations will converge faster to the best fixed auditing strategy for the defender.
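As an illustration of regret minimization over fixed auditing strategies (the paper's own mechanism is not reproduced here), the following sketch runs the standard Hedge (multiplicative weights) update on synthetic per-round losses; all loss values are made up.

```python
import math, random

def hedge_audit(losses, eta=0.5):
    """Standard Hedge (multiplicative weights) over K fixed auditing
    strategies. losses[t][k] is the loss of strategy k in round t,
    assumed to lie in [0, 1]. Returns the final probability weights;
    the learner's cumulative loss is within O(sqrt(T log K)) of the
    best fixed strategy in hindsight."""
    K = len(losses[0])
    w = [1.0] * K
    for round_losses in losses:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Toy run: 3 candidate audit strategies, 100 rounds of synthetic losses.
random.seed(0)
losses = [[random.random() for _ in range(3)] for _ in range(100)]
print(hedge_audit(losses))
```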
Audits to detect policy violations coupled with punishments are essential to manage risks stemming from inappropriate information use by authorized insiders in organizations that handle large volumes of personal information (e.g., in the healthcare, finance, and Web services sectors). Our main result is an audit mechanism that effectively manages organizational risks by balancing the cost of audit and punishment against the expected loss from policy violations. We model the interaction between an organization (defender) and an employee (adversary) as a suitable repeated game. We assume that the defender is fully rational and the adversary is near-rational (i.e., acts rationally with high probability and in a Byzantine manner otherwise). The mechanism prescribes a strategy for the defender that, when paired with the adversary's best response to it, yields an asymmetric subgame perfect equilibrium. This equilibrium concept, which we define, implies that the defender's strategy is approximately optimal (she might only gain a small bounded amount of utility by deviating) while the adversary does not gain at all from deviating from her best response strategy. We provide evidence that a number of parameters in the game model can be estimated from prior empirical studies, suggest specific studies that can help estimate other parameters, and design a learning algorithm that the defender can use to provably learn the adversary's private incentives. Finally, we use our model to predict observed practices in industry (e.g., differences in punishment rates of doctors and nurses for the same violation) and the effectiveness of policy interventions (e.g., data breach notification laws and government audits) in encouraging organizations to conduct more thorough audits.
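To make the cost-benefit trade-off concrete, here is a toy utility calculation in which audit and punishment levels discount the violation rate. The functional form and every parameter value are invented for illustration; they are not the paper's game model, which the paper instead estimates from empirical studies.

```python
def defender_utility(audit_level, punish_level,
                     audit_cost=2.0, punish_cost=0.5,
                     violation_rate=0.1, loss_per_violation=50.0):
    """Toy per-round defender utility: audits and punishments deter
    violations (modeled here as a simple multiplicative discount) but
    cost resources. audit_level and punish_level lie in [0, 1];
    all parameter values are hypothetical."""
    deterred_rate = violation_rate * (1 - audit_level * punish_level)
    expected_loss = deterred_rate * loss_per_violation
    cost = audit_level * audit_cost + punish_level * punish_cost
    return -(expected_loss + cost)

# Grid search for the utility-maximizing (audit, punishment) pair.
grid = [i / 10 for i in range(11)]
best = max(((a, p) for a in grid for p in grid),
           key=lambda ap: defender_utility(*ap))
print(best, defender_utility(*best))
```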
Lecture Notes in Computer Science, 2011
Privacy has become a significant concern in modern society as personal information about individuals is increasingly collected, used, and shared, often using digital technologies, by a wide range of organizations. Certain information handling practices of organizations that monitor individuals' activities on the Web, data aggregation companies that compile massive databases of personal information, cell phone companies that collect and use location data about individuals, and online social networks and search engines, while enabling useful services, have aroused much indignation and protest in the name of privacy. Similarly, as healthcare organizations embrace electronic health record systems and patient portals to give patients, employees, and business affiliates more efficient access to personal health information, there is trepidation that the privacy of patients may not be adequately protected if information handling practices are not carefully designed and enforced. Given this state of affairs, it is very important to arrive at a general understanding of (a) why certain information handling practices arouse moral indignation and what practices or policies are appropriate in a given setting, and (b) how to represent and enforce such policies using information processing systems. This article summarizes progress on a research program driven by goal (b). We describe a semantic model and logic of privacy that formalizes privacy as a right to appropriate flows of personal information, a position taken by contextual integrity, a philosophical theory of privacy for answering questions of the form identified in (a). The logic is designed with the goal of enabling specification and enforcement of practical privacy policies. It has been used to develop the first complete formalization of two US privacy laws: the HIPAA Privacy Rule, which prescribes and proscribes flows of personal health information, and the Gramm-Leach-Bliley Act, which similarly governs flows of personal financial information. Observing that preventive access control mechanisms are not sufficient to enforce such privacy policies, we develop two complementary audit mechanisms for policy enforcement. These mechanisms enable auditing of practical privacy policies, including the entire HIPAA Privacy Rule. The article concludes with a vision for further research in this area.
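As a loose illustration of the "appropriate flows" idea (not the paper's actual logic, which handles temporal conditions and far richer policies such as the full HIPAA Privacy Rule), one can check a candidate information flow against a set of positive norms; the roles and attribute names below are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Flow:
    sender: Optional[str]     # role sending the information (None = wildcard in a norm)
    recipient: Optional[str]  # role receiving it
    subject: Optional[str]    # whose information it is
    attribute: Optional[str]  # type of information, e.g. "health_record"

# Positive norms: a flow is appropriate if it matches at least one.
# These two rules are invented for illustration, not taken from HIPAA.
NORMS = [
    Flow("physician", "physician", None, "health_record"),
    Flow("physician", "patient", None, "health_record"),
]

def permitted(flow: Flow, norms=NORMS) -> bool:
    """A flow is appropriate iff some norm matches it field by field,
    treating None in the norm as a wildcard."""
    fields = ("sender", "recipient", "subject", "attribute")
    return any(all(getattr(n, f) is None or getattr(n, f) == getattr(flow, f)
                   for f in fields)
               for n in norms)

print(permitted(Flow("physician", "patient", "alice", "health_record")))    # True
print(permitted(Flow("physician", "advertiser", "alice", "health_record"))) # False
```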
Lecture Notes in Computer Science, 2013
Online learning algorithms that minimize regret provide strong guarantees in situations that involve repeatedly making decisions in an uncertain environment, e.g., a driver deciding which route to take to work every day. While regret minimization has been extensively studied in repeated games, we study regret minimization for a richer class of games called bounded memory games.
We take the position that audit mechanisms are essential for privacy protection in healthcare environments. Although audits are used in practice and commercial tools that provide assistance for audits are emerging, we currently lack rigorous models and ...
2011 IEEE 24th Computer Security Foundations Symposium, 2011
Audit mechanisms are essential for privacy protection in permissive access control regimes, such as in hospitals where denying legitimate access requests can adversely affect patient care. Recognizing this need, we develop a principled approach to audits. Our first ...
Lecture Notes in Computer Science, 2012
Organizations that collect and use large volumes of personal information are expected, under the principle of accountable data governance, to take measures to protect data subjects from risks that arise from inappropriate uses of this information. In this paper, we focus on a specific class of mechanisms (audits to identify policy violators coupled with punishments) that organizations such as hospitals, financial institutions, and Web services companies may adopt to protect data subjects from privacy and security risks stemming from inappropriate information use by insiders. We model the interaction between the organization (defender) and an insider (adversary) during the audit process as a repeated game. We then present an audit strategy for the defender. The strategy requires the defender to commit to its action and, when paired with the adversary's best response to it, provably yields an asymmetric subgame perfect equilibrium. We then present two mechanisms for allocating the total audit budget for inspections across all games the organization plays with different insiders; a greedy sketch of the first appears below. The first mechanism allocates budget to maximize the utility of the organization. Observing that this mechanism protects the organization's interests but may not protect data subjects, we introduce an accountable data governance property, which requires the organization to conduct thorough audits and impose punishments on violators. The second mechanism we present achieves this property. We provide evidence that a number of parameters in the game model can be estimated from prior empirical studies and suggest specific studies that can help estimate other parameters. Finally, we use our model to predict observed practices in industry (e.g., differences in punishment rates of doctors and nurses for the same violation) and the effectiveness of policy interventions (e.g., data breach notification laws and government audits) in encouraging organizations to adopt accountable data governance practices.
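The budget-allocation step can be illustrated with a standard greedy sketch: if the organization's utility is concave in the number of inspections allocated to each insider game, repeatedly spending the next inspection unit where the marginal gain is largest maximizes total utility. The concavity assumption and all numbers below are illustrative, not taken from the paper.

```python
import heapq

def allocate_budget(utilities, budget):
    """Greedy allocation of an integer inspection budget across insider
    games. utilities[i][b] = organization's utility from spending b
    inspection units on insider i; assumed concave in b, in which case
    giving each unit to the largest marginal gain is optimal."""
    spent = [0] * len(utilities)
    # max-heap (via negation) of the marginal gain of one more unit
    heap = [(-(u[1] - u[0]), i) for i, u in enumerate(utilities)]
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)
        spent[i] += 1
        b = spent[i]
        if b + 1 < len(utilities[i]):
            heapq.heappush(heap, (-(utilities[i][b + 1] - utilities[i][b]), i))
    return spent

# Two insiders with diminishing returns on inspections (toy numbers).
u1 = [0, 10, 16, 19, 20]  # utility at 0..4 inspections of insider 1
u2 = [0, 6, 11, 15, 18]   # utility at 0..4 inspections of insider 2
print(allocate_budget([u1, u2], budget=4))  # -> [2, 2]
```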
Modern organizations (e.g., hospitals, social networks, government agencies) rely heavily on audits to detect and punish insiders who inappropriately access and disclose confidential information. Recent work on audit games models the strategic interaction between an auditor with a single audit resource and auditees as a Stackelberg game, augmenting associated well-studied security games with a configurable punishment parameter. We significantly generalize this audit game model to account for multiple audit resources, where each resource is restricted to auditing a subset of all potential violations, thus enabling application to practical auditing scenarios. We provide an FPTAS that computes an approximately optimal solution to the resulting non-convex optimization problem. The main technical novelty is in the design and correctness proof of an optimization transformation that enables the construction of this FPTAS. In addition, we experimentally demonstrate that this transformation significantly speeds up the computation of solutions for a class of audit games and security games.
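For intuition, here is a brute-force sketch of the basic single-resource audit game that this work generalizes: the auditor commits to inspection probabilities and a punishment level, and the adversary then best-responds. The paper's FPTAS replaces exactly this kind of exponential grid search; all payoff values below are hypothetical.

```python
import itertools

def best_defender_strategy(adv_benefit, def_loss,
                           punishment_cost=0.1, fine=10.0, grid_steps=10):
    """Brute-force sketch of a one-resource audit game: the auditor
    commits to inspection probabilities p (summing to at most 1) and a
    punishment level x in [0, 1]; the adversary then attacks the target
    maximizing its expected benefit adv_benefit[i] - p[i] * x * fine.
    Returns the best defender utility found and the (p, x) achieving it."""
    n = len(adv_benefit)
    grid = [i / grid_steps for i in range(grid_steps + 1)]
    best = (float("-inf"), None)
    for p in itertools.product(grid, repeat=n):
        if sum(p) > 1:  # a single audit resource
            continue
        for x in grid:
            # adversary best-responds to the committed (p, x)
            t = max(range(n),
                    key=lambda i: adv_benefit[i] - p[i] * x * fine)
            u = -(1 - p[t]) * def_loss[t] - punishment_cost * x
            if u > best[0]:
                best = (u, (p, x))
    return best

# Two violation types with hypothetical payoffs.
print(best_defender_strategy(adv_benefit=[3.0, 1.0], def_loss=[5.0, 2.0]))
```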