Nadim Kobeissi - Academia.edu
Papers by Nadim Kobeissi
2018 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2018
Cryptocurrencies have popularized public ledgers, known colloquially as "blockchains". While the Bitcoin blockchain is relatively simple to reason about as, effectively, a hash chain, more complex public ledgers are largely designed without any formalization of desired cryptographic properties such as authentication or integrity. These designs are then implemented without assurances against real-world bugs, leaving little assurance of practical, real-world security. Ledger Design Language (LDL) is a modeling language for describing public ledgers. The LDL compiler produces two outputs. The first output is an applied pi calculus symbolic model representing the public ledger as a protocol. Using ProVerif, the protocol can be played against an active attacker, whereupon we can query for block integrity, authenticity and other properties. The second output is a formally verified read/write API for interacting with the public ledger in the real world, written in the F* programming language. F* features such as dependent types allow us to validate a block on the public ledger, for example, by type-checking it so that its signing public key is a point on a valid curve. Using LDL's outputs, public ledger designers obtain automated assurances on both the theoretical coherence and the real-world security of their designs within a single framework based on a single modeling language.
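To make the "blockchain as a hash chain" view concrete, here is a minimal TypeScript sketch of the kind of integrity check LDL's outputs are meant to guarantee; the Block shape and field names are illustrative assumptions, not LDL's actual schema or the paper's F* API.

```typescript
import { createHash } from "crypto";

// Illustrative block shape; field names are assumptions, not LDL's actual schema.
interface Block {
  index: number;
  prevHash: string; // hex-encoded SHA-256 of the previous block
  payload: string;
  hash: string;     // hex-encoded SHA-256 over (index, prevHash, payload)
}

function blockHash(b: Omit<Block, "hash">): string {
  return createHash("sha256")
    .update(`${b.index}|${b.prevHash}|${b.payload}`)
    .digest("hex");
}

// A chain has integrity if every block's hash is correct and links to its predecessor.
function verifyChain(chain: Block[]): boolean {
  return chain.every((b, i) => {
    const linked = i === 0 || b.prevHash === chain[i - 1].hash;
    return linked && b.hash === blockHash(b);
  });
}
```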
As TLS progresses into major new revisions in TLS 1.3, implementers are once again tasked with upgrading existing TLS code. This includes dealing with concerns such as testing new cryptographic mechanisms while preserving backwards compatibility and preventing downgrade attacks. It would be attractive for implementers to be able to quickly extract a composite symbolic model from their evolving code and formally verify it against a variety of attacker models. Even when formal verification is not feasible, systematic testing for known and new state machine vulnerabilities is a must, in light of recent attacks like SKIP and FREAK. In this position paper, we describe our ongoing work on implementing TLS 1.0-1.3 in ProScript, a statically typed subset of JavaScript. The type system guarantees that our implementation avoids common pitfalls of JavaScript programming and that it can be properly isolated from unverified application code. The ProScript compiler translates the source code into...
We present EverParse, a framework for generating parsers and serializers from tag-length-value binary message format descriptions. The resulting code is verified to be safe (no overflow, no use after free), correct (parsing is the inverse of serialization) and non-malleable (each message has a unique binary representation). These guarantees underpin the security of cryptographic message authentication, and they enable testing to focus on interoperability and performance issues. EverParse consists of two parts: LowParse, a library of parser combinators and their formal properties written in F*; and QuackyDucky, a compiler from a domain-specific language of RFC message formats down to low-level F* code that calls LowParse. While LowParse is fully verified, we do not formalize the semantics of the input language and keep QuackyDucky outside our trusted computing base. Instead, it also outputs a formal message specification, and F* automatically verifies our implementation against this ...
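EverParse's guarantees come from machine-checked F* proofs; the following unverified TypeScript sketch only illustrates the tag-length-value idea and the round-trip property (parsing is the inverse of serialization) that EverParse proves formally. The one-byte tag and two-byte big-endian length are assumptions made for the sketch.

```typescript
// Toy tag-length-value record: 1-byte tag, 2-byte big-endian length, then the value bytes.
interface TLV { tag: number; value: Uint8Array; }

function serialize(r: TLV): Uint8Array {
  const out = new Uint8Array(3 + r.value.length);
  out[0] = r.tag;
  out[1] = r.value.length >> 8;
  out[2] = r.value.length & 0xff;
  out.set(r.value, 3);
  return out;
}

// Returns the parsed record and the number of bytes consumed, or null on malformed input.
function parse(buf: Uint8Array): { rec: TLV; consumed: number } | null {
  if (buf.length < 3) return null;
  const len = (buf[1] << 8) | buf[2];
  if (buf.length < 3 + len) return null;
  return { rec: { tag: buf[0], value: buf.slice(3, 3 + len) }, consumed: 3 + len };
}

// Round-trip property EverParse proves (here it can only be tested): parse(serialize(r)) recovers r.
```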
IACR Cryptol. ePrint Arch., 2018
Web traffic encryption has shifted from applying only to highly sensitive websites (such as banks) to a majority of all Web requests. Until recently, one of the main limiting factors for enabling HTTPS was the requirement to obtain a valid certificate from a trusted certification authority, a tedious process that typically involves fees and ad-hoc key generation, certificate request and domain validation procedures. To remove this barrier of entry, the Internet Security Research Group created Let's Encrypt, a new non-profit certificate authority which uses a new protocol called Automatic Certificate Management Environment (ACME) to automate certificate management at all levels (request, validation, issuance, renewal, and revocation) between clients (website operators) and servers (certificate authority nodes). Let's Encrypt's success is measured by its issuance of over 12 million free certificates since its launch in April 2016. In this paper, we survey the existing proc...
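As a rough illustration of the automation ACME enables, here is a hedged TypeScript sketch of the HTTP-01 key authorization computation, simplified from the ACME specification; the helper names are our own, and the account JWK is assumed to already be in its canonical (RFC 7638) form.

```typescript
import { createHash } from "crypto";

function base64url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// Simplified: keyAuthorization = token + "." + base64url(SHA-256(canonical account JWK)).
function keyAuthorization(token: string, canonicalAccountJwk: object): string {
  // RFC 7638 thumbprint hashes the JWK's required members in lexicographic order;
  // we assume the caller already serialized the JWK in that canonical form.
  const thumbprint = createHash("sha256").update(JSON.stringify(canonicalAccountJwk)).digest();
  return `${token}.${base64url(thumbprint)}`;
}

// The client proves control of the domain by serving this string at
// http://<domain>/.well-known/acme-challenge/<token>, which the CA then fetches.
```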
We present FLEXTLS, a tool for rapidly prototyping and testing implementations of the Transport Layer Security (TLS) protocol. FLEXTLS is built upon MITLS, a verified implementation of TLS, and hence protocol scenarios written in FLEXTLS can benefit from robust libraries for messaging and cryptography. Conversely, attack scripts in FLEXTLS can be used to evaluate and communicate the impact of new protocol vulnerabilities. FLEXTLS was used to discover recent attacks on TLS implementations, such as SKIP and FREAK, as well as to program the first proof-of-concept demos for FREAK and Logjam. It is also being used to experiment with proposed designs of the upcoming version 1.3 of TLS. Our goal is to create a common platform where protocol analysts and practitioners can easily test TLS implementations and share protocol designs, attacks or proofs.
Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop, 2020
Verifpal is a new automated modeling framework and verifier for cryptographic protocols, optimized with heuristics for common-case protocol specifications, that aims to work better for real-world practitioners, students and engineers without sacrificing comprehensive formal verification features. In order to achieve this, Verifpal introduces a new, intuitive language for modeling protocols that is easier to write and understand than the languages employed by existing tools. Its formal verification paradigm is also designed explicitly to provide protocol modeling that avoids user error. Verifpal is able to model protocols under an active attacker with unbounded sessions and fresh values, and supports queries for advanced security properties such as forward secrecy or key compromise impersonation. Furthermore, Verifpal's semantics have been formalized within the Coq theorem prover, and Verifpal models can be automatically translated into Coq as well as into ProVerif models for further verification. Verifpal has already been used to verify security properties for Signal, Scuttlebutt, and TLS 1.3, as well as the first formal model for the DP-3T pandemic-tracing protocol, which we present in this work. Through Verifpal, we show that advanced verification with formalized semantics and sound logic can exist without sacrificing the convenience of real-world practitioners.
2019 IEEE European Symposium on Security and Privacy (EuroS&P), 2019
The Noise Protocol Framework, introduced recently, allows for the design and construction of secure channel protocols by describing them through a simple, restricted language from which complex key derivation and local state transitions are automatically inferred. Noise "Handshake Patterns" can support mutual authentication, forward secrecy, zero round-trip encryption, identity hiding and other advanced features. Since the framework's release, Noise-based protocols have been adopted by WhatsApp, WireGuard and other high-profile applications. We present Noise Explorer, an online engine for designing, reasoning about, formally verifying and implementing arbitrary Noise Handshake Patterns. Based on our formal treatment of the Noise Protocol Framework, Noise Explorer can validate any Noise Handshake Pattern and then translate it into a model ready for automated verification and also into a production-ready software implementation written in Go or in Rust. We use Noise Explorer to analyze more than 57 handshake patterns. We confirm the stated security goals for 12 fundamental patterns and provide precise properties for the rest. We also analyze unsafe handshake patterns and document weaknesses that occur when validity rules are not followed. All of this work is consolidated into a usable online tool that presents a compendium of results and can parse formal verification results to generate detailed-but-pedagogical reports regarding the exact security goals of each message of a Noise Handshake Pattern with respect to each party, under an active attacker and including malicious principals. Noise Explorer evolves alongside the standard Noise Protocol Framework, having already contributed new security goal verification results and stronger definitions for pattern validation and security parameters.
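To give a feel for how compact a Noise Handshake Pattern is, here is the well-known XX pattern written down as plain data in a TypeScript sketch; the type names are our own, not Noise Explorer's, and this only describes the pattern, it does not implement the key schedule inferred from it.

```typescript
// Tokens a Noise message pattern may contain: ephemeral/static key transmissions
// and the Diffie-Hellman operations mixed into the key schedule.
type Token = "e" | "s" | "ee" | "es" | "se" | "ss";

interface HandshakePattern {
  name: string;
  preMessages: { initiator: Token[]; responder: Token[] };
  messages: { sender: "initiator" | "responder"; tokens: Token[] }[];
}

// The XX pattern: both parties transmit their static keys during the handshake
// (encrypted once enough DH results have been mixed in), giving mutual authentication.
const XX: HandshakePattern = {
  name: "XX",
  preMessages: { initiator: [], responder: [] },
  messages: [
    { sender: "initiator", tokens: ["e"] },
    { sender: "responder", tokens: ["e", "ee", "s", "es"] },
    { sender: "initiator", tokens: ["s", "se"] },
  ],
};
```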
Financial Cryptography and Data Security, 2017
Web traffic encryption has shifted from applying only to sensitive websites (such as banks) to a majority of all Web requests. Until recently, one of the main limiting factors for enabling HTTPS was the requirement to obtain a valid certificate from a trusted certification authority. This process traditionally involves steps such as paying a certificate issuance fee, ad-hoc private key and certificate request generation, and domain validation procedures. To remove this barrier of entry, the Internet Security Research Group (ISRG) introduced "Let's Encrypt", a new non-profit certificate authority that uses a new protocol called Automatic Certificate Management Environment (ACME) to automate certificate management at all levels (request, validation, issuance, renewal, and revocation) between clients (website operators) and servers (certificate authority nodes). Let's Encrypt's success is measured by its issuance of over 27 million free certificates since its launch in April 2016. In this paper, we survey the existing process for issuing domain-validated certificates in major certification authorities. Based on our findings, we build a security model of domain-validated certificate issuance. We then model the ACME protocol in the applied pi-calculus and verify its stated security goals against our security model. We compare the effective security of different domain validation methods and show that ACME can be secure under a stronger threat model than that of traditional CAs. We also uncover weaknesses in some flows of ACME 1.0 and propose verified improvements that have been adopted in the latest protocol draft submitted to the IETF.
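Complementing the HTTP-based sketch above, here is a hedged TypeScript sketch of the DNS-based validation method (DNS-01), simplified from the ACME specification: the client publishes a digest of the key authorization in a TXT record that the CA then queries. The helper names are our own.

```typescript
import { createHash } from "crypto";

// Simplified DNS-01 challenge response: the CA expects a TXT record at
// _acme-challenge.<domain> containing base64url(SHA-256(keyAuthorization)).
function dns01RecordValue(keyAuthorization: string): string {
  return createHash("sha256")
    .update(keyAuthorization)
    .digest("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

function dns01RecordName(domain: string): string {
  return `_acme-challenge.${domain}`;
}

// Usage sketch: publish dns01RecordValue(ka) as a TXT record under
// dns01RecordName("example.com"), then notify the CA, which validates via a DNS query.
```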
2017 IEEE Symposium on Security and Privacy (SP), 2017
TLS 1.3 is the next version of the Transport Layer Security (TLS) protocol. Its clean-slate design is a reaction both to the increasing demand for low-latency HTTPS connections and to a series of recent high-profile attacks on TLS. The hope is that a fresh protocol with modern cryptography will prevent legacy problems; the danger is that it will expose new kinds of attacks, or reintroduce old flaws that were fixed in previous versions of TLS. After 18 drafts, the protocol is nearing completion, and the working group has appealed to researchers to analyze the protocol before publication. This paper responds by presenting a comprehensive analysis of the TLS 1.3 Draft-18 protocol. We seek to answer three questions that have not been fully addressed in previous work on TLS 1.3: (1) Does TLS 1.3 prevent well-known attacks on TLS 1.2, such as Logjam or the Triple Handshake, even if it is run in parallel with TLS 1.2? (2) Can we mechanically verify the computational security of TLS 1.3 under standard (strong) assumptions on its cryptographic primitives? (3) How can we extend the guarantees of the TLS 1.3 protocol to the details of its implementations? To answer these questions, we propose a methodology for developing verified symbolic and computational models of TLS 1.3 hand-in-hand with a high-assurance reference implementation of the protocol. We present symbolic ProVerif models for various intermediate versions of TLS 1.3 and evaluate them against a rich class of attacks to reconstruct both known and previously unpublished vulnerabilities that influenced the current design of the protocol. We present a computational CryptoVerif model for TLS 1.3 Draft-18 and prove its security. We present RefTLS, an interoperable implementation of TLS 1.0-1.3, and automatically analyze its protocol core by extracting a ProVerif model from its typed JavaScript code.
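As background on the protocol core being modeled, the following TypeScript sketch traces the skeleton of the TLS 1.3 key schedule: early, handshake and master secrets derived by chained HKDF extract/expand calls. It is a deliberate simplification, replacing the specification's exact HkdfLabel encoding and transcript hashing with a plain info string, so it is not interoperable code.

```typescript
import { createHmac } from "crypto";

const HASH_LEN = 32; // SHA-256 output length

// HKDF-Extract and a simplified single-block HKDF-Expand built from HMAC-SHA256.
function hkdfExtract(salt: Buffer, ikm: Buffer): Buffer {
  return createHmac("sha256", salt).update(ikm).digest();
}
function hkdfExpand(prk: Buffer, info: string): Buffer {
  return createHmac("sha256", prk)
    .update(Buffer.concat([Buffer.from(info), Buffer.from([0x01])]))
    .digest();
}

// Skeleton of the TLS 1.3 key schedule (labels and transcript hashing simplified).
function keySchedule(psk: Buffer | null, ecdheSharedSecret: Buffer) {
  const zeros = Buffer.alloc(HASH_LEN, 0);
  const earlySecret = hkdfExtract(zeros, psk ?? zeros);
  const handshakeSecret = hkdfExtract(hkdfExpand(earlySecret, "derived"), ecdheSharedSecret);
  const masterSecret = hkdfExtract(hkdfExpand(handshakeSecret, "derived"), zeros);
  return { earlySecret, handshakeSecret, masterSecret };
}
```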
2017 IEEE European Symposium on Security and Privacy (EuroS&P), 2017
Many popular web applications incorporate end-to-end secure messaging protocols, which seek to ensure that messages sent between users are kept confidential and authenticated, even if the web application's servers are broken into or otherwise compelled into releasing all their data. Protocols that promise such strong security guarantees should be held up to rigorous analysis, since protocol flaws and implementation bugs can easily lead to real-world attacks. We propose a novel methodology that allows protocol designers, implementers, and security analysts to collaboratively verify a protocol using automated tools. The protocol is implemented in ProScript, a new domain-specific language that is designed for writing cryptographic protocol code that can both be executed within JavaScript programs and automatically translated to a readable model in the applied pi calculus. This model can then be analyzed symbolically using ProVerif to find attacks in a variety of threat models. The model can also be used as the basis of a computational proof using CryptoVerif, which reduces the security of the protocol to standard cryptographic assumptions. If ProVerif finds an attack, or if the CryptoVerif proof reveals a weakness, the protocol designer modifies the ProScript protocol code and regenerates the model to enable a new analysis. We demonstrate our methodology by implementing and analyzing a variant of the popular Signal Protocol with only minor differences. We use ProVerif and CryptoVerif to find new and previously-known weaknesses in the protocol and suggest practical countermeasures. Our ProScript protocol code is incorporated within the current release of Cryptocat, a desktop secure messenger application written in JavaScript. Our results indicate that, with disciplined programming and some verification expertise, the systematic analysis of complex cryptographic web applications is now becoming practical.
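For readers unfamiliar with the protocol family, here is a minimal TypeScript sketch of one symmetric-key ratchet step in the style of the Signal specification (message key and next chain key derived from the current chain key with distinct one-byte HMAC inputs); it is an illustration only, not the paper's ProScript code or Cryptocat's implementation.

```typescript
import { createHmac } from "crypto";

// One symmetric-key ratchet step: distinct single-byte HMAC inputs separate the
// message key from the next chain key, so revealing a message key does not expose
// the chain state, and deleting old chain keys gives forward secrecy for past messages.
function ratchetStep(chainKey: Buffer): { messageKey: Buffer; nextChainKey: Buffer } {
  const messageKey = createHmac("sha256", chainKey).update(Buffer.from([0x01])).digest();
  const nextChainKey = createHmac("sha256", chainKey).update(Buffer.from([0x02])).digest();
  return { messageKey, nextChainKey };
}
```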
Proceedings of the 2016 ACM Workshop on Programming Languages and Analysis for Security, 2016
Ethereum is a framework for cryptocurrencies which uses blockchain technology to provide an open global computing platform, called the Ethereum Virtual Machine (EVM). EVM executes bytecode on a simple stack machine. Programmers do not usually write EVM code; instead, they can program in a JavaScript-like language, called Solidity, that compiles to bytecode. Since the main purpose of EVM is to execute smart contracts that manage and transfer digital assets (called Ether), security is of paramount importance. However, writing secure smart contracts can be extremely difficult: due to the openness of Ethereum, both programs and pseudonymous users can call into the public methods of other programs, leading to potentially dangerous compositions of trusted and untrusted code. This risk was recently illustrated by an attack on TheDAO contract that exploited subtle details of the EVM semantics to transfer roughly $50M worth of Ether into the control of an attacker. In this paper, we outline a framework to analyze and verify both the runtime safety and the functional correctness of Ethereum contracts by translation to F*, a functional programming language aimed at program verification.
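The TheDAO exploit belongs to the reentrancy class of bugs: an external call hands control to untrusted code before the caller's own state is updated. The following language-agnostic TypeScript sketch (not EVM or Solidity code, and not the paper's F* translation) reproduces the shape of the bug and the usual fix of updating state before the external call.

```typescript
// A toy "contract" that pays out a balance via an external callback.
// In the vulnerable version the callback runs BEFORE the balance is zeroed,
// so a malicious callback can re-enter withdrawUnsafe() and drain funds repeatedly.
class Bank {
  private balances = new Map<string, number>();

  deposit(who: string, amount: number): void {
    this.balances.set(who, (this.balances.get(who) ?? 0) + amount);
  }

  // Vulnerable: interaction before effect.
  withdrawUnsafe(who: string, send: (amount: number) => void): void {
    const amount = this.balances.get(who) ?? 0;
    if (amount > 0) {
      send(amount);                // untrusted code may call withdrawUnsafe() again here
      this.balances.set(who, 0);   // state update happens too late
    }
  }

  // Fixed (checks-effects-interactions): zero the balance before the external call.
  withdrawSafe(who: string, send: (amount: number) => void): void {
    const amount = this.balances.get(who) ?? 0;
    if (amount > 0) {
      this.balances.set(who, 0);
      send(amount);
    }
  }
}
```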