Forthcoming events in this series

Wed, 30 Jan 2019

Wave: A New Family of Trapdoor Preimage Sampleable Functions Based on Codes

Thomas Debris-Alazard
(INRIA Paris)

It is a long-standing open problem to build an efficient and secure digital signature scheme based on the hardness of decoding a linear code which could compete with widespread schemes like DSA or RSA. The latter signature schemes are broken by a quantum computer running Shor's algorithm, so code-based schemes could provide a valid quantum-resistant replacement. We present Wave, the first "hash-and-sign" code-based signature scheme that strictly follows the GPV strategy, which ensures universal unforgeability. It uses the family of ternary generalized $(U, U+V)$ codes. Our algorithm produces uniformly distributed signatures through a suitable rejection sampling (one rejection every 3 or 4 signatures). Furthermore, our scheme enjoys efficient signature and verification algorithms. Typically, for 128 bits of classical security, signatures are on the order of 10 thousand bits long and the public key is on the order of one megabyte.
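The rejection-sampling step mentioned above can be sketched generically (an illustrative toy, not Wave's actual sampler: the proposal weights and acceptance probabilities below are invented for demonstration). Candidates drawn from a skewed proposal are kept with a probability calibrated so that accepted outputs are uniform.

```python
import random

def rejection_sample(propose, accept_prob, rng=random):
    """Draw candidates from `propose` until one is accepted.

    `accept_prob(x)` is calibrated so that accepted outputs follow the
    desired target distribution (here: uniform) even though the
    proposal distribution is skewed.
    """
    attempts = 0
    while True:
        attempts += 1
        x = propose()
        if rng.random() < accept_prob(x):
            return x, attempts

random.seed(0)
# Toy proposal: biased over {0, 1, 2}; target: uniform (prob 1/3 each).
weights = {0: 0.5, 1: 0.3, 2: 0.2}
propose = lambda: random.choices(list(weights), weights=list(weights.values()))[0]
M = max((1 / 3) / w for w in weights.values())   # rejection constant
accept = lambda x: (1 / 3) / (M * weights[x])    # accept w.p. target/(M*proposal)

counts = {0: 0, 1: 0, 2: 0}
for _ in range(30000):
    x, _ = rejection_sample(propose, accept)
    counts[x] += 1
# counts is now approximately uniform (~10000 per bucket).
```

In Wave the analogous calibration is tight enough that only one candidate in three or four is rejected.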

Wed, 16 Jan 2019

On the Ring-LWE and Polynomial-LWE problems

Alexandre Wallet
(ENS Lyon)

The Ring Learning With Errors problem (RLWE) comes in various forms. Vanilla RLWE is the decision dual-RLWE variant, which consists in distinguishing from uniform a distribution depending on a secret belonging to the dual O_K^∨ of the ring of integers O_K of a specified number field K. In primal-RLWE, the secret instead belongs to O_K. Both decision dual-RLWE and primal-RLWE enjoy search counterparts. Also widely used is (search/decision) Polynomial Learning With Errors (PLWE), which is defined not over the ring of integers O_K of a number field K but over a polynomial ring Z[x]/(f) for a monic irreducible f in Z[x]. We show that there exist reductions between all six of these problems that incur limited parameter losses. More precisely: we prove that the (decision/search) dual-to-primal reduction from Lyubashevsky et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be implemented with a small error rate growth for all rings (the resulting reduction is non-uniform polynomial time); we extend it to polynomial-time reductions between (decision/search) primal-RLWE and PLWE that work for a family of polynomials f that is exponentially large as a function of deg f (the resulting reduction is also non-uniform polynomial time); and we exploit the recent technique from Peikert et al. [STOC 2017] to obtain a search-to-decision reduction for RLWE. The reductions incur error rate increases that depend on intrinsic quantities related to K and f.
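To make the objects concrete: PLWE samples live in a polynomial ring Z[x]/(f) with coefficients reduced mod q. A minimal sketch of generating one PLWE-style sample b = a·s + e (toy parameters f = x^4 + 1, q = 97, ternary errors, chosen only for illustration):

```python
import random

def poly_mul_mod(a, b, f, q):
    """Multiply a and b in Z_q[x]/(f), f monic, coefficients little-endian."""
    n = len(f) - 1
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    # Reduce degrees >= n using the monic relation given by f.
    for i in range(len(res) - 1, n - 1, -1):
        c = res[i]
        if c:
            for j in range(n + 1):
                res[i - n + j] = (res[i - n + j] - c * f[j]) % q
    return [c % q for c in res[:n]]

random.seed(0)
q, f = 97, [1, 0, 0, 0, 1]                       # modulus and f = x^4 + 1
s = [random.randrange(q) for _ in range(4)]      # secret polynomial
a = [random.randrange(q) for _ in range(4)]      # uniform public polynomial
e = [random.choice([-1, 0, 1]) % q for _ in range(4)]   # small error
b = [(c + ei) % q for c, ei in zip(poly_mul_mod(a, s, f, q), e)]
# (a, b) is one PLWE-style sample; the decision problem asks to
# distinguish such pairs from uniform ones.
```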

Based on joint work with Miruna Roșca and Damien Stehlé.

Wed, 28 Nov 2018

Exploring Crypto Dark Matter: New Simple PRF Candidates and Their Applications

Alain Passelègue
(ENS Lyon)

Pseudorandom functions (PRFs) are one of the fundamental building blocks in cryptography. Traditionally, there have been two main approaches for PRF design: the "practitioner's approach" of building concretely-efficient constructions based on known heuristics and prior experience, and the "theoretician's approach" of proposing constructions and reducing their security to a previously-studied hardness assumption. While both approaches have their merits, the resulting PRF candidates vary greatly in terms of concrete efficiency and design complexity. In this work, we depart from these traditional approaches by exploring a new space of plausible PRF candidates. Our guiding principle is to maximize simplicity while optimizing complexity measures that are relevant to cryptographic applications. Our primary focus is on weak PRFs computable by very simple circuits (depth-2 ACC^0 circuits). Concretely, our main weak PRF candidate is a "piecewise-linear" function that first applies a secret mod-2 linear mapping to the input, and then a public mod-3 linear mapping to the result. We also put forward a similar depth-3 strong PRF candidate.
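The main weak-PRF candidate composes two linear maps over different moduli. A toy sketch of that structure (tiny, insecure dimensions chosen for illustration, not the paper's proposed parameters):

```python
import random

def weak_prf(A, G, x):
    """Secret mod-2 linear map A, followed by a public mod-3 linear map G.

    Each map alone is linear; mixing arithmetic mod 2 and mod 3 is what
    makes the composed function non-linear and (conjecturally) hard to learn.
    """
    y = [sum(a * xi for a, xi in zip(row, x)) % 2 for row in A]     # secret, mod 2
    return [sum(g * yi for g, yi in zip(row, y)) % 3 for row in G]  # public, mod 3

random.seed(0)
n, m, t = 16, 16, 8                                              # toy dimensions
A = [[random.randrange(2) for _ in range(n)] for _ in range(m)]  # secret key
G = [[random.randrange(3) for _ in range(m)] for _ in range(t)]  # public map
x = [random.randrange(2) for _ in range(n)]   # weak PRF: inputs are random
out = weak_prf(A, G, x)                       # t output trits
```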
The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw many natural connections between their hardness and questions in complexity theory or learning theory (e.g., learnability of depth-2 ACC^0 circuits and width-3 branching programs, interpolation and property testing for sparse polynomials, and natural proof barriers for showing super-linear circuit lower bounds). On the applied side, the piecewise-linear structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). Using our PRF candidates, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity (often both) compared to protocols obtained by combining standard MPC protocols with PRFs like AES, LowMC, or Rasta (the latter two are specialized MPC-friendly PRFs).
Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We conclude by showing that our main weak PRF candidate can plausibly be boosted to an encoded-input PRF by leveraging standard error-correcting codes.
Joint work with Dan Boneh, Yuval Ishai, Amit Sahai, and David J. Wu.

Wed, 07 Nov 2018

Lattice-Based Zero-Knowledge Arguments for Integer Relations

Khoa Nguyen
(Nanyang Technological University)

We provide lattice-based protocols for proving relations among committed integers. While the most general zero-knowledge proof techniques can handle arithmetic circuits in the lattice setting, adapting them to prove statements over the integers is non-trivial, at least if we want to handle exponentially large integers while working with a polynomial-size modulus q. For a polynomial L, we provide zero-knowledge arguments allowing a prover to convince a verifier that committed L-bit bitstrings x, y and z are the binary representations of integers X, Y and Z satisfying Z = X + Y over the integers. The complexity of our arguments is only linear in L. Using them, we construct arguments for proving inequalities X < Z among committed integers, as well as arguments showing that a committed X belongs to a public interval [α, β], where α and β can be arbitrarily large. Our range arguments have logarithmic cost (i.e., linear in L) in the maximal range magnitude. Using these tools, we obtain zero-knowledge arguments showing that a committed element X does not belong to a public set S using soft-O(n·log|S|) bits of communication, where n is the security parameter. We finally give a protocol for arguing that committed L-bit integers X, Y and Z satisfy multiplicative relations Z = XY over the integers, with communication cost subquadratic in L. To this end, we use our protocol for integer addition to prove the correct recursive execution of Karatsuba's multiplication algorithm. The security of our protocols relies on standard lattice assumptions with polynomial modulus and polynomial approximation factor.
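The multiplication argument proves the steps of Karatsuba's recursion, which reduces one L-bit product to three products of roughly L/2 bits plus additions. For reference, the (unverified) recursion itself is:

```python
def karatsuba(x, y):
    """Multiply non-negative integers with 3 half-size products instead of 4."""
    if x < 16 or y < 16:
        return x * y                       # small base case
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    a = karatsuba(x_hi, y_hi)
    b = karatsuba(x_lo, y_lo)
    # (x_hi + x_lo)(y_hi + y_lo) - a - b = x_hi*y_lo + x_lo*y_hi,
    # recovered with a single extra multiplication.
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b
    return (a << (2 * half)) + (c << half) + b
```

Each level of the recursion consists of additions, subtractions and shifts, which is what lets the addition argument above certify it step by step.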


Wed, 24 Oct 2018


Carmit Hazay


Wed, 30 May 2018

Falcon: Compact lattice-based signatures based on the hash & sign paradigm

Thomas Prest
(Thales Communications & Security)

Post-quantum cryptography has been one of the most active subfields of
cryptography in the last few years. This is especially true today as
standardization efforts are currently underway, with no fewer than 69
candidate cryptographic schemes proposed.

In this talk, I will present one of these schemes: Falcon, a signature
scheme based on the NTRU class of structured lattices. I will focus on
mathematical aspects of Falcon: for example how we take advantage of the
algebraic structure to speed up some operations, or how relying on the
most adequate probability divergence can go a long way in getting more
efficient parameters "for free". The talk will be concluded with a few
open problems.

Wed, 16 May 2018

Challenges of End-to-End Encryption in Facebook Messenger

Jon Millican

In 2016, Facebook added an optional end-to-end (E2E) encryption feature called Secret Conversations to Messenger. This was challenging to design, as many of Messenger's key properties and features don't fit the typical model of E2E apps. Additionally, Messenger is already one of the world's most popular messaging apps, supporting nearly a billion people across a variety of technical and cultural environments. Because of this, Messenger's deployment of E2E encryption provides attendees with a valuable case study on how to build usable, secure products. 

We will discuss the core properties of a typical E2E app, the core features of Messenger, the distance between the two, and the approach we took to close the gap. We'll examine how minimizing the distance shaped the current E2E experience within Messenger. Through discussion of the key decisions in this process, we'll address the implications for alternative designs with real world comparisons where they exist. 

Although Secret Conversations in Messenger use the off-the-shelf Signal Protocol for message encryption, Facebook also wanted to ensure a safe communication channel for community members who may be victims of online abuse. To this end, we created a way for people to report secret conversations that violate our Community Standards, without breaking any E2E guarantees for other messages.

Developing a reporting protocol created an interesting challenge: the potential of fake reports with no intermediary to invalidate them. To pre-empt the possibility of Bob forging a report to incriminate Alice, we added a method that uses two HMACs - one added by the sender and one by Facebook - to “cryptographically frank” messages as we forward them from one party to the other (physical mail uses a similar franking). This technique ensures similar confidence that a report is genuine as we have for messages stored in plaintext on our servers. Additionally, the frank is only verifiable by Facebook after receiving a report from the recipient, thus preventing a third party from using it as evidence against the sender.
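The two-HMAC franking idea can be sketched as follows (a simplified illustration: the key handling, names and message formats here are hypothetical, not Messenger's actual wire format):

```python
import hmac, hashlib, os

def sender_frank(franking_key, message):
    """Sender HMACs the plaintext; the tag travels with the ciphertext."""
    return hmac.new(franking_key, message, hashlib.sha256).digest()

def service_frank(service_key, sender_tag, context):
    """The service MACs the sender's tag (never seeing the plaintext),
    binding it to conversation metadata."""
    return hmac.new(service_key, sender_tag + context, hashlib.sha256).digest()

def verify_report(service_key, franking_key, message, context, service_tag):
    """On report, the recipient reveals the plaintext and franking key;
    the service recomputes both MACs to confirm the report is genuine."""
    tag = sender_frank(franking_key, message)
    return hmac.compare_digest(service_frank(service_key, tag, context),
                               service_tag)

fk = os.urandom(32)        # per-message franking key, delivered E2E
sk = os.urandom(32)        # service's secret MAC key, never leaves the server
msg, ctx = b"hello", b"conversation-42"
stamp = service_frank(sk, sender_frank(fk, msg), ctx)
```

Because the outer MAC is keyed by the service, the stamp convinces only the service itself, which matches the property that a third party cannot use it as evidence against the sender.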

We hope that this talk will provide an insight into the intricacies of deploying security features at scale, and the additional considerations necessary when developing an existing product.

Wed, 09 May 2018

Practical and Tightly-Secure Digital Signatures and Authenticated Key Exchange

Tibor Jager
(Paderborn University)

Tight security is increasingly gaining importance in real-world
cryptography, as it allows choosing cryptographic parameters in a way
that is supported by a security proof, without the need to sacrifice
efficiency by compensating the security loss of a reduction with larger
parameters. However, for many important cryptographic primitives,
including digital signatures and authenticated key exchange (AKE), we
are still lacking constructions that are suitable for real-world deployment.

This talk will present the first practical AKE protocol with tight
security. It allows the establishment of a key within 1 RTT in a
practical client-server setting, provides forward security, is simple
and easy to implement, and thus very suitable for practical deployment.
It is essentially the "signed Diffie-Hellman" protocol, but with an
additional message, which is crucial to achieve tight security. This
message is used to overcome a technical difficulty in constructing
tightly-secure AKE protocols.

The second important building block is a practical signature scheme with
tight security in a real-world multi-user setting with adaptive
corruptions. The scheme is based on a new way of applying the
Fiat-Shamir approach to construct tightly-secure signatures from certain
identification schemes.

For a theoretically-sound choice of parameters and a moderate number of
users and sessions, our protocol has comparable computational efficiency
to the simple signed Diffie-Hellman protocol with EC-DSA, while for
large-scale settings our protocol has even better computational
performance, at moderately increased communication complexity.

Wed, 25 Apr 2018

Blockchain Technology: A Cryptographic Perspective

(University of Salerno (ITALY))

There is currently a large interest in the applications of the Blockchain technology. After the well known success of the cryptocurrency Bitcoin, several other real-world applications of Blockchain technology have been proposed, often raising privacy concerns. We will discuss the potential of advanced cryptographic tools in relaxing the tension between pros and cons of this technology.

Wed, 07 Mar 2018

Catch me if you can: locating (and fixing) side channel leaks

Elisabeth Oswald
(University of Bristol)

Side channel leakage is no longer just a concern for industries that
traditionally have a high degree of awareness and expertise in
(implementing) cryptography. With the rapid growth of security
sensitive applications in other areas, e.g. smartphones, homes, etc.
there is a clear need for developers with little to no crypto
expertise to implement and instantiate cryptography securely on
embedded devices. In this talk, I explain what makes finding side
channel leaks challenging (in theory and in practice) and give an
update on our latest work to develop methods and tools to enable
non-domain experts to ‘get a grip’ on leakage in their implementations.

Wed, 21 Feb 2018

Full orbit sequences in affine spaces

Giacomo Micheli
(University of Oxford)

Let n be a positive integer. In this talk we provide a recipe to 
construct full orbit sequences in the affine n-dimensional space over a 
finite field. For n=1 our construction covers the case of the
well-studied pseudorandom number generator ICG (the inversive
congruential generator).
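For n = 1, the ICG iterates x ↦ a·x^{-1} + b over F_p (with the convention 0^{-1} = 0), and for suitable (a, b) the orbit visits every element of F_p. A small sketch that finds such full-orbit parameters by brute force (toy prime p = 11; the exhaustive search is for illustration only):

```python
def icg_orbit(p, a, b, x0=0):
    """Iterate x -> a * x^{-1} + b (mod p), with 0^{-1} := 0,
    returning the first p states starting from x0."""
    seen, x = [], x0
    for _ in range(p):
        seen.append(x)
        inv = pow(x, p - 2, p) if x else 0   # Fermat inverse, 0 maps to 0
        x = (a * inv + b) % p
    return seen

def find_full_orbit_params(p):
    """Search for (a, b) whose ICG orbit covers all of F_p."""
    for a in range(1, p):
        for b in range(1, p):
            if len(set(icg_orbit(p, a, b))) == p:
                return a, b
    return None

p = 11
a, b = find_full_orbit_params(p)
orbit = icg_orbit(p, a, b)   # a permutation-like walk through all of F_p
```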

This is a joint work with Federico Amadio Guidi.

Wed, 14 Feb 2018

Multivariate cryptography and the complexity of computing Groebner bases

Elisa Gorla
(University of Neuchâtel (Switzerland))

Multivariate cryptography is one of a handful of proposals for post-quantum cryptographic schemes, i.e. cryptographic schemes that are secure also against attacks carried out with a quantum computer. Their security relies on the assumption that solving a system of multivariate (quadratic) equations over a finite field is computationally hard.

Groebner bases allow us to solve systems of polynomial equations. Therefore, one of the key questions in assessing the robustness of multivariate cryptosystems is estimating how long it takes to compute the Groebner basis of a given system of polynomial equations. 

After introducing multivariate cryptography and Groebner bases, I will present a rigorous method to estimate the complexity of computing a Groebner basis. This approach is based on techniques from commutative algebra and is joint work with Alessio Caminata (University of Barcelona).

Wed, 07 Feb 2018

Efficient post-quantum crypto from module lattices

Peter Schwabe
(Radboud University)

Large parts of the cryptography in use today, key-agreement protocols
and digital signatures based on the hardness of factoring large
integers or solving the discrete-logarithm problem, are not secure
against attackers equipped with a large universal quantum computer. It
is not clear when such a large quantum computer will be built, but
continuous progress by various labs around the world suggests that it
may well be less than two decades until today's cryptography becomes
insecure.

To address this issue, NIST started a public competition to identify
suitable replacements for today's cryptosystems. In my talk, I will
describe two of these systems: the key-encapsulation mechanism Kyber
and the digital signature scheme Dilithium. Both schemes are based on
the hardness of solving problems in module lattices, and together they
form the "Cryptographic Suite for Algebraic Lattices" (CRYSTALS).

Wed, 24 Jan 2018

On Approximating the Covering Radius and Finding Dense Lattice Subspaces

Daniel Dadush
(CWI Amsterdam)

Integer programming, the problem of finding an optimal integer solution satisfying linear constraints, is one of the most fundamental problems in discrete optimization. In the first part of this talk, I will discuss the important open problem of whether there exists a single exponential time algorithm for solving a general n variable integer program, where the best current algorithm requires n^{O(n)} time. I will use this to motivate a beautiful conjecture of Kannan & Lovasz (KL) regarding how "flat" convex bodies not containing integer points must be.

The l_2 case of KL was recently resolved in breakthrough work by Regev & Davidowitz '17, who proved a more general "Reverse Minkowski" theorem which gives an effective way of bounding lattice point counts inside any ball around the origin as a function of sublattice determinants. In both cases, they prove the existence of certain "witness" lattice subspaces in a non-constructive way that explains geometric parameters of the lattice. In this work, as my first result, I show how to make these results constructive in 2^{O(n)} time, i.e. to actually find these witness subspaces, using discrete Gaussian sampling techniques. As a second main result, I show an improved complexity characterization for approximating the covering radius of a lattice, i.e. the farthest distance of any point in space to the lattice. In particular, assuming the slicing conjecture, I show that this problem is in coNP for a constant approximation factor, which improves on the corresponding O(log^{3/2} n) approximation factor given by Regev & Davidowitz's proof of the l_2 KL conjecture.

Wed, 17 Jan 2018

Past and Future of Embedded Security: From Self-driving Cars to Transistor Trojans

Christof Paar
(Ruhr-Uni­ver­si­tät Bo­chum)

With the evolution towards the IoT and cyber-physical systems, the role that the underlying hardware plays in securing an application is becoming more prominent. Hardware can be used constructively, e.g. for accelerating computationally-intensive cryptographic algorithms. Hardware can also be used destructively, e.g., for physical attacks or transistor-level Trojans which are virtually impossible to detect. In this talk, we will present case studies for high-speed cryptography used in car2x communication and recent research on low-level hardware Trojans.

Wed, 22 Nov 2017

Breakdown Resilience of Key Exchange Protocols

Marc Fischlin
(Technische Universitat Darmstadt)

Broken cryptographic algorithms and hardness assumptions are a constant
threat to real-world protocols. Prominent examples are
hash functions for which collisions become known, or number-theoretic
assumptions which are threatened by advances in quantum computing.
Especially when it comes to key exchange protocols, the switch to
quantum-resistant primitives has begun and aims to protect today’s
secrets against future developments, moving from common Diffie–Hellman
based solutions to Learning-With-Errors-based approaches. Remarkably,
the authentication step in such protocols is usually still carried out
with quantum-vulnerable signature schemes. The intuition here is that
the adversary would need to break this protocol primitive today, without
having quantum power yet. The question we address here is if this
intuition is justified, and if so, if we can show this rigorously. We
particularly consider the authenticated variant of the recently
introduced post-quantum secure key exchange protocol NewHope (Alkim et
al., USENIX Security 2016), as well as TLS 1.3, which is currently
being developed by the Internet Engineering Task Force.

Wed, 08 Nov 2017

Adaptive Oblivious Transfer with Access Control from Lattice Assumptions

Fabrice Mouhartem
(ENS Lyon)

Adaptive oblivious transfer (OT) is a protocol where a sender
initially commits to a database {M_i}_{i=1}^N. Then, a receiver can query the
sender up to k times with private indexes ρ_1, …, ρ_k so as to obtain
M_{ρ_1}, …, M_{ρ_k} and nothing else. Moreover, for each i ∈ [k], the receiver’s
choice ρ_i may depend on previously obtained messages {M_{ρ_j}}_{j<i}. Oblivious
transfer with access control (OT-AC) is a flavor of adaptive OT
where database records are protected by distinct access control policies
that specify which credentials a receiver should obtain in order to access
each M_i. So far, all known OT-AC protocols only support access policies
made of conjunctions or rely on ad hoc assumptions in pairing-friendly
groups (or both). In this paper, we provide an OT-AC protocol where access policies may consist of any branching program of polynomial length, which is sufficient to realize any access policy in NC^1. The security of
our protocol is proved under the Learning-with-Errors (LWE) and
Short-Integer-Solution (SIS) assumptions. As a result of independent interest,
we provide protocols for proving the correct evaluation of a committed
branching program on a committed input.

Joint work with Benoît Libert, San Ling, Khoa Nguyen and Huaxiong Wang.

Wed, 11 Oct 2017

Hierarchical Identity-based Encryption from Ideal Lattices

Peter Campbell

Identity-based cryptography can be useful in situations where a full-scale public-key infrastructure is impractical.  Original identity-based proposals relied on elliptic curve pairings and so are vulnerable to quantum computers.  I will describe some on-going work to design a post-quantum identity-based encryption scheme using ideas from Ring Learning with Errors. Our scheme has the advantage that it can be extended to the hierarchical setting for more flexible key management.

Wed, 21 Jun 2017

Post-Quantum Key Exchange from the LWE

Jintai Ding
(University of Cincinnati)

In this lecture, we present practical and provably secure
(authenticated) key exchange and password-authenticated key exchange
protocols, which are based on the learning with errors problem. These
protocols are conceptually simple and have strong provable security
properties. Constructions of this type were first proposed in
2011-2012, and the resulting protocols have been shown to be practical.
We will explain that all the existing LWE-based key exchanges are
variants of this fundamental design. In addition, we will discuss some
issues with key reuse and how the signal function invented for key
exchange can be used in authentication schemes.
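The common core of these LWE-based key exchanges is approximate agreement: both parties derive values that differ only by a small error term, which a signal/reconciliation step then resolves into identical key bits. A toy, unauthenticated sketch of the approximate-agreement part only (parameters illustrative and far from secure; reconciliation omitted):

```python
import random

def small_vec(n, rng):
    """Short vector with entries in {-1, 0, 1} (secret or error)."""
    return [rng.choice([-1, 0, 1]) for _ in range(n)]

def mat_vec(A, v, q):
    return [sum(a * x for a, x in zip(row, v)) % q for row in A]

def dot(u, v, q):
    return sum(a * b for a, b in zip(u, v)) % q

rng = random.Random(0)
n, q = 256, 7681                                     # toy parameters
A = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]  # public matrix

# Alice publishes pA = A*sA + eA; Bob publishes pB = A^T*sB + eB.
sA, eA = small_vec(n, rng), small_vec(n, rng)
pA = [(x + e) % q for x, e in zip(mat_vec(A, sA, q), eA)]
At = [list(col) for col in zip(*A)]
sB, eB = small_vec(n, rng), small_vec(n, rng)
pB = [(x + e) % q for x, e in zip(mat_vec(At, sB, q), eB)]

vA = dot(pB, sA, q)   # = sB^T A sA + eB.sA  (mod q)
vB = dot(pA, sB, q)   # = sB^T A sA + eA.sB  (mod q)
# vA and vB agree up to a small additive error; in the full protocols
# the signal function tells both sides how to round to the same key bit.
```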

Wed, 07 Jun 2017

Direct Anonymous Attestation: From 2003 to 2017

Jan Camenisch
(IBM Research)

Direct Anonymous Attestation (DAA) is a protocol that allows a security chip embedded in a platform such as a laptop to authenticate itself as a genuine chip. Different authentications are not linkable, thus the protocol protects the privacy of the platform. The first DAA protocol was proposed by Brickell, Chen, and Camenisch and was standardized in 2004 by the Trusted Computing Group (TCG). Implementations of this protocol were rather slow because it is based on RSA. Later, alternative and faster protocols were proposed based on elliptic curves. Recently, the specification by the TCG was updated to allow for DAA protocols based on elliptic curves. Unfortunately, the new standard does not allow for provably secure DAA protocols. In this talk, we will review some of the history of DAA and then discuss the latest protocols, security models, and finally a provably secure realization of DAA based on elliptic curves.

Wed, 31 May 2017

Hash Proof Systems over Lattices Revisited

Olivier Blazy
(Université de Limoges)

Hash Proof Systems or Smooth Projective Hash Functions (SPHFs) are a
form of implicit arguments introduced by Cramer and Shoup at
Eurocrypt’02. They have found many applications since then, in
particular for authenticated key exchange or honest-verifier
zero-knowledge proofs. While they are relatively well understood in
group settings, they seem painful to construct directly in the lattice
setting. Only one construction of an SPHF over lattices has been proposed, by
Katz and Vaikuntanathan at Asiacrypt’09. But this construction has an
important drawback: it only works for an ad-hoc language of ciphertexts.
Concretely, the corresponding decryption procedure needs to be tweaked,
now requiring q many trapdoor inversion attempts, where q is the modulus
of the underlying Learning With Errors (LWE) problem.
Using harmonic analysis, we explain the source of this limitation, and
propose a way around it. We show how to construct SPHFs for standard
languages of LWE ciphertexts, and we make our construction explicit over a
tag-CCA2 encryption scheme à la Micciancio-Peikert (Eurocrypt’12).

If there is enough time, we will conclude with applications of these
SPHFs to password-authenticated key exchange, honest-verifier
zero-knowledge and a variant of witness encryption.

Wed, 03 May 2017

Verifiable Electronic Voting in Practice

Feng Hao
(Newcastle University)

In this talk, I’ll share the progress that we have made in the field of e-voting, including the proposal of a new paradigm of e-voting system called self-enforcing e-voting (SEEV). A SEEV system is End-to-End (E2E) verifiable, but it differs from all previous E2E systems in that it does not require tallying authorities. The removal of tallying authorities significantly simplifies the election management and makes the system much more practical than before. A prototype of a SEEV system based on the DRE-i protocol (Hao et al. USENIX JETS 2014) has been built and used regularly at Newcastle University for classroom voting and student prize competitions, with very positive student feedback. Lessons from our experience of designing, analysing and deploying an e-voting system for real-world applications are also presented.