16:00
On Hodge-Tate local systems
Abstract
I will revisit the theory of Hodge-Tate local systems in the light of the p-adic Simpson correspondence. This is a joint work with Michel Gros.
Abstract: We will recall some analogies between structures arising from three-manifold topology and rings of integers in number fields. This can be used to define a Chern-Simons functional on spaces of Galois representations. Some sample computations and elementary applications will be shown.
Let $F$ be a binary form of degree $d \geq 3$ with integer coefficients and non-zero discriminant. In this talk we give an asymptotic formula for the quantity $R_F(Z)$, the number of integers in the interval $[-Z,Z]$ representable by the binary form $F$.
This is joint work with C.L. Stewart.
The amount of digital data that requires long-term protection of
integrity, authenticity, and confidentiality is steadily increasing.
Examples are health records and genomic data, which may have to be kept
and protected for 100 years or more. However, current security
technology does not provide such protection, which I consider a major
challenge. In this talk I report on a storage system that achieves the
above protection goals in the long term. It is based on
information-theoretically secure cryptography (both classical and
quantum) as well as on chains of commitments. I discuss its security and
present a proof-of-concept implementation, including an experimental
analysis.
Bayesian optimization (BO) is a powerful tool for sequentially optimizing black-box functions that are expensive to evaluate, and has extensive applications including automatic hyperparameter tuning, environmental monitoring, and robotics. The problem of level-set estimation (LSE) with Gaussian processes is closely related; instead of performing optimization, one seeks to classify the whole domain according to whether the function lies above or below a given threshold, which is also of direct interest in applications.
In this talk, we present a new algorithm, truncated variance reduction (TruVaR), which addresses Bayesian optimization and level-set estimation in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds. TruVaR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs, non-uniform noise, and multi-task settings. We provide a general theoretical guarantee for TruVaR covering these phenomena, and use it to obtain regret bounds for several specific settings. We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets.
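To make the greedy rule concrete, here is a minimal sketch (not the authors' implementation) of a TruVaR-style loop for Bayesian optimization on a 1-D grid, using a from-scratch RBF Gaussian process. All names, the length scale, and the parameters `beta` (confidence width), `eta` (truncation level), and `noise` are illustrative assumptions; since GP posterior variance does not depend on observed values, each candidate's variance reduction can be scored without actually evaluating the function there.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.15):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def posterior(X, y, Xs, noise=1e-2):
    """GP posterior mean and variance at test points Xs (unit prior variance)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 0.0, None)
    return mu, var

def truvar_bo(f, grid, steps=10, beta=2.0, eta=1e-3, noise=1e-2):
    """TruVaR-style BO sketch: shrink truncated variance over maximizers M."""
    X = np.array([grid[0], grid[-1]])
    y = f(X) + np.sqrt(noise) * rng.standard_normal(len(X))
    M = np.arange(len(grid))  # indices of potential maximizers
    for _ in range(steps):
        mu, var = posterior(X, y, grid, noise)
        sd = np.sqrt(var)
        # Confidence-bound update: keep points whose UCB beats the best LCB.
        M = M[mu[M] + beta * sd[M] >= np.max(mu[M] - beta * sd[M])]
        if len(M) <= 1:
            break
        # Greedy step: pick the candidate whose hypothetical evaluation most
        # shrinks the sum of truncated variances max(var - eta, 0) over M.
        # A dummy observation (0.0) suffices: variance ignores y-values.
        best_j, best_score = M[0], np.inf
        for j in M:
            _, vh = posterior(np.append(X, grid[j]), np.append(y, 0.0),
                              grid[M], noise)
            score = np.sum(np.maximum(vh - eta, 0.0))
            if score < best_score:
                best_j, best_score = j, score
        X = np.append(X, grid[best_j])
        y = np.append(y, f(grid[best_j]) + np.sqrt(noise) * rng.standard_normal())
    # Final confidence-bound update before returning.
    mu, var = posterior(X, y, grid, noise)
    sd = np.sqrt(var)
    M = M[mu[M] + beta * sd[M] >= np.max(mu[M] - beta * sd[M])]
    return M, X, y

grid = np.linspace(0.0, 1.0, 50)
f = lambda x: np.sin(3.0 * x)  # toy objective, maximum near x ≈ 0.52
M, X, y = truvar_bo(f, grid)
```

The LSE variant differs only in the set being updated: instead of potential maximizers, one keeps the still-unclassified points (those whose confidence interval straddles the threshold) and shrinks the truncated variances there.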