16:00
A new approach to modularity
Abstract
In the 1960s Langlands proposed a generalisation of Class Field Theory. I will review this and describe a new approach using the trace formula as well as some analytic arguments reminiscent of those used in the classical case. In more concrete terms, the problem is to prove general modularity theorems, and I will explain the progress I have made on this problem.
16:00
Tangent spaces of Schubert varieties
Abstract
Schubert varieties in (twisted) affine Grassmannians and their singularities are of interest to arithmetic geometers because they model the étale local structure of the special fiber of Shimura varieties. In this talk, I will discuss a proof of a conjecture of Haines-Richarz classifying the smooth locus of Schubert varieties, generalizing a classical result of Evens-Mirkovic. The main input is to obtain a lower bound for the tangent space at a point of the Schubert variety which arises from considering certain smooth curves passing through it. In the second part of the talk, I will explain how in many cases, we can prove this bound is actually sharp, and discuss some applications to Shimura varieties. This is based on joint work with Pappas and Kisin-Pappas.
16:00
Tame Triple Product Periods
Abstract
A recent conjecture proposed by Harris and Venkatesh relates the action of derived Hecke operators on the space of weight one modular forms to certain Stark units. In this talk, I will explain how this can be rephrased as a conjecture about "tame" analogues of triple product periods for a triple of mod p eigenforms of weights (2,1,1). I will then present an elliptic counterpart to this conjecture relating a tame triple product period to a regulator for global points of elliptic curves in rank 2. This conjecture can be proved in some special cases for CM weight 1 forms, with techniques resonating with the so-called Jochnowitz congruences. This is joint work in preparation with Henri Darmon.
16:00
Strong Bounds for 3-Progressions
Abstract
The bilevel optimization renaissance through machine learning: lessons and challenges
Abstract
Bilevel optimization has been part of machine learning for over four decades now, although perhaps not always in an obvious way. The interconnection between the two topics began to appear more clearly in publications about 20 years ago, and in the last 10 years the number of machine learning applications of bilevel optimization has exploded. This rise of bilevel optimization in machine learning has been highly positive, as it has come with many innovations in the theoretical and numerical understanding of the problem and in how it is solved, especially with the rebirth of the implicit function approach, which seemed to have been abandoned at some point.
Overall, machine learning has set the bar very high for the whole field of bilevel optimization with regard to the development of numerical methods and the associated convergence analysis theory, as well as the introduction of efficient tools to speed up components such as derivative calculations, among other things. However, it remains unclear how the techniques from the machine learning-based bilevel optimization literature can be extended to other applications of bilevel programming.
For instance, many machine learning loss functions and the special problem structures enable the fulfillment of qualification conditions that fail for many other applications of bilevel optimization. In this talk, we will provide an overview of machine learning applications of bilevel optimization while giving a flavour of corresponding solution algorithms and their limitations.
Furthermore, we will discuss possible paths for algorithms that can tackle more complicated machine learning applications of bilevel optimization, while also highlighting lessons that can be learned for more general bilevel programs.
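To make the implicit function approach mentioned in the abstract concrete, here is a minimal sketch (my own illustration, not code from the talk) on a toy hyperparameter-tuning problem: the lower level is ridge regression in the weights w, and the upper level tunes the regularisation lam against a validation loss. All data and variable names are illustrative assumptions.

```python
# Toy bilevel problem: hypergradient of a validation loss via the implicit function theorem.
import numpy as np

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(50, 10)), rng.normal(size=50)
X_val, y_val = rng.normal(size=(30, 10)), rng.normal(size=30)

def lower_solution(lam):
    """Closed-form argmin_w ||X_tr w - y_tr||^2 + lam * ||w||^2."""
    A = X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1])
    return np.linalg.solve(A, X_tr.T @ y_tr)

def hypergradient(lam):
    """dL_val/dlam via the implicit function theorem.

    With g(w, lam) the lower-level objective, w*(lam) satisfies grad_w g = 0, so
    dw*/dlam = -(H_ww)^{-1} d(grad_w g)/dlam, where H_ww = 2(X_tr^T X_tr + lam I)
    and d(grad_w g)/dlam = 2 w*.
    """
    w = lower_solution(lam)
    H = 2.0 * (X_tr.T @ X_tr + lam * np.eye(len(w)))
    dw_dlam = -np.linalg.solve(H, 2.0 * w)
    grad_L_w = 2.0 * X_val.T @ (X_val @ w - y_val)   # upper-level gradient in w
    return grad_L_w @ dw_dlam

# Plain projected gradient descent on lam >= 0 using the hypergradient.
lam = 1.0
for _ in range(200):
    lam = max(lam - 0.05 * hypergradient(lam), 1e-8)
print("tuned lam:", lam, "val loss:", np.sum((X_val @ lower_solution(lam) - y_val) ** 2))
```

The closed-form lower-level solution is what makes the implicit-function hypergradient exact here; in the machine learning settings discussed in the talk the lower level is itself solved approximately, which is where the qualification and convergence issues arise.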
Fast optimistic methods for monotone equations and convex optimization problems
Please note: the seminar is taking place in Lecture Room 4 on this occasion
Abstract
In this talk, we discuss continuous-time dynamics for the problem of approaching the set of zeros of a single-valued monotone and continuous operator V. Such problems are motivated by minimax convex-concave problems and, in particular, by convex optimization problems with linear constraints. The central role is played by a second-order dynamical system that combines a vanishing damping term with the time derivative of V along the trajectory, which can be seen as an analogue of the Hessian-driven damping in case the operator originates from a potential. We show that these methods exhibit fast convergence rates for ‖V(z(t))‖ as t → +∞, where z(·) denotes the generated trajectory, and for the restricted gap function, and that z(·) converges to a zero of the operator V. For the corresponding implicit and explicit discrete time models with Nesterov's momentum, we prove that they share the asymptotic features of the continuous dynamics.
Extensions to variational inequalities and fixed-point problems are also addressed. The theoretical results are illustrated by numerical experiments on bilinear games and the training of generative adversarial networks.
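For readers unfamiliar with optimistic schemes, the following is a minimal sketch of the classical optimistic gradient (OGDA) iteration on a bilinear game min_x max_y xᵀAy, as a baseline for the accelerated methods in the talk; this is the standard scheme rather than the speaker's method, and the step size and problem data are illustrative assumptions.

```python
# Plain OGDA on a bilinear game; the monotone operator is V(x, y) = (A y, -A^T x),
# whose unique zero (for generic A) is the saddle point x = y = 0.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 20))

def V(z):
    """Monotone operator of the saddle problem min_x max_y x^T A y."""
    x, y = z[:20], z[20:]
    return np.concatenate([A @ y, -A.T @ x])

eta = 0.1 / np.linalg.norm(A, 2)                     # conservative step size
z_prev = z = rng.normal(size=40)
for k in range(2000):
    z_next = z - 2 * eta * V(z) + eta * V(z_prev)    # optimistic (past-extrapolated) step
    z_prev, z = z, z_next
    if k % 500 == 0:
        print(k, np.linalg.norm(V(z)))               # residual ||V(z_k)|| should decay
```

On this example plain gradient descent-ascent diverges, while the optimistic correction term makes the residual ‖V(z_k)‖ decay; the talk's contribution concerns how much faster this decay can be made with vanishing damping and Nesterov-type momentum.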
Oxford Mathematician Roger Heath-Brown has been appointed Officer of the Order of the British Empire (OBE) for services to Mathematics and Mathematical Research in the 2024 New Year Honours List.
Roger Heath-Brown is one of the foremost analytic number theorists of his generation. His important works on prime numbers and related topics include, among many others: