Mon, 20 Nov 2023

14:00 - 15:00
Lecture Room 6

Meta Optimization

Prof. Elad Hazan
(Princeton University and Google DeepMind)
Abstract

How can we find and apply the best optimization algorithm for a given problem? This question is as old as mathematical optimization itself, and it is notoriously hard: even special cases, such as finding the optimal learning rate for gradient descent, are nonconvex in general.

In this talk we will discuss a dynamical systems approach to this question. We start by discussing an emerging paradigm in differentiable reinforcement learning called “online nonstochastic control”. The new approach applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. We then show how this methodology can yield global guarantees for learning the best algorithm in certain cases of stochastic and online optimization. 

No background is required for this talk, but relevant material can be found in a new text on online control and a paper on meta optimization.
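As a toy illustration of the online convex optimization machinery the talk builds on, here is a minimal sketch of projected online gradient descent over the unit ball, in plain NumPy. The loss stream, step size, and function names are invented for this example and are not from the talk.

```python
import numpy as np

def online_gradient_descent(grads, x0, eta):
    """Projected OGD on the unit ball: x_{t+1} = Proj(x_t - eta * g_t(x_t))."""
    x = np.array(x0, dtype=float)
    iterates = [x.copy()]
    for g in grads:
        x = x - eta * g(x)
        norm = np.linalg.norm(x)
        if norm > 1.0:          # project back onto the unit ball
            x /= norm
        iterates.append(x.copy())
    return iterates

# Toy stream of quadratic losses f_t(x) = ||x - c_t||^2 with random centres c_t.
rng = np.random.default_rng(0)
centres = [rng.normal(size=2) * 0.3 for _ in range(50)]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centres]

iterates = online_gradient_descent(grads, x0=[1.0, 0.0], eta=0.1)
# The average iterate drifts towards the mean of the loss centres.
avg = np.mean(iterates, axis=0)
print(avg)
```

The step size eta here is fixed by hand; the point of meta optimization, as the abstract notes, is that even choosing this one hyperparameter optimally is a nonconvex problem in general.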

 


Thu, 19 Oct 2023
16:00
L5

Siegel modular forms and algebraic cycles

Aleksander Horawa
(Oxford University)
Abstract

(Joint work with Kartik Prasanna)

Siegel modular forms are higher-dimensional analogues of modular forms. While each rational elliptic curve corresponds to a single holomorphic modular form, each abelian surface is expected to correspond to a pair of Siegel modular forms: a holomorphic and a generic one. We propose a conjecture that explains the appearance of these two forms (in the cohomology of vector bundles on Siegel modular threefolds) in terms of certain higher algebraic cycles on the self-product of the abelian surface. We then prove three results:
(1) The conjecture is implied by Beilinson's conjecture on special values of L-functions. Among other ingredients, this uses a recent analytic result of Radziwiłł-Yang on the non-vanishing of twists of L-functions for GL(4).
(2) The conjecture holds for abelian surfaces associated with elliptic curves over real quadratic fields.
(3) The conjecture implies a conjecture of Prasanna-Venkatesh for abelian surfaces associated with elliptic curves over imaginary quadratic fields.

Mon, 13 Nov 2023

14:00 - 15:00
Lecture Room 6

No Seminar


Mon, 23 Oct 2023

14:00 - 15:00
Lecture Room 6

Tractable Riemannian Optimization via Randomized Preconditioning and Manifold Learning

Boris Shustin
(Mathematical Institute University of Oxford)
Abstract

Optimization problems constrained to manifolds are prevalent across science and engineering. For example, they arise in (generalized) eigenvalue problems, principal component analysis, and low-rank matrix completion, to name just a few. Riemannian optimization is a principled framework for solving optimization problems in which the desired optimum is constrained to a (Riemannian) manifold. Algorithms designed in this framework usually require some geometric description of the manifold, e.g., tangent spaces, retractions, and the Riemannian gradients and Hessians of the cost function. In some cases, however, these geometric components cannot be accessed, due to intractability or lack of information.

In this talk, we present methods for overcoming intractability and lack of information. We demonstrate the case of intractability on canonical correlation analysis (CCA) and on Fisher linear discriminant analysis (FDA). Using Riemannian optimization to solve CCA or FDA with the standard geometric components is as expensive as solving them via a direct solver. We address this shortcoming using a technique called Riemannian preconditioning, which amounts to changing the Riemannian metric on the constraining manifold. We use randomized numerical linear algebra to form efficient preconditioners that balance the computational cost of the geometric components against the asymptotic convergence of the iterative methods. If time permits, we also address the case of lack of information, where, e.g., the constraining manifold can be accessed only via samples from it. We propose a novel approach that allows approximate Riemannian optimization using a manifold learning technique.
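The geometric components named in the abstract (tangent spaces, retractions, Riemannian gradients) can be made concrete with a small sketch, assuming nothing from the talk itself: Riemannian gradient descent for the Rayleigh quotient on the unit sphere, in plain NumPy. The function name, step size, and iteration count are illustrative choices, not the speaker's method.

```python
import numpy as np

def riemannian_gd_sphere(A, x0, step=0.1, iters=500):
    """Minimise f(x) = x^T A x over the unit sphere.

    Geometric components:
      - tangent space at x: vectors v with x^T v = 0
      - Riemannian gradient: Euclidean gradient projected onto the tangent space
      - retraction: renormalisation back onto the sphere
    """
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        egrad = 2.0 * A @ x                 # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x     # projection onto the tangent space
        x = x - step * rgrad                # step in the tangent direction
        x = x / np.linalg.norm(x)           # retraction back onto the sphere
    return x

A = np.diag([3.0, 2.0, 0.5])
x = riemannian_gd_sphere(A, np.array([1.0, 1.0, 1.0]))
print(x @ A @ x)   # ≈ 0.5, the smallest eigenvalue of A
```

For this toy eigenvalue problem every geometric component is cheap in closed form; the abstract's point is that for problems such as CCA and FDA the analogous components are as expensive as a direct solver, which is what Riemannian preconditioning addresses.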

Mon, 09 Oct 2023

14:00 - 15:00
Lecture Room 6

Mathematics of transfer learning and transfer risk: from medical to financial data analysis

Prof. Xin Guo
(University of California Berkeley)
Abstract

Transfer learning is an emerging and popular paradigm for utilizing existing knowledge from previous learning tasks to improve the performance of new ones. In this talk, we will first present transfer learning in the early diagnosis of eye diseases: diabetic retinopathy and retinopathy of prematurity.

We will then discuss how this empirical study leads to a mathematical analysis of the feasibility and transferability issues in transfer learning. We show how a mathematical framework for the general procedure of transfer learning helps establish the feasibility of transfer learning as well as the analysis of the associated transfer risk, with applications to financial time series data.
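As a self-contained sketch of the transfer-learning idea (not the speaker's framework), one can warm-start a data-scarce target task from a model fitted on an abundant source task, here via ridge regression that shrinks towards the transferred weights. All names and data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20

# Source task: abundant data generated from weights w_src.
w_src = rng.normal(size=d)
Xs = rng.normal(size=(2000, d))
ys = Xs @ w_src + 0.1 * rng.normal(size=2000)

# Target task: only 10 samples, generated from nearby weights w_tgt.
w_tgt = w_src + 0.1 * rng.normal(size=d)
Xt = rng.normal(size=(10, d))
yt = Xt @ w_tgt + 0.1 * rng.normal(size=10)

def ridge(X, y, lam, w0=None):
    """Ridge regression shrinking towards w0 (zero if no transfer)."""
    if w0 is None:
        w0 = np.zeros(X.shape[1])
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y + lam * w0)

w_source = ridge(Xs, ys, lam=1.0)                 # learned on the source task
w_scratch = ridge(Xt, yt, lam=1.0)                # target data only, no transfer
w_transfer = ridge(Xt, yt, lam=1.0, w0=w_source)  # shrink towards source weights

err = lambda w: np.linalg.norm(w - w_tgt)
print(err(w_scratch), err(w_transfer))  # transfer should give the smaller error
```

The gap between the two errors is a crude stand-in for transferability: when the source and target weights are close, shrinking towards the transferred model helps; when the tasks are unrelated, the same shrinkage hurts, which is the kind of trade-off the transfer-risk analysis in the talk quantifies.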

Mon, 16 Oct 2023

14:00 - 15:00
Lecture Room 6
Higman-Thompson groups and profinite properties of right-angled Coxeter groups

S Corson, S Hughes, P Möller, O Varghese (12 Sep 2023) http://arxiv.org/abs/2309.06213v1
Mon, 20 Nov 2023
14:15
L4

A theory of type B/C/D enumerative invariants

Chenjing Bu
(Oxford)
Abstract

We propose a theory of enumerative invariants for structure groups of type B/C/D, that is, for the orthogonal and symplectic groups. For example, we count orthogonal or symplectic principal bundles on projective varieties, and there is also a quiver analogue called self-dual quiver representations. We discuss two different flavours of these invariants, namely, motivic invariants and homological invariants, the former of which can be used to define Donaldson–Thomas invariants in type B/C/D. We also discuss algebraic structures arising from the relevant moduli spaces, including Hall algebras, Joyce's vertex algebras, and modules for these algebras, which are used to write down wall-crossing formulae for our invariants.
