Mon, 27 Nov 2023

14:00 - 15:00
Lecture Room 6

Towards Reliable Solutions of Inverse Problems with Deep Learning

Prof. Matthias Ehrhardt
(University of Bath)
Abstract

Deep learning has revolutionised many scientific fields, so it is no surprise that state-of-the-art solutions to several inverse problems also rely on this technology. However, for many inverse problems (e.g. in medical imaging), stability and reliability are particularly important.

Furthermore, unlike in other image analysis tasks, usually only a fairly small amount of training data is available for training image reconstruction algorithms.

Thus, we require tailored solutions which maximise the potential of all available ingredients: data, domain knowledge and mathematical analysis. In this talk we discuss a range of such hybrid approaches and, along the way, encounter connections to topics such as generative models, convex optimization, differential equations and equivariance.
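
To fix ideas, here is a minimal illustrative sketch (not from the talk) of the variational viewpoint behind many such hybrid methods: gradient descent on a data-fidelity term plus a regulariser for a linear inverse problem y = A x + noise. The hand-coded smoothness prior grad_R is a stand-in; in learned approaches its role would be played by a trained network. All names and settings below are assumptions for illustration.

```python
# Illustrative sketch: variational reconstruction for y = A x + noise via
# gradient descent on J(x) = 0.5 * ||A x - y||^2 + lam * R(x).
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 30
A = rng.standard_normal((m, n))              # forward operator (assumed known)
x_true = np.zeros(n); x_true[10:20] = 1.0    # piecewise-constant ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)

def grad_R(x):
    # Gradient of the smoothness prior R(x) = 0.5 * sum_i (x[i+1] - x[i])^2.
    # A learned regulariser would supply this gradient instead.
    g = np.zeros_like(x)
    d = np.diff(x)
    g[:-1] -= d
    g[1:] += d
    return g

x, lam, step = np.zeros(n), 0.1, 1e-3
for _ in range(2000):
    x -= step * (A.T @ (A @ x - y) + lam * grad_R(x))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```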

Thu, 26 Oct 2023
16:00
L5

The sum-product problem for integers with few prime factors (joint work with Hanson, Rudnev, Zhelezov)

Ilya Shkredov
(LIMS)
Abstract

It was asked by E. Szemerédi whether, for a finite set $A\subset \mathbf{Z}$, one can improve estimates for $\max\{|A+A|,|A\cdot A|\}$ under the constraint that all integers involved have a bounded number of prime factors, that is, each $a\in A$ satisfies $\omega(a)\leq k$. We show that this maximum is at least of order $|A|^{\frac{5}{3}-o(1)}$ provided $k\leq (\log|A|)^{1-\varepsilon}$ for some $\varepsilon>0$. In fact, this follows from an estimate for the additive energy which is best possible up to factors of size $|A|^{o(1)}$. The proof consists of three parts: combinatorial, analytical and number-theoretic.
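
For readers unfamiliar with the notation, the following small computation (purely illustrative, not part of the proof) evaluates the quantities in the abstract, the sumset $A+A$, the product set $A\cdot A$, and $\omega(a)$, on a toy set whose elements all have at most two distinct prime factors. It assumes SymPy's primefactors is available.

```python
# Illustrative computation of |A+A|, |A*A|, and omega(a) for a toy set A.
from sympy import primefactors

A = {2, 3, 4, 6, 8, 9, 12, 16, 18, 24}   # every element is 2^a * 3^b

sumset = {a + b for a in A for b in A}   # A + A
prodset = {a * b for a in A for b in A}  # A * A
k = max(len(primefactors(a)) for a in A) # max number of distinct prime factors

print(f"|A| = {len(A)}, |A+A| = {len(sumset)}, |A*A| = {len(prodset)}, k = {k}")
print("max(|A+A|, |A*A|) =", max(len(sumset), len(prodset)))
```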


Mon, 20 Nov 2023

14:00 - 15:00
Lecture Room 6

Meta Optimization

Prof. Elad Hazan
(Princeton University and Google DeepMind)
Abstract

How can we find and apply the best optimization algorithm for a given problem? This question is as old as mathematical optimization itself, and it is notoriously hard: even special cases, such as finding the optimal learning rate for gradient descent, are nonconvex in general.
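
As a concrete, purely illustrative baseline (the objective and settings below are assumptions, not from the talk), the naive approach grid-searches the learning rate, paying a full optimization run per candidate; the resulting map from learning rate to final loss is itself generally nonconvex, which is the difficulty alluded to above.

```python
# Naive meta-optimization baseline: grid search over the learning rate,
# where evaluating each candidate requires a full gradient-descent run.
import numpy as np

def loss(x):
    return x**2 + 3 * np.sin(x)**2           # toy nonconvex objective

def grad(x):
    return 2 * x + 3 * np.sin(2 * x)         # its derivative

def final_loss(eta, x0=3.0, steps=100):
    x = x0
    for _ in range(steps):
        x -= eta * grad(x)                    # one full training run per eta
    return loss(x)

etas = np.linspace(0.01, 1.0, 25)
results = [(eta, final_loss(eta)) for eta in etas]
best_eta, best = min(results, key=lambda t: t[1])
print(f"best learning rate on the grid: {best_eta:.3f} (final loss {best:.4f})")
# The map eta -> final_loss(eta) is itself generally nonconvex,
# so even this one-dimensional tuning problem has no easy structure.
```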

In this talk we will discuss a dynamical systems approach to this question. We start by discussing an emerging paradigm in differentiable reinforcement learning called “online nonstochastic control”. The new approach applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. We then show how this methodology can yield global guarantees for learning the best algorithm in certain cases of stochastic and online optimization. 
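
For readers new to the area, here is a minimal, hedged sketch of online gradient descent, the basic online convex optimization routine on which such methods build. The loss sequence is synthetic and the setup is an illustration of the generic regret framework, not the talk's control setting.

```python
# Online gradient descent (OGD) on a stream of convex losses
# f_t(x) = 0.5 * (a_t . x - a_t . theta)^2, revealed one round at a time.
import numpy as np

rng = np.random.default_rng(1)
d, T = 5, 500
theta = rng.standard_normal(d)            # hidden comparator (best fixed decision)

x = np.zeros(d)
total_loss = 0.0
for t in range(1, T + 1):
    a = rng.standard_normal(d)
    a /= np.linalg.norm(a)                # unit-norm streaming data vector
    err = a @ x - a @ theta               # f_t is revealed after playing x
    total_loss += 0.5 * err**2
    x -= (1.0 / np.sqrt(t)) * err * a     # OGD step with eta_t = 1/sqrt(t)

# theta achieves zero total loss here, so the accumulated loss is the regret.
print(f"average regret after {T} rounds: {total_loss / T:.4f}")
```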

No background is required for this talk, but relevant material can be found in the speaker's recent text on online control and paper on meta-optimization.


Thu, 19 Oct 2023
16:00
L5

Siegel modular forms and algebraic cycles

Aleksander Horawa
(Oxford University)
Abstract

(Joint work with Kartik Prasanna)

Siegel modular forms are higher-dimensional analogues of modular forms. While each rational elliptic curve corresponds to a single holomorphic modular form, each abelian surface is expected to correspond to a pair of Siegel modular forms: a holomorphic and a generic one. We propose a conjecture that explains the appearance of these two forms (in the cohomology of vector bundles on Siegel modular threefolds) in terms of certain higher algebraic cycles on the self-product of the abelian surface. We then prove three results:
(1) The conjecture is implied by Beilinson's conjecture on special values of L-functions. Among other ingredients, this uses a recent analytic result of Radziwiłł-Yang about the non-vanishing of twists of L-functions for GL(4).
(2) The conjecture holds for abelian surfaces associated with elliptic curves over real quadratic fields.
(3) The conjecture implies a conjecture of Prasanna-Venkatesh for abelian surfaces associated with elliptic curves over imaginary quadratic fields.

Mon, 13 Nov 2023

14:00 - 15:00
Lecture Room 6

No Seminar


Mon, 06 Nov 2023

14:00 - 15:00
Lecture Room 6

Mon, 30 Oct 2023

14:00 - 15:00
Lecture Room 6

Mon, 23 Oct 2023

14:00 - 15:00
Lecture Room 6

Tractable Riemannian Optimization via Randomized Preconditioning and Manifold Learning

Boris Shustin
(Mathematical Institute, University of Oxford)
Abstract

Optimization problems constrained to manifolds are prevalent across science and engineering; for example, they arise in (generalized) eigenvalue problems, principal component analysis, and low-rank matrix completion, to name a few. Riemannian optimization is a principled framework for solving such problems, where the desired optimum is constrained to a (Riemannian) manifold. Algorithms designed in this framework usually require a geometric description of the manifold and the cost function, e.g., tangent spaces, retractions, Riemannian gradients, and Riemannian Hessians. However, in some cases these geometric components cannot all be accessed, due to intractability or lack of information.
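
As a minimal illustrative instance of these ingredients (a sketch under assumed settings, not from the talk), the snippet below maximizes the Rayleigh quotient over the unit sphere: projecting the Euclidean gradient onto the tangent space gives the Riemannian gradient, and renormalization serves as the retraction. The maximizer is the leading eigenvector, tying back to the eigenvalue problems mentioned above.

```python
# Riemannian gradient ascent on the unit sphere S^{n-1} for f(x) = x^T M x.
import numpy as np

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
M = (M @ M.T) / n                     # symmetric PSD cost matrix

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                # starting point on the manifold
for _ in range(1000):
    egrad = 2 * M @ x                 # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x   # Riemannian gradient: tangent projection
    x = x + 0.1 * rgrad               # ascent step along the tangent direction
    x /= np.linalg.norm(x)            # retraction: pull back onto the sphere

print("Rayleigh quotient:", x @ M @ x)
print("top eigenvalue:   ", np.linalg.eigvalsh(M)[-1])
```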

In this talk, we present methods that overcome such intractability and lack of information. We demonstrate the case of intractability on canonical correlation analysis (CCA) and Fisher linear discriminant analysis (FDA): using Riemannian optimization to solve CCA or FDA with the standard geometric components is as expensive as solving them with a direct solver. We address this shortcoming using a technique called Riemannian preconditioning, which amounts to changing the Riemannian metric on the constraining manifold; we use randomized numerical linear algebra to form efficient preconditioners that balance the computational cost of the geometric components against the asymptotic convergence of the iterative methods. If time permits, we also address the case of lack of information, where, e.g., the constraining manifold can be accessed only via samples of it; we propose a novel approach that allows approximate Riemannian optimization using a manifold learning technique.
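
In the same spirit, here is the classical sketch-and-precondition construction from randomized numerical linear algebra (an illustrative example of randomized preconditioning in general, not the talk's Riemannian construction): QR-factorizing a small Gaussian sketch of an ill-conditioned matrix yields a cheap preconditioner that makes the preconditioned matrix well-conditioned.

```python
# Randomized preconditioning via a Gaussian sketch: QR of S @ X gives R,
# and X @ inv(R) is well-conditioned even when X is not.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 50
# Tall matrix with singular values spread over four orders of magnitude.
X = rng.standard_normal((n, d)) @ np.diag(np.logspace(0, 4, d))

s = 4 * d                                      # sketch size: a small multiple of d
S = rng.standard_normal((s, n)) / np.sqrt(s)   # Gaussian sketching matrix
_, R = np.linalg.qr(S @ X)                     # QR of the small sketched matrix only

print(f"cond(X)      = {np.linalg.cond(X):.2e}")
print(f"cond(X R^-1) = {np.linalg.cond(X @ np.linalg.inv(R)):.2e}")
```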


Mon, 09 Oct 2023

14:00 - 15:00
Lecture Room 6

Mathematics of transfer learning and transfer risk: from medical to financial data analysis

Prof. Xin Guo
(University of California Berkeley)
Abstract

Transfer learning is an emerging and popular paradigm for utilizing existing knowledge from previous learning tasks to improve the performance of new ones. In this talk, we first present transfer learning in the early diagnosis of eye diseases: diabetic retinopathy and retinopathy of prematurity.

We then discuss how this empirical study leads to a mathematical analysis of the feasibility and transferability issues in transfer learning. We show how a mathematical framework for the general procedure of transfer learning helps establish its feasibility and supports the analysis of the associated transfer risk, with applications to financial time-series data.
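
One simple way such a transfer mechanism can be instantiated (a schematic sketch under synthetic data, not the speaker's framework) is transfer via regularization: fit the target parameters with a ridge penalty that shrinks toward the source-task solution, so that scarce target data is supplemented by source knowledge. Comparing against fitting the target from scratch gives a crude sense of the transfer benefit.

```python
# Transfer by shrinking toward the source solution:
# minimize ||X w - y||^2 + lam * ||w - w0||^2, with w0 learned on the source task.
import numpy as np

rng = np.random.default_rng(0)
d, n_src, n_tgt = 20, 2000, 15
w_src_true = rng.standard_normal(d)
w_tgt_true = w_src_true + 0.1 * rng.standard_normal(d)   # related tasks

def make(n, w):
    X = rng.standard_normal((n, d))
    return X, X @ w + 0.1 * rng.standard_normal(n)

def ridge(X, y, lam, w0):
    # Minimizer of ||X w - y||^2 + lam * ||w - w0||^2 (normal equations).
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

Xs, ys = make(n_src, w_src_true)   # plentiful source data
Xt, yt = make(n_tgt, w_tgt_true)   # scarce target data (n_tgt < d)

w_src = ridge(Xs, ys, 1.0, np.zeros(d))       # source-task solution
w_scratch = ridge(Xt, yt, 1.0, np.zeros(d))   # target only, no transfer
w_transfer = ridge(Xt, yt, 1.0, w_src)        # shrink toward the source solution

for name, w in [("scratch ", w_scratch), ("transfer", w_transfer)]:
    print(name, "error:", np.linalg.norm(w - w_tgt_true))
```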
