Tue, 23 Jan 2024

14:00 - 14:30
L6

Scalable Gaussian Process Regression with Quadrature-based Features

Paz Fink Shustin
(Oxford)
Abstract

Gaussian processes provide a powerful probabilistic kernel-learning framework that enables high-quality nonparametric learning via methods such as Gaussian process regression. Nevertheless, the learning phase requires prohibitively expensive computations for large datasets. In this talk, we present a quadrature-based approach for scaling up Gaussian process regression via a low-rank approximation of the kernel matrix. The low-rank structure is exploited for effective hyperparameter learning, training, and prediction. Our Gauss-Legendre features method is inspired by the well-known random Fourier features approach, which likewise builds low-rank approximations via numerical integration. However, our method generates a high-quality kernel approximation using a number of features that is only poly-logarithmic in the number of training points, whereas a comparable guarantee with random Fourier features requires a number of features that is at least linear in the number of training points. The utility of our method for learning with low-dimensional datasets is demonstrated in numerical experiments.
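
To make the construction concrete, here is a minimal one-dimensional sketch (not the speaker's implementation; the truncation range, lengthscale, noise level, and feature count are illustrative choices): the spectral integral of a squared-exponential kernel is discretized with Gauss-Legendre nodes and weights, and the resulting low-rank feature matrix is used for GP regression through the Woodbury identity.

    import numpy as np

    def gauss_legendre_features(x, m=40, lengthscale=1.0, omega_max=None):
        """Features Z of shape (n, 2m) with Z @ Z.T approximating the squared-exponential kernel."""
        if omega_max is None:
            omega_max = 5.0 / lengthscale          # truncation range for the spectral integral
        t, w = np.polynomial.legendre.leggauss(m)  # Gauss-Legendre nodes/weights on [-1, 1]
        omega, w = omega_max * t, omega_max * w    # rescale to [-omega_max, omega_max]
        # spectral density of k(r) = exp(-r^2 / (2 l^2))
        s = (lengthscale / np.sqrt(2 * np.pi)) * np.exp(-0.5 * (lengthscale * omega) ** 2)
        scale = np.sqrt(w * s)                     # per-node feature weight
        ang = np.outer(x, omega)
        return np.hstack([scale * np.cos(ang), scale * np.sin(ang)])

    def gp_mean(x_train, y_train, x_test, noise=1e-2, **kw):
        """GP regression posterior mean computed through the low-rank feature approximation."""
        Z, Zs = gauss_legendre_features(x_train, **kw), gauss_legendre_features(x_test, **kw)
        A = noise * np.eye(Z.shape[1]) + Z.T @ Z   # small (2m x 2m) system instead of (n x n)
        alpha = (y_train - Z @ np.linalg.solve(A, Z.T @ y_train)) / noise  # Woodbury identity
        return Zs @ (Z.T @ alpha)

    x = np.linspace(0.0, 5.0, 200)
    y = np.sin(2.0 * x) + 0.1 * np.random.randn(x.size)
    print(gp_mean(x, y, np.array([1.0, 2.5]), lengthscale=0.5))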

Tue, 24 Oct 2023

14:30 - 15:00
VC

Redefining the finite element

India Marsden
(Oxford)
Abstract

The Ciarlet definition of a finite element has been used for many years to describe the requisite parts of a finite element. In that time, finite element theory and implementation have both developed and improved, leaving scope for a redefinition of the concept. In this redefinition, we look to encapsulate some of the assumptions that have historically been required to complete Ciarlet’s definition, as well as to incorporate more information, in particular about the symmetries of finite elements, using concepts from group theory. This talk will present the machinery of the proposed new definition, discuss its features and provide some examples of commonly used elements.
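
For orientation, the classical Ciarlet definition that the talk takes as its starting point can be stated as follows (the standard textbook formulation, not the redefinition proposed in the talk):

    % Ciarlet's classical definition: a finite element is a triple consisting of a
    % cell, a shape-function space, and a set of degrees of freedom.
    \text{A finite element is a triple } (K, \mathcal{V}, \mathcal{L}) \text{, where}
    \begin{itemize}
      \item $K \subset \mathbb{R}^d$ is a bounded, closed set with non-empty interior
            and piecewise-smooth boundary (the cell),
      \item $\mathcal{V}$ is a finite-dimensional space of functions on $K$
            (the shape functions),
      \item $\mathcal{L} = \{\ell_1, \dots, \ell_k\}$ is a basis for the dual space
            $\mathcal{V}'$ (the degrees of freedom).
    \end{itemize}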

Tue, 23 Jan 2024

14:30 - 15:00
L6

Manifold-Free Riemannian Optimization

Boris Shustin
(Mathematical Institute, University of Oxford)
Abstract

Optimization problems constrained to a smooth manifold can be solved via the framework of Riemannian optimization. To that end, a geometrical description of the constraining manifold, e.g., tangent spaces, retractions, and cost function gradients, is required. In this talk, we present a novel approach that allows performing approximate Riemannian optimization based on a manifold learning technique, in cases where only a noiseless sample set of the cost function and the manifold’s intrinsic dimension are available.
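
By way of background, the sketch below shows the standard ingredients of Riemannian gradient descent that the abstract lists (a tangent-space projection of the Euclidean gradient and a retraction), hard-coded for the unit sphere. It is not the manifold-free method of the talk, which is designed precisely to avoid requiring these maps in closed form; the example cost, step size, and iteration count are illustrative.

    import numpy as np

    def riemannian_gd_sphere(egrad, x0, step=0.1, iters=100):
        """Riemannian gradient descent on the unit sphere, given the Euclidean gradient."""
        x = x0 / np.linalg.norm(x0)
        for _ in range(iters):
            g = egrad(x)
            rgrad = g - (x @ g) * x       # project the gradient onto the tangent space at x
            x = x - step * rgrad          # take a step in the tangent direction
            x = x / np.linalg.norm(x)     # retraction: map the iterate back onto the sphere
        return x

    # Illustrative cost: minimize -x^T A x on the sphere (leading eigenvector of A).
    A = np.diag([3.0, 1.0, 0.5])
    print(riemannian_gd_sphere(lambda x: -2.0 * A @ x, np.array([1.0, 1.0, 1.0])))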

Tue, 24 Oct 2023

14:00 - 14:30
VC

Analysis and Numerical Approximation of Mean Field Game Partial Differential Inclusions

Yohance Osborne
(UCL)
Abstract

The PDE formulation of Mean Field Games (MFG) is described by nonlinear systems in which a Hamilton-Jacobi-Bellman (HJB) equation and a Kolmogorov-Fokker-Planck (KFP) equation are coupled. The advective term of the KFP equation involves a partial derivative of the Hamiltonian that is often assumed to be continuous. However, in many cases of practical interest, the underlying optimal control problem of the MFG may give rise to bang-bang controls, which typically lead to nondifferentiable Hamiltonians. In this talk we present results on the analysis and numerical approximation of second-order MFG systems for the general case of convex, Lipschitz, but possibly nondifferentiable Hamiltonians.
In particular, we propose a generalization of the MFG system as a Partial Differential Inclusion (PDI) based on interpreting the partial derivative of the Hamiltonian in terms of subdifferentials of convex functions.
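
Schematically, the coupled system and its PDI relaxation can be written as follows (a shorthand of the time-dependent setting; the talk's precise assumptions on the coupling and on boundary, initial, and terminal data may differ):

    % Second-order MFG system (HJB coupled with KFP); nu > 0 is the diffusion
    % parameter and F[m] the coupling term. The PDI relaxation replaces the
    % derivative D_p H in the advective term by an element b of the convex
    % subdifferential of H in its gradient argument.
    \begin{aligned}
      -\partial_t u - \nu \Delta u + H(x, \nabla u) &= F[m](x,t),\\
       \partial_t m - \nu \Delta m - \operatorname{div}\!\big( m\,\boldsymbol{b} \big) &= 0,
       \qquad \boldsymbol{b}(x,t) \in \partial_p H\big(x, \nabla u(x,t)\big).
    \end{aligned}
    % When H is differentiable, b = D_p H(x, \nabla u) and the classical MFG
    % system is recovered.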

We present theorems that guarantee the existence of unique weak solutions to MFG PDIs under a monotonicity condition similar to one that has been considered previously by Lasry & Lions. Moreover, we introduce a monotone finite element discretization of the weak formulation of MFG PDIs and prove the strong convergence of the approximations to the value function in the H^1-norm and the strong convergence of the approximations to the density function in L^q-norms. We conclude the talk with some numerical experiments involving non-smooth solutions.

This is joint work with my supervisor Iain Smears. 


Mon, 09 Oct 2023
14:15
L4

How homotopy theory helps to classify algebraic vector bundles

Mura Yakerson
(Oxford)
Abstract

Classically, topological vector bundles are classified by homotopy classes of maps into infinite Grassmannians. This allows us to study topological vector bundles using obstruction theory: we can detect whether a vector bundle has a trivial subbundle by means of cohomological invariants. In the context of algebraic geometry, one can ask whether algebraic vector bundles over smooth affine varieties can be classified in a similar way. Recent advances in motivic homotopy theory give a positive answer, at least over an algebraically closed base field. Moreover, the behaviour of vector bundles over general base fields has surprising connections with the theory of quadratic forms.
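
In symbols, the classical statement and the motivic analogue alluded to read roughly as follows (hypotheses abbreviated):

    % Topological case: X a paracompact space, rank-n complex bundles.
    % Motivic case (roughly): Spec R a smooth affine variety over a suitable field,
    % with [-,-]_{A^1} denoting A^1-homotopy classes of maps.
    \mathrm{Vect}_n^{\mathrm{top}}(X) \;\cong\; \big[\,X,\ \mathrm{Gr}_n(\mathbb{C}^{\infty})\,\big],
    \qquad
    \mathrm{Vect}_n(\operatorname{Spec} R) \;\cong\; \big[\operatorname{Spec} R,\ \mathrm{Gr}_n\,\big]_{\mathbb{A}^1}.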

Tue, 21 Nov 2023

14:00 - 15:00
L5

Proximal Galerkin: A Structure-Preserving Finite Element Method for Pointwise Bound Constraints

Brendan Keith
(Brown University)
Abstract

The proximal Galerkin finite element method is a high-order, nonlinear numerical method that preserves the geometric and algebraic structure of bound constraints in infinite-dimensional function spaces. In this talk, we will introduce the proximal Galerkin method and apply it to solve free-boundary problems, enforce discrete maximum principles, and develop scalable, mesh-independent algorithms for optimal design. The proximal Galerkin framework is a natural consequence of the latent variable proximal point (LVPP) method, which is a stable and robust alternative to the interior point method that will also be introduced in this talk.

In particular, LVPP is a low-iteration complexity, infinite-dimensional optimization algorithm that may be viewed as having an adaptive barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of the main benefits of this algorithm is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of semilinear partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout the talk, we will arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and an infinite-dimensional Lie group; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization.
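
As one way to see how a variational inequality can be traded for a sequence of semilinear PDEs, consider the obstacle problem of minimizing J(u) = ½∫|∇u|² − ∫fu over u ≥ φ. A Bregman proximal iteration with an entropy adapted to the bound (a schematic reading of the abstract; the step sizes α_k and the particular entropy are illustrative and not necessarily those of [1]) takes the form:

    % Proximal step with an entropic Bregman divergence D adapted to u >= phi:
    u^{k} \;=\; \operatorname*{arg\,min}_{u \ge \varphi}
        \Big\{ \alpha_k\, J(u) + D\big(u, u^{k-1}\big) \Big\},
    \qquad
    D(u, v) \;=\; \int_\Omega \Big[ (u-\varphi)\ln\frac{u-\varphi}{v-\varphi} - (u - v) \Big]\, dx .
    % Its first-order optimality condition is a semilinear PDE in u^k, with the
    % latent variable psi^k := ln(u^k - varphi):
    \alpha_k \big( -\Delta u^{k} - f \big) + \ln\!\big(u^{k} - \varphi\big)
      \;=\; \ln\!\big(u^{k-1} - \varphi\big) \quad \text{in } \Omega .

Each outer iteration thus solves a smooth PDE whose logarithmic nonlinearity acts as an adaptive barrier keeping the iterate strictly above the obstacle.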

The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis. This talk is based on [1].

 

Keywords: pointwise bound constraints, bound-preserving discretization, entropy regularization, proximal point

 

Mathematics Subject Classifications (2010): 49M37, 65K15, 65N30

 

References

[1] B. Keith and T. M. Surowiec. Proximal Galerkin: A structure-preserving finite element method for pointwise bound constraints. arXiv preprint arXiv:2307.12444, 2023.

