Wed, 14 Feb 2024

16:00 - 17:00
L6

One-ended graph braid groups and where to find them

Ruta Sliazkaite
(University of Warwick)
Abstract

Graph braid groups are similar to braid groups, except that they are defined as ‘braids’ on a graph rather than in the real plane. We can think of a graph braid group in terms of the discrete configuration space of the graph, which is a CW-complex, and one can compute a presentation of the group using discrete Morse theory. In this talk I will give a few examples of how to compute these presentations in terms of generating circuits of the graph. I will then go through a detailed example of a graph that gives a one-ended braid group.
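
As a minimal sketch of the first ingredient (editorial illustration, not from the talk; the choice of K4 and all names below are assumptions), one can enumerate the cells of Abrams' discretized configuration space of two points on a graph, the CW-complex on which the Morse-theoretic presentation is built:

```python
# Minimal sketch (editorial, not from the talk): cells of Abrams'
# discretized configuration space UD_2(Gamma) of two points on a graph.
# Cells are collections of vertices/edges of Gamma with pairwise
# disjoint closures; Gamma = K4 is a hypothetical example.
from itertools import combinations
import networkx as nx

Gamma = nx.complete_graph(4)
V = list(Gamma.nodes)
E = [tuple(e) for e in Gamma.edges]

cells0 = list(combinations(V, 2))                      # two distinct vertices
cells1 = [(e, v) for e in E for v in V if v not in e]  # edge + disjoint vertex
cells2 = [(e, f) for e, f in combinations(E, 2)
          if not set(e) & set(f)]                      # two disjoint edges

chi = len(cells0) - len(cells1) + len(cells2)
print(len(cells0), len(cells1), len(cells2), "Euler characteristic:", chi)
# Discrete Morse theory in the style of Farley-Sabalka collapses this
# complex and reads off a presentation of B_2(Gamma) whose generators
# come from critical 1-cells.
```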

Wed, 07 Feb 2024

16:00 - 17:00
L6

Relationships between hyperbolic and classical knot invariants

Colin McCulloch
(University of Oxford)
Abstract

For a hyperbolic knot there are two types of invariants: the hyperbolic invariants coming from the geometric structure, and the classical invariants coming from the topology or combinatorics. It has been observed in many different cases that these seemingly different types of invariants are in fact related. I will give examples of these relationships and discuss, in particular, a result of Stoimenow linking the determinant and the volume.
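
As a hedged illustration (assuming the SnapPy/Spherogram Python packages; the method names follow their documentation but should be treated as assumptions), one can tabulate both invariants for a few small knots and watch the volume grow roughly with log det:

```python
# Hedged sketch, assuming SnapPy is installed (pip install snappy).
# Tabulate hyperbolic volume vs. the classical determinant |Delta_K(-1)|
# for a few small hyperbolic knots; Stoimenow-type results relate vol(K)
# to log det(K) for suitable families of knots.
import math
import snappy

for name in ["4_1", "5_2", "6_1", "7_4", "8_15"]:
    vol = snappy.Manifold(name).volume()    # hyperbolic invariant
    det = snappy.Link(name).determinant()   # classical invariant
    print(f"{name}: vol = {vol:.4f}, det = {det}, log det = {math.log(det):.4f}")
```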

Wed, 31 Jan 2024

16:00 - 17:00
L6

Distinguishing free-by-(finite cyclic) groups by their finite quotients

Paweł Piwek
(University of Oxford)
Abstract
Finitely generated free-by-(finite cyclic) groups turn out to be distinguished from one another by their finite quotients, thanks to being tightly constrained by their finite subgroups and the centralisers of those subgroups. As a consequence, free-by-cyclic groups with centre can be distinguished from each other in the same way. This is joint work with Martin Bridson.
Wed, 17 Jan 2024

16:00 - 17:00
L6

Spectra of surfaces and MCG actions on random covers

Adam Klukowski
(University of Oxford)
Abstract

The Ivanov conjecture is equivalent to the statement that every covering map of surfaces has the so-called Putman-Wieland property. I will discuss my recent work with Vlad Marković, in which we prove it for asymptotically all coverings as the degree grows. I will give an overview of our main tool, spectral geometry, which involves objects such as the heat kernel of a hyperbolic surface and the Cheeger constant.
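
As a discrete toy analogue (graphs standing in for surfaces; everything below is an editorial sketch, not from the paper), one can watch the spectral gap of random degree-n covers of a fixed base graph, together with the Cheeger-type lower bound h ≥ λ₂/2:

```python
# Editorial toy: spectral gaps of random degree-n covers of a fixed
# 4-regular base multigraph, with the Cheeger-type bound h >= lambda_2/2
# for the normalized Laplacian.
import numpy as np

rng = np.random.default_rng(0)

def cover_spectral_gap(base_edges, num_base_vertices, n):
    """Lift each base edge (u, v) by a uniform random permutation of the
    n sheets; return lambda_2 of the normalized Laplacian of the cover."""
    N = num_base_vertices * n
    A = np.zeros((N, N))
    for (u, v) in base_edges:
        sigma = rng.permutation(n)
        for i in range(n):
            a, b = u * n + i, v * n + sigma[i]
            A[a, b] += 1
            A[b, a] += 1
    d = A.sum(axis=1)
    Lap = np.eye(N) - A / np.sqrt(np.outer(d, d))
    return np.linalg.eigvalsh(Lap)[1]

base = [(0, 1), (0, 1), (0, 0), (1, 1)]  # 4-regular base with loops
for n in [5, 20, 80]:
    gap = cover_spectral_gap(base, 2, n)
    print(f"degree {n}: lambda_2 = {gap:.4f}, Cheeger bound h >= {gap / 2:.4f}")
```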

Tue, 05 Mar 2024

14:30 - 15:00
L6

Error Bounds on Singular Value Approximations by Generalized Nyström

Lorenzo Lazzarino
(Mathematical Institute, University of Oxford)
Abstract

We consider the problem of approximating the singular values of a matrix when approximations to its leading singular vectors are available. In particular, we focus on the Generalized Nyström (GN) method, a commonly used low-rank approximation, and on its error in extracting singular values. Like other approaches, the GN approximation can be interpreted as a perturbation of the original matrix. Up to orthogonal transformations, this perturbation has a peculiar structure that we wish to exploit. We therefore use the Jordan–Wielandt theorem and similarity transformations to generalize a matrix perturbation theory result on the eigenvalues of a perturbed Hermitian matrix. Combining the above, we derive a bound on the error of the GN singular value approximations. We conclude with preliminary numerical experiments; the aim is to study heuristically the sharpness of the bound, to give intuition on how the analysis can be used to compare different approaches, and to suggest how the bound could be made computable in practice.
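
A minimal NumPy sketch of the Generalized Nyström approximation and the singular-value error in question (a standard formulation of GN, not the speaker's code; the test matrix and sketch sizes are arbitrary assumptions):

```python
# Sketch of Generalized Nystrom:  A_hat = A X (Y^T A X)^+ Y^T A,
# with independent Gaussian sketches X (n x r) and Y (m x (r + p)).
# We compare the top singular values of A_hat with those of A.
import numpy as np

rng = np.random.default_rng(1)
m, n, r, p = 200, 150, 10, 5

# Test matrix with rapidly decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = U[:, :n] @ np.diag(s) @ V.T

X = rng.standard_normal((n, r))
Y = rng.standard_normal((m, r + p))
AX, YtA = A @ X, Y.T @ A
Z = np.linalg.lstsq(Y.T @ AX, YtA, rcond=None)[0]  # stable pseudoinverse solve
A_hat = AX @ Z

sv_true = np.linalg.svd(A, compute_uv=False)[:r]
sv_gn = np.linalg.svd(A_hat, compute_uv=False)[:r]
print("max singular-value error:", np.max(np.abs(sv_true - sv_gn)))
```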

Tue, 20 Feb 2024

14:30 - 15:00
L6

CMA Light: A Novel Minibatch Algorithm for Large-Scale Nonconvex Finite-Sum Optimization

Corrado Coppola
(Sapienza University of Rome)
Abstract
The supervised training of a deep neural network on a given dataset amounts to the unconstrained minimization of a finite sum of continuously differentiable functions, the loss over the samples. These functions depend on the network parameters and are typically nonconvex. We develop CMA Light, a new globally convergent minibatch gradient method to tackle this problem. We build on the recently introduced Controlled Minibatch Algorithm (CMA) framework and remove its main bottleneck: the need for at least one evaluation of the whole objective function per iteration. We prove global convergence of CMA Light under mild assumptions, and we discuss extensive computational results on the same experimental test bed used for CMA, showing that CMA Light requires less computational effort than most state-of-the-art optimizers. Finally, we present early results on a large-scale image classification task.
 
The reference preprint is available on arXiv: https://arxiv.org/abs/2307.15775
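
Schematically (a hypothetical simplification, not the authors' algorithm; see the preprint above for the real CMA Light), the point is that the acceptance test can reuse per-batch loss values already computed during the epoch instead of paying for a full pass over the objective:

```python
# Hypothetical sketch (not the authors' code): a minibatch gradient loop
# whose control test uses the running average of per-batch losses as a
# surrogate for the full objective, avoiding the extra full evaluation
# that plain CMA-style globalization schemes require.
import numpy as np

def minibatch_descent(grad_f, batch_loss, w0, batches, lr=0.1, epochs=20):
    w, f_bar_prev = w0.copy(), np.inf
    for _ in range(epochs):
        losses = []
        for b in batches:
            w -= lr * grad_f(w, b)           # standard minibatch step
            losses.append(batch_loss(w, b))  # by-product, no extra pass
        f_bar = np.mean(losses)              # surrogate for the full objective
        if f_bar > f_bar_prev:               # watchdog-style control
            lr *= 0.5                        # tighten the step size instead of
        f_bar_prev = f_bar                   # re-evaluating the whole finite sum
    return w

# Toy usage: least squares as a finite sum over batches.
rng = np.random.default_rng(0)
Xd, yd = rng.standard_normal((256, 5)), rng.standard_normal(256)
batches = [(Xd[i:i + 32], yd[i:i + 32]) for i in range(0, 256, 32)]
grad = lambda w, b: 2 * b[0].T @ (b[0] @ w - b[1]) / len(b[1])
loss = lambda w, b: np.mean((b[0] @ w - b[1]) ** 2)
print(minibatch_descent(grad, loss, np.zeros(5), batches))
```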
Tue, 06 Feb 2024

14:00 - 14:30
L6

Fast High-Order Finite Element Solvers on Simplices

Pablo Brubeck Martinez
(Mathematical Institute, University of Oxford)
Abstract

We present new high-order finite elements discretizing the $L^2$ de Rham complex on triangular and tetrahedral meshes. The finite elements discretize the same spaces as usual, but with different basis functions. They allow for fast linear solvers based on static condensation and space decomposition methods.

The new elements build upon the definition of degrees of freedom given by Demkowicz et al. (De Rham diagram for $hp$ finite element spaces. Comput. Math. Appl., 39(7-8):29--38, 2000), and consist of integral moments on a symmetric reference simplex with respect to a numerically computed polynomial basis that is orthogonal in both the $L^2$- and $H(\mathrm{d})$-inner products ($\mathrm{d} \in \{\mathrm{grad}, \mathrm{curl}, \mathrm{div}\}$).
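
The "orthogonal in both inner products" construction can be sketched in 1D (an interval rather than a simplex, monomials rather than the paper's moment-based degrees of freedom; a simplified assumption throughout): the generalized eigenproblem $Kv = \lambda Mv$ yields a basis with $V^T M V = I$ and $V^T K V$ diagonal.

```python
# 1D sketch of a polynomial basis simultaneously orthogonal in L^2 and
# in the H(grad) inner product on [-1, 1], via the generalized
# eigenproblem K v = lambda M v (M = mass matrix, K = stiffness matrix).
import numpy as np
from numpy.polynomial import Polynomial
from scipy.linalg import eigh

k = 6  # polynomial degree
basis = [Polynomial.basis(i) for i in range(k + 1)]  # monomials 1, x, ..., x^k

def inner(p, q):  # exact integral of p*q over [-1, 1]
    return (p * q).integ()(1.0) - (p * q).integ()(-1.0)

M = np.array([[inner(p, q) for q in basis] for p in basis])
K = np.array([[inner(p.deriv(), q.deriv()) for q in basis] for p in basis])

lam, V = eigh(K, M)  # eigh normalizes so that V^T M V = I
Mo, Ko = V.T @ M @ V, V.T @ K @ V
print(np.abs(Mo - np.eye(k + 1)).max(), np.abs(Ko - np.diag(lam)).max())
```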

On the symmetric reference simplex, the resulting stiffness matrix has a diagonal interior block and does not couple the interior and interface degrees of freedom. Thus, on the reference simplex, the Schur complement resulting from elimination of the interior degrees of freedom is simply the interface block itself.
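
A generic static-condensation sketch (illustrative linear algebra only, not the paper's solver): eliminate the interior block and solve through the Schur complement $S = A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma}$; on the reference simplex just described, $A_{I\Gamma}$ vanishes and $S$ is simply the interface block.

```python
# Static condensation on an SPD model matrix partitioned into
# interior (I) and interface (G) degrees of freedom.
import numpy as np

rng = np.random.default_rng(2)
nI, nG = 12, 6
B = rng.standard_normal((nI + nG, nI + nG))
A = B @ B.T + (nI + nG) * np.eye(nI + nG)  # SPD model stiffness matrix
A_II, A_IG = A[:nI, :nI], A[:nI, nI:]
A_GI, A_GG = A[nI:, :nI], A[nI:, nI:]
b = rng.standard_normal(nI + nG)
b_I, b_G = b[:nI], b[nI:]

S = A_GG - A_GI @ np.linalg.solve(A_II, A_IG)          # Schur complement
x_G = np.linalg.solve(S, b_G - A_GI @ np.linalg.solve(A_II, b_I))
x_I = np.linalg.solve(A_II, b_I - A_IG @ x_G)          # back-substitution

print(np.allclose(A @ np.concatenate([x_I, x_G]), b))  # recovers the full solve
```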

This sparsity is not preserved on arbitrary cells mapped from the reference cell. Nevertheless, the interior-interface coupling is weak because it is only induced by the geometric transformation. We devise a preconditioning strategy by neglecting the interior-interface coupling. We precondition the interface Schur complement with the interface block, and simply apply point-Jacobi to precondition the interior block.

The combination of this approach with a space decomposition method on small subdomains constructed around vertices, edges, and faces allows us to efficiently solve the canonical Riesz maps in $H^1$, $H(\mathrm{curl})$, and $H(\mathrm{div})$, at very high order. We empirically demonstrate iteration counts that are robust with respect to the polynomial degree.

Fri, 08 Mar 2024

15:00 - 16:00
L6

Topological Perspectives to Characterizing Generalization in Deep Neural Networks

Tolga Birdal
(Imperial College London)
Further Information

 

Dr. Tolga Birdal is an Assistant Professor in the Department of Computing at Imperial College London, with prior experience as a Senior Postdoctoral Research Fellow at Stanford University in Prof. Leonidas Guibas's Geometric Computing Group. Tolga defended his master's and Ph.D. theses at the Computer Vision Group of the Chair for Computer Aided Medical Procedures at the Technical University of Munich, led by Prof. Nassir Navab. He was also a Doktorand at Siemens AG under the supervision of Dr. Slobodan Ilic, working on “Geometric Methods for 3D Reconstruction from Large Point Clouds”. His research interests center on geometric machine learning and 3D computer vision, with a theoretical focus on exploring the boundaries of geometric computing, non-Euclidean inference, and the foundations of deep learning. Dr. Birdal has published extensively in leading academic journals and conference proceedings, including NeurIPS, CVPR, ICLR, ICCV, ECCV, T-PAMI, and IJCV. Aside from his academic life, Tolga has co-founded multiple companies, including BeFunky, a widely used web-based image-editing platform.

Abstract

 

Training deep learning models involves searching for a good model over the space of possible architectures and their parameters. Discovering models that exhibit robust generalization to unseen data and tasks is of paramount importance for accurate and reliable machine learning. Generalization, a hallmark of model efficacy, is conventionally gauged by a model's performance on data beyond its training set. Yet the reliance on vast training datasets raises a pivotal question: how can deep learning models transcend the notorious hurdle of 'memorization' to generalize effectively? Is it feasible to assess and guarantee the generalization prowess of deep neural networks in advance of empirical testing, and notably, without any recourse to test data? This inquiry is not merely theoretical; it underpins the practical utility of deep learning across myriad applications. In this talk, I will show that scrutinizing the training dynamics of neural networks through the lens of topology, specifically using the 'persistent-homology dimension', leads to novel bounds on the generalization gap and can help demystify the inner workings of neural networks. Our work bridges deep learning with the abstract realms of topology and learning theory, while relating to information theory through compression.
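
As a hedged reconstruction of the kind of quantity involved (the degree-zero persistent-homology dimension; synthetic data in place of actual training trajectories, and not the speaker's pipeline):

```python
# Toy PH_0-dimension estimate: for samples of growing size n, the total
# length E_0(n) of the minimum spanning tree (equivalently, the total
# degree-0 persistence) scales like n^{(d-1)/d}, so d can be read off a
# log-log fit. A synthetic point cloud stands in for optimizer iterates.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(3)
cloud = rng.uniform(size=(2048, 3))  # stand-in "trajectory" in dimension 3

sizes = [128, 256, 512, 1024, 2048]
E0 = []
for n in sizes:
    D = squareform(pdist(cloud[:n]))
    E0.append(minimum_spanning_tree(D).sum())  # total PH_0 persistence

slope = np.polyfit(np.log(sizes), np.log(E0), 1)[0]  # ~ (d - 1) / d
print("estimated PH_0 dimension:", 1.0 / (1.0 - slope))
```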

 

Thu, 07 Mar 2024
12:00
L6

Well-posedness of nonlocal aggregation-diffusion equations and systems with irregular kernels

Yurij Salmaniw
(Mathematical Institute, University of Oxford)
Abstract

Aggregation-diffusion equations and systems have garnered much attention in the last few decades. More recently, models featuring nonlocal interactions through spatial convolution have been applied to several areas, including the physical, chemical, and biological sciences. Typically, one can establish the well-posedness of such models via regularity assumptions on the kernels themselves; however, more effort is required for many scenarios of interest as the nonlocal kernel is often discontinuous.

 

In this talk, I will present recent progress in establishing a robust well-posedness theory for a class of nonlocal aggregation-diffusion models with minimal regularity requirements on the interaction kernel in any spatial dimension on either the whole space or the torus. Starting with the scalar equation, we first establish the existence of a global weak solution in a small mass regime for merely bounded kernels. Under some additional hypotheses, we show the existence of a global weak solution for any initial mass. In typical cases of interest, these solutions are unique and classical. I will then discuss the generalisation to the $n$-species system for the regimes of small mass and arbitrary mass. We will conclude with some consequences of these theorems for several models typically found in ecological applications.

 

This is joint work with Dr. Jakub Skrzeczkowski and Prof. Jose Carrillo.
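
As a purely illustrative numerical toy (the talk concerns analysis, not numerics; the scheme and kernel below are editorial choices), one can simulate the scalar equation on the 1D torus with a merely bounded, discontinuous top-hat kernel:

```python
# Editorial toy: scalar nonlocal aggregation-diffusion equation
#   rho_t = Laplacian(rho) + div( rho * grad(W * rho) )
# on the 1D torus, with the discontinuous kernel W = -1_{|x| <= 1/2}.
# Convolution by FFT, explicit Euler in time.
import numpy as np

N, L, dt, steps = 256, 2 * np.pi, 1e-4, 20000
dx = L / N
x = np.arange(N) * dx
kernel = -(np.minimum(x, L - x) <= 0.5).astype(float)  # merely bounded W
W_hat = np.fft.fft(kernel) * dx

def ddx(f):  # centered difference on the torus
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

rho = 1.0 + 0.5 * np.cos(x)  # positive initial datum, mass 2*pi
for _ in range(steps):
    conv = np.fft.ifft(W_hat * np.fft.fft(rho)).real   # W * rho by FFT
    rho = rho + dt * (ddx(ddx(rho)) + ddx(rho * ddx(conv)))

print("final mass:", rho.sum() * dx, "(initial mass:", 2 * np.pi, ")")
```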

Wed, 07 Feb 2024
12:00
L6

Pressure jump in the Cahn-Hilliard equation

Charles Elbar
(Laboratoire Jacques Louis Lions, Sorbonne Université)
Abstract

We model a tumor as an incompressible flow, considering two antagonistic effects: repulsion of cells when the tumor grows (they push each other as they divide) and cell-cell adhesion, which creates surface tension. To take these two effects into account, we use a fourth-order parabolic equation: the Cahn-Hilliard equation. The combination of the two effects creates a discontinuity at the boundary of the tumor that we call the pressure jump. To compute this pressure jump, we include an external force and consider stationary radial solutions of the Cahn-Hilliard equation. We also completely characterize the stationary solutions in the incompressible case, prove the incompressible limit, and prove convergence of the parabolic problems to stationary states.
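
An illustrative 1D spectral sketch of the Cahn-Hilliard dynamics $u_t = \Delta(u^3 - u - \gamma \Delta u)$ (an editorial toy, not from the talk), showing the sharp interfaces across which such a pressure jump is measured:

```python
# Semi-implicit Fourier scheme for 1D periodic Cahn-Hilliard:
# implicit in the stiff biharmonic term, explicit in the nonlinearity.
import numpy as np

rng = np.random.default_rng(4)
N, L, gamma, dt, steps = 256, 2 * np.pi, 1e-2, 1e-3, 20000
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
k2, k4 = k ** 2, k ** 4

u = 0.05 * rng.standard_normal(N)       # noise around the mixed state
u_hat = np.fft.fft(u)
for _ in range(steps):
    nonlinear = np.fft.fft(u ** 3 - u)  # derivative of the double well
    u_hat = (u_hat - dt * k2 * nonlinear) / (1.0 + dt * gamma * k4)
    u = np.fft.ifft(u_hat).real

print("phases:", u.min(), u.max())      # near -1 and +1 after coarsening
```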
