Tue, 28 Feb 2023
14:00
L6

A Lusztig-Shoji algorithm for quivers and affine Hecke algebras

Jonas Antor
(University of Oxford)
Abstract

Perverse sheaves are an indispensable tool in representation theory. Their stalks often encode important representation-theoretic information, such as composition multiplicities or canonical bases. For the nilpotent cone, there is an algorithm that computes these stalks, known as the Lusztig-Shoji algorithm. In this talk, we discuss how this algorithm can be modified to compute stalks of perverse sheaves on more general varieties. As an application, we obtain a new algorithm for computing canonical bases in certain quantum groups, as well as composition multiplicities for standard modules of the affine Hecke algebra of $\mathrm{GL}_n$.

Signature Methods in Machine Learning
Lyons, T McLeod, A (15 Nov 2022)
Capturing graphs with hypo-elliptic diffusions
Toth, C Lee, D Hacker, C Oberhauser, H Advances in Neural Information Processing Systems 35 (NeurIPS 2022) volume 50 38803-38817 (01 Apr 2023)

Oxford Mathematics is delighted to be hosting one of the largest exhibitions by the artist Conrad Shawcross in the UK. The exhibition, Cascading Principles: Expansions within Geometry, Philosophy, and Interference, brings together 40 sculptures realised by the artist over the last seventeen years. The artworks are placed in public and private areas, forming a web of relationships which emerge as the viewer moves through the building.

Searching for neutrino transients below 1 TeV with IceCube
Larson, M Koskinen, J Pizzuto, A Vandenbroucke, J Abbasi, R Ackermann, M Adams, J Aguilar, J Ahlers, M Ahrens, M Alispach, C Alves, A Amin, N An, R Andeen, K Anderson, T Anton, G Argüelles, C Ashida, Y Axani, S Bai, X Balagopal, A Barbano, A Barwick, S Bastian, B Basu, V Baur, S Bay, R Beatty, J Becker, K Becker Tjus, J Bellenghi, C BenZvi, S Berley, D Bernardini, E Besson, D Binder, G Bindig, D Blaufuss, E Blot, S Boddenberg, M Bontempo, F Borowka, J Böser, S Botner, O Böttcher, J Bourbeau, E Bradascio, F Braun, J Bron, S Brostean-Kaiser, J Browne, S Burgman, A Burley, R Busse, R Campana, M Carnie-Bronca, E Chen, C Chirkin, D Choi, K Clark, B Clark, K Classen, L Coleman, A Collin, G Conrad, J Coppin, P Correa, P Cowen, D Cross, R Dappen, C Dave, P De Clercq, C DeLaunay, J Dembinski, H Deoskar, K De Ridder, S Desai, A Desiati, P de Vries, K de Wasseige, G de With, M DeYoung, T Dharani, S Diaz, A Díaz-Vélez, J Dittmer, M Dujmovic, H Dunkman, M DuVernois, M Dvorak, E Ehrhardt, T Eller, P Engel, R Erpenbeck, H Evans, J Evenson, P Fan, K Fazely, A Fiedlschuster, S Proceedings of Science volume 395 (18 Mar 2022)
Wed, 22 Mar 2023

10:00 - 12:00
L6

Gradient flows in metric spaces: overview and recent advances

Dr Antonio Esposito
(University of Oxford)
Abstract

This course will serve as an introduction to the theory of gradient flows, with an emphasis on recent advances in metric spaces. More precisely, we will start with an overview of gradient flows, from the Euclidean theory to its generalisation to metric spaces, in particular Wasserstein spaces. This also includes a short introduction to Optimal Transport theory, with a focus on the specific concepts and tools that will be used later in the course. We will then analyse the time-discretisation scheme à la Jordan--Kinderlehrer--Otto (JKO), also known as the minimising movement scheme, and discuss the role of convexity in proving stability, uniqueness, and long-time behaviour for the PDE under study. Finally, we will comment on recent advances, e.g. in the study of PDEs on graphs and/or particle approximations of diffusion equations.
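The minimising movement (JKO-type) scheme mentioned in the abstract can be illustrated in its simplest, Euclidean setting: each time step solves a proximal problem, which amounts to an implicit Euler step for the gradient flow of F. The sketch below is only an illustration under that simplification (function names are my own, not from the course); the course itself treats the scheme in Wasserstein spaces.

```python
import numpy as np

def minimising_movement_step(x_prev, grad_F, tau, lr=1e-3, n_iter=5000):
    """One minimising movement step in Euclidean space:
    x_new = argmin_x  |x - x_prev|^2 / (2 tau) + F(x).
    The inner proximal problem is solved here by plain gradient descent."""
    x = x_prev.copy()
    for _ in range(n_iter):
        # Gradient of the proximal objective at x.
        x -= lr * ((x - x_prev) / tau + grad_F(x))
    return x

# Example: F(x) = |x|^2 / 2, whose gradient flow x'(t) = -x(t) decays to 0.
grad_F = lambda x: x
x, tau = np.array([1.0]), 0.1
for _ in range(50):
    x = minimising_movement_step(x, grad_F, tau)
# For this quadratic F, each step is implicit Euler: x_{k+1} = x_k / (1 + tau).
```

For quadratic F the scheme reproduces the implicit Euler discretisation exactly, which is the basic reason minimising movements are unconditionally stable for convex energies.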

PhD_course_Esposito_1.pdf

Further Information

Sessions led by Dr Antonio Esposito will take place on

14 March 2023 10:00 - 12:00 L4

16 March 2023 10:00 - 12:00 L4

21 March 2023 10:00 - 12:00 L6

22 March 2023 10:00 - 12:00 L6

Should you be interested in taking part in the course, please send an email to @email.

Mon, 20 Feb 2023

14:00 - 15:00
L6

Gradient flows and randomised thresholding: sparse inversion and classification

Jonas Latz
(Heriot Watt University Edinburgh)
Abstract

Sparse inversion and classification problems are ubiquitous in modern data science and imaging. They are often formulated as non-smooth minimisation problems. In sparse inversion, we minimise, e.g., the sum of a data fidelity term and an L1/LASSO regulariser. In classification, we consider, e.g., the sum of a data fidelity term and a non-smooth Ginzburg--Landau energy. Standard (sub)gradient descent methods have been shown to be inefficient when approaching such problems. Splitting techniques are much more useful: here, the target function is partitioned into a sum of two subtarget functions -- each of which can be efficiently optimised. Splitting proceeds by performing optimisation steps alternately with respect to each of the two subtarget functions.

In this work, we study splitting from a stochastic continuous-time perspective. Indeed, we define a differential inclusion that follows the negative subdifferential of one of the two subtarget functions at each point in time. The choice of the subtarget function is controlled by a binary continuous-time Markov process. The resulting dynamical system is a stochastic approximation of the underlying subgradient flow. We investigate this stochastic approximation for an L1-regularised sparse inversion flow and for a discrete Allen-Cahn equation minimising a Ginzburg--Landau energy. In both cases, we study the long-time behaviour of the stochastic dynamical system and its ability to approximate the underlying subgradient flow to any accuracy. We illustrate our theoretical findings in a simple sparse estimation problem and also in low- and high-dimensional classification problems.
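As a concrete instance of the deterministic splitting idea described in the abstract (not the stochastic switching scheme studied in the talk), the L1/LASSO problem can be handled by proximal-gradient (forward-backward) splitting: a gradient step on the smooth data-fidelity term alternates with the exact proximal step, soft-thresholding, on the L1 term. A minimal sketch with hypothetical names:

```python
import numpy as np

def soft_threshold(v, thr):
    """Proximal map of thr * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def ista(A, b, lam, step, n_iter=500):
    """Forward-backward splitting for  0.5 ||Ax - b||^2 + lam ||x||_1.
    Each iteration alternates a gradient step on the smooth fidelity term
    with the proximal step on the non-smooth L1 regulariser."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Toy example: with A = I the minimiser is simply soft_threshold(b, lam),
# so small entries of b are set exactly to zero (the sparsity mechanism).
A = np.eye(3)
b = np.array([2.0, 0.1, -1.0])
x = ista(A, b, lam=0.5, step=1.0)
```

The exact zeros produced by the proximal step are what plain subgradient descent fails to deliver efficiently, which is the inefficiency the abstract alludes to.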

 
