Wed, 22 Mar 2023

10:00 - 12:00
L6

Gradient flows in metric spaces: overview and recent advances

Dr Antonio Esposito
(University of Oxford)
Further Information

Sessions led by Dr Antonio Esposito will take place on

14 March 2023 10:00 - 12:00 L4

16 March 2023 10:00 - 12:00 L4

21 March 2023 10:00 - 12:00 L6

22 March 2023 10:00 - 12:00 L6

Should you be interested in taking part in the course, please send an email to @email.

Abstract

This course will serve as an introduction to the theory of gradient flows, with an emphasis on recent advances in metric spaces. More precisely, we will start with an overview of gradient flows, from the Euclidean theory to its generalisation to metric spaces, in particular Wasserstein spaces. This also includes a short introduction to Optimal Transport theory, with a focus on the concepts and tools used later in the course. We will then analyse the time-discretisation scheme à la Jordan–Kinderlehrer–Otto (JKO), also known as the minimising movement scheme, and discuss the role of convexity in proving stability, uniqueness, and long-time behaviour for the PDE under study. Finally, we will comment on recent advances, e.g. the study of PDEs on graphs and particle approximations of diffusion equations.
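A one-line statement of the JKO (minimising movement) step may help fix ideas; the notation below is chosen here for illustration, with $\mathcal{E}$ the driving energy, $\tau > 0$ the time step, and $W_2$ the 2-Wasserstein distance:

\[
\rho^{n+1}_{\tau} \in \operatorname*{arg\,min}_{\rho} \left\{ \frac{1}{2\tau}\, W_2^2\!\left(\rho, \rho^{n}_{\tau}\right) + \mathcal{E}(\rho) \right\}, \qquad n = 0, 1, 2, \dots
\]

Interpolating the minimisers in time and letting $\tau \to 0$ recovers, for suitable choices of $\mathcal{E}$, the PDE under study; for instance, the entropy $\mathcal{E}(\rho) = \int \rho \log \rho \, \mathrm{d}x$ yields the heat equation.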

PhD_course_Esposito_1.pdf

Tue, 21 Mar 2023

10:00 - 12:00
L6

Gradient flows in metric spaces: overview and recent advances

Dr Antonio Esposito
(University of Oxford)
Further Information

Sessions led by Dr Antonio Esposito will take place on

14 March 2023 10:00 - 12:00 L4

16 March 2023 10:00 - 12:00 L4

21 March 2023 10:00 - 12:00 L6

22 March 2023 10:00 - 12:00 L6

Should you be interested in taking part in the course, please send an email to @email.

Abstract

This course will serve as an introduction to the theory of gradient flows, with an emphasis on recent advances in metric spaces. More precisely, we will start with an overview of gradient flows, from the Euclidean theory to its generalisation to metric spaces, in particular Wasserstein spaces. This also includes a short introduction to Optimal Transport theory, with a focus on the concepts and tools used later in the course. We will then analyse the time-discretisation scheme à la Jordan–Kinderlehrer–Otto (JKO), also known as the minimising movement scheme, and discuss the role of convexity in proving stability, uniqueness, and long-time behaviour for the PDE under study. Finally, we will comment on recent advances, e.g. the study of PDEs on graphs and particle approximations of diffusion equations.

PhD_course_Esposito_0.pdf

Thu, 16 Mar 2023

10:00 - 12:00
L4

Gradient flows in metric spaces: overview and recent advances

Dr Antonio Esposito
(University of Oxford)
Further Information

Sessions led by Dr Antonio Esposito will take place on

14 March 2023 10:00 - 12:00 L4

16 March 2023 10:00 - 12:00 L4

21 March 2023 10:00 - 12:00 L6

22 March 2023 10:00 - 12:00 L6

Should you be interested in taking part in the course, please send an email to @email.

Abstract

This course will serve as an introduction to the theory of gradient flows, with an emphasis on recent advances in metric spaces. More precisely, we will start with an overview of gradient flows, from the Euclidean theory to its generalisation to metric spaces, in particular Wasserstein spaces. This also includes a short introduction to Optimal Transport theory, with a focus on the concepts and tools used later in the course. We will then analyse the time-discretisation scheme à la Jordan–Kinderlehrer–Otto (JKO), also known as the minimising movement scheme, and discuss the role of convexity in proving stability, uniqueness, and long-time behaviour for the PDE under study. Finally, we will comment on recent advances, e.g. the study of PDEs on graphs and particle approximations of diffusion equations.

PhD_course_Esposito.pdf

Tue, 14 Mar 2023

10:00 - 12:00
L4

Gradient flows in metric spaces: overview and recent advances

Dr Antonio Esposito
(University of Oxford)
Further Information

Sessions led by Dr Antonio Esposito will take place on

14 March 2023 10:00 - 12:00 L4

16 March 2023 10:00 - 12:00 L4

21 March 2023 10:00 - 12:00 L6

22 March 2023 10:00 - 12:00 L6

Should you be interested in taking part in the course, please send an email to @email.

Abstract

This DPhil short course will serve as an introduction to the theory of gradient flows, with an emphasis on recent advances in metric spaces. More precisely, we will start with an overview of gradient flows, from the Euclidean theory to its generalisation to metric spaces, in particular Wasserstein spaces. This also includes a short introduction to Optimal Transport theory, with a focus on the concepts and tools used later in the course. We will then analyse the time-discretisation scheme à la Jordan–Kinderlehrer–Otto (JKO), also known as the minimising movement scheme, and discuss the role of convexity in proving stability, uniqueness, and long-time behaviour for the PDE under study. Finally, we will comment on recent advances, e.g. the study of PDEs on graphs and particle approximations of diffusion equations.

Mon, 20 Feb 2023

14:00 - 15:00
L6

Gradient flows and randomised thresholding: sparse inversion and classification

Jonas Latz
(Heriot-Watt University, Edinburgh)
Abstract

Sparse inversion and classification problems are ubiquitous in modern data science and imaging. They are often formulated as non-smooth minimisation problems. In sparse inversion, we minimise, e.g., the sum of a data fidelity term and an L1/LASSO regulariser. In classification, we consider, e.g., the sum of a data fidelity term and a non-smooth Ginzburg–Landau energy. Standard (sub)gradient descent methods have been shown to be inefficient when approaching such problems. Splitting techniques are much more useful: here, the target function is partitioned into a sum of two subtarget functions, each of which can be optimised efficiently. Splitting proceeds by performing optimisation steps alternately with respect to each of the two subtarget functions.
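To make the alternation concrete, here is a minimal sketch of one classical splitting method for the LASSO-type target above: forward–backward splitting (ISTA), in which a gradient step on the smooth data-fidelity term alternates with a proximal (soft-thresholding) step on the L1 term. This is a standard textbook instance of splitting, not the specific scheme of the talk; all names and parameter values are ours.

import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry towards zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, x0, step, n_iters=500):
    """Forward-backward splitting for 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x - step * (A.T @ (A @ x - b))  # forward: gradient step on the data fidelity
        x = soft_threshold(x, step * lam)   # backward: prox step on the L1 regulariser
    return x

# Usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1, x0=np.zeros(200), step=1.0 / np.linalg.norm(A, 2) ** 2)

The step size 1/||A||_2^2 is the usual choice guaranteeing convergence of ISTA, since the gradient of the data-fidelity term is Lipschitz with constant ||A||_2^2.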

In this work, we study splitting from a stochastic continuous-time perspective. Indeed, we define a differential inclusion that, at each point in time, follows the negative subdifferential of one of the two subtarget functions. The choice of subtarget function is controlled by a binary continuous-time Markov process. The resulting dynamical system is a stochastic approximation of the underlying subgradient flow. We investigate this stochastic approximation for an L1-regularised sparse inversion flow and for a discrete Allen–Cahn equation minimising a Ginzburg–Landau energy. In both cases, we study the long-time behaviour of the stochastic dynamical system and its ability to approximate the underlying subgradient flow to any accuracy. We illustrate our theoretical findings in a simple sparse estimation problem and also in low- and high-dimensional classification problems.
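A discrete-time caricature of these dynamics, under our own assumptions (explicit Euler steps in place of the continuous-time differential inclusion, and a two-state Markov chain with a fixed per-step switching probability), would look as follows for the L1-regularised sparse inversion target; this is an illustration of the switching idea, not the talk's analysis.

import numpy as np

def randomised_splitting_lasso(A, b, lam, x0, step=1e-3, n_iters=20000,
                               p_switch=0.05, seed=0):
    """Euler sketch of a randomly switched subgradient flow for
    0.5*||Ax - b||^2 + lam*||x||_1: a binary Markov chain picks the
    active subtarget at each step."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    active = 0  # 0: data-fidelity subtarget, 1: L1 subtarget
    for _ in range(n_iters):
        if rng.random() < p_switch:      # Markov switching between the two subtargets
            active = 1 - active
        if active == 0:
            v = A.T @ (A @ x - b)        # gradient of 0.5*||Ax - b||^2
        else:
            v = lam * np.sign(x)         # an element of the subdifferential of lam*||x||_1
        x -= step * v                    # follow the negative (sub)gradient of the active part
    return x

Faster switching makes the process track the underlying subgradient flow more closely; the abstract's claim that the flow can be approximated to any accuracy is the continuous-time counterpart of this observation.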

 
