The Dean–Kawasaki Equation: Theory, Numerics, and Applications
Abstract
Professor Ana Djurdjevac will talk about: 'The Dean–Kawasaki Equation: Theory, Numerics, and Applications'
The Dean–Kawasaki equation provides a stochastic partial differential equation description of interacting particle systems at the level of empirical densities and has attracted considerable interest in statistical physics, stochastic analysis, and applied modeling. In this work, we study analytical and numerical aspects of the Dean–Kawasaki equation, with a particular focus on well-posedness, structure preservation, and possible discretization strategies. In addition, we extend the framework to the Dean–Kawasaki equation posed on smooth hypersurfaces. We discuss applications of the Dean–Kawasaki framework to particle-based models arising in biological systems and in the modeling of social dynamics.
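For orientation, one commonly quoted form of the Dean–Kawasaki equation for the empirical density ρ of N overdamped interacting Brownian particles (unit temperature and mobility and a mean-field pair potential V are assumptions here, not taken from the abstract) is
\[ \partial_t \rho = \nabla\cdot\bigl(\nabla\rho + \rho\,\nabla(V*\rho)\bigr) + \tfrac{1}{\sqrt{N}}\,\nabla\cdot\bigl(\sqrt{2\rho}\,\xi\bigr), \]
where ξ is vector-valued space-time white noise. The singular square-root noise term is what makes well-posedness and structure-preserving discretization delicate.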
A Riemannian Approach for PDE-Constrained Shape Optimization Using Outer Metrics
Abstract
Estefania Loayza Romero will talk about: 'A Riemannian Approach for PDE-Constrained Shape Optimization Using Outer Metrics'
In PDE-constrained shape optimisation, shapes are traditionally viewed as elements of a Riemannian manifold, specifically as embeddings of the unit circle into the plane, modulo reparameterizations. The standard approach employs the Steklov-Poincaré metric to compute gradients for Riemannian optimisation methods. A significant limitation of current methods is the absence of explicit expressions for the geodesic equations associated with this metric. Consequently, algorithms have relied on retractions (often equivalent to the perturbation of identity method in shape optimisation) rather than true geodesic paths. Previous research suggests that incorporating geodesic equations, or better approximations thereof, can substantially enhance algorithmic performance. This talk presents numerical evidence demonstrating that using outer metrics, defined on the space of diffeomorphisms with known geodesic expressions, improves Riemannian gradient-based optimisation by significantly reducing the number of required iterations and preserving mesh quality throughout the optimisation process.
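As a rough sketch of the standard setup (the notation is an illustration, not taken from the abstract): for a shape functional J with shape derivative dJ(Ω)[V], the Riemannian shape gradient g with respect to a chosen metric a(·,·), such as the Steklov-Poincaré metric, is defined by
\[ a(g, V) = dJ(\Omega)[V] \quad \text{for all admissible vector fields } V, \]
where in the Steklov-Poincaré case a(·,·) is evaluated through the solution of an auxiliary elliptic problem in the bulk domain. Since no explicit geodesic equations are available for this metric, the step in direction g is applied via a retraction; the outer metrics discussed in the talk instead live on the diffeomorphism group, where geodesic expressions are known.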
This talk is hosted at RAL.
Finite element form-valued forms
Abstract
Professor Kaibo Hu will be talking about: 'Finite element form-valued forms'
Some of the most successful vector-valued finite elements in computational electromagnetics and fluid mechanics, such as the Nédélec and Raviart-Thomas elements, are recognized as special cases of Whitney’s discrete differential forms. Recent efforts aim to go beyond differential forms and establish canonical discretizations for more general tensors. An important class is that of form-valued forms, or double forms, which includes the metric tensor (symmetric (1,1)-forms) and the curvature tensor (symmetric (2,2)-forms). Just as the differential structure of forms is encoded in the de Rham complex, that of double forms is encoded in the Bernstein–Gelfand–Gelfand (BGG) sequences and their cohomologies. Important examples include the Calabi complex in geometry and the Kröner complex in continuum mechanics.
These constructions aim to address the problem of discretizing tensor fields with general symmetries on a triangulation, with a particular focus on establishing discrete differential-geometric structures and compatible tensor decompositions in 2D, 3D, and higher dimensions.
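For context, the three-dimensional de Rham complex underlying these classical elements reads
\[ 0 \to H^1 \xrightarrow{\ \mathrm{grad}\ } H(\mathrm{curl}) \xrightarrow{\ \mathrm{curl}\ } H(\mathrm{div}) \xrightarrow{\ \mathrm{div}\ } L^2 \to 0, \]
with Lagrange, Nédélec, Raviart–Thomas and discontinuous elements discretizing its four spaces. The BGG machinery derives complexes for tensors with symmetries, such as the elasticity (Kröner) complex, by combining copies of the de Rham complex; the double-form framework of this talk plays the analogous role for metric- and curvature-like tensors.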
Quadrature = rational approximation
Abstract
Professor Nick Trefethen will speak about: 'Quadrature = rational approximation'
Whenever you see a string of quadrature nodes, you can consider it as a branch cut defined by the poles of a rational approximation to the Cauchy transform of a weight function. The aim of this talk is to explain this strange statement and show how it opens the way to the calculation of targeted quadrature formulas for all kinds of applications. Gauss quadrature is an example, but it is just the starting point, and many more examples will be shown. I hope this talk will change your understanding of quadrature formulas.
This is joint work with Andrew Horning.
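One way to make the slogan concrete (the notation here is an illustration, not taken from the abstract): an n-point rule ∫ f(x) w(x) dx ≈ Σ_{k=1}^{n} w_k f(x_k) is associated with the rational function
\[ r(z) = \sum_{k=1}^{n} \frac{w_k}{z - x_k}, \]
whose poles are the nodes and whose residues are the weights. For functions analytic near the interval, the rule is accurate exactly to the extent that r approximates the Cauchy transform C_w(z) = ∫ w(x)/(z − x) dx away from the interval; classical Gauss quadrature corresponds to the type (n−1, n) rational approximant matching C_w to maximal order at infinity.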
Neural-network monotone schemes for the approximation of Hamilton–Jacobi–Bellman equations
Abstract
In this talk, we are interested in neural network approximations for Hamilton–Jacobi–Bellman equations. These are nonlinear PDEs for which the solution should be considered in the viscosity sense. The solutions also correspond to value functions of deterministic or stochastic optimal control problems. For these equations, it is well known that solving the PDE almost everywhere may lead to wrong solutions.
We present a new method for approximating these PDEs using neural networks. We will closely follow a previous work by C. Esteve-Yagüe, R. Tsai and A. Massucco (2025), while extending the versatility of the approach.
We first establish existence and uniqueness for a general abstract monotone scheme (which can be chosen to be consistent with the PDE), a class that includes implicit schemes. Then, rather than directly approximating the PDE -- as is done in methods such as PINNs (Physics-Informed Neural Networks) or DGM (Deep Galerkin Method) -- we incorporate the monotone numerical scheme into the definition of the loss function.
Finally, we can show that the critical point of the loss function is unique and corresponds to solving the desired scheme. When coupled with neural networks, this strategy allows for a (more) rigorous convergence analysis and accommodates a broad class of schemes. Preliminary numerical results are presented to support our theoretical findings.
This is joint work with C. Esteve-Yagüe and R. Tsai.
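As a purely illustrative sketch of this kind of loss (the specific equation, scheme, and network below are assumptions, not the authors' construction): for the one-dimensional eikonal-type equation |u'(x)| = f(x) with u(0) = u(1) = 0, one can take the standard monotone upwind scheme on a grid and use its residual, rather than the raw PDE residual, as the training loss for a neural network.

```python
# Illustrative sketch: train a network so that its grid values satisfy a monotone
# upwind scheme for the 1-D eikonal equation |u'(x)| = f(x), u(0) = u(1) = 0.
# Equation, scheme, and architecture are assumptions chosen for illustration only.
import torch

torch.manual_seed(0)

N = 101                                # number of grid points
h = 1.0 / (N - 1)
x = torch.linspace(0.0, 1.0, N).unsqueeze(1)
f = torch.ones_like(x)                 # right-hand side f(x) = 1 (distance-function case)

# Small fully connected network u_theta(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def scheme_residual(u):
    """Residual of the monotone upwind scheme for |u'| = f at interior grid points."""
    # max(backward difference, -forward difference, 0) is the standard upwind
    # Hamiltonian for the eikonal equation and yields a monotone scheme.
    Dm = (u[1:-1] - u[:-2]) / h        # backward difference
    Dp = (u[2:] - u[1:-1]) / h         # forward difference
    H = torch.maximum(torch.maximum(Dm, -Dp), torch.zeros_like(Dm))
    return H - f[1:-1]

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    u = net(x)
    # Loss = squared scheme residual + penalty enforcing the boundary conditions
    loss = (scheme_residual(u) ** 2).mean() + u[0, 0] ** 2 + u[-1, 0] ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

# With sufficient training, the network values on the grid should approach the
# discrete scheme solution, which approximates the viscosity solution min(x, 1 - x).
```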
Renormalisation of the Gross-Neveu model in two dimensions à la Duch
Abstract
I will discuss the paper "Construction of Gross-Neveu model using Polchinski flow equation" by Pawel Duch (https://arxiv.org/abs/2403.18562).