How to do a Career Development Review – for Research Staff and Principal Investigators
Wednesday 11 February 2026, 09:30 – 11:00
Regular, meaningful Career Development Reviews (CDRs) are vital for building a positive research culture and supporting researchers’ long‑term development. This session will help reviewers hold effective, supportive, and forward‑looking CDR conversations.
Local and Global Well-Posedness for the Phi^4 Equation in Bounded Domains
Abstract
In recent years, a more top-down approach to renormalisation for singular SPDEs has emerged within the theory of regularity structures, based on regularity structures of multi-indices. This approach adopts a geometric viewpoint, aiming to stably parametrise the solution manifold rather than the larger space of renormalised objects that typically arise in fixed-point formulations of the equation. While several works have established the construction of the renormalised data (the model) in this setting, comparatively little has been shown about the corresponding solution theory, since the intrinsic nature of the model leads to renormalised data that is too lean for Hairer’s fixed-point approach to apply directly.
In this talk, I will discuss past and ongoing work with L. Broux and F. Otto addressing this issue for the Phi^4 equation in its full subcritical regime. We establish local and global well-posedness within the framework of regularity structures of multi-indices, first in a space-time periodic setting and subsequently in domains with Dirichlet boundary conditions.
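For context (background not stated in the abstract), the dynamical Phi^4 model referred to here is usually written, schematically and before renormalisation, as
\[
  (\partial_t - \Delta)\,\phi = -\phi^{3} + \xi ,
\]
where \xi denotes space-time white noise; with this driving noise, the equation is subcritical in spatial dimension d < 4.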
Metrics and stable invariants in persistence
13:00
Abstract
Stability is a key property of topological invariants used in data analysis and motivates the fundamental role of metrics in persistence theory. This talk reviews noise systems, a framework for constructing and analysing metrics on persistence modules, and shows how a rich family of metrics enables the definition of metric-dependent stable invariants. Focusing on one-parameter persistence, we discuss algebraic Wasserstein distances and the associated Wasserstein stable ranks, invariants that can be computed and compared efficiently. These invariants depend on interpretable parameters that can be optimised within machine-learning pipelines. We illustrate the use of Wasserstein stable ranks through experiments on synthetic and real datasets, showing how different metric choices highlight specific structural features of the data.
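As background (standard definitions, not specific to the framework of the talk), the p-Wasserstein distance between two persistence diagrams D and D' is
\[
  W_p(D, D') \;=\; \inf_{\gamma}\;\Big( \sum_{x} \lVert x - \gamma(x) \rVert_\infty^{\,p} \Big)^{1/p},
\]
where \gamma ranges over bijections between the two diagrams, each augmented by the diagonal, and only pairs involving an off-diagonal point contribute to the sum. The algebraic Wasserstein distances discussed in the talk are defined directly on persistence modules rather than on diagrams, and the associated stable rank of a module M at scale t can be thought of, roughly, as the smallest rank of a module lying within distance t of M.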