Forthcoming events in this series


Thu, 30 Oct 2025

14:00 - 15:00
Lecture Room 3

Sparse Graphical Linear Dynamical Systems

Emilie Chouzenoux
(INRIA Saclay, France)
Abstract

Time-series datasets are central in numerous fields of science and engineering, such as biomedicine, Earth observation, and network analysis. Extensive research exists on state-space models (SSMs), which are powerful mathematical tools that allow for probabilistic and interpretable learning on time series. Estimating the model parameters in SSMs is arguably one of the most complicated tasks, and the inclusion of prior knowledge is known both to ease interpretation and to complicate the inferential task. In this talk, I will introduce a novel joint graphical modeling framework called DGLASSO (Dynamic Graphical Lasso) [1], that bridges the static graphical Lasso model [2] and the causal-based graphical approach for the linear-Gaussian SSM in [3]. I will also present a new inference method within the DGLASSO framework that implements an efficient block alternating majorization-minimization algorithm. The algorithm's convergence is established by drawing on modern tools from nonlinear analysis. Experimental validation on synthetic and real weather variability data showcases the effectiveness of the proposed model and inference algorithm.

 

[1] E. Chouzenoux and V. Elvira. Sparse Graphical Linear Dynamical Systems. Journal of Machine Learning Research, vol. 25, no. 223, pp. 1-53, 2024.

[2] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical LASSO. Biostatistics, vol. 9, no. 3, pp. 432–441, Jul. 2008.

[3] V. Elvira and E. Chouzenoux. Graphical Inference in Linear-Gaussian State-Space Models. IEEE Transactions on Signal Processing, vol. 70, pp. 4757-4771, Sep. 2022.
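As a purely illustrative sketch of the linear-Gaussian SSM setting that DGLASSO operates in (not the DGLASSO algorithm itself), the following numpy code simulates a state-space model whose sparse transition matrix plays the role of the graph to be inferred, and runs a standard Kalman filter; all matrices and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse transition matrix A: a nonzero A[i, j] is a directed edge j -> i in
# the graph over state variables (the kind of structure DGLASSO estimates).
A = np.array([[0.9, 0.0, 0.0],
              [0.5, 0.8, 0.0],
              [0.0, 0.4, 0.7]])
H = np.eye(3)          # observation matrix
Q = 0.1 * np.eye(3)    # state-noise covariance
R = 0.5 * np.eye(3)    # observation-noise covariance

# Simulate x_{t+1} = A x_t + q_t,  y_t = H x_t + r_t
T = 200
x = np.zeros(3)
xs, ys = [], []
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(3), Q)
    y = H @ x + rng.multivariate_normal(np.zeros(3), R)
    xs.append(x)
    ys.append(y)

# Standard Kalman filter: inference is exact once (A, H, Q, R) are known;
# estimating those parameters themselves is the hard part the talk addresses.
m, P = np.zeros(3), np.eye(3)
means = []
for y in ys:
    m, P = A @ m, A @ P @ A.T + Q                  # predict
    S = H @ P @ H.T + R
    K = np.linalg.solve(S.T, (P @ H.T).T).T        # Kalman gain K = P H^T S^{-1}
    m = m + K @ (y - H @ m)                        # update
    P = (np.eye(3) - K @ H) @ P
    means.append(m)

filt_err = np.mean([(mi - xi) ** 2 for mi, xi in zip(means, xs)])
obs_err = np.mean([(yi - xi) ** 2 for yi, xi in zip(ys, xs)])
```

With the model known, the filtered estimates should track the latent state more closely than the raw observations (filt_err below obs_err).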

 

 

Thu, 23 Oct 2025

14:00 - 15:00
(This talk is hosted by Rutherford Appleton Laboratory)

Interior-point optimisation for quadratic programs with conic constraints

Paul Goulart
(Oxford University)
Abstract

The talk will present the open-source convex optimisation solver Clarabel, an interior-point solver that uses a novel homogeneous embedding technique, offering substantially faster solve times than existing open-source and commercial interior-point solvers for some problem types. This improvement is due both to a reduction in the number of required interior-point iterations and to an improvement in the size and sparsity of the linear system that must be solved at each iteration. For large-scale problems we employ a variety of additional techniques to accelerate solve times, including chordal decomposition methods, GPU sub-solvers, and custom handling of certain specialised cones. The talk will describe details of our implementation and show performance results with respect to solvers based on the standard homogeneous self-dual embedding.

 

This talk is hosted by Rutherford Appleton Laboratory and will take place at Harwell Campus, Didcot, OX11 0QX.

Thu, 16 Oct 2025

14:00 - 15:00
Lecture Room 3

Piecewise rational finite element spaces of differential forms

Evan Gawlik
(Santa Clara University)
Abstract

The Whitney forms on a simplicial triangulation are piecewise affine differential forms that are dual to integration over chains.  The so-called blow-up Whitney forms are piecewise rational generalizations of the Whitney forms.  These differential forms, which are also called shadow forms, were first introduced by Brasselet, Goresky, and MacPherson in the 1990s.  The blow-up Whitney forms exhibit singular behavior on the boundary of the simplex, and they appear to be well-suited for constructing certain novel finite element spaces, like tangentially- and normally-continuous vector fields on triangulated surfaces.  This talk will discuss the blow-up Whitney forms, their properties, and their applicability to PDEs like the Bochner Laplace problem.  

Thu, 09 Oct 2025

14:00 - 15:00
(This talk is hosted by Rutherford Appleton Laboratory)

HSS iteration for solving the indefinite Helmholtz equation by multigrid with standard components

Colin Cotter
(Imperial College, London)
Abstract

We provide an iterative solution approach for the indefinite Helmholtz equation discretised using finite elements, based upon a Hermitian Skew-Hermitian Splitting (HSS) iteration applied to the shifted operator, and prove that the iteration is k- and mesh-robust when O(k) HSS iterations are performed. The HSS iterations involve solving a shifted operator that is suitable for approximation by multigrid using standard smoothers and transfer operators, and hence we can use O(N) parallel processors in a high performance computing implementation, where N is the total number of degrees of freedom. We argue that the algorithm converges in O(k) wallclock time when within the range of scalability of the multigrid. We provide numerical results verifying our proofs and demonstrating this claim, establishing a method that can make use of large scale high performance computing systems.
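The Hermitian/skew-Hermitian splitting at the heart of the talk can be illustrated on a toy dense system. The sketch below is a minimal numpy implementation of the classical HSS iteration (in the style of Bai, Golub and Ng), with an invented test matrix standing in for the shifted operator; it is not the finite element, multigrid-accelerated method of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Toy non-Hermitian matrix A = Hm + S with Hermitian positive-definite part
# Hm and skew-Hermitian part S (a stand-in for a shifted Helmholtz operator).
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Hm = C @ C.conj().T / n + np.eye(n)     # Hermitian positive definite
S = 0.5 * (C - C.conj().T)              # skew-Hermitian
A = Hm + S
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

ev = np.linalg.eigvalsh(Hm)
alpha = np.sqrt(ev[0] * ev[-1])         # classical optimal HSS shift parameter

I = np.eye(n)
x = np.zeros(n, dtype=complex)
for _ in range(60):
    # Hermitian half-step: (alpha I + Hm) x = (alpha I - S) x + b
    x = np.linalg.solve(alpha * I + Hm, (alpha * I - S) @ x + b)
    # skew-Hermitian half-step: (alpha I + S) x = (alpha I - Hm) x + b
    x = np.linalg.solve(alpha * I + S, (alpha * I - Hm) @ x + b)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In the talk's setting the Hermitian half-step is the shifted operator amenable to multigrid with standard components; here both half-steps are solved directly for simplicity.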

 

 

This talk is hosted by Rutherford Appleton Laboratory and will take place at Harwell Campus, Didcot, OX11 0QX.

Fri, 29 Aug 2025
12:30

TBA

Colin Cotter
(Imperial College, London)
Abstract

TBA

Thu, 19 Jun 2025
14:00
Lecture Room 3

Hilbert’s 19th problem and discrete De Giorgi-Nash-Moser theory: analysis and applications

Endre Süli
(Mathematical Institute (University of Oxford))
Abstract
This talk is concerned with the construction and mathematical analysis of a system of nonlinear partial differential equations featuring in a model of an incompressible non-Newtonian fluid, the synovial fluid, contained in the cavities of human joints. To prove the convergence of the numerical method one has to develop a discrete counterpart of the De Giorgi-Nash-Moser theorem, which guarantees a uniform bound on the sequence of continuous piecewise linear finite element approximations in a Hölder norm, for divergence-form uniformly elliptic partial differential equations with measurable coefficients.
Thu, 12 Jun 2025

14:00 - 15:00
Lecture Room 3

Finite volumes for a generalized Poisson-Nernst-Planck system with cross-diffusion and size exclusion

Clément Cancès
(INRIA LILLE)
Abstract

We propose and analyse two structure preserving finite volume schemes to approximate the solutions to a cross-diffusion system with self-consistent electric interactions introduced by Burger, Schlake & Wolfram (2012). This system has been derived thanks to probabilistic arguments and admits a thermodynamically motivated Lyapunov functional that is preserved by suitable two-point flux finite volume approximations. This allows us to carry out the mathematical analysis of the two schemes and to compare them.

This is joint work with Maxime Herda and Annamaria Massimini.

 

 

Thu, 05 Jun 2025
14:00
Lecture Room 3

Solving sparse linear systems using quantum computing algorithms

Leigh Lapworth
(Rolls-Royce)
Abstract

The currently available quantum computers fall into the NISQ (Noisy Intermediate-Scale Quantum) regime. These enable variational algorithms with a relatively small number of free parameters. We are now entering the FTQC (Fault-Tolerant Quantum Computer) regime, where gate fidelities are high enough that error-correction schemes are effective. The UK Quantum Missions include the target of an FTQC device that can perform a million operations by 2028, and a trillion operations by 2035.

 

This talk will present the outcomes of assessments of two quantum linear equation solvers for FTQCs: the Harrow–Hassidim–Lloyd (HHL) and Quantum Singular Value Transform (QSVT) algorithms. These assessments used sample matrices from a Computational Fluid Dynamics (CFD) test case. The quantum solvers have also been embedded within an outer non-linear solver to judge their impact on convergence. The analysis uses circuit emulation to judge the FTQC requirements for delivering quantum utility.

Thu, 29 May 2025

14:00 - 15:00
Lecture Room 3

On the data-sparsity of the solution of Riccati equations with quasiseparable coefficients

Stefano Massei
(Università di Pisa)
Abstract

Solving large-scale continuous-time algebraic Riccati equations is a significant challenge in various control theory applications. 

This work demonstrates that when the matrix coefficients of the equation are quasiseparable, the solution also exhibits numerical quasiseparability. This property enables us to develop two efficient Riccati solvers. The first solver is applicable to the general quasiseparable case, while the second is tailored to the particular case of banded coefficients. Numerical experiments confirm the effectiveness of the proposed algorithms on both synthetic examples and case studies from the control of partial differential equations and agent-based models. 

Thu, 22 May 2025

14:00 - 15:00
Lecture Room 3

When you truncate an infinite equation, what happens to the leftovers?

Geoff Vasil
(University of Edinburgh)
Abstract

Numerically solving PDEs typically requires compressing infinite information into a finite system of algebraic equations. Pragmatically, we usually follow a recipe: “Assume solutions of form X; substitute into PDE Y; discard terms by rule Z.” In contrast, Lanczos’s pioneering “tau method” prescribes modifying the PDE to form an exact finite system. Crucially, any recipe-based method can be viewed as adding a small equation correction, enabling us to compare multiple schemes independently of the solver. 

This talk also addresses a paradox: PDEs often admit infinitely many solutions, but finite systems produce only a finite set. When we include a “small” correction, the missing solutions are effectively hidden. I will discuss how tau methods frame this perspective and outline proposals for systematically studying and optimising various residuals.

Thu, 15 May 2025
14:00
Lecture Room 3

Quick on the draw: high-frequency trading in the Wild West of cryptocurrency limit order-book markets

Sam Howison
(Mathematical Institute (University of Oxford))
Abstract

Cryptocurrencies such as Bitcoin have only recently become a significant part of the financial landscape. Many billions of dollars are now traded daily on limit order-book markets such as Binance, and these are probably among the most open, liquid and transparent markets there are. They therefore make an interesting platform from which to investigate myriad questions to do with market microstructure. I shall talk about a few of these, including live-trading experiments to investigate the difference between on-paper strategy analysis (typical in the academic literature) and actual trading outcomes. I shall also mention very recent work on the new Hyperliquid exchange which runs on a blockchain basis, showing how to use this architecture to obtain datasets of an unprecedented level of granularity. This is joint work with Jakob Albers, Mihai Cucuringu and Alex Shestopaloff.

Thu, 08 May 2025
14:00
(This talk is hosted by Rutherford Appleton Laboratory)

Multilevel Monte Carlo Methods with Smoothing

Aretha Teckentrup
(University of Edinburgh)
Abstract

Parameters in mathematical models are often impossible to determine fully or accurately, and are hence subject to uncertainty. By modelling the input parameters as stochastic processes, it is possible to quantify the uncertainty in the model outputs. 

In this talk, we employ the multilevel Monte Carlo (MLMC) method to compute expected values of quantities of interest related to partial differential equations with random coefficients. We make use of the circulant embedding method for sampling from the coefficient, and to further improve the computational complexity of the MLMC estimator, we devise and implement a smoothing technique integrated into the circulant embedding method. This allows us to choose the coarsest mesh on the first level of MLMC independently of the correlation length of the covariance function of the random field, leading to considerable savings in computational cost.

 

 

Please note: this talk is hosted by Rutherford Appleton Laboratory, Harwell Campus, Didcot, OX11 0QX.

 

 

 

Thu, 01 May 2025

14:00 - 15:00
Lecture Room 3

Adventures in structured matrix computations

Gunnar Martinsson
(UT Austin)
Abstract

Many matrices that arise in scientific computing and in data science have internal structure that can be exploited to accelerate computations. The focus in this talk will be on matrices that are either of low rank, or can be tessellated into a collection of subblocks that are either of low rank or are of small size. We will describe how matrices of this nature arise in the context of fast algorithms for solving PDEs and integral equations, and also in handling "kernel matrices" from computational statistics. A particular focus will be on randomized algorithms for obtaining data sparse representations of such matrices.

 

At the end of the talk, we will explore an unorthodox technique for discretizing elliptic PDEs that was designed specifically to play well with fast algorithms for dense structured matrices.
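A minimal example of the randomized algorithms mentioned above is the Gaussian range-finder behind the randomized SVD. The sketch below, on an invented test matrix with decaying singular values, is a generic textbook version and not the speaker's specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

# Test matrix with rapidly decaying singular values (numerically low rank)
m, n, k = 300, 200, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)                 # geometric singular-value decay
A = (U * s) @ V.T

def randomized_svd(A, k, p=10):
    """Gaussian range finder plus a small deterministic SVD (rank k, oversampling p)."""
    G = rng.standard_normal((A.shape[1], k + p))  # random sketch
    Q, _ = np.linalg.qr(A @ G)                    # orthonormal basis for range(A G)
    Uh, sh, Vh = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :k], sh[:k], Vh[:k]

Uk, sk, Vk = randomized_svd(A, k)
err = np.linalg.norm(A - (Uk * sk) @ Vk, 2)   # close to sigma_{k+1}
```

For a matrix with fast singular-value decay the error is near the best rank-k error, at the cost of only k + p matrix-vector products with A.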

Thu, 20 Mar 2025
14:00
(This talk is hosted by Rutherford Appleton Laboratory)

Firedrake: a differentiable programming framework for finite element simulation

David Ham
(Imperial College London)
Abstract

Differentiable programming is the underpinning technology for the AI revolution. It allows neural networks to be programmed in very high level user code while still achieving very high performance for both the evaluation of the network and, crucially, its derivatives. The Firedrake project applies exactly the same concepts to the simulation of physical phenomena modelled with partial differential equations (PDEs). By exploiting the high level mathematical abstraction offered by the finite element method, users are able to write mathematical operators for the problem they wish to solve in Python. The high performance parallel implementations of these operators are then automatically generated, and composed with the PETSc solver framework to solve the resulting PDE. However, because the symbolic differential operators are available as code, it is possible to reason symbolically about them before the numerical evaluation. In particular, the operators can be differentiated with respect to their inputs, and the resulting derivative operators composed in forward or reverse order. This creates a differentiable programming paradigm congruent with (and compatible with) machine learning frameworks such as PyTorch and JAX.

 

In this presentation, I will present Firedrake in the context of differentiable programming, and show how this enables productivity, capability and performance to be combined in a unique way. I will also touch on the mechanism that enables Firedrake to be coupled with PyTorch and JAX.

  

Please note this talk will take place at Rutherford Appleton Laboratory, Harwell Campus, Didcot. 

Thu, 13 Mar 2025

14:00 - 15:00
Lecture Room 3

On the long time behaviour of numerical schemes applied to Hamiltonian PDEs

Erwan Faou
(INRIA)
Abstract

In this talk I will review some recent results concerning the qualitative behaviour of symplectic integrators applied to Hamiltonian PDEs, such as the nonlinear wave equation or Schrödinger equations. 

Additionally, I will discuss the problem of numerical resonances, the existence of modified energy and the existence and stability of numerical solitons over long times. 

 

These are works with B. Grébert, D. Bambusi, G. Maierhofer and K. Schratz. 

Thu, 06 Mar 2025

14:00 - 15:00
Lecture Room 3

Near-optimal hierarchical matrix approximation

Diana Halikias
(Cornell University)
Abstract

Can one recover a matrix from only matrix-vector products? If so, how many are needed? We will consider the matrix recovery problem for the class of hierarchical rank-structured matrices. This problem arises in scientific machine learning, where one wishes to recover the solution operator of a PDE from only input-output pairs of forcing terms and solutions. Peeling algorithms are the canonical method for recovering a hierarchical matrix from matrix-vector products; however, their recursive nature poses a potential stability issue which may deteriorate the overall quality of the approximation. Our work resolves the open question of the stability of peeling. We introduce a robust version of peeling and prove that it achieves low error with respect to the best possible hierarchical approximation to any matrix, allowing us to analyze the performance of the algorithm on general matrices, as opposed to exactly hierarchical ones. This analysis relies on theory for low-rank approximation, as well as the surprising result that the Generalized Nyström method is more accurate than the randomized SVD algorithm in this setting.
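The Generalized Nyström approximation mentioned at the end admits a very short sketch. The numpy code below, with an invented low-rank test matrix and sketch sizes, illustrates the two-sided sketching formula A ≈ (AX)(YᵀAX)⁺(YᵀA); it is not the peeling algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 300, 250, 12
# Numerically rank-k test matrix plus a little noise
A = (rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
     + 1e-8 * rng.standard_normal((m, n)))

# Generalized Nystrom: A ~= (A X) pinv(Y^T A X) (Y^T A), built from products
# with A only; the right sketch Y is taken larger than X for stability.
r = k + 5
X = rng.standard_normal((n, r))
Y = rng.standard_normal((m, 2 * r))
AX = A @ X            # r matrix-vector products with A
YA = Y.T @ A          # 2r matrix-vector products with A^T applied via Y^T A
Ahat = AX @ np.linalg.pinv(Y.T @ AX) @ YA

err = np.linalg.norm(A - Ahat) / np.linalg.norm(A)
```

With the oblong sketch pair, the relative error settles near the noise level rather than blowing up, which is the stability property exploited in the talk's analysis.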

Thu, 27 Feb 2025

14:00 - 15:00
Lecture Room 3

Learning-enhanced structure preserving particle methods for Landau equation

Li Wang
(University of Minnesota)
Abstract

The Landau equation stands as one of the fundamental equations in kinetic theory and plays a key role in plasma physics. However, computing it presents significant challenges due to the complexity of the Landau operator,  the dimensionality, and the need to preserve the physical properties of the solution. In this presentation, I will introduce deep learning assisted particle methods aimed at addressing some of these challenges. These methods combine the benefits of traditional structure-preserving techniques with the approximation power of neural networks, aiming to handle high dimensional problems with minimal training. 

Thu, 20 Feb 2025

14:00 - 15:00
(This talk is hosted by Rutherford Appleton Laboratory)

Integrate your residuals while solving dynamic optimization problems

Eric Kerrigan
(Imperial College London)
Abstract

 Many optimal control, estimation and design problems can be formulated as so-called dynamic optimization problems, which are optimization problems with differential equations and other constraints. State-of-the-art methods based on collocation, which enforce the differential equations at only a finite set of points, can struggle to solve certain dynamic optimization problems, such as those with high-index differential algebraic equations, consistent overdetermined constraints or problems with singular arcs. We show how numerical methods based on integrating the differential equation residuals can be used to solve dynamic optimization problems where collocation methods fail. Furthermore, we show that integrated residual methods can be computationally more efficient than direct collocation.

This seminar takes place at RAL (Rutherford Appleton Lab). 

Thu, 13 Feb 2025

14:00 - 15:00
Lecture Room 3

Global Optimization with Hamilton-Jacobi PDEs

Dante Kalise
(Imperial College London)
Abstract

We introduce a novel approach to global optimization  via continuous-time dynamic programming and Hamilton-Jacobi-Bellman (HJB) PDEs. For non-convex, non-smooth objective functions,  we reformulate global optimization as an infinite horizon, optimal asymptotic stabilization control problem. The solution to the associated HJB PDE provides a value function which corresponds to a (quasi)convexification of the original objective.  Using the gradient of the value function, we obtain a  feedback law driving any initial guess towards the global optimizer without requiring derivatives of the original objective. We then demonstrate that this HJB control law can be integrated into other global optimization frameworks to improve its performance and robustness. 

Thu, 06 Feb 2025

14:00 - 15:00
Lecture Room 3

Deflation Techniques for Finding Multiple Local Minima of a Nonlinear Least Squares Problem

Marcus Webb
(University of Manchester)
Abstract

Deflation is a technique to remove a solution of a problem so that other solutions can subsequently be found. The most prominent instance is the deflation we see in eigenvalue solvers, but recent interest has been in deflation of rootfinding problems arising from nonlinear PDEs with many isolated solutions (spearheaded by Farrell and collaborators).

 

In this talk I’ll show you recent results on deflation techniques for optimisation algorithms with many local minima, focusing on the Gauss-Newton algorithm for nonlinear least squares problems. I will demonstrate the advantages of these techniques over the more obvious approach of applying deflated Newton’s method to the first-order optimality conditions, and present some proofs that these algorithms will avoid the deflated solutions. Along the way we will see an interesting generalisation of Woodbury’s formula to least squares problems, something that should be better known in Numerical Linear Algebra (joint work with Güttel, Nakatsukasa and Bloor Riley).

 

Main preprint: https://arxiv.org/abs/2409.14438

WoodburyLS preprint: https://arxiv.org/abs/2406.15120
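The basic deflation idea, in the scalar rootfinding form popularised by Farrell and collaborators rather than the Gauss-Newton variant of the talk, fits in a few lines: once Newton has found one root, multiplying the residual by a deflation operator repels the iteration from it. A minimal sketch with an invented test function:

```python
def newton(f, df, x0, tol=1e-12, maxit=100):
    """Plain Newton iteration for a scalar root."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

sign = lambda t: 1.0 if t > 0 else -1.0

# f has two roots, -1 and +1; Newton from x0 = 0.3 finds +1.
f = lambda x: x * x - 1.0
df = lambda x: 2.0 * x
x0 = 0.3
r1 = newton(f, df, x0)

# Deflate the found root with the shifted deflation operator
# eta(x) = 1/|x - r1| + 1 and apply Newton to g = eta * f from the SAME
# initial guess: the iteration is now driven away from r1.
eta = lambda x: 1.0 / abs(x - r1) + 1.0
deta = lambda x: -sign(x - r1) / (x - r1) ** 2
g = lambda x: eta(x) * f(x)
dg = lambda x: eta(x) * df(x) + deta(x) * f(x)   # product rule
r2 = newton(g, dg, x0)
```

Here deflated Newton recovers the second root from the same starting point; the talk's contribution is making this idea work inside Gauss-Newton rather than on the optimality conditions.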

Thu, 30 Jan 2025

14:00 - 15:00
Lecture Room 3

Operator learning without the adjoint

Nicolas Boullé
(Imperial College London )
Abstract

There is a mystery at the heart of operator learning: how can one recover a non-self-adjoint operator from data without probing the adjoint? Current practical approaches suggest that one can accurately recover an operator while only using data generated by the forward action of the operator without access to the adjoint. However, naively, it seems essential to sample the action of the adjoint for learning time-dependent PDEs. 

In this talk, we will first explore connections with low-rank matrix recovery problems in numerical linear algebra. Then, we will show that one can approximate a family of non-self-adjoint infinite-dimensional compact operators via projection onto a Fourier basis without querying the adjoint.

 

Thu, 23 Jan 2025

14:00 - 15:00
Lecture Room 3

Multi-Index Monte Carlo Method for Semilinear Stochastic Partial Differential Equations

Abdul Lateef Haji-Ali
(Heriot Watt)
Abstract

We present an exponential-integrator-based multi-index Monte Carlo (MIMC) method for the weak approximation of mild solutions to semilinear stochastic partial differential equations (SPDEs). Theoretical results on multi-index coupled solutions of the SPDE are provided, demonstrating their stability and the satisfaction of multiplicative error estimates. Leveraging this theory, we develop a tractable MIMC algorithm. Numerical experiments illustrate that MIMC outperforms alternative approaches, such as multilevel Monte Carlo, particularly in low-regularity settings.

Thu, 12 Dec 2024
14:00
(This talk is hosted by Rutherford Appleton Laboratory)

A Subspace-conjugate Gradient Method for Linear Matrix Equations

Davide Palitta
(Università di Bologna)
Abstract

 
The solution of multiterm linear matrix equations is still a very challenging task in numerical linear algebra.
While many solid methods exist for equations with (at most) two terms, equations with more than two terms are much trickier to treat numerically, and very few options are available in the literature. In particular, to the best of our knowledge, no decomposition-based method for multiterm equations has ever been proposed; only iterative procedures exist.
A non-complete list of contributions in this direction includes a greedy procedure designed by Sirkovi\'c and Kressner, projection methods tailored to the equation at hand, Riemannian optimization schemes, and matrix-oriented Krylov methods with low-rank truncations. The last class of solvers is probably one of the most commonly used ones. These schemes amount to adapting standard Krylov schemes for linear systems to matrix equations by leveraging the equivalence between matrix equations and their Kronecker form.
As long as no truncations are performed, this equivalence means that the algorithm does not exploit the structure of the problem: it cannot see that we are solving a matrix equation rather than a linear system. The low-rank truncations we may perform in the matrix-equation framework can then be viewed as a simple computational tool needed to make the solution process affordable in terms of storage, not as an algorithmic advance.

 
By taking inspiration from the matrix-oriented CG method, our goal is to design a novel iterative scheme for the solution of multiterm matrix equations. We name this procedure the subspace-conjugate gradient method (Ss-cg) due to the peculiar orthogonality conditions imposed to compute some of the quantities involved in the scheme. As we will show in this talk, the main difference between Ss-cg and CG is the ability of the former to capitalize on the matrix equation structure of the underlying problem. In particular, we will make full use of the (low-rank) matrix format of the iterates to define appropriate "step sizes". Where these quantities correspond to scalars alpha_k and beta_k in CG, they amount to small-dimensional matrices in our new approach.
This novel point of view leads to remarkable computational gains making Ss-cg a very competitive option for the solution of multi-term linear matrix equations.

 
This is a joint work with Martina Iannacito and Valeria Simoncini, both from the Department of Mathematics, University of Bologna.
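For context, plain matrix-oriented CG, the starting point that Ss-cg generalises (with scalar alpha_k and beta_k), can be sketched on a two-term Lyapunov equation A X + X A = C with symmetric positive definite A. The test problem below is invented and no low-rank truncation is performed.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)            # SPD coefficient
Xtrue = rng.standard_normal((n, n))
Xtrue = Xtrue + Xtrue.T
C = A @ Xtrue + Xtrue @ A              # right-hand side of L(X) = A X + X A

op = lambda X: A @ X + X @ A           # SPD linear operator on matrices
inner = lambda P, Q: np.tensordot(P, Q)  # trace inner product <P, Q> = tr(P^T Q)

# Matrix-oriented CG: ordinary CG in which iterates, residuals and search
# directions are matrices while alpha_k, beta_k stay scalar (Ss-cg replaces
# these scalars by small matrices to exploit low-rank structure).
X = np.zeros((n, n))
R = C - op(X)
P = R.copy()
rho = inner(R, R)
for _ in range(200):
    AP = op(P)
    alpha = rho / inner(P, AP)
    X = X + alpha * P
    R = R - alpha * AP
    rho_new = inner(R, R)
    if np.sqrt(rho_new) < 1e-10 * np.linalg.norm(C):
        break
    P = R + (rho_new / rho) * P
    rho = rho_new

err = np.linalg.norm(X - Xtrue) / np.linalg.norm(Xtrue)
```

Without truncation this is mathematically identical to CG on the Kronecker form, which is exactly the point made above: the matrix format alone does not yet exploit the structure.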
 
Thu, 05 Dec 2024

14:00 - 15:00
Lecture Room 3

Solving (algebraic problems from) PDEs; a personal perspective

Andy Wathen
(Oxford University)
Abstract

We are now able to solve many partial differential equation problems that were well beyond reach when I started in academia. Some of this success is due to computer hardware but much is due to algorithmic advances. 

I will give a personal perspective of the development of computational methodology in this area over my career thus far. 

Thu, 28 Nov 2024

14:00 - 15:00
Lecture Room 3

Unleashing the Power of Deeper Layers in LLMs

Shiwei Liu
(Oxford University)
Abstract

Large Language Models (LLMs) have demonstrated impressive achievements. However, recent research has shown that their deeper layers often contribute minimally, with effectiveness diminishing as layer depth increases. This pattern presents significant opportunities for model compression. 

In the first part of this seminar, we will explore how this phenomenon can be harnessed to improve the efficiency of LLM compression and parameter-efficient fine-tuning. Despite these opportunities, the underutilization of deeper layers leads to inefficiencies, wasting resources that could be better used to enhance model performance. 

The second part of the talk will address the root cause of this ineffectiveness in deeper layers and propose a solution. We identify the issue as stemming from the prevalent use of Pre-Layer Normalization (Pre-LN) and introduce Mix-Layer Normalization (Mix-LN), which combines Pre-LN and Post-LN, as a new approach to mitigate this training deficiency.

Thu, 21 Nov 2024

14:00 - 15:00
Lecture Room 3

Tackling complexity in multiscale kinetic and mean-field equations

Lorenzo Pareschi
(Heriot Watt University)
Abstract

Kinetic and mean-field equations are central to understanding complex systems across fields such as classical physics, engineering, and the socio-economic sciences. Efficiently solving these equations remains a significant challenge due to their high dimensionality and the need to preserve key structural properties of the models. 

In this talk, we will focus on recent advancements in deterministic numerical methods, which provide an alternative to particle-based approaches (such as Monte Carlo or particle-in-cell methods) by avoiding stochastic fluctuations and offering higher accuracy. We will discuss strategies for designing these methods to reduce computational complexity while preserving fundamental physical properties and maintaining efficiency in stiff regimes. 
Special attention will be given to the role of these methods in addressing multi-scale problems in rarefied gas dynamics and plasma physics. Time permitting, we will also touch on emerging techniques for uncertainty quantification in these systems.

Thu, 14 Nov 2024

14:00 - 15:00
Lecture Room 3

Group discussion on the use of AI tools in research

Mike Giles
(Oxford University)
Abstract

AI tools like ChatGPT, Microsoft Copilot, GitHub Copilot, Claude and even older AI-enabled tools like Grammarly and MS Word, are becoming an everyday part of our research environment.  This last-minute opening up of a seminar slot due to the unfortunate illness of our intended speaker (who will hopefully re-schedule for next term) gives us an opportunity to discuss what this means for us as researchers; what are good helpful uses of AI, and are there uses of AI which we might view as inappropriate?  Please come ready to participate with examples of things which you have done yourselves with AI tools.

Thu, 07 Nov 2024

14:00 - 15:00
Lecture Room 3

Multilevel Monte Carlo methods

Mike Giles
(Oxford University)
Abstract

In this seminar I will begin by giving an overview of some problems in stochastic simulation and uncertainty quantification. I will then outline the Multilevel Monte Carlo method for situations in which accurate simulations are very costly, but it is possible to perform much cheaper, less accurate simulations. Inspired by the multigrid method, it is possible to use a combination of these to achieve the desired overall accuracy at a much lower cost.
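A minimal sketch of the MLMC telescoping estimator on an invented toy problem (Euler paths of geometric Brownian motion, coarse and fine paths coupled through shared Brownian increments, payoff S_T so the exact answer is known) might look as follows; this is a generic illustration, not code from the talk.

```python
import numpy as np

rng = np.random.default_rng(5)
r, sig, T, S0 = 0.05, 0.2, 1.0, 1.0

def level_estimator(l, N):
    """Sample mean of P_l - P_{l-1} using pairwise-coupled Euler paths of GBM."""
    nf = 2 ** l
    hf = T / nf
    dW = rng.standard_normal((N, nf)) * np.sqrt(hf)   # fine Brownian increments
    Sf = np.full(N, S0)
    for i in range(nf):
        Sf = Sf * (1 + r * hf + sig * dW[:, i])
    if l == 0:
        return np.mean(Sf)
    # Coarse path driven by the SAME Brownian motion: summing pairs of fine
    # increments gives the coarse increments, which makes Var(P_l - P_{l-1}) small.
    hc = T / (nf // 2)
    dWc = dW[:, 0::2] + dW[:, 1::2]
    Sc = np.full(N, S0)
    for i in range(nf // 2):
        Sc = Sc * (1 + r * hc + sig * dWc[:, i])
    return np.mean(Sf - Sc)

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
L = 5
est = sum(level_estimator(l, 20000) for l in range(L + 1))
exact = S0 * np.exp(r * T)   # E[S_T] for GBM is known in closed form
```

In a practical MLMC implementation the number of samples per level would be chosen adaptively from the estimated level variances; here a fixed count keeps the sketch short.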

Thu, 31 Oct 2024

14:00 - 15:00
Lecture Room 3

Theory to Enable Practical Quantum Advantage

Balint Koczor
(Oxford University)
Abstract

Quantum computers are becoming a reality and current generations of machines are already well beyond the 50-qubit frontier. However, hardware imperfections still overwhelm these devices and it is generally believed that fault-tolerant, error-corrected systems will not be within reach in the near term: a single logical qubit needs to be encoded into potentially thousands of physical qubits, which is prohibitive.

 

Due to limited resources, in the near term, hybrid quantum-classical protocols are the most promising candidates for achieving early quantum advantage, and these need to resort to quantum error mitigation techniques; I will explain the basic concepts behind both. These protocols have the potential to solve real-world problems---including optimisation or ground-state search---but they suffer from the large number of circuit repetitions required to extract information from the quantum state. I will finally identify the most likely areas where quantum computers may deliver a true advantage in the near term.

 

Bálint Koczor

Associate Professor in Quantum Information Theory

Mathematical Institute, University of Oxford


Thu, 24 Oct 2024

14:00 - 15:00
(This talk is hosted by Rutherford Appleton Laboratory)

Machine learning in solution of inverse problems: subjective perspective

Marta Betcke
(University College London)
Abstract

Following the 2012 breakthrough in deep learning for classification and vision problems, the last decade has seen a tremendous rise of interest in machine learning across the wider mathematical research community, from foundational research through field-specific analysis to applications.

As data is at the core of any inverse problem, it was a natural direction for the field to investigate how machine learning could aid various aspects of inversion. This has yielded numerous approaches, from somewhat ad hoc but very effective ones, such as learned unrolled methods, to provably convergent learned regularisers, with everything in between. In this talk I will review some of these developments through the lens of the research of our group.

 

Thu, 17 Oct 2024

14:00 - 15:00
Lecture Room 3

On the loss of orthogonality in low-synchronization variants of reorthogonalized block classical Gram-Schmidt

Kathryn Lund
(STFC Rutherford Appleton Laboratory)
Abstract
Interest in communication-avoiding orthogonalization schemes for high-performance computing has been growing recently.  We address open questions about the numerical stability of various block classical Gram-Schmidt variants that have been proposed in the past few years.  An abstract framework is employed, the flexibility of which allows for new rigorous bounds on the loss of orthogonality in these variants. We first analyse a generalization of (reorthogonalized) block classical Gram-Schmidt and show that a "strong" intrablock orthogonalization routine is only needed for the very first block in order to maintain orthogonality on the level of the unit roundoff. 
Using this variant, which has four synchronization points per block column, we remove the synchronization points one at a time and analyse how each alteration affects the stability of the resulting method. Our analysis shows that the variant requiring only one synchronization per block column cannot be guaranteed to be stable in practice, as stability begins to degrade with the first reduction of synchronization points.
Our analysis of block methods also provides new theoretical results for the single-column case. In particular, it is proven that DCGS2 from [Bielich, D. et al., Par. Comput. 112 (2022)] and CGS-2 from [Swirydowicz, K. et al., Num. Lin. Alg. Appl. 28 (2021)] are as stable as Householder QR.  
Numerical examples from the BlockStab toolbox are included throughout, to help compare variants and illustrate the effects of different choices of intraorthogonalization subroutines.
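A minimal NumPy sketch of the single-column CGS-2 variant discussed above (two classical Gram-Schmidt passes per column); this is a generic illustration, not the BlockStab code, and the test matrix is an arbitrary ill-conditioned example:

```python
import numpy as np

def cgs2(X):
    """Classical Gram-Schmidt with reorthogonalization (CGS-2): each column
    is orthogonalized against Q twice, which restores orthogonality to the
    level of the unit roundoff."""
    m, n = X.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = X[:, j].copy()
        for _ in range(2):              # two projection passes
            c = Q[:, :j].T @ v
            v -= Q[:, :j] @ c
            R[:j, j] += c
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20)) * 2.0 ** -np.arange(20)  # ill-conditioned columns
Q, R = cgs2(X)
print(np.linalg.norm(Q.T @ Q - np.eye(20)))   # loss of orthogonality, near roundoff
```

A single-pass classical Gram-Schmidt on the same matrix would show a loss of orthogonality growing with the condition number; the second pass is what the stability results above quantify.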


 

Fri, 20 Sep 2024

14:00 - 15:00
TCC VC

Finite element approximation of eigenvalue problems

Prof Daniele Boffi
(KAUST - Computer, Electrical and Mathematical Sciences and Engineering - CEMSE)
Abstract

In this informal talk I will review some theoretical and practical aspects related to the finite element approximation of eigenvalue problems arising from PDEs.
The review will cover elliptic eigenvalue problems and eigenvalue problems in mixed form, with particular emphasis on the Maxwell eigenvalue problem.
Other topics can be discussed depending on the interests of the audience, including adaptive schemes, approximation of parametric problems, reduced order models.
 

Thu, 13 Jun 2024

14:00 - 15:00
Lecture Room 3

A New Two-Dimensional Model-Based Subspace Method for Large-Scale Unconstrained Derivative-Free Optimization: 2D-MoSub

Pengcheng Xie
(Chinese Academy of Sciences)
Abstract

This seminar will introduce 2D-MoSub, a derivative-free optimization method based on the subspace method and quadratic models, specifically tackling large-scale derivative-free problems. 2D-MoSub combines 2-dimensional quadratic interpolation models and trust-region techniques to update the points and explore the 2-dimensional subspace iteratively. Its framework includes constructing the interpolation set, building the quadratic interpolation model, performing trust-region trial steps, and updating the trust-region radius and subspace. Computational details and theoretical properties will be discussed. Numerical results demonstrate the advantage of 2D-MoSub.
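The model-building ingredient can be illustrated in isolation: a fully determined quadratic model in two variables has six coefficients, so it is fixed by six interpolation points. The objective and points below are invented for the demo and are not taken from 2D-MoSub:

```python
import numpy as np

def quad_model_2d(pts, fvals):
    """Solve for the 6 coefficients of a fully determined quadratic model
    q(x, y) = c0 + c1 x + c2 y + c3 x^2 + c4 x y + c5 y^2."""
    A = np.array([[1.0, x, y, x * x, x * y, y * y] for x, y in pts])
    return np.linalg.solve(A, fvals)

def q_eval(c, x, y):
    return c @ np.array([1.0, x, y, x * x, x * y, y * y])

f = lambda x, y: (x - 1.0) ** 2 + 4.0 * (y + 0.5) ** 2    # hypothetical objective
pts = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1)]  # 6 interpolation points
c = quad_model_2d(pts, np.array([f(x, y) for x, y in pts]))

print(q_eval(c, 0.5, 0.5), f(0.5, 0.5))   # the model interpolates a quadratic exactly
```

In a trust-region method such a model is then minimized over a ball around the current iterate; since the toy objective here is itself quadratic, the interpolation model reproduces it exactly.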

 

Short Bio:
Pengcheng Xie, PhD (Chinese Academy of Sciences), is joining Lawrence Berkeley National Laboratory as a postdoctoral scholar specializing in mathematical optimization and numerical analysis. He has developed optimization methods, including 2D-MoSub and SUSD-TR. Pengcheng has published in major journals and presented at ISMP 2024 (upcoming), ICIAM 2023, and CSIAM 2022. He received the Hua Loo-keng scholarship in 2019 and the CAS-AMSS Presidential scholarship in 2023.
 

Thu, 06 Jun 2024

14:00 - 15:00
Lecture Room 3

Structure-preserving hybrid finite element methods

Ari Stern
(Washington University in St. Louis, USA)
Abstract

The classical finite element method uses piecewise-polynomial function spaces satisfying continuity and boundary conditions. Hybrid finite element methods, by contrast, drop these continuity and boundary conditions from the function spaces and instead enforce them weakly using Lagrange multipliers. The hybrid approach has several numerical and implementational advantages, which have been studied over the last few decades.

 

In this talk, we show how the hybrid perspective has yielded new insights—and new methods—in structure-preserving numerical PDEs. These include multisymplectic methods for Hamiltonian PDEs, charge-conserving methods for the Maxwell and Yang-Mills equations, and hybrid methods in finite element exterior calculus.

Thu, 30 May 2024

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

This seminar has been cancelled

Marta Betcke
(University College London)
Abstract

Joint work of Marta Betcke and Bolin Pan

In photoacoustic tomography (PAT) with a flat sensor, we routinely encounter two types of limited data. The first is due to using a finite sensor and is especially perceptible if the region of interest is large relative to the sensor or located farther away from it. In this talk we focus on the second type, caused by a varying sensitivity of the sensor to the incoming wavefront direction, which can be modelled as binary, i.e. by a cone of sensitivity. Such visibility conditions result, in the Fourier domain, in a restriction of the data to a bowtie, akin to the one corresponding to the range of the forward operator but further narrowed according to the angle of sensitivity. 

We show how we can separate the visible and invisible wavefront directions in PAT image and data using a directional frame like Curvelets, and how such decomposition allows for decoupling of the reconstruction involving application of expensive forward/adjoint solvers from the training problem. We present fast and stable approximate Fourier domain forward and adjoint operators for reconstruction of the visible coefficients for such limited angle problem and a tailored UNet matching both the multi-scale Curvelet decomposition and the partition into the visible/invisible directions for learning the invisible coefficients from a training set of similar data.

Thu, 23 May 2024

14:00 - 15:00
Lecture Room 3

The bilevel optimization renaissance through machine learning: lessons and challenges

Alain Zemkoho
(University of Southampton)
Abstract

Bilevel optimization has been part of machine learning for over four decades now, although perhaps not always in an obvious way. The interconnection between the two topics started appearing more clearly in publications about 20 years ago, and in the last 10 years the number of machine learning applications of bilevel optimization has exploded. This rise of bilevel optimization in machine learning has been highly positive, as it has come with many innovations in the theoretical and numerical perspectives on understanding and solving the problem, especially with the rebirth of the implicit function approach, which seemed to have been abandoned at some point.
Overall, machine learning has set the bar very high for the whole field of bilevel optimization with regard to the development of numerical methods and the associated convergence analysis theory, as well as the introduction of efficient tools to speed up components such as derivative calculations, among other things. However, it remains unclear how the techniques from the machine-learning-based bilevel optimization literature can be extended to other applications of bilevel programming. 
For instance, many machine learning loss functions and the special problem structures enable the fulfillment of some qualification conditions that will fail for multiple other applications of bilevel optimization. In this talk, we will provide an overview of machine learning applications of bilevel optimization while giving a flavour of corresponding solution algorithms and their limitations. 
Furthermore, we will discuss possible paths for algorithms that can tackle more complicated machine learning applications of bilevel optimization, while also highlighting lessons that can be learned for more general bilevel programs.

Thu, 16 May 2024

14:00 - 15:00
Lecture Room 3

Multilevel Monte Carlo methods for the approximation of failure probability regions

Matteo Croci
(Basque Center for Applied Mathematics)
Abstract

In this talk, we consider the problem of approximating failure regions. More specifically, given a costly computational model with random parameters and a failure condition, our objective is to determine the parameter region in which the failure condition is likely not to be satisfied. In mathematical terms, this problem can be cast as approximating the level set of a probability density function. We solve this problem by dividing it into two subproblems: 1) The design of an efficient Monte Carlo strategy for probability estimation. 2) The construction of an efficient algorithm for level-set approximation. Following this structure, this talk is comprised of two parts:

In the first part, we present a new multi-output multilevel best linear unbiased estimator (MLBLUE) for approximating expectations. The advantage of this estimator is in its convenience and optimality: Given any set of computational models with known covariance structure, MLBLUE automatically constructs a provenly optimal estimator for any (finite) number of quantities of interest. Nevertheless, the optimality of MLBLUE is tied to its optimal set-up, which requires the solution of a nonlinear optimization problem. We show how the latter can be reformulated as a semi-definite program and thus be solved reliably and efficiently.

In the second part, we construct an adaptive level-set approximation algorithm for smooth functions corrupted by noise in $\mathbb{R}^d$. This algorithm only requires point value data and is thus compatible with Monte Carlo estimators. The algorithm is comprised of a criterion for level-set adaptivity combined with an a posteriori error estimator. Under suitable assumptions, we can prove that our algorithm will correctly capture the target level set at the same cost complexity of uniformly approximating a $(d-1)$-dimensional function.
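A stripped-down sketch of the first ingredient, plain (single-level) Monte Carlo used to locate a level set of a failure probability; the limit-state function below is a toy invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)

def failure_prob(theta, n=4000):
    """Plain Monte Carlo estimate of p(theta) = P[g(theta, xi) < 0]
    for a toy limit-state function g (invented for the demo)."""
    xi = rng.standard_normal(n)       # random model input
    g = 2.0 - theta + 0.5 * xi        # toy limit state: failure when g < 0
    return np.mean(g < 0.0)

# locate the level set {theta : p(theta) = 0.5} on a uniform parameter grid
thetas = np.linspace(0.0, 4.0, 81)
p = np.array([failure_prob(t) for t in thetas])
crossing = thetas[np.argmin(np.abs(p - 0.5))]
print(crossing)                        # true crossing is at theta = 2
```

Each grid point costs a full Monte Carlo run of the model, which is exactly why the talk replaces this brute-force layer with a multilevel estimator (MLBLUE) and an adaptive, rather than uniform, level-set approximation.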

Thu, 09 May 2024

14:00 - 15:00
Lecture Room 4

Fast optimistic methods for monotone equations and convex optimization problems

Radu Bot
(University of Vienna)
Further Information

 

Please note; the seminar is taking place in Lecture Room 4 on this occasion 

Abstract

In this talk, we discuss continuous-in-time dynamics for the problem of approaching the set of zeros of a single-valued monotone and continuous operator $V$. Such problems are motivated by minimax convex-concave problems and, in particular, by convex optimization problems with linear constraints. The central role is played by a second-order dynamical system that combines a vanishing damping term with the time derivative of $V$ along the trajectory, which can be seen as an analogue of the Hessian-driven damping in case the operator originates from a potential. We show that these methods exhibit fast convergence rates for $\|V(z(t))\|$ as $t \to +\infty$, where $z(\cdot)$ denotes the generated trajectory, and for the restricted gap function, and that $z(\cdot)$ converges to a zero of the operator $V$. For the corresponding implicit and explicit discrete-time models with Nesterov's momentum, we prove that they share the asymptotic features of the continuous dynamics.

Extensions to variational inequalities and fixed-point problems are also addressed. The theoretical results are illustrated by numerical experiments on bilinear games and the training of generative adversarial networks.
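The optimistic idea can be sketched on the bilinear game min_x max_y xy, whose operator V(x, y) = (y, -x) is monotone with a unique zero at the origin. The update below is the standard optimistic-gradient discretization, shown as a simple stand-in for the schemes in the talk; the extra V(z_{k-1}) term is a discrete analogue of the time derivative of V along the trajectory. Plain forward steps z - eta*V(z) spiral outwards on this problem:

```python
import numpy as np

def V(z):
    """Monotone operator of the bilinear game min_x max_y x*y."""
    x, y = z
    return np.array([y, -x])

eta = 0.2
z_prev = z = np.array([1.0, 1.0])
for _ in range(1000):
    # optimistic update: z_{k+1} = z_k - 2*eta*V(z_k) + eta*V(z_{k-1})
    z_next = z - 2.0 * eta * V(z) + eta * V(z_prev)
    z_prev, z = z, z_next

print(np.linalg.norm(V(z)))   # tends to 0, the unique zero of V
```
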

Thu, 02 May 2024

14:00 - 15:00
Lecture Room 3

Mathematics: key enabling technology for scientific machine learning

Wil Schilders
(TU Eindhoven)
Abstract

Artificial Intelligence (AI) will strongly determine our future prosperity and well-being. Due to its generic nature, AI will have an impact on all sciences and business sectors, our private lives and society as a whole. AI is pre-eminently a multidisciplinary technology that connects scientists from a wide variety of research areas, from behavioural science and ethics to mathematics and computer science.

Without downplaying the importance of that variety, it is apparent that mathematics can and should play an active role. All the more so as, alongside the successes of AI, also critical voices are increasingly heard. As Robert Dijkgraaf (former director of the Princeton Institute of Advanced Studies) observed in May 2019: ”Artificial intelligence is in its adolescent phase, characterised by trial and error, self-aggrandisement, credulity and lack of systematic understanding.” Mathematics can contribute to the much-needed systematic understanding of AI, for example, greatly improving reliability and robustness of AI algorithms, understanding the operation and sensitivity of networks, reducing the need for abundant data sets, or incorporating physical properties into neural networks needed for super-fast and accurate simulations in the context of digital twinning.

Mathematicians absolutely recognize the potential of artificial intelligence, machine learning and (deep) neural networks for future developments in science, technology and industry. At the same time, a sound mathematical treatment is essential for all aspects of artificial intelligence, including imaging, speech recognition, analysis of texts or autonomous driving, implying it is essential to involve mathematicians in all these areas. In this presentation, we highlight the role of mathematics as a key enabling technology within the emerging field of scientific machine learning. Or, as I always say: ”Real intelligence is needed to make artificial intelligence work.”

 

Thu, 25 Apr 2024

14:00 - 15:00
Lecture Room 3

ESPIRA: Estimation of Signal Parameters via Iterative Rational Approximation

Nadiia Derevianko
(University of Göttingen)
Abstract

We introduce a new method - ESPIRA (Estimation of Signal Parameters via Iterative Rational Approximation) [DP22, DPP21] - for the recovery of complex exponential sums
$$
f(t)=\sum_{j=1}^{M} \gamma_j \mathrm{e}^{\lambda_j t},
$$
that are determined by a finite number of parameters: the order $M$, weights $\gamma_j \in \mathbb{C} \setminus \{0\}$ and nodes  $\mathrm{e}^{\lambda_j} \in \mathbb{C}$ for $j=1,...,M$.  Our new recovery procedure is based on the observation that Fourier coefficients (or DFT coefficients) of exponential sums have a special rational structure.  To  reconstruct this structure in a stable way we use the AAA algorithm  proposed by Nakatsukasa et al.   We show that ESPIRA can be interpreted as a matrix pencil method applied to Loewner matrices. 

During the talk we will demonstrate that ESPIRA outperforms Prony-like methods such as ESPRIT and MPM for noisy data and for signal approximation by short exponential sums.  
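For comparison, a classical Prony-like baseline (a Hankel matrix pencil, related to the MPM mentioned above, not ESPIRA itself) can be sketched with NumPy; the parameters are invented for the demo:

```python
import numpy as np

# True parameters of a short exponential sum f(k) = sum_j gamma_j * z_j^k
nodes = np.array([0.95 * np.exp(0.8j), 0.8 * np.exp(-0.4j), 0.99])
gamma = np.array([2.0, -1.0, 0.5 + 0.5j])
N = 40
f = np.array([(gamma * nodes ** k).sum() for k in range(N)])

# Hankel matrix pencil: the nonzero eigenvalues of pinv(H0) @ H1 are the nodes
L = N // 2
H0 = np.array([[f[i + j] for j in range(N - L)] for i in range(L)])
H1 = np.array([[f[i + j + 1] for j in range(N - L)] for i in range(L)])
evals = np.linalg.eigvals(np.linalg.pinv(H0, rcond=1e-8) @ H1)
rec = np.sort_complex(evals[np.abs(evals) > 0.1])  # drop near-zero spurious ones
print(rec)                                          # recovered nodes
```

Once the nodes are known, the weights follow from a Vandermonde least-squares fit. ESPIRA replaces this Hankel pencil on time samples by rational approximation (AAA) of the DFT coefficients, which is what gives it its robustness to noise.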

 

Bibliography

[DP22] N. Derevianko, G. Plonka. Exact reconstruction of extended exponential sums using rational approximation of their Fourier coefficients. Anal. Appl., 20(3), 2022, 543-577.

[DPP21] N. Derevianko, G. Plonka, M. Petz. From ESPRIT to ESPIRA: Estimation of signal parameters by iterative rational approximation. IMA J. Numer. Anal., 43(2), 2023, 789-827.

Y. Nakatsukasa, O. Sète, L.N. Trefethen. The AAA algorithm for rational approximation. SIAM J. Sci. Comput., 40(3), 2018, A1494-A1522.

Mon, 08 Apr 2024

11:00 - 12:00
Lecture Room 3

Heavy-Tailed Large Deviations and Sharp Characterization of Global Dynamics of SGDs in Deep Learning

Chang-Han Rhee
(Northwestern University, USA)
Abstract

While the typical behaviors of stochastic systems are often deceptively oblivious to the tail distributions of the underlying uncertainties, the ways rare events arise are vastly different depending on whether the underlying tail distributions are light-tailed or heavy-tailed. Roughly speaking, in light-tailed settings, a system-wide rare event arises because everything goes wrong a little bit as if the entire system has conspired up to provoke the rare event (conspiracy principle), whereas, in heavy-tailed settings, a system-wide rare event arises because a small number of components fail catastrophically (catastrophe principle). In the first part of this talk, I will introduce the recent developments in the theory of large deviations for heavy-tailed stochastic processes at the sample path level and rigorously characterize the catastrophe principle for such processes. 

The empirical success of deep learning is often attributed to the mysterious ability of stochastic gradient descents (SGDs) to avoid sharp local minima in the loss landscape, as sharp minima are believed to lead to poor generalization. To unravel this mystery and potentially further enhance such capability of SGDs, it is imperative to go beyond the traditional local convergence analysis and obtain a comprehensive understanding of SGDs' global dynamics within complex non-convex loss landscapes. In the second part of this talk, I will characterize the global dynamics of SGDs building on the heavy-tailed large deviations and local stability framework developed in the first part. This leads to the heavy-tailed counterparts of the classical Freidlin-Wentzell and Eyring-Kramers theories. Moreover, we reveal a fascinating phenomenon in deep learning: by injecting and then truncating heavy-tailed noises during the training phase, SGD can almost completely avoid sharp minima and hence achieve better generalization performance for the test data.  
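Only the inject-then-truncate mechanism lends itself to a compact sketch (a toy on a quadratic loss, not the paper's construction): truncation makes every update bounded even though the injected noise has infinite variance:

```python
import numpy as np

rng = np.random.default_rng(4)

def clipped_heavy_noise(n, alpha=1.2, b=10.0):
    """Symmetric Pareto-tailed noise (infinite variance for alpha < 2),
    truncated at level b: the inject-then-truncate mechanism."""
    sign = rng.choice([-1.0, 1.0], size=n)
    magnitude = 1.0 + rng.pareto(alpha, size=n)
    return np.clip(sign * magnitude, -b, b)

lr, steps = 0.1, 2000
noise = clipped_heavy_noise(steps)
xs = np.empty(steps)
x = 0.0
for k in range(steps):
    x -= lr * (x + noise[k])     # SGD on f(x) = x^2 / 2 with injected noise
    xs[k] = x
print(np.max(np.abs(xs)))        # stays below b: |x_new| <= (1 - lr)|x| + lr*b
```

The escape-from-sharp-minima behaviour itself is a statistical statement about non-convex landscapes and is not reproduced by this toy; the sketch only shows the bounded-update property that the truncation provides.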

 

This talk is based on the joint work with Mihail Bazhba, Jose Blanchet, Bohan Chen, Sewoong Oh, Zhe Su, Xingyu Wang, and Bert Zwart.

Thu, 07 Mar 2024

14:00 - 15:00
Lecture Room 3

Stabilized Lagrange-Galerkin schemes for viscous and viscoelastic flow problems

Hirofumi Notsu
(Kanazawa University)
Abstract

Many researchers are developing stable and accurate numerical methods for flow problems, which roughly belong to upwind methods or characteristics(-based) methods. 
The Lagrange-Galerkin method proposed and analyzed in, e.g., [O. Pironneau, NM, 1982] and [E. Süli, NM, 1988] is the finite element method combined with the idea of the method of characteristics; hence, it belongs to the characteristics(-based) methods. The advantages are the CFL-free robustness for convection-dominated problems and the symmetry of the resulting coefficient matrix. In this talk, we introduce stabilized Lagrange-Galerkin schemes of second order in time for viscous and viscoelastic flow problems, which employ the cheapest conforming P1-element with the help of pressure-stabilization [F. Brezzi and J. Pitkäranta, Vieweg+Teubner, 1984] for all the unknown functions, i.e., velocity, pressure, and conformation tensor, reducing the number of DOFs. 
Focusing on the recent developments of discretizations of the (non-conservative and conservative) material derivatives and the upper-convected time derivative, we present theoretical and numerical results.
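The characteristics idea behind such schemes, minus the finite element machinery, is a semi-Lagrangian update: trace each grid node back along the characteristic and interpolate the old solution there. The 1D advection toy below (linear interpolation in place of a Galerkin projection) runs at Courant number 10, illustrating the CFL-free robustness:

```python
import numpy as np

n, a, dt = 200, 1.0, 0.05
x = np.linspace(0.0, 1.0, n, endpoint=False)     # periodic grid, dx = 1/200
u0 = np.exp(-100.0 * (x - 0.5) ** 2)             # initial profile
u = u0.copy()

# Courant number a*dt/dx = 10: far beyond the explicit CFL limit, yet the
# characteristics-based update below remains stable.
for _ in range(20):                               # advance to t = 1, one full period
    foot = (x - a * dt) % 1.0                     # foot of the characteristic
    u = np.interp(foot, x, u, period=1.0)         # interpolate the old solution there

print(np.max(np.abs(u - u0)))                     # back to the start after one period
```

In the Lagrange-Galerkin method the interpolation step is replaced by an L2 projection onto the finite element space, which is what produces the symmetric coefficient matrix mentioned above.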

Thu, 29 Feb 2024

14:00 - 15:00
Lecture Room 3

On the use of "conventional" unconstrained minimization solvers for training regression problems in scientific machine learning

Stefano Zampini
(King Abdullah University of Science and Technology (KAUST))
Abstract

In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for the analysis, by means of deep-learning techniques, of data produced by computational science and engineering applications.  At the core of these methods is the supervised training algorithm to learn the neural network realization, a highly non-convex optimization problem that is usually solved using stochastic gradient methods.

However, distinct from deep-learning practice, scientific machine-learning training problems feature a much larger volume of smooth data and better characterizations of the empirical risk functions, which make them suited for conventional solvers for unconstrained optimization.

In this talk, we empirically demonstrate the superior efficacy of a trust region method based on the Gauss-Newton approximation of the Hessian in improving the generalization errors arising from regression tasks when learning surrogate models for a wide range of scientific machine-learning techniques and test cases. All the conventional solvers tested, including L-BFGS and inexact Newton with line-search, compare favorably, either in terms of cost or accuracy, with the adaptive first-order methods used to validate the surrogate models.
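A minimal sketch of the Gauss-Newton approximation on a toy regression (a two-parameter exponential fit, invented for the demo, not one of the talk's test cases): the Hessian of 0.5*||r(w)||^2 is replaced by J^T J:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 50)
w_true = np.array([2.0, -1.5])
y = w_true[0] * np.exp(w_true[1] * t)             # noiseless training data

w = np.array([1.0, 0.0])                          # initial guess
for _ in range(20):
    pred = w[0] * np.exp(w[1] * t)                # model: y ~ w0 * exp(w1 * t)
    r = pred - y                                  # residuals
    J = np.column_stack([np.exp(w[1] * t),        # d pred / d w0
                         w[0] * t * np.exp(w[1] * t)])  # d pred / d w1
    w = w - np.linalg.solve(J.T @ J, J.T @ r)     # Gauss-Newton step
print(w)                                          # converges to w_true
```

On this smooth, zero-residual problem the Gauss-Newton iteration converges in a handful of steps; a trust-region safeguard, as in the talk, controls the step when the model is a poor fit.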

Thu, 22 Feb 2024

14:00 - 15:00
Lecture Room 3

Hierarchical adaptive low-rank format with applications to discretized PDEs

Leonardo Robol
(University of Pisa)
Abstract

A novel framework for hierarchical low-rank matrices is proposed that combines an adaptive hierarchical partitioning of the matrix with low-rank approximation. One typical application is the approximation of discretized functions on rectangular domains; the flexibility of the format makes it possible to deal with functions that feature singularities in small, localized regions. To deal with time evolution and relocation of singularities, the partitioning can be dynamically adjusted based on features of the underlying data. Our format can be leveraged to efficiently solve linear systems with Kronecker product structure, as they arise from discretized partial differential equations (PDEs). For this purpose, these linear systems are rephrased as linear matrix equations and a recursive solver is derived from low-rank updates of such equations. 
We demonstrate the effectiveness of our framework for stationary and time-dependent, linear and nonlinear PDEs, including the Burgers' and Allen–Cahn equations.

This is a joint work with Daniel Kressner and Stefano Massei.
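The rephrasing of a Kronecker-structured system as a matrix equation can be sketched in the symmetric case: (I ⊗ A + B ⊗ I) vec(X) = vec(C) with symmetric A, B corresponds to the Sylvester equation A X + X B = C, which can be solved by diagonalization instead of forming the n²-by-n² matrix. The operators below are a toy 1D Laplacian stencil:

```python
import numpy as np

n = 60
I, e = np.eye(n), np.ones(n - 1)
A = 2.0 * I - np.diag(e, 1) - np.diag(e, -1)   # 1D Laplacian stencil
B = A.copy()                                    # same operator in the second direction
C = np.random.default_rng(2).standard_normal((n, n))

lam, U = np.linalg.eigh(A)                      # A = U diag(lam) U^T
mu, V = np.linalg.eigh(B)                       # B = V diag(mu) V^T
Y = (U.T @ C @ V) / (lam[:, None] + mu[None, :])
X = U @ Y @ V.T                                 # solves A X + X B = C

print(np.linalg.norm(A @ X + X @ B - C))        # residual of the matrix equation
```

The hierarchical low-rank solver in the talk exploits the same matrix-equation viewpoint but replaces the dense diagonalization by recursive low-rank updates.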

Thu, 15 Feb 2024
14:00

Algorithmic Insurance

Agni Orfanoudaki
(Oxford University Saïd Business School)
Abstract

As machine learning algorithms get integrated into the decision-making process of companies and organizations, insurance products are being developed to protect their providers from liability risk. Algorithmic liability differs from human liability since it is based on data-driven models compared to multiple heterogeneous decision-makers and its performance is known a priori for a given set of data. Traditional actuarial tools for human liability do not consider these properties, primarily focusing on the distribution of historical claims. We propose, for the first time, a quantitative framework to estimate the risk exposure of insurance contracts for machine-driven liability, introducing the concept of algorithmic insurance. Our work provides ML model developers and insurance providers with a comprehensive risk evaluation approach for this new class of products. Thus, we set the foundations of a niche area of research at the intersection of the literature in operations, risk management, and actuarial science. Specifically, we present an optimization formulation to estimate the risk exposure of a binary classification model given a pre-defined range of premiums. Our approach outlines how properties of the model, such as discrimination performance, interpretability, and generalizability, can influence the insurance contract evaluation. To showcase a practical implementation of the proposed framework, we present a case study of medical malpractice in the context of breast cancer detection. Our analysis focuses on measuring the effect of the model parameters on the expected financial loss and identifying the aspects of algorithmic performance that predominantly affect the risk of the contract.

Paper Reference: Bertsimas, D. and Orfanoudaki, A., 2021. Pricing algorithmic insurance. arXiv preprint arXiv:2106.00839.

Paper link: https://arxiv.org/pdf/2106.00839.pdf

Thu, 08 Feb 2024
14:00
Lecture Room 3

From Chebfun3 to RTSMS: A journey into deterministic and randomized Tucker decompositions

Behnam Hashemi
(Leicester University)
Abstract
The Tucker decomposition is a family of representations that break up a given d-dimensional tensor into the multilinear product of a core tensor and a factor matrix along each of the d-modes. It is a useful tool in extracting meaningful insights from complex datasets and has found applications in various fields, including scientific computing, signal processing and machine learning. 
 In this talk we will first focus on the continuous framework and revisit how Tucker decomposition forms the foundation of Chebfun3 for numerical computing with 3D functions and the deterministic algorithm behind Chebfun3. The key insight is that separation of variables achieved via low-rank Tucker decomposition simplifies and speeds up lots of subsequent computations.
 We will then switch to the discrete framework and discuss a new algorithm called RTSMS (randomized Tucker with single-mode sketching). The single-mode sketching aspect of RTSMS allows utilizing simple sketch matrices which are substantially smaller than those in alternative methods, leading to considerable performance gains. Within its least-squares strategy, RTSMS incorporates leverage scores for efficiency with Tikhonov regularization and iterative refinement for stability. RTSMS is demonstrated to be competitive with existing methods, sometimes outperforming them by a large margin.
We illustrate the benefits of Tucker decomposition via MATLAB demos solving problems from global optimization to video compression. RTSMS is joint work with Yuji Nakatsukasa.
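A compact sketch of the deterministic side, the truncated higher-order SVD (HOSVD), which underlies Tucker-type compression (this is the generic algorithm, not the Chebfun3 or RTSMS code):

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD: one factor per mode from the SVD of the
    mode unfolding, then the core tensor by multilinear contraction."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# build a tensor with exact multilinear rank (2, 3, 2)
rng = np.random.default_rng(3)
G = rng.standard_normal((2, 3, 2))
Us = [rng.standard_normal((10, r)) for r in (2, 3, 2)]
T = np.einsum('abc,ia,jb,kc->ijk', G, *Us)

core, F = hosvd(T, (2, 3, 2))
T_hat = np.einsum('abc,ia,jb,kc->ijk', core, *F)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))  # exact up to roundoff
```

RTSMS replaces the full SVD of each unfolding by a randomized sketch of a single mode at a time, which is where the performance gains discussed above come from.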
Thu, 01 Feb 2024
14:00
Lecture Room 3

A strongly polynomial algorithm for the minimum-cost generalized flow problem

Laszlo Vegh
(LSE)
Abstract

We give a strongly polynomial algorithm for minimum cost generalized flow, and as a consequence, for all linear programs with at most two nonzero entries per row, or at most two nonzero entries per column. While strongly polynomial algorithms for the primal and dual feasibility problems have been known for a long time, various combinatorial approaches used for those problems did not seem to carry over to the minimum-cost variant.

Our approach is to show that the ‘subspace layered least squares’ interior point method, from earlier joint work with Allamigeon, Dadush, Loho and Natura, requires only a strongly polynomial number of iterations for minimum cost generalized flow. We achieve this by bounding the straight line complexity, introduced in the same paper. The talk will give an overview of the interior point method as well as the combinatorial straight-line complexity analysis for the particular setting. This is joint work with Daniel Dadush, Zhuan Khye Koh, Bento Natura, and Neil Olver.

Thu, 25 Jan 2024

14:00 - 15:00
Lecture Room 3

Stress and flux-based finite element methods

Fleurianne Bertrand
(Chemnitz University of Technology)
Abstract

This talk explores recent advancements in stress and flux-based finite element methods. It focuses on addressing the limitations of traditional finite elements, in order to describe complex material behavior and engineer new metamaterials.

Stress and flux-based finite element methods are particularly useful in error estimation, laying the groundwork for adaptive refinement strategies. This concept builds upon the hypercircle theorem [1], which states that in a specific energy space, both the exact solution and any admissible stress field lie on a hypercircle. However, the construction of finite element spaces that satisfy admissible states for complex material behavior is not straightforward. It often requires a relaxation of specific properties, especially when dealing with non-symmetric stress tensors [2] or hyperelastic materials.

Alternatively, methods that directly approximate stresses can be employed, offering high accuracy of the stress fields and adherence to physical conservation laws. However, when approximating eigenvalues, this significant benefit for the solution's accuracy implies that the solution operator cannot be compact. To address this, the solution operator must be confined to a subset of the solution that excludes the stresses. Yet, due to compatibility conditions, the trial space for the other solution components typically does not yield the desired accuracy. The second part of this talk will therefore explore the Least-Squares method as a remedy to these challenges [3].

To conclude this talk, we will emphasize the integration of those methods within global solution strategies, with a particular focus on the challenges regarding model order reduction methods [4].

 

[1] W. Prager, J. Synge. Approximations in elasticity based on the concept of function space.

Quarterly of Applied Mathematics 5(3), 1947.

[2] FB, K. Bernhard, M. Moldenhauer, G. Starke. Weakly symmetric stress equilibration and a posteriori error estimation for linear elasticity, Numerical Methods for Partial Differential Equations 37(4), 2021.

[3] FB, D. Boffi. First order least-squares formulations for eigenvalue problems, IMA Journal of Numerical Analysis 42(2), 2023.

[4] FB, D. Boffi, A. Halim. A reduced order model for the finite element approximation of eigenvalue problems, Computer Methods in Applied Mechanics and Engineering 404, 2023.

 

Thu, 18 Jan 2024

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

A preconditioner with low-rank corrections based on the Bregman divergence

Andreas Bock
(Danish Technical University)
Abstract

We present a general framework for preconditioning Hermitian positive definite linear systems based on the Bregman log determinant divergence. This divergence provides a measure of discrepancy between a preconditioner and a target matrix, giving rise to the study of preconditioners given as the sum of a Hermitian positive definite matrix plus a low-rank correction. We describe under which conditions the preconditioner minimises the $\ell^2$ condition number of the preconditioned matrix, and obtain the low-rank correction via a truncated singular value decomposition (TSVD). Numerical results from variational data assimilation (4D-VAR) support our theoretical results.

 

We also apply the framework to approximate factorisation preconditioners with a low-rank correction (e.g. incomplete Cholesky plus low-rank). In such cases, the approximate factorisation error is typically indefinite, and the low-rank correction described by the Bregman divergence is generally different from one obtained as a TSVD. We compare these two truncations in terms of convergence of the preconditioned conjugate gradient method (PCG), and show numerous examples where PCG converges to a small tolerance using the proposed preconditioner, whereas PCG using a TSVD-based preconditioner fails. We also consider matrices arising from interior point methods for linear programming that do not admit such an incomplete factorisation by default, and present a robust incomplete Cholesky preconditioner based on the proposed methodology.
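A small numerical sketch of a TSVD-based low-rank correction in the Hermitian positive definite case (the matrices and numbers are invented; here the discrepancy E = A - A0 has exact rank r, so the truncation captures it exactly):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 200, 5

A0 = np.diag(np.linspace(1.0, 2.0, n))        # cheap HPD approximation of A
W = rng.standard_normal((n, r))
A = A0 + 50.0 * W @ W.T                        # target: A0 plus a rank-r term

# TSVD-based correction: keep only the dominant part of E = A - A0
U, s, Vt = np.linalg.svd(A - A0)
P = A0 + (U[:, :r] * s[:r]) @ Vt[:r]           # preconditioner A0 + low-rank

print(np.linalg.cond(A))                       # ill conditioned
print(np.linalg.cond(np.linalg.solve(P, A)))   # preconditioned system: ~1
```

In the indefinite factorisation-error setting described above, the Bregman-optimal correction differs from this plain TSVD truncation, which is precisely the comparison the talk makes in terms of PCG convergence.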

The talk is based on papers with Martin S. Andersen (DTU).

 

Thu, 30 Nov 2023
14:00
Lecture Room 3

Multilevel adaptivity for stochastic finite element methods

Alex Bespalov
(Birmingham University)
Abstract

This talk concerns the design and analysis of adaptive FEM-based solution strategies for partial differential equations (PDEs) with uncertain or parameter-dependent inputs. We present two conceptually different strategies: one is projection-based (stochastic Galerkin FEM) and the other is sampling-based (stochastic collocation FEM). These strategies have emerged and become popular as effective alternatives to Monte-Carlo sampling in the context of (forward) uncertainty quantification. Both stochastic Galerkin and stochastic collocation approximations are typically represented as finite (sparse) expansions in terms of a parametric polynomial basis with spatial coefficients residing in finite element spaces. The focus of the talk is on multilevel approaches where different spatial coefficients may reside in different finite element spaces and, therefore, the underlying spatial approximations are allowed to be refined independently from each other.

 

We start with a more familiar setting of projection-based methods, where exploiting the Galerkin orthogonality property and polynomial approximations in terms of an orthonormal basis facilitates the design and analysis of adaptive algorithms. We discuss a posteriori error estimation as well as the convergence and rate optimality properties of the generated adaptive multilevel Galerkin approximations for PDE problems with affine-parametric coefficients. We then show how these ideas of error estimation and multilevel adaptivity can be applied in a non-Galerkin setting of stochastic collocation FEM, in particular, for PDE problems with non-affine parameterization of random inputs and for problems with parameter-dependent local spatial features.

 

The talk is based on a series of joint papers with Dirk Praetorius (TU Vienna), Leonardo Rocchi (Birmingham), Michele Ruggeri (University of Strathclyde, Glasgow), David Silvester (Manchester), and Feng Xu (Manchester).