Forthcoming events in this series


Thu, 07 Mar 2024

14:00 - 15:00
Lecture Room 3

Stabilized Lagrange-Galerkin schemes for viscous and viscoelastic flow problems

Hirofumi Notsu
(Kanazawa University)
Abstract

Many researchers are developing stable and accurate numerical methods for flow problems, which can be broadly classified into upwind methods and characteristics(-based) methods.
The Lagrange-Galerkin method, proposed and analyzed in, e.g., [O. Pironneau. NM, 1982] and [E. Süli. NM, 1988], is the finite element method combined with the idea of the method of characteristics; hence, it belongs to the characteristics(-based) methods. Its advantages are CFL-free robustness for convection-dominated problems and the symmetry of the resulting coefficient matrix. In this talk, we introduce stabilized Lagrange-Galerkin schemes of second order in time for viscous and viscoelastic flow problems, which employ the cheapest conforming P1 element, with the help of pressure stabilization [F. Brezzi and J. Pitkäranta. Vieweg+Teubner, 1984], for all the unknown functions, i.e., velocity, pressure, and conformation tensor, reducing the number of DOFs.
Focusing on the recent developments of discretizations of the (non-conservative and conservative) material derivatives and the upper-convected time derivative, we present theoretical and numerical results.
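
As a toy illustration of the characteristics idea underlying such schemes, the sketch below implements a first-order semi-Lagrangian step for 1D constant-coefficient advection (not the stabilized finite element schemes of the talk): each nodal value is traced back along its characteristic and recovered by interpolation, so the step remains stable even when the CFL number exceeds one.

```python
import numpy as np

# First-order semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid.
# The value at each node is pulled back along the characteristic x - a*dt
# and recovered by linear interpolation -- stable even for CFL > 1.
def semi_lagrangian_step(u, a, dt, dx):
    n = len(u)
    x = np.arange(n) * dx
    feet = (x - a * dt) % (n * dx)          # departure points (periodic wrap)
    j = np.floor(feet / dx).astype(int) % n # left neighbour (guard fp edge)
    theta = feet / dx - j                   # interpolation weight in [0, 1)
    return (1 - theta) * u[j] + theta * u[(j + 1) % n]

n, dx, a = 100, 0.01, 1.0
dt = 5 * dx / a                             # CFL number 5: explicit upwind would blow up
u = np.exp(-200 * (np.arange(n) * dx - 0.5) ** 2)
for _ in range(20):                         # advect by a*dt*20 = 1.0, one full period
    u = semi_lagrangian_step(u, a, dt, dx)
print(np.max(u))  # pulse survives the large time steps, damped only by interpolation
```

Interpolation is a convex combination of old values, so the scheme cannot amplify the solution regardless of the time step, which is the CFL-free robustness mentioned above.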

Thu, 29 Feb 2024

14:00 - 15:00
Lecture Room 3

On the use of "conventional" unconstrained minimization solvers for training regression problems in scientific machine learning

Stefano Zampini
(King Abdullah University of Science and Technology (KAUST))
Abstract

In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for the analysis, by means of deep-learning techniques, of data produced by computational science and engineering applications. At the core of these methods is the supervised training algorithm used to learn the neural network realization, a highly non-convex optimization problem that is usually solved using stochastic gradient methods.

However, in contrast to deep-learning practice, scientific machine-learning training problems feature a much larger volume of smooth data and better characterizations of the empirical risk functions, which make them well suited to conventional solvers for unconstrained optimization.

In this talk, we empirically demonstrate the superior efficacy of a trust region method based on the Gauss-Newton approximation of the Hessian in improving the generalization errors arising from regression tasks when learning surrogate models for a wide range of scientific machine-learning techniques and test cases. All the conventional solvers tested, including L-BFGS and inexact Newton with line-search, compare favorably, either in terms of cost or accuracy, with the adaptive first-order methods used to validate the surrogate models.

Thu, 22 Feb 2024

14:00 - 15:00
Lecture Room 3

Hierarchical adaptive low-rank format with applications to discretized PDEs

Leonardo Robol
(University of Pisa)
Abstract

A novel framework for hierarchical low-rank matrices is proposed that combines an adaptive hierarchical partitioning of the matrix with low-rank approximation. One typical application is the approximation of discretized functions on rectangular domains; the flexibility of the format makes it possible to deal with functions that feature singularities in small, localized regions. To deal with time evolution and relocation of singularities, the partitioning can be dynamically adjusted based on features of the underlying data. Our format can be leveraged to efficiently solve linear systems with Kronecker product structure, as they arise from discretized partial differential equations (PDEs). For this purpose, these linear systems are rephrased as linear matrix equations and a recursive solver is derived from low-rank updates of such equations. 
We demonstrate the effectiveness of our framework for stationary and time-dependent, linear and nonlinear PDEs, including the Burgers and Allen–Cahn equations.

This is a joint work with Daniel Kressner and Stefano Massei.
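
A toy numpy experiment (illustrating only the motivation, not the hierarchical format or the solver from the talk): samples of f(x, y) = 1/(x + y) on the unit square have a singularity at the origin, so a block away from the singularity admits a much lower rank than the full matrix, which is exactly what an adaptive partitioning exploits.

```python
import numpy as np

# Numerical epsilon-rank: number of singular values above a relative tolerance.
def eps_rank(block, tol=1e-8):
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

n = 256
x = (np.arange(n) + 0.5) / n
A = 1.0 / (x[:, None] + x[None, :])   # samples of 1/(x+y), singular at (0,0)

half = n // 2
far = A[half:, half:]                 # sub-block far from the singularity
print(eps_rank(A), eps_rank(far))     # global rank exceeds the far block's rank
```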

Thu, 15 Feb 2024
14:00

Algorithmic Insurance

Agni Orfanoudaki
(Oxford University Saïd Business School)
Abstract

As machine learning algorithms are integrated into the decision-making processes of companies and organizations, insurance products are being developed to protect their providers from liability risk. Algorithmic liability differs from human liability: it arises from a data-driven model rather than from multiple heterogeneous decision-makers, and its performance is known a priori for a given set of data. Traditional actuarial tools for human liability do not consider these properties, focusing primarily on the distribution of historical claims. We propose, for the first time, a quantitative framework to estimate the risk exposure of insurance contracts for machine-driven liability, introducing the concept of algorithmic insurance. Our work provides ML model developers and insurance providers with a comprehensive risk evaluation approach for this new class of products. Thus, we set the foundations of a niche area of research at the intersection of the literature in operations, risk management, and actuarial science.

Specifically, we present an optimization formulation to estimate the risk exposure of a binary classification model given a pre-defined range of premiums. Our approach outlines how properties of the model, such as discrimination performance, interpretability, and generalizability, can influence the insurance contract evaluation. To showcase a practical implementation of the proposed framework, we present a case study of medical malpractice in the context of breast cancer detection. Our analysis focuses on measuring the effect of the model parameters on the expected financial loss and identifying the aspects of algorithmic performance that predominantly affect the risk of the contract.

Paper Reference: Bertsimas, D. and Orfanoudaki, A., 2021. Pricing algorithmic insurance. arXiv preprint arXiv:2106.00839.

Paper link: https://arxiv.org/pdf/2106.00839.pdf

Thu, 08 Feb 2024
14:00
Lecture Room 3

From Chebfun3 to RTSMS: A journey into deterministic and randomized Tucker decompositions

Behnam Hashemi
(Leicester University)
Abstract
The Tucker decomposition is a family of representations that break up a given d-dimensional tensor into the multilinear product of a core tensor and a factor matrix along each of the d-modes. It is a useful tool in extracting meaningful insights from complex datasets and has found applications in various fields, including scientific computing, signal processing and machine learning. 
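
A minimal sketch of this definition in the 3D case (illustrative only; neither Chebfun3 nor RTSMS is reproduced here): a tensor of multilinear rank (3, 4, 2) is assembled from a small core and three factor matrices, so the full array is represented by roughly 120 numbers instead of 960.

```python
import numpy as np

# A Tucker decomposition writes a 3D tensor as a small core G multiplied by
# a factor matrix along each mode: T = G x_1 U1 x_2 U2 x_3 U3. The einsum
# below contracts each core index with the columns of the matching factor.
def tucker_to_tensor(G, U1, U2, U3):
    return np.einsum("abc,ia,jb,kc->ijk", G, U1, U2, U3)

rng = np.random.default_rng(0)
r = (3, 4, 2)                           # multilinear (Tucker) rank
G = rng.standard_normal(r)              # core tensor
U1, U2, U3 = (rng.standard_normal((n, rk)) for n, rk in zip((10, 12, 8), r))
T = tucker_to_tensor(G, U1, U2, U3)
print(T.shape)                          # (10, 12, 8)
```

Unfolding T along mode 1 gives a matrix of rank 3, matching the first entry of the multilinear rank; that separation of variables is what the continuous Chebfun3 framework exploits.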
In this talk we will first focus on the continuous framework and revisit how the Tucker decomposition forms the foundation of Chebfun3 for numerical computing with 3D functions, along with the deterministic algorithm behind Chebfun3. The key insight is that the separation of variables achieved via the low-rank Tucker decomposition simplifies and speeds up many subsequent computations.
We will then switch to the discrete framework and discuss a new algorithm called RTSMS (randomized Tucker with single-mode sketching). The single-mode sketching aspect of RTSMS allows it to use simple sketch matrices which are substantially smaller than those of alternative methods, leading to considerable performance gains. Within its least-squares strategy, RTSMS incorporates leverage scores for efficiency, together with Tikhonov regularization and iterative refinement for stability. RTSMS is demonstrated to be competitive with existing methods, sometimes outperforming them by a large margin.
We illustrate the benefits of Tucker decomposition via MATLAB demos solving problems from global optimization to video compression. RTSMS is joint work with Yuji Nakatsukasa.

Thu, 01 Feb 2024
14:00
Lecture Room 3

A strongly polynomial algorithm for the minimum-cost generalized flow problem

Laszlo Vegh
(LSE)
Abstract

We give a strongly polynomial algorithm for minimum cost generalized flow, and as a consequence, for all linear programs with at most two nonzero entries per row, or at most two nonzero entries per column. While strongly polynomial algorithms for the primal and dual feasibility problems have been known for a long time, various combinatorial approaches used for those problems did not seem to carry over to the minimum-cost variant.

Our approach is to show that the ‘subspace layered least squares’ interior point method (earlier joint work with Allamigeon, Dadush, Loho, and Natura) requires only a strongly polynomial number of iterations for minimum-cost generalized flow. We achieve this by bounding the straight-line complexity introduced in the same paper. The talk will give an overview of the interior point method as well as the combinatorial straight-line complexity analysis for this particular setting. This is joint work with Daniel Dadush, Zhuan Khye Koh, Bento Natura, and Neil Olver.

Thu, 25 Jan 2024

14:00 - 15:00
Lecture Room 3

Stress and flux-based finite element methods

Fleurianne Bertrand
(Chemnitz University of Technology)
Abstract

This talk explores recent advancements in stress and flux-based finite element methods. It focuses on addressing the limitations of traditional finite elements, in order to describe complex material behavior and engineer new metamaterials.

Stress and flux-based finite element methods are particularly useful in error estimation, laying the groundwork for adaptive refinement strategies. This concept builds upon the hypercircle theorem [1], which states that in a specific energy space, both the exact solution and any admissible stress field lie on a hypercircle. However, the construction of finite element spaces that satisfy admissible states for complex material behavior is not straightforward. It often requires a relaxation of specific properties, especially when dealing with non-symmetric stress tensors [2] or hyperelastic materials.

Alternatively, methods that directly approximate stresses can be employed, offering high accuracy of the stress fields and adherence to physical conservation laws. However, when approximating eigenvalues, this significant benefit for the solution's accuracy implies that the solution operator cannot be compact. To address this, the solution operator must be confined to a subset of the solution that excludes the stresses. Yet, due to compatibility conditions, the trial space for the other solution components typically does not yield the desired accuracy. The second part of this talk will therefore explore the Least-Squares method as a remedy to these challenges [3].

To conclude this talk, we will emphasize the integration of those methods within global solution strategies, with a particular focus on the challenges regarding model order reduction methods [4].

 

[1] W. Prager, J. Synge. Approximations in elasticity based on the concept of function space. Quarterly of Applied Mathematics 5(3), 1947.

[2] FB, K. Bernhard, M. Moldenhauer, G. Starke. Weakly symmetric stress equilibration and a posteriori error estimation for linear elasticity. Numerical Methods for Partial Differential Equations 37(4), 2021.

[3] FB, D. Boffi. First order least-squares formulations for eigenvalue problems. IMA Journal of Numerical Analysis 42(2), 2023.

[4] FB, D. Boffi, A. Halim. A reduced order model for the finite element approximation of eigenvalue problems. Computer Methods in Applied Mechanics and Engineering 404, 2023.

 

Thu, 18 Jan 2024

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

A preconditioner with low-rank corrections based on the Bregman divergence

Andreas Bock
(Danish Technical University)
Abstract

We present a general framework for preconditioning Hermitian positive definite linear systems based on the Bregman log determinant divergence. This divergence provides a measure of discrepancy between a preconditioner and a target matrix, giving rise to the study of preconditioners given as the sum of a Hermitian positive definite matrix plus a low-rank correction. We describe under which conditions the preconditioner minimises the $\ell^2$ condition number of the preconditioned matrix, and obtain the low-rank correction via a truncated singular value decomposition (TSVD). Numerical results from variational data assimilation (4D-VAR) support our theoretical results.
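
The divergence at the heart of this framework can be stated concretely. The sketch below evaluates the Bregman log-determinant divergence between two SPD matrices (definition only; the optimal low-rank corrections analysed in the talk are not reproduced):

```python
import numpy as np

# Bregman log-determinant divergence between SPD matrices X and Y:
#   D(X, Y) = trace(X Y^{-1}) - logdet(X Y^{-1}) - n.
# It is nonnegative and vanishes iff X = Y, so it quantifies how far a
# preconditioner Y is from a target matrix X.
def bregman_logdet(X, Y):
    n = X.shape[0]
    M = np.linalg.solve(Y, X)           # Y^{-1} X (same trace/det as X Y^{-1})
    sign, logdet = np.linalg.slogdet(M)
    assert sign > 0                     # M is similar to an SPD matrix
    return np.trace(M) - logdet - n

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
X = B @ B.T + 6 * np.eye(6)             # a random SPD target
print(bregman_logdet(X, X))             # ~0: perfect preconditioner
print(bregman_logdet(X, np.eye(6)))     # > 0: identity is a poor preconditioner
```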

 

We also apply the framework to approximate factorisation preconditioners with a low-rank correction (e.g. incomplete Cholesky plus low-rank). In such cases, the approximate factorisation error is typically indefinite, and the low-rank correction described by the Bregman divergence is generally different from one obtained as a TSVD. We compare these two truncations in terms of convergence of the preconditioned conjugate gradient method (PCG), and show numerous examples where PCG converges to a small tolerance using the proposed preconditioner, whereas PCG using a TSVD-based preconditioner fails. We also consider matrices arising from interior point methods for linear programming that do not admit such an incomplete factorisation by default, and present a robust incomplete Cholesky preconditioner based on the proposed methodology.

The talk is based on papers with Martin S. Andersen (DTU).

 

Thu, 30 Nov 2023
14:00
Lecture Room 3

Multilevel adaptivity for stochastic finite element methods

Alex Bespalov
(Birmingham University)
Abstract

This talk concerns the design and analysis of adaptive FEM-based solution strategies for partial differential equations (PDEs) with uncertain or parameter-dependent inputs. We present two conceptually different strategies: one is projection-based (stochastic Galerkin FEM) and the other is sampling-based (stochastic collocation FEM). These strategies have emerged and become popular as effective alternatives to Monte Carlo sampling in the context of (forward) uncertainty quantification. Both stochastic Galerkin and stochastic collocation approximations are typically represented as finite (sparse) expansions in terms of a parametric polynomial basis with spatial coefficients residing in finite element spaces. The focus of the talk is on multilevel approaches where different spatial coefficients may reside in different finite element spaces and, therefore, the underlying spatial approximations are allowed to be refined independently from each other.

 

We start with a more familiar setting of projection-based methods, where exploiting the Galerkin orthogonality property and polynomial approximations in terms of an orthonormal basis facilitates the design and analysis of adaptive algorithms. We discuss a posteriori error estimation as well as the convergence and rate optimality properties of the generated adaptive multilevel Galerkin approximations for PDE problems with affine-parametric coefficients. We then show how these ideas of error estimation and multilevel adaptivity can be applied in a non-Galerkin setting of stochastic collocation FEM, in particular, for PDE problems with non-affine parameterization of random inputs and for problems with parameter-dependent local spatial features.

 

The talk is based on a series of joint papers with Dirk Praetorius (TU Vienna), Leonardo Rocchi (Birmingham), Michele Ruggeri (University of Strathclyde, Glasgow), David Silvester (Manchester), and Feng Xu (Manchester).

Thu, 23 Nov 2023
14:00
Lecture Room 3

Making SGD parameter-free

Oliver Hinder
(University of Pittsburgh)
Abstract

We develop an algorithm for parameter-free stochastic convex optimization (SCO) whose rate of convergence is only a double-logarithmic factor larger than the optimal rate for the corresponding known-parameter setting. In contrast, the best previously known rates for parameter-free SCO are based on online parameter-free regret bounds, which contain unavoidable excess logarithmic terms compared to their known-parameter counterparts. Our algorithm is conceptually simple, has high-probability guarantees, and is also partially adaptive to unknown gradient norms, smoothness, and strong convexity. At the heart of our results is a novel parameter-free certificate for the step size of stochastic gradient descent (SGD), and a time-uniform concentration result that assumes no a-priori bounds on SGD iterates.

Additionally, we present theoretical and numerical results for a dynamic step size schedule for SGD based on a variant of this idea. On a broad range of vision and language transfer learning tasks, our method's performance is close to that of SGD with a tuned learning rate. Also, a per-layer variant of our algorithm approaches the performance of tuned Adam.

This talk is based on papers with Yair Carmon and Maor Ivgi.

Thu, 16 Nov 2023

14:00 - 15:00
Lecture Room 3

Finite element schemes and mesh smoothing for geometric evolution problems

Bjorn Stinner
(University of Warwick)
Abstract

Geometric evolutions can arise as simple models or fundamental building blocks in various applications with moving boundaries and time-dependent domains, such as grain boundaries in materials or deforming cell boundaries. Mesh-based methods require adaptation and smoothing, particularly in the case of strong deformations. We consider finite element schemes based on classical approaches for geometric evolution equations but augmented with the gradient of the Dirichlet energy or a variant of it, which is known to produce a tangential mesh movement beneficial for the mesh quality. We focus on the one-dimensional case, where convergence of semi-discrete schemes can be proved, and discuss two cases. For networks forming triple junctions, it is desirable to keep the impact of any additional, mesh smoothing terms on the geometric evolution as small as possible, which can be achieved with a perturbation approach. Regarding the elastic flow of curves, the Dirichlet energy can serve as a replacement of the usual penalty in terms of the length functional in that, modulo rescaling, it yields the same minimisers in the long run.

Thu, 09 Nov 2023
14:00
Rutherford Appleton Laboratory, nr Didcot

Numerical shape optimization: a bit of theory and a bit of practice

Alberto Paganini
(University of Leicester)
Further Information

Please note this seminar is held at Rutherford Appleton Laboratory (RAL)

Rutherford Appleton Laboratory
Harwell Campus
Didcot
OX11 0QX

How to get to RAL

 

Abstract

We use the term shape optimization when we want to find a minimizer of an objective function that assigns real values to shapes of domains. Solving shape optimization problems can be quite challenging, especially when the objective function is constrained to a PDE, in the sense that evaluating the objective function for a given domain shape requires first solving a boundary value problem stated on that domain. The main challenge here is that shape optimization methods must employ numerical methods capable of solving a boundary value problem on a domain that changes after each iteration of the optimization algorithm.

 

The first part of this talk will provide a gentle introduction to shape optimization. The second part of this talk will highlight how the finite element framework leads to automated numerical shape optimization methods, as realized in the open-source library fireshape. The talk will conclude with a brief overview of some academic and industrial applications of shape optimization.

 

 

Thu, 02 Nov 2023
14:00
Lecture Room 3

Recent Developments in the Numerical Solution of PDE-Constrained Optimization Problems

John Pearson
(Edinburgh University)
Abstract

Optimization problems subject to PDE constraints constitute a mathematical tool that can be applied to a wide range of scientific processes, including fluid flow control, medical imaging, option pricing, biological and chemical processes, and electromagnetic inverse problems, to name a few. These problems involve minimizing some function arising from a particular physical objective, while at the same time obeying a system of PDEs which describe the process. It is necessary to obtain accurate solutions to such problems within a reasonable CPU time, in particular for time-dependent problems, for which the “all-at-once” solution can lead to extremely large linear systems.

 

In this talk we consider iterative methods, in particular Krylov subspace methods, to solve such systems, accelerated by fast and robust preconditioning strategies. In particular, we will survey several new developments, including block preconditioners for fluid flow control problems, a circulant preconditioning framework for solving certain optimization problems constrained by fractional differential equations, and multiple saddle-point preconditioners for block tridiagonal linear systems. We will illustrate the benefit of using these new approaches through a range of numerical experiments.

 

This talk is based on work with Santolo Leveque (Scuola Normale Superiore, Pisa), Spyros Pougkakiotis (Yale University), Jacek Gondzio (University of Edinburgh), and Andreas Potschka (TU Clausthal).

Thu, 26 Oct 2023
14:00
Lecture Room 3

Algebraic domain-decomposition preconditioners for the solution of linear systems

Tyrone Rees
(Rutherford Appleton Laboratory)
Abstract

The need to solve linear systems of equations is ubiquitous in scientific computing. Powerful methods for preconditioning such systems have been developed in cases where we can exploit knowledge of the origin of the linear system; a recent example, from the solution of systems arising from PDEs, is the Gen-EO domain decomposition method, which works well but requires a non-trivial amount of knowledge of the underlying problem to implement.

In this talk I will present a new spectral coarse space that can be constructed in a fully-algebraic way, in contrast to most existing spectral coarse spaces, and will give a theoretical convergence result for Hermitian positive definite diagonally dominant matrices. Numerical experiments and comparisons against state-of-the-art preconditioners in the multigrid community show that the resulting two-level Schwarz preconditioner is efficient, especially for non-self-adjoint operators. Furthermore, in this case, our proposed preconditioner outperforms state-of-the-art preconditioners.

This is joint work with Hussam Al Daas, Pierre Jolivet and Jennifer Scott.

Thu, 19 Oct 2023

14:00 - 15:00
Lecture Room 3

Randomized Least Squares Optimization and its Incredible Utility for Large-Scale Tensor Decomposition

Tammy Kolda
(mathsci.ai)
Abstract

Randomized least squares is a promising method but not yet widely used in practice. We show an example of its use for finding low-rank canonical polyadic (CP) tensor decompositions of large sparse tensors. This involves solving a sequence of overdetermined least squares problems with special (Khatri-Rao product) structure.

In this work, we present an application of randomized algorithms to fitting the CP decomposition of sparse tensors, solving a significantly smaller sampled least squares problem at each iteration with probabilistic guarantees on the approximation errors. We perform sketching through leverage score sampling, crucially relying on the fact that the problem structure enables efficient sampling from overestimates of the leverage scores with much less work. We discuss what it took to make the algorithm practical, including general-purpose improvements.
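
As a rough sketch of the sampling idea in a generic dense setting (exact leverage scores via a thin QR; the talk's method instead exploits Khatri-Rao structure and samples from cheap overestimates of the scores):

```python
import numpy as np

# Sketched overdetermined least squares via leverage-score row sampling:
# sample rows of [A | b] proportionally to the leverage scores of A,
# rescale, and solve the much smaller problem.
def leverage_sampled_lstsq(A, b, s, rng):
    Q, _ = np.linalg.qr(A)                        # thin QR of the tall matrix
    lev = np.sum(Q**2, axis=1)                    # exact leverage scores
    p = lev / lev.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    w = 1.0 / np.sqrt(s * p[idx])                 # importance-sampling rescaling
    x, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20000, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 1e-3 * rng.standard_normal(20000)
x = leverage_sampled_lstsq(A, b, s=500, rng=rng)
print(np.linalg.norm(x - x_true))                 # small: close to the full solve
```

Computing the QR here costs as much as solving the full problem, which is why practical methods, including the one in the talk, estimate the scores cheaply from structure instead.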

Numerical results on real-world large-scale tensors show the method is faster than competing methods without sacrificing accuracy.

This is joint work with Brett Larsen (Stanford University).

Thu, 12 Oct 2023

14:00 - 15:00
Lecture Room 3

Hermitian preconditioning for a class of non-Hermitian linear systems

Nicole Spillane
(Ecole Polytechnique (CMAP))
Abstract

This work considers weighted and preconditioned GMRES. The objective is to provide a way of choosing the preconditioner and the inner product, also called the weight, that ensures fast convergence. The main focus of the talk is on Hermitian preconditioning (even for non-Hermitian problems).

It is indeed proposed to choose a Hermitian preconditioner H and to apply GMRES in the inner product induced by H. If, moreover, the problem matrix A is positive definite, then a new convergence bound is proved that depends only on how well H preconditions the Hermitian part of A and on a measure of how non-Hermitian A is. In particular, if a scalable preconditioner is known for the Hermitian part of A, then the proposed method is also scalable. I will also illustrate this result numerically.

Thu, 15 Jun 2023

14:00 - 15:00
Lecture Room 3

26 Years at Oxford

Nick Trefethen
(Oxford University)
Abstract

I will reflect on my time as Professor of Numerical Analysis.

Thu, 08 Jun 2023
14:00
L3

Condition numbers of tensor decompositions

Nick Vannieuwenhoven
(KU Leuven)
Abstract

Tensor decompositions express a tensor as a linear combination of elementary tensors. They have applications in chemometrics, computer science, machine learning, psychometrics, and signal processing. Their uniqueness properties render them suitable for data analysis tasks in which the elementary tensors are the quantities of interest. However, in applications, the idealized mathematical model is corrupted by measurement errors. For a robust interpretation of the data, it is therefore imperative to quantify how sensitive these elementary tensors are to perturbations of the whole tensor. I will give an overview of recent results on the condition number of tensor decompositions, established with my collaborators C. Beltran, P. Breiding, and N. Dewaele.

Thu, 01 Jun 2023

14:00 - 15:00
Lecture Room 6

Data-driven reduced-order modeling through rational approximation and balancing: Loewner matrix approaches

Victor Gosea
(MPI Magdeburg)
Abstract

Data-driven reduced-order modeling aims at constructing models describing the underlying dynamics of unknown systems from measurements. This has become an increasingly prominent discipline in the last few years. It is an essential tool in situations when explicit models in the form of state space formulations are not available, yet abundant input/output data are, motivating the need for data-driven modeling. Depending on the underlying physics, dynamical systems can inherit differential structures leading to specific physical interpretations. In this work, we concentrate on systems that are described by differential equations and possess linear dynamics. Extensions to more complicated, nonlinear dynamics are also possible and will be briefly covered if time permits.

The methods developed in our study use rational approximation based on Loewner matrices. Starting with the approach by Antoulas and Anderson in '86, and moving forward to the one by Mayo and Antoulas in '07, the Loewner framework (LF) has become an established methodology in the model reduction and reduced-order modeling community. It is a data-driven approach in the sense that what is needed to compute the reduced models is solely data, i.e., samples of the system's transfer function. As opposed to conventional intrusive methods that require an actual large-scale model to reduce (described by many differential equations), the LF only needs measurements in compressed format.

In the former category of approaches, we mention balanced truncation (BT), arguably one of the most prevalent model reduction methods. Introduced in the early '80s, this method constructs reduced-order models (ROMs) by means of balancing and truncating steps (with respect to classical system-theoretic concepts such as controllability and observability). We show that BT can be reinterpreted as a data-driven approach, again using the Loewner matrix as a central ingredient. By making use of quadrature approximations of certain system-theoretic quantities (infinite Gramian matrices), a novel method called QuadBT (quadrature-based BT) was introduced by G., Gugercin, and Beattie in '22. We show parallels with the LF and, if time permits, certain recent extensions of QuadBT. Finally, all theoretical considerations are validated on various numerical test cases.

 

Thu, 25 May 2023

14:00 - 15:00
Lecture Room 3

Balancing Inexactness in Matrix Computations

Erin Carson
(Charles University)
Abstract

On supercomputers that exist today, achieving even close to peak performance is incredibly difficult, if not impossible, for many applications. Techniques designed to improve the performance of matrix computations - making computations less expensive by reorganizing an algorithm, making intentional approximations, and using lower precision - all introduce what we can generally call "inexactness". The questions to ask are then:

1. With all these various sources of inexactness involved, does a given algorithm still get close enough to the right answer?
2. Given a user constraint on required accuracy, how can we best exploit and balance different types of inexactness to improve performance?

Studying the combination of different sources of inexactness can thus reveal not only limitations but also new opportunities for developing algorithms for matrix computations that are both fast and provably accurate. We present a few recent results toward this goal, including mixed-precision randomized decompositions and mixed-precision sparse approximate inverse preconditioners.
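
One classical way of balancing precision against accuracy, sketched here purely as an illustration (mixed-precision iterative refinement; not necessarily one of the algorithms from the talk): factorize and solve in float32, but compute residuals in float64 and correct.

```python
import numpy as np

# Mixed-precision iterative refinement: the expensive solves run in float32,
# while the cheap residuals are formed in float64. A few refinement steps
# recover near double-precision accuracy from single-precision solves.
def refine(A, b, steps=3):
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(steps):
        r = b - A @ x                                   # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in float32
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well conditioned
x_true = rng.standard_normal(200)
b = A @ x_true
x0 = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)
x = refine(A, b)
print(np.linalg.norm(x0 - x_true), np.linalg.norm(x - x_true))  # refinement wins
```

This is exactly the trade named in question 2: the intentional inexactness of the float32 solves is balanced by cheap high-precision residuals, subject to the matrix being well enough conditioned.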

Thu, 18 May 2023
14:00
L3

Recent advances in mixed finite element approximation for poroelasticity

Arbaz Khan
(IIT Roorkee)
Abstract

Linear poroelasticity models have important applications in biology and geophysics. In particular, the well-known Biot consolidation model describes the coupled interaction between the linear response of a porous elastic medium saturated with fluid and a diffusive fluid flow within it, assuming small deformations. This is the starting point for modeling human organs in computational medicine and for modeling the mechanics of permeable rock in geophysics. Finite element methods for Biot’s consolidation model have been widely studied over the past four decades.

In the first part of the talk, we discuss a posteriori error estimators for locking-free mixed finite element approximation of Biot’s consolidation model. The simplest of these is a conventional residual-based estimator. We establish bounds relating the estimated and true errors, and show that these are independent of the physical parameters. The other two estimators require the solution of local problems. These local problem estimators are also shown to be reliable, efficient and robust. Numerical results are presented that validate the theoretical estimates, and illustrate the effectiveness of the estimators in guiding adaptive solution algorithms.

In the second part of the talk, we discuss a novel locking-free stochastic Galerkin mixed finite element method for the Biot consolidation model with uncertain Young’s modulus and hydraulic conductivity field. After introducing a five-field mixed variational formulation of the standard Biot consolidation model, we discuss stochastic Galerkin mixed finite element approximation, focusing on the issues of well-posedness and efficient linear algebra for the discretized system. We introduce a new preconditioner for use with MINRES and establish eigenvalue bounds. Finally, we present specific numerical examples to illustrate the efficiency of our numerical solution approach.

Finally, we discuss some remarks related to non-conforming approximation of Biot’s consolidation model.


References:
1. A. Khan, D. J. Silvester. Robust a posteriori error estimation for mixed finite element approximation of linear poroelasticity. IMA Journal of Numerical Analysis, 41(3), 2021, 2000-2025.
2. A. Khan, C. E. Powell. Parameter-robust stochastic Galerkin approximation for linear poroelasticity with uncertain inputs. SIAM Journal on Scientific Computing (SISC), 43(4), 2021, B855-B883.
3. A. Khan, P. Zanotti. A nonsymmetric approach and a quasi-optimal and robust discretization for the Biot’s model. Mathematics of Computation, 91(335), 2022, 1143-1170.
4. V. Anaya, A. Khan, D. Mora, R. Ruiz-Baier. Robust a posteriori error analysis for rotation-based formulations of the elasticity/poroelasticity coupling. SIAM Journal on Scientific Computing (SISC), 2022.

Thu, 11 May 2023

14:00 - 15:00
Lecture Room 3

A coordinate descent algorithm on the Stiefel manifold for deep neural network training

Estelle Massart
(UC Louvain)
Abstract

We propose to use stochastic Riemannian coordinate descent on the Stiefel manifold for deep neural network training. The algorithm successively rotates two columns of the matrix, an operation that can be implemented efficiently as multiplication by a Givens matrix. When the coordinate is selected uniformly at random at each iteration, we prove convergence of the proposed algorithm under standard assumptions on the loss function, stepsize, and minibatch noise. Experiments on benchmark deep neural network training problems demonstrate the effectiveness of the proposed algorithm.
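
A sketch of the basic update (the column pair and rotation angle would be chosen by the stochastic descent step, and the training loop is omitted): rotating two columns of a matrix with orthonormal columns by a Givens rotation keeps it on the Stiefel manifold.

```python
import numpy as np

# Rotate columns i and j of X by angle theta, i.e. right-multiply X by a
# Givens matrix. If X has orthonormal columns, so does the result, so the
# iterate stays on the Stiefel manifold by construction.
def rotate_columns(X, i, j, theta):
    c, s = np.cos(theta), np.sin(theta)
    Y = X.copy()
    Y[:, i] = c * X[:, i] - s * X[:, j]
    Y[:, j] = s * X[:, i] + c * X[:, j]
    return Y

rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((8, 3)))   # a point on St(8, 3)
Y = rotate_columns(X, 0, 2, theta=0.7)
print(np.linalg.norm(Y.T @ Y - np.eye(3)))         # ~0: still orthonormal
```

Because the update touches only two columns and needs no retraction or matrix exponential, each coordinate step is cheap, which is the appeal of this coordinate descent scheme.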

Thu, 27 Apr 2023

14:00 - 15:00
(This talk is hosted by Rutherford Appleton Laboratory)

All-at-once preconditioners for ocean data assimilation

Jemima Tabeart
(University of Oxford)
Abstract

Correlation operators are used in data assimilation algorithms to weight the contribution of prior and observation information. Efficient implementation of these operators is therefore crucial for operational implementations. Diffusion-based correlation operators are popular in ocean data assimilation, but can require a large number of serial matrix-vector products. An all-at-once formulation removes this requirement, and offers the opportunity to exploit modern computer architectures. High-quality preconditioners for the all-at-once approach are well known, but impossible to apply in practice for the high-dimensional problems that occur in oceanography. In this talk we consider a nested preconditioning approach which retains many of the beneficial properties of the ideal analytic preconditioner while remaining affordable in terms of memory and computational resources.

Thu, 09 Mar 2023

14:00 - 15:00
Lecture Room 3

Supersmoothness of multivariate splines

Michael Floater
Abstract

Polynomial splines over simplicial meshes in R^n (triangulations in 2D, tetrahedral meshes in 3D, and so on) sometimes have extra orders of smoothness at a vertex. This property is known as supersmoothness, and plays a role both in the construction of macroelements and in the finite element method.
Supersmoothness depends both on the number of simplices that meet at the vertex and their geometric configuration.

In this talk we review what is known about supersmoothness of polynomial splines and then discuss the more general setting of splines whose individual pieces are any infinitely smooth functions.

This is joint work with Kaibo Hu.

 

Thu, 02 Mar 2023

14:00 - 15:00
Lecture Room 3

Finite element computations for modelling skeletal joints

Jonathan Whiteley
(Oxford University)
Abstract

Skeletal joints are often modelled as two adjacent layers of poroviscoelastic cartilage that are permitted to slide past each other.  The talk will begin by outlining a mathematical model that may be used, focusing on two unusual features of the model: (i) the solid component of the poroviscoelastic body has a charged surface that ionises the fluid within the pores, generating a swelling pressure; and (ii) appropriate conditions are required at the interface between the two adjacent layers of cartilage.  The remainder of the talk will then address various theoretical and practical issues in computing a finite element solution of the governing equations.