Tue, 08 Nov 2022

14:00 - 14:30
L3

Computing functions of matrices via composite rational functions

Yuji Nakatsukasa
(University of Oxford)
Abstract

Most algorithms for computing a matrix function $f(A)$ are based on finding a rational (or polynomial) approximant $r(A) \approx f(A)$ to the scalar function on the spectrum of $A$. These approximants often take a composite form, that is, $f(z) \approx r(z) = r_k(\cdots r_2(r_1(z)))$, where $k$ is the number of compositions, which is often the iteration count and proportional to the computational cost; this way $r$ is a rational function whose degree grows exponentially in $k$. I will review algorithms that fall into this category and highlight the remarkable power of composite (rational) functions.
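
A classic instance of this idea (an illustration, not an example taken from the talk itself) is the Newton iteration for the matrix sign function: each step applies the degree-(1,1) rational map $r_1(z) = (z + 1/z)/2$, so after $k$ steps the composite approximant $r_k(\cdots r_1(z))$ has degree $2^k$. A minimal NumPy sketch, assuming $A$ has no eigenvalues on the imaginary axis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))   # assumed to have no eigenvalues on the imaginary axis

# Newton iteration for sign(A): X <- (X + X^{-1})/2. Each step composes the
# degree-(1,1) map r_1(z) = (z + 1/z)/2, so after k steps the overall rational
# approximant to sign(z) has degree 2^k.
X = A.copy()
for k in range(100):
    X_next = 0.5 * (X + np.linalg.inv(X))
    if np.linalg.norm(X_next - X) <= 1e-14 * np.linalg.norm(X_next):
        X = X_next
        break
    X = X_next

print("iterations:", k + 1)
print("||X^2 - I|| =", np.linalg.norm(X @ X - np.eye(n)))   # sign(A) is an involution
```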

Tue, 08 Nov 2022

14:30 - 15:00
L3

Rational approximation of functions with branch point singularities

Astrid Herremans
(KU Leuven)
Abstract

Rational functions are able to approximate functions containing branch point singularities with a root-exponential convergence rate. Such singularities appear, for example, in solutions of boundary value problems on domains with corners or edges. Results of Newman from 1964 indicate that the poles of the optimal rational approximant are exponentially clustered near the branch point singularities. Trefethen and collaborators use this knowledge to linearize the approximation problem by fixing the poles in advance, giving rise to the Lightning approximation. The resulting approximation set is, however, highly ill-conditioned, which raises the question of stability. We show that augmenting the approximation set with polynomial terms greatly improves stability. This observation leads to a decoupling of the approximation problem into two regimes, related to the singular and the smooth behaviour of the function. In addition, adding polynomial terms to the approximation set can significantly increase the convergence speed. The convergence rate is, however, very sensitive to the speed at which the clustered poles approach the singularity.
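
The following is a minimal sketch of a lightning-style least-squares fit in the spirit of the abstract: $\sqrt{x}$ is approximated on $[0,1]$ with poles fixed in advance and exponentially clustered at the branch point $x = 0$, and the basis is augmented with polynomial terms. All parameter choices (number of poles, clustering rate, sample grid) are illustrative, not taken from the talk.

```python
import numpy as np

f = lambda x: np.sqrt(x)

n_poles, n_poly, sigma = 30, 10, 4.0
# poles on the negative real axis, exponentially clustered toward the branch point x = 0
poles = -np.exp(-sigma * np.sqrt(np.arange(1, n_poles + 1)))

# sample points, also clustered toward the singularity
x = np.sort(np.concatenate([np.linspace(0.0, 1.0, 500),
                            np.exp(-sigma * np.sqrt(np.linspace(0.0, n_poles, 500)))]))

# lightning-style basis: partial fractions with fixed poles, augmented with polynomials;
# this matrix is highly ill-conditioned, which is exactly the issue the abstract discusses
A = np.hstack([1.0 / (x[:, None] - poles[None, :]),
               x[:, None] ** np.arange(n_poly)[None, :]])
c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
print("max error on the sample grid:", np.max(np.abs(A @ c - f(x))))
```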

Tue, 11 Oct 2022

14:30 - 15:00
L3

Fooled by optimality

Nick Trefethen
(University of Oxford)
Abstract

An occupational hazard of mathematicians is the investigation of objects that are "optimal" in a mathematically precise sense, yet may be far from optimal in practice. This talk will discuss an extreme example of this effect: Gauss-Hermite quadrature on the real line. For large numbers of quadrature points, Gauss-Hermite quadrature is a very poor method of integration, much less efficient than simply truncating the interval and applying Gauss-Legendre quadrature or the periodic trapezoidal rule. We will present a theorem quantifying this difference and explain where the standard notion of optimality has failed.
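
A hedged numerical illustration of this comparison (parameters are illustrative, not from the talk): integrate $e^{-x^2}/(1+x^2)$ over the real line, whose exact value is $\pi e\,\mathrm{erfc}(1)$, once with $n$-point Gauss-Hermite quadrature and once by truncating to $[-L,L]$ and applying the trapezoidal rule to the full integrand. Per the talk's claim, the truncated rule is expected to converge markedly faster as $n$ grows.

```python
import numpy as np
from math import erfc, exp, pi

f = lambda x: 1.0 / (1.0 + x**2)
exact = pi * exp(1.0) * erfc(1.0)        # int exp(-x^2)/(1+x^2) dx over the real line

L = 6.0                                  # exp(-36) ~ 2e-16, so truncation to [-L, L] is harmless
for n in (10, 20, 40, 80):
    # Gauss-Hermite nodes/weights for the weight exp(-x^2)
    xh, wh = np.polynomial.hermite.hermgauss(n)
    gh = np.sum(wh * f(xh))

    # truncate the interval and apply the trapezoidal rule to the full integrand
    xt = np.linspace(-L, L, n)
    g = np.exp(-xt**2) * f(xt)
    h = xt[1] - xt[0]
    tr = h * (np.sum(g) - 0.5 * (g[0] + g[-1]))

    print(f"n = {n:3d}   Gauss-Hermite error {abs(gh - exact):.2e}   "
          f"truncated trapezoid error {abs(tr - exact):.2e}")
```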

Thu, 20 Oct 2022

14:00 - 15:00
L3

Twenty examples of AAA approximation

Nick Trefethen
(University of Oxford)
Abstract

For the first time, a method has become available for fast computation of near-best rational approximations on arbitrary sets in the real line or complex plane: the AAA algorithm (Nakatsukasa-Sète-T. 2018).  After a brief presentation of the algorithm this talk will focus on twenty demonstrations of the kinds of things we can do, all across applied mathematics, with a black-box rational approximation tool.
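
Below is a minimal sketch of the core AAA iteration: greedy selection of support points combined with a Loewner-matrix least-squares problem for the barycentric weights. It omits the cleanup of spurious poles and other refinements of the published algorithm, and the example function is an illustrative choice.

```python
import numpy as np

def aaa(Z, F, tol=1e-13, mmax=100):
    """Minimal sketch of the core AAA iteration (no cleanup of spurious poles).

    Z: sample points; F: function values on Z. Returns support points zj, values fj
    and weights wj of the barycentric approximant
    r(z) = sum(wj*fj/(z - zj)) / sum(wj/(z - zj)).
    """
    Z, F = np.asarray(Z, dtype=complex), np.asarray(F, dtype=complex)
    J = list(range(len(Z)))                    # indices not yet chosen as support points
    zj, fj = [], []
    R = np.full(len(Z), F.mean())              # current approximation on Z
    for _ in range(mmax):
        j = int(np.argmax(np.abs(F - R)))      # greedy step: take the worst point
        zj.append(Z[j]); fj.append(F[j]); J.remove(j)
        zj_a, fj_a = np.array(zj), np.array(fj)
        C = 1.0 / (Z[J, None] - zj_a[None, :])            # Cauchy matrix
        A = (F[J, None] - fj_a[None, :]) * C              # Loewner matrix
        wj = np.linalg.svd(A)[2][-1, :].conj()            # weights: smallest right singular vector
        R = F.copy()
        R[J] = (C @ (wj * fj_a)) / (C @ wj)               # barycentric evaluation off the support
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return zj_a, fj_a, wj

# example: near-best rational approximation of tanh(5x) on [-1, 1]
Z = np.linspace(-1, 1, 2000)
zj, fj, wj = aaa(Z, np.tanh(5 * Z))
print("degree of the approximant:", len(zj) - 1)
```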

Thu, 26 Jan 2023

14:00
L3

Learning State-Space Models of Dynamical Systems from Data

Peter Benner
(MPI Magdeburg)
Abstract

Learning dynamical models from data plays a vital role in engineering design, optimization, and prediction. Building models that describe the dynamics of complex processes (e.g., weather dynamics, reactive flows, brain/neural activity, etc.) using empirical knowledge or first principles is frequently onerous or infeasible. Therefore, system identification has evolved as a scientific discipline for this task since the 1960s. Owing to the obvious similarity to approximating unknown functions with artificial neural networks, system identification was an early adopter of machine learning methods. In the first part of the talk, we will review the developments in this area to date.

For complex systems, identifying the full dynamics using system identification may still lead to high-dimensional models. For engineering tasks like optimization and control synthesis, as well as in the context of digital twins, such learned models might still be computationally too expensive in such multi-query scenarios. Therefore, it is desirable to identify compact approximate models from the available data. In the second part of this talk, we will therefore exploit the fact that the dynamics of high-fidelity models often evolve on low-dimensional manifolds. We will discuss approaches for learning representations of these low-dimensional manifolds using several ideas, including the lifting principle and autoencoders. In particular, we will focus on learning state-space representations that can be used in classical tools for computational engineering. Several numerical examples will illustrate the performance and limitations of the suggested approaches.
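
As a toy illustration of the general idea only (not of the specific methods of the talk), the following sketch projects synthetic snapshot data onto a low-dimensional POD basis and fits a discrete-time linear reduced state-space model by least squares, a DMD-like construction. All data, dimensions, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic snapshot data from a simple high-dimensional linear system
n_state, r_true, n_steps = 500, 5, 200
Phi = np.linalg.qr(rng.standard_normal((n_state, r_true)))[0]    # true spatial modes
decay = np.exp(-0.05 * np.arange(1, r_true + 1))                 # per-step decay factors
X = Phi @ (decay[:, None] ** np.arange(n_steps)[None, :])        # snapshot matrix (n_state x n_steps)

# low-dimensional basis from the leading left singular vectors (POD)
U = np.linalg.svd(X, full_matrices=False)[0]
r = 5
V = U[:, :r]
Xr = V.T @ X                                                     # reduced coordinates

# fit a discrete-time linear reduced model  x_{k+1} ~ A_r x_k  by least squares
M, *_ = np.linalg.lstsq(Xr[:, :-1].T, Xr[:, 1:].T, rcond=None)
A_r = M.T

err = np.linalg.norm(A_r @ Xr[:, :-1] - Xr[:, 1:]) / np.linalg.norm(Xr[:, 1:])
print("relative one-step prediction error:", err)
```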

Thu, 17 Nov 2022

14:00 - 15:00
L3

Ten years of Direct Multisearch

Ana Custodio
(NOVA School of Science and Technology)
Abstract

Direct Multisearch (DMS) is a well-known class of multiobjective derivative-free optimization methods, with competitive computational implementations that are often successfully used for benchmarking new algorithms and in practical applications. As a directional direct search method, its structure is organized into a search step and a poll step, the latter being responsible for its convergence. A first implementation of DMS was released in 2010. Since then, the algorithmic class has continued to be analyzed from a theoretical point of view, and new improvements have been proposed for the numerical implementation. Worst-case complexity bounds have been derived, a search step based on polynomial models has been defined, and parallelization strategies have successfully improved the numerical performance of the code, which has also been shown to be competitive for multiobjective derivative-based problems. In this talk we will survey the algorithmic structure of this class of optimization methods and its main theoretical properties, and report numerical experiments that validate its numerical competitiveness.
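
To make the poll-step idea concrete, here is a heavily simplified, poll-step-only sketch in the DMS spirit: a list of nondominated points is maintained, one point is polled along the coordinate directions, newly nondominated points are added, and the step size is halved after an unsuccessful poll. The search step, mesh machinery, constraints, and the many refinements of the actual DMS implementation are omitted, and all parameters below are illustrative.

```python
import numpy as np

def dominates(fa, fb):
    """fa dominates fb: no worse in every objective and strictly better in at least one."""
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def dms_sketch(F, x0, alpha0=1.0, alpha_min=1e-6, max_iter=200):
    """Poll-step-only sketch of a Direct Multisearch-type method (minimization)."""
    n = len(x0)
    D = np.vstack([np.eye(n), -np.eye(n)])                       # positive spanning set: +/- e_i
    front = [(np.asarray(x0, float), np.asarray(F(x0), float), alpha0)]
    for _ in range(max_iter):
        k = max(range(len(front)), key=lambda i: front[i][2])    # poll around the point with the largest step
        x, fx, alpha = front[k]
        if alpha < alpha_min:
            break
        success = False
        for d in D:
            xt = x + alpha * d
            ft = np.asarray(F(xt), float)
            # accept xt only if no current point is at least as good in every objective
            if not any(np.all(fy <= ft) for _, fy, _ in front):
                # keep only points not dominated by the new one, then add it
                front = [(y, fy, a) for y, fy, a in front if not dominates(ft, fy)]
                front.append((xt, ft, alpha))
                success = True
        if not success:
            front[k] = (x, fx, alpha / 2)                        # unsuccessful poll: halve the step size
    return front

# toy biobjective problem: minimize (|x|^2, |x - e_1|^2); the Pareto set is the segment [0,1] x {0}
front = dms_sketch(lambda x: np.array([x[0]**2 + x[1]**2, (x[0] - 1)**2 + x[1]**2]),
                   x0=[0.7, 0.4])
print(len(front), "nondominated points in the final list")
```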

Thu, 24 Nov 2022

14:00 - 15:00
L3

Nonlinear and dispersive waves in a basin: theory and numerical analysis

Dimitrios Mitsotakis
(Victoria University of Wellington)
Abstract

Surface water waves of significant interest, such as tsunamis and solitary waves, are nonlinear and dispersive. Unfortunately, the equations derived from first principles that describe the propagation of surface water waves, known as Euler's equations, are immensely hard to study. For this reason, several approximate systems have been proposed as mathematical alternatives. We show that among the numerous simplified systems of PDEs of water wave theory there is only one that is provably well-posed (in Hadamard's sense) in bounded domains with slip-wall boundary conditions. We also show that this particular well-posed system obeys most of the physical laws that acceptable water wave equations must obey, and that it is consistent with the Euler equations. For the numerical solution of the system we rely on a Galerkin/finite element method based on Nitsche's method, for which we have proved convergence. Validation against laboratory data is also presented.
