Thu, 26 Jan 2023
14:00
L3

Learning State-Space Models of Dynamical Systems from Data

Peter Benner
(MPI Magdeburg)
Abstract

Learning dynamical models from data plays a vital role in engineering design, optimization, and prediction. Building models that describe the dynamics of complex processes (e.g., weather dynamics, reactive flows, brain/neural activity, etc.) from empirical knowledge or first principles is frequently onerous or infeasible. Therefore, system identification has evolved as a scientific discipline for this task since the 1960s. Given the obvious connection to approximating unknown functions by artificial neural networks, system identification was an early adopter of machine learning methods. In the first part of the talk, we will review the developments in this area to date.

For complex systems, identifying the full dynamics using system identification may still lead to high-dimensional models. For engineering tasks like optimization and control synthesis, as well as in the context of digital twins, such learned models might still be computationally too challenging in the aforementioned multi-query scenarios. It is therefore desirable to identify compact approximate models from the available data. In the second part of this talk, we will exploit the fact that the dynamics of high-fidelity models often evolve on low-dimensional manifolds. We will discuss approaches for learning representations of these low-dimensional manifolds using several ideas, including the lifting principle and autoencoders. In particular, we will focus on learning state-space representations that can be used in classical tools for computational engineering. Several numerical examples will illustrate the performance and limitations of the suggested approaches.
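
As a rough, purely illustrative sketch of this kind of data-driven reduced state-space modelling (not the specific methods of the talk), the snippet below identifies a low-dimensional linear subspace from snapshot data and fits a reduced linear model in it by least squares; in the approaches discussed above a nonlinear autoencoder or lifted representation would replace the linear basis, and all names, dimensions, and data here are placeholders.

import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 150
X = rng.standard_normal((n, m))          # placeholder snapshot matrix; columns are states x(t_k)

# 1) Identify a low-dimensional subspace (POD basis) from the data
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 5                                    # reduced dimension (chosen arbitrarily here)
V = U[:, :r]

# 2) Fit a reduced linear state-space model  x_{k+1} ~ A_r x_k  in the subspace by least squares
Xr = V.T @ X
A_r, *_ = np.linalg.lstsq(Xr[:, :-1].T, Xr[:, 1:].T, rcond=None)
A_r = A_r.T

# 3) Predict with the learned reduced model and lift back to the full space
x = Xr[:, 0]
preds = [x]
for _ in range(m - 1):
    x = A_r @ x
    preds.append(x)
X_pred = V @ np.column_stack(preds)
print("relative reconstruction error:", np.linalg.norm(X_pred - X) / np.linalg.norm(X))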

Thu, 17 Nov 2022

14:00 - 15:00
L3

Ten years of Direct Multisearch

Ana Custodio
(NOVA School of Science and Technology)
Abstract

Direct Multisearch (DMS) is a well-known class of multiobjective derivative-free optimization methods, with competitive computational implementations that are often used successfully for benchmarking new algorithms and in practical applications. As a directional direct search method, its structure is organized into a search step and a poll step, the latter being responsible for its convergence. A first implementation of DMS was released in 2010. Since then, the algorithmic class has continued to be analyzed from the theoretical point of view and new improvements have been proposed for the numerical implementation. Worst-case complexity bounds have been derived, a search step based on polynomial models has been defined, and parallelization strategies have successfully improved the numerical performance of the code, which has also been shown to be competitive for multiobjective derivative-based problems. In this talk we will survey the algorithmic structure of this class of optimization methods and its main theoretical properties, and report numerical experiments that validate its competitiveness.
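
Purely for orientation, the toy sketch below mimics the poll-step mechanism of a directional direct search for two objectives: trial points along a positive spanning set of directions are evaluated around each current nondominated point, the nondominated list is updated, and the step size is contracted after unsuccessful polls. It is not the DMS code; the objective function, directions, and parameters are illustrative.

import numpy as np

def dominates(fa, fb):
    # fa dominates fb: no worse in every objective and strictly better in at least one
    return np.all(fa <= fb) and np.any(fa < fb)

def f(x):
    # toy biobjective function, purely illustrative
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

pareto = [(np.zeros(2), f(np.zeros(2)))]       # current nondominated (point, value) pairs
alpha = 0.5                                    # step-size parameter
D = np.vstack([np.eye(2), -np.eye(2)])         # positive spanning set of poll directions

for _ in range(20):
    # poll step: evaluate trial points around every current nondominated point
    trial = [(x + alpha * d, f(x + alpha * d)) for x, _ in pareto for d in D]
    added = False
    for y, fy in trial:
        if any(np.allclose(y, x) for x, _ in pareto):
            continue                           # skip points already in the list
        if not any(dominates(fv, fy) for _, fv in pareto):
            pareto = [(x, fx) for x, fx in pareto if not dominates(fy, fx)]
            pareto.append((y, fy))
            added = True
    if not added:
        alpha *= 0.5                           # unsuccessful poll: contract the step size

print(len(pareto), "nondominated points approximating the Pareto front")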

Thu, 24 Nov 2022

14:00 - 15:00
L3

Nonlinear and dispersive waves in a basin: theory and numerical analysis

Dimitrios Mitsotakis
(Victoria University of Wellington)
Abstract

Surface water waves of significant interest, such as tsunamis and solitary waves, are nonlinear and dispersive. Unfortunately, the equations derived from first principles that describe the propagation of surface water waves, known as Euler's equations, are immensely hard to study. For this reason, several approximate systems have been proposed as mathematical alternatives. We show that among the numerous simplified systems of PDEs of water wave theory there is only one that is provably well-posed (in Hadamard's sense) in bounded domains with slip-wall boundary conditions. We also show that this well-posed system obeys most of the physical laws that acceptable water wave equations must obey, and that it is consistent with the Euler equations. For the numerical solution of our system we rely on a Galerkin/finite element method based on Nitsche's method, for which we have proved convergence. Validation with laboratory data is also presented.
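
For readers unfamiliar with Nitsche's method, here is its standard symmetric form for a model Poisson problem with Dirichlet data $g$, included only to illustrate the weak imposition of boundary conditions; the dispersive system and slip-wall conditions treated in the talk are of course different. Find $u_h$ in the finite element space such that, for all test functions $v_h$,
$$
\int_\Omega \nabla u_h \cdot \nabla v_h \,dx - \int_{\partial\Omega} (\partial_n u_h)\, v_h \,ds - \int_{\partial\Omega} (\partial_n v_h)\, u_h \,ds + \frac{\gamma}{h}\int_{\partial\Omega} u_h\, v_h \,ds
= \int_\Omega f\, v_h \,dx - \int_{\partial\Omega} (\partial_n v_h)\, g \,ds + \frac{\gamma}{h}\int_{\partial\Omega} g\, v_h \,ds,
$$
where $h$ is the mesh size and $\gamma > 0$ a sufficiently large penalty parameter.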

Thu, 03 Nov 2022

14:00 - 15:00
L3

Algebraic Spectral Multilevel Domain Decomposition Preconditioners

Hussam Al Daas
(STFC Rutherford Appleton Laboratory)
Abstract

Solving sparse linear systems is omnipresent in scientific computing. Direct approaches based on matrix factorization are very robust and, since they can be used as a black box, are easy for other software to adopt. However, the memory requirement of direct approaches scales poorly with the problem size, and the algorithms underpinning sparse direct solver software are poorly suited to parallel computation. Multilevel domain decomposition (MDD) methods are among the most efficient iterative methods for solving sparse linear systems. One of the main technical difficulties in using efficient MDD methods (and most other efficient preconditioners) is that they require information from the underlying problem, which prevents them from being used as a black box. This was the motivation for developing the widely used algebraic multigrid methods, for example. I will present a series of recently developed MDD methods that are robust and fully algebraic, i.e., they can be constructed given only the coefficient matrix and guarantee an a priori prescribed convergence rate. The series consists of preconditioners for sparse least-squares problems, sparse SPD matrices, general sparse matrices, and saddle-point systems. Numerical experiments illustrate the effectiveness, wide applicability, and scalability of the proposed preconditioners. A comparison of each one against state-of-the-art preconditioners is also presented.
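
As a minimal algebraic illustration of the domain decomposition idea, assuming only access to the coefficient matrix, the sketch below builds a one-level additive Schwarz preconditioner from overlapping index blocks and uses it inside conjugate gradients. It deliberately omits the spectrally constructed coarse level that makes the preconditioners in the talk robust, and the test matrix and block sizes are placeholders.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy SPD test matrix (1D Laplacian); in practice A is whatever sparse matrix is given
n = 400
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Purely algebraic overlapping "subdomains": contiguous index blocks with fixed overlap
nsub, size, overlap = 8, n // 8, 5
blocks = [np.arange(max(0, i * size - overlap), min(n, (i + 1) * size + overlap))
          for i in range(nsub)]
local_lu = [spla.splu(A[blk, :][:, blk].tocsc()) for blk in blocks]

def one_level_additive_schwarz(r):
    # sum of local subdomain solves; no coarse (second) level, hence not robust in general
    z = np.zeros_like(r)
    for blk, lu in zip(blocks, local_lu):
        z[blk] += lu.solve(r[blk])
    return z

M = spla.LinearOperator((n, n), matvec=one_level_additive_schwarz)
x, info = spla.cg(A, b, M=M)
print("CG exit flag:", info, " residual norm:", np.linalg.norm(A @ x - b))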

Thu, 19 Jan 2023

14:00 - 15:00
L3

Bridging the divide: from matrix to tensor algebra for optimal approximation and compression

Misha Kilmer
(Tufts University)
Abstract

Tensors, also known as multiway arrays, have become ubiquitous as representations for operators or as convenient schemes for storing data. Yet, when it comes to compressing these objects or analyzing the data stored in them, the tendency is to "flatten" or "matricize" the data and employ traditional linear algebraic tools, ignoring higher-dimensional correlations/structure that could have been exploited. Impediments to the development of equivalent tensor-based approaches stem from the fact that familiar concepts, such as rank and orthogonal decomposition, have no straightforward analogues and/or lead to intractable computational problems for tensors of order three and higher.

In this talk, we will review some of the common tensor decompositions and discuss their theoretical and practical limitations. We then discuss a family of tensor algebras based on a new definition of tensor-tensor products. Unlike other tensor approaches, the framework we derive around this tensor-tensor product allows us to generalize, in a very elegant way, all classical algorithms from linear algebra. Furthermore, under our framework, tensors can be decomposed in a natural (e.g., 'matrix-mimetic') way with provable approximation properties and with provable benefits over traditional matrix approximation. In addition to several examples from recent literature illustrating the advantages of our tensor-tensor product framework in practice, we highlight interesting open questions and directions for future research.
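
One concrete member of this family is the FFT-based t-product for third-order tensors, in which frontal slices are multiplied in the Fourier domain along the third mode. The sketch below (with arbitrary dimensions) shows how little code it takes; the talk covers a broader family of transform-based products.

import numpy as np

def t_product(A, B):
    # t-product of third-order tensors: FFT along the third mode, slice-wise matrix products, inverse FFT
    assert A.shape[1] == B.shape[0] and A.shape[2] == B.shape[2]
    Ahat = np.fft.fft(A, axis=2)
    Bhat = np.fft.fft(B, axis=2)
    Chat = np.einsum('ikt,kjt->ijt', Ahat, Bhat)   # frontal-slice-wise matrix products
    return np.real(np.fft.ifft(Chat, axis=2))

A = np.random.rand(4, 3, 5)
B = np.random.rand(3, 2, 5)
C = t_product(A, B)
print(C.shape)   # (4, 2, 5): behaves like matrix multiplication, tube-wise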

Thu, 09 Jun 2022

14:00 - 15:00
Virtual

Maximizing the Spread of Symmetric Non-Negative Matrices

John Urschel
(Institute for Advanced Study)
Abstract

The spread of a matrix is defined as the diameter of its spectrum. In this talk, we consider the problem of maximizing the spread of a symmetric non-negative matrix with bounded entries and discuss a number of recent results. This optimization problem is closely related to a pair of conjectures in spectral graph theory made by Gregory, Kirkland, and Hershkowitz in 2001, which were recently resolved by Breen, Riasanovsky, Tait, and Urschel. This talk will give a light overview of the approach used in this work, with a strong focus on ideas, many of which can be abstracted to more general matrix optimization problems.
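
For reference, the quantity being maximized is simply the diameter of the spectrum; a minimal sketch with an arbitrary example matrix (not one from the talk):

import numpy as np

def spread(A):
    # spread of a symmetric matrix: lambda_max - lambda_min
    eig = np.linalg.eigvalsh(A)
    return eig[-1] - eig[0]

# illustrative: adjacency matrix of a path graph on 5 vertices
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(spread(A))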

Thu, 02 Jun 2022

14:00 - 15:00
Virtual

Balanced truncation for Bayesian inference

Elizabeth Qian
(Caltech)
Abstract

We consider the Bayesian inverse problem of inferring the initial condition of a linear dynamical system from noisy output measurements taken after the initial time. In practical applications, the large dimension of the dynamical system state poses a computational obstacle to computing the exact posterior distribution. Balanced truncation is a system-theoretic method for model reduction which obtains an efficient reduced-dimension dynamical system by projecting the system operators onto state directions which simultaneously maximize energies defined by reachability and observability Gramians. We show that in our inference setting, the prior covariance and Fisher information matrices can be naturally interpreted as reachability and observability Gramians, respectively. We use these connections to propose a balancing approach to model reduction for the inference setting. The resulting reduced model then inherits stability properties and error bounds from system theory, and yields an optimal posterior covariance approximation. 
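
For context, classical balanced truncation of a stable LTI system can be sketched via the square-root method as below; the talk's reinterpretation of the prior covariance and Fisher information as the two Gramians in the inference setting is not reproduced here, and the system matrices are random placeholders.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def psd_factor(M):
    # symmetric PSD factorization M = F F^T via an eigendecomposition (robust for nearly singular M)
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V * np.sqrt(np.clip(w, 0.0, None))

# Stable toy LTI system x' = A x + B w, y = C x (random placeholder matrices)
rng = np.random.default_rng(1)
n, r = 20, 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))
C = rng.standard_normal((3, n))

# Reachability and observability Gramians from the two Lyapunov equations
P = solve_continuous_lyapunov(A, -B @ B.T)        # A P + P A^T + B B^T = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)      # A^T Q + Q A + C^T C = 0

# Square-root balanced truncation: balance the Gramians and keep the leading r directions
R, L = psd_factor(P), psd_factor(Q)
U, s, Vt = np.linalg.svd(L.T @ R)                 # s = Hankel singular values
S = np.diag(s[:r] ** -0.5)
T = R @ Vt[:r].T @ S                              # right projection basis
W = L @ U[:, :r] @ S                              # left projection basis (W.T @ T = I_r)
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T
print("Hankel singular values:", np.round(s[:6], 4))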

Thu, 16 Jun 2022

14:00 - 15:00
L5

Recent results on finite element methods for incompressible flow at high Reynolds number

Erik Burman
(University College London)
Abstract

The design and analysis of finite element methods for flow at high Reynolds number remains a challenging task, not least because of the difficulties associated with turbulence. In this talk we will first revisit some theoretical results on interior penalty methods using equal-order interpolation for smooth solutions of the Navier-Stokes equations at high Reynolds number, and show some recent computational results for turbulent flows.
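
For orientation, one standard form of the interior penalty stabilization added to an equal-order discretization penalizes gradient jumps across interior faces; the exact scalings and variants used in the talk may differ:
$$
s_h(u_h, v_h) = \sum_{F \in \mathcal{F}_i} \gamma\, h_F^{2} \int_F [\![\nabla u_h]\!] \cdot [\![\nabla v_h]\!]\, \mathrm{d}s,
$$
where $\mathcal{F}_i$ is the set of interior faces, $h_F$ the face diameter, $[\![\cdot]\!]$ the jump across a face, and $\gamma$ a stabilization parameter.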

Then we will focus on so-called pressure-robust methods, i.e., methods where the smoothness of the pressure does not affect the upper bound of the error estimates for the velocity of the Stokes system. We will discuss how convection can be stabilized for such methods in the high Reynolds number regime and, for the lowest-order case, show an interesting connection to turbulence modelling.


Thu, 19 May 2022

14:00 - 15:00
L3

Single-Shot X-FEL Imaging, Stochastic Tomography, and Optimization on Measure Spaces

Russell Luke
Abstract


Motivated by the problem of reconstructing the electron density of a molecule from pulsed X-ray diffraction images (about 10e+9 per reconstruction), we develop a framework for analyzing the convergence to invariant measures of random fixed point iterations built from mappings that, while expansive, nevertheless possess attractive fixed points. Building on techniques that we have established for determining rates of convergence of numerical methods for inconsistent nonconvex feasibility, we lift the relevant regularities to the setting of probability spaces to arrive at a convergence analysis for noncontractive Markov operators. This approach has many other applications, for instance the analysis of distributed randomized algorithms. We illustrate the approach on the problem of solving linear systems with finite-precision arithmetic.
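
As a simple, self-contained illustration of a random fixed point iteration for linear systems (here built from nonexpansive projections, unlike the expansive mappings analyzed in the talk, and unrelated to the X-FEL application), the sketch below runs randomized Kaczmarz:

import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

x = np.zeros(n)
probs = np.sum(A**2, axis=1) / np.sum(A**2)      # sample rows proportionally to their squared norms
for k in range(5000):
    i = rng.choice(m, p=probs)
    a = A[i]
    x = x + (b[i] - a @ x) / (a @ a) * a         # project onto the hyperplane a^T x = b_i
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))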


Thu, 10 Nov 2022

14:00 - 15:00
L3

Primal dual methods for Wasserstein gradient flows

José Carrillo
(University of Oxford)
Abstract

Combining the classical theory of optimal transport with modern operator splitting techniques, I will present a new numerical method for nonlinear, nonlocal partial differential equations arising in models of porous media, materials science, and biological swarming. Using the JKO scheme, along with the Benamou-Brenier dynamical characterization of the Wasserstein distance, we reduce computing the solution of these evolutionary PDEs to solving a sequence of fully discrete minimization problems with strictly convex objective functions and linear constraints. We compute the minimizers of these fully discrete problems by applying a recent, provably convergent primal-dual splitting scheme for three operators. By leveraging the PDE's underlying variational structure, our method overcomes traditional stability issues arising from the strong nonlinearity and degeneracy, and it is also naturally positivity preserving and entropy decreasing. Furthermore, by transforming the traditional linear equality constraint, as has appeared in previous work, into a linear inequality constraint, our method converges in fewer iterations without sacrificing any accuracy. We prove that minimizers of the fully discrete problem converge to minimizers of the continuum JKO problem as the discretization is refined, and in the process we recover convergence results for existing numerical methods for computing Wasserstein geodesics. Simulations of nonlinear PDEs and Wasserstein geodesics in one and two dimensions that illustrate the key properties of our numerical method will be shown.
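
For reference, the two classical ingredients mentioned above are the JKO time discretization and the Benamou-Brenier dynamical formulation of the squared Wasserstein distance:
$$
\rho^{n+1} \in \operatorname*{arg\,min}_{\rho} \; \frac{1}{2\tau} W_2^2(\rho, \rho^{n}) + E(\rho), \qquad
W_2^2(\rho_0,\rho_1) = \min_{(\rho,m)} \left\{ \int_0^1 \!\! \int \frac{|m|^2}{\rho}\, dx\, dt \;:\; \partial_t \rho + \nabla\!\cdot m = 0,\ \rho|_{t=0}=\rho_0,\ \rho|_{t=1}=\rho_1 \right\},
$$
where $\tau$ is the time step and $E$ the energy driving the PDE; each JKO step is one of the fully discrete minimization problems referred to in the abstract.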
