Forthcoming events in this series


Thu, 18 Nov 2021
14:00
L4

Infinite-Dimensional Spectral Computations

Matt Colbrook
(University of Cambridge)
Abstract

Computing spectral properties of operators is fundamental in the sciences, with applications in quantum mechanics, signal processing, fluid mechanics, dynamical systems, etc. However, the infinite-dimensional problem is infamously difficult (common difficulties include spectral pollution and dealing with continuous spectra). This talk introduces classes of practical resolvent-based algorithms that rigorously compute a zoo of spectral properties of operators on Hilbert spaces. We also discuss how these methods form part of a broader programme on the foundations of computation. The focus will be on computing spectra with error control and spectral measures, for general discrete and differential operators. Analogous to eigenvalues and eigenvectors, these objects “diagonalise” operators in infinite dimensions through the spectral theorem. The first is computed by an algorithm that approximates resolvent norms. The second is computed by building convolutions of appropriate rational functions with the measure via the resolvent operator (solving shifted linear systems). The final part of the talk provides purely data-driven algorithms that compute the spectral properties of Koopman operators, with convergence guarantees, from snapshot data. Koopman operators “linearise” nonlinear dynamical systems, the price being a reduction to an infinite-dimensional spectral problem (cf. “Koopmania”, a term describing their surge in popularity). The talk will end with applications of these new methods in several thousand state-space dimensions.
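To make the resolvent idea concrete in the finite-dimensional case: convolving the spectral measure of a Hermitian matrix with a Poisson kernel requires nothing but shifted linear solves. The sketch below is an illustrative toy (the matrix, vector, and grid are invented for the example), not the speaker's infinite-dimensional algorithm.

```python
import numpy as np

def smoothed_measure(A, v, grid, eps):
    """Poisson-kernel smoothing of the spectral measure of Hermitian A
    with respect to v: mu_eps(x) = (1/pi) Im <v, (A - (x + i eps))^{-1} v>.
    Each grid point costs one shifted linear solve."""
    n = A.shape[0]
    out = np.empty(len(grid))
    for k, x in enumerate(grid):
        r = np.linalg.solve(A - (x + 1j * eps) * np.eye(n), v.astype(complex))
        out[k] = (v @ r).imag / np.pi
    return out

# Toy example: A diagonal, so the spectral measure is a sum of point
# masses at the diagonal entries with weights |v_i|^2.
A = np.diag([-0.8, -0.3, 0.1, 0.6])
v = np.ones(4) / 2.0                      # normalized: total mass 1
grid = np.linspace(-3.0, 3.0, 4001)
mu = smoothed_measure(A, v, grid, eps=0.01)
mass = mu.sum() * (grid[1] - grid[0])     # close to 1 for small eps
```

As eps shrinks, mu concentrates into sharp peaks at the eigenvalues; in infinite dimensions the same convolution structure gives access to continuous spectra as well.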

Thu, 11 Nov 2021
14:00
Virtual

A Fast, Stable QR Algorithm for the Diagonalization of Colleague Matrices

Vladimir Rokhlin
(Yale University)
Abstract

 

The roots of a function represented by its Chebyshev expansion are known to be the eigenvalues of the so-called colleague matrix, which is a Hessenberg matrix that is the sum of a symmetric tridiagonal matrix and a rank-1 perturbation. The rootfinding problem is thus reformulated as an eigenproblem, making the computation of the eigenvalues of such matrices a subject of significant practical interest. To obtain the roots with the maximum possible accuracy, the eigensolver used must possess a somewhat subtle form of stability.
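The reformulation is easy to demonstrate with NumPy, whose Chebyshev root finder is itself built on the colleague matrix. The sketch below uses a generic dense eigensolver, so it illustrates only the eigenproblem reformulation, not the fast, stable O(n^2) algorithm of the talk.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# A polynomial with known roots, expressed in the Chebyshev basis.
roots = np.array([-0.75, -0.2, 0.3, 0.9])
coeffs = C.chebfromroots(roots)

# The colleague matrix: Hessenberg, a symmetric tridiagonal matrix
# plus a rank-1 correction.
colleague = C.chebcompanion(coeffs)

# Its eigenvalues are the roots of the Chebyshev expansion.
computed = np.sort(np.linalg.eigvals(colleague).real)
```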

In this talk, I will discuss a recently constructed algorithm for the diagonalization of colleague matrices, satisfying the relevant stability requirements.  The scheme has CPU time requirements proportional to n^2, with n the dimensionality of the problem; the storage requirements are proportional to n. Furthermore, the actual CPU times (and storage requirements) of the procedure are quite acceptable, making it an approach of choice even for small-scale problems. I will illustrate the performance of the algorithm with several numerical examples.

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

 

Thu, 04 Nov 2021
14:00
L4

Rational approximation and beyond, or, What I did during the pandemic

Nick Trefethen
(Mathematical Institute (University of Oxford))
Abstract

The past few years have been an exciting time for my work related to rational approximation.  This talk will present four developments:

1. AAA approximation (2016, with Nakatsukasa & Sète)
2. Root-exponential convergence and tapered exponential clustering (2020, with Nakatsukasa & Weideman)
3. Lightning (2017-2020, with Gopal & Brubeck)
4. Log-lightning (2020-21, with Nakatsukasa & Baddoo)
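For readers unfamiliar with item 1, the AAA algorithm admits a compact implementation: greedily add support points where the current error is largest, and choose barycentric weights as the smallest right singular vector of a Loewner matrix. The sketch below (function names and the test function exp are my own) follows the published algorithm but omits refinements such as Froissart-doublet cleanup.

```python
import numpy as np

def aaa(Z, F, tol=1e-13, mmax=100):
    """Minimal AAA sketch: greedy barycentric rational approximation of
    data F sampled at points Z (after Nakatsukasa, Sete & Trefethen)."""
    Z, F = np.asarray(Z, float), np.asarray(F, float)
    mask = np.ones(len(Z), bool)          # True = not yet a support point
    R = np.full_like(F, F.mean())         # current approximant on Z
    zs, fs = [], []
    for _ in range(mmax):
        j = np.argmax(np.where(mask, np.abs(F - R), 0))  # worst data point
        zs.append(Z[j]); fs.append(F[j]); mask[j] = False
        zj, fj = np.array(zs), np.array(fs)
        Cmat = 1.0 / (Z[mask, None] - zj)    # Cauchy matrix, remaining points
        L = (F[mask, None] - fj) * Cmat      # Loewner matrix
        w = np.linalg.svd(L)[2][-1]          # weights: smallest sing. vector
        R = F.copy()                         # support points interpolate
        R[mask] = (Cmat @ (w * fj)) / (Cmat @ w)
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return zj, fj, w

def reval(x, zj, fj, w):                     # evaluate the barycentric form
    Cmat = 1.0 / (np.asarray(x)[:, None] - zj)
    return (Cmat @ (w * fj)) / (Cmat @ w)

Z = np.linspace(-1, 1, 500)
zj, fj, w = aaa(Z, np.exp(Z))                # ~10 support points for exp
```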

Two other topics will not be discussed:

X. AAA-Lawson approximation (2018, with Nakatsukasa)
Y. AAA-LS approximation (2021, with Costa)

Thu, 28 Oct 2021
14:00
Virtual

Randomized FEAST Algorithm for Generalized Hermitian Eigenvalue Problems with Probabilistic Error Analysis

Agnieszka Międlar
(University of Kansas)
Further Information

This talk is hosted by the Computational Mathematics Group of the Rutherford Appleton Laboratory.

Abstract

Randomized NLA methods have recently gained popularity because of their easy implementation, computational efficiency, and numerical robustness. We propose a randomized version of the well-established FEAST eigenvalue algorithm that enables computing the eigenvalues of the Hermitian matrix pencil $(\textbf{A},\textbf{B})$ located in a given real interval $\mathcal{I} \subset [\lambda_{min}, \lambda_{max}]$. In this talk, we will present deterministic as well as probabilistic error analysis of the accuracy of the approximate eigenpairs and subspaces obtained using the randomized FEAST algorithm. First, we derive bounds for the canonical angles between the exact and the approximate eigenspaces corresponding to the eigenvalues contained in the interval $\mathcal{I}$. Then, we present bounds for the accuracy of the eigenvalues and the corresponding eigenvectors. This part of the analysis is independent of the particular distribution of the initial subspace, and we therefore refer to it as deterministic. In the case of the starting guess being a Gaussian random matrix, we provide more informative, probabilistic error bounds. Finally, we will illustrate numerically the effectiveness of all the proposed error bounds.
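The FEAST iteration itself is compact: approximate the spectral projector onto the search interval by a contour quadrature of the resolvent, apply it to a (here Gaussian random) starting block, and perform Rayleigh-Ritz. The sketch below is a bare-bones standard-eigenproblem version ($\textbf{B} = I$) for a real symmetric matrix; it illustrates the algorithm being analyzed, not the talk's error bounds, and all sizes and seeds are invented.

```python
import numpy as np

def feast_sketch(A, lmin, lmax, m0, quad=16, iters=6, seed=0):
    """Bare-bones FEAST for real symmetric A (B = I): approximate the
    spectral projector onto [lmin, lmax] by trapezoidal quadrature of the
    resolvent on a circle, then Rayleigh-Ritz on the filtered subspace."""
    n = A.shape[0]
    c, r = (lmin + lmax) / 2, (lmax - lmin) / 2
    Y = np.random.default_rng(seed).standard_normal((n, m0))  # random start
    for _ in range(iters):
        Q = np.zeros((n, m0), dtype=complex)
        for j in range(quad):                 # quadrature nodes on the circle
            z = c + r * np.exp(2j * np.pi * (j + 0.5) / quad)
            Q += (z - c) / quad * np.linalg.solve(z * np.eye(n) - A, Y)
        Q = np.linalg.qr(Q.real)[0]           # filtered, orthonormalized
        evals, V = np.linalg.eigh(Q.T @ A @ Q)  # Rayleigh-Ritz
        Y = Q @ V
    inside = (evals >= lmin) & (evals <= lmax)
    return evals[inside], Y[:, inside]

rng = np.random.default_rng(3)
M = rng.standard_normal((40, 40))
A = (M + M.T) / 2
true = np.linalg.eigvalsh(A)
lmin = (true[17] + true[18]) / 2             # interval holding 5 eigenvalues
lmax = (true[22] + true[23]) / 2
evals, vecs = feast_sketch(A, lmin, lmax, m0=10)
```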

 

---

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 21 Oct 2021
14:00
Virtual

Randomized Methods for Sublinear Time Low-Rank Matrix Approximation

Cameron Musco
(University of Massachusetts)
Abstract

I will discuss recent advances in sampling methods for positive semidefinite (PSD) matrix approximation. In particular, I will show how new techniques based on recursive leverage score sampling yield a surprising algorithmic result: we give a method for computing a near-optimal rank-k approximation to any n x n PSD matrix in O(n * k^2) time. When k is not too large, our algorithm runs in sublinear time -- i.e., it does not need to read all entries of the matrix. This result illustrates the ability of randomized methods to exploit the structure of PSD matrices and go well beyond what is possible with traditional algorithmic techniques. I will discuss a number of current research directions and open questions, focused on applications of randomized methods to sublinear time algorithms for structured matrix problems.
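The recursive leverage-score sampling behind the O(n * k^2) result is beyond a short sketch, but the underlying Nystrom approximation shows why column sampling suffices for PSD matrices: only the sampled columns of K are ever read. Uniform sampling below stands in for the leverage-score scheme, and the example matrix is synthetic.

```python
import numpy as np

def nystrom(K, idx):
    """Nystrom approximation of a PSD matrix from a subset of its columns:
    K ~ C @ pinv(W) @ C.T, reading only the sampled columns of K."""
    C = K[:, idx]             # n x s sampled columns
    W = C[idx, :]             # s x s intersection block
    return C @ np.linalg.pinv(W) @ C.T

# A rank-5 PSD matrix: sampling enough columns recovers it exactly.
rng = np.random.default_rng(0)
G = rng.standard_normal((100, 5))
K = G @ G.T
idx = rng.choice(100, size=20, replace=False)
Kh = nystrom(K, idx)
err = np.linalg.norm(K - Kh) / np.linalg.norm(K)
```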

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 14 Oct 2021
14:00
Virtual

What is the role of a neuron?

David Bau
(MIT)
Abstract

 

One of the great challenges of neural networks is to understand how they work.  For example: does a neuron encode a meaningful signal on its own?  Or is a neuron simply an undistinguished and arbitrary component of a feature vector space?  The tension between the neuron doctrine and the population coding hypothesis is one of the classical debates in neuroscience. It is a difficult debate to settle without an ability to monitor every individual neuron in the brain.

 

Within artificial neural networks we can examine every neuron. Beginning with the simple proposal that an individual neuron might represent one internal concept, we conduct studies relating deep network neurons to human-understandable concepts in a concrete, quantitative way: Which neurons? Which concepts? Are neurons more meaningful than an arbitrary feature basis? Do neurons play a causal role? We examine both simplified settings and state-of-the-art networks in which neurons learn how to represent meaningful objects within the data without explicit supervision.

 

Following this inquiry in computer vision leads us to insights about the computational structure of practical deep networks that enable several new applications, including semantic manipulation of objects in an image; understanding of the sparse logic of a classifier; and quick, selective editing of generalizable rules within a fully trained generative network.  It also presents an unanswered mathematical question: why is such disentanglement so pervasive?

 

In the talk, we challenge the notion that the internal calculations of a neural network must be hopelessly opaque. Instead, we propose to tear back the curtain and chart a path through the detailed structure of a deep network by which we can begin to understand its logic.

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 17 Jun 2021

14:00 - 15:00
Virtual

Primal-dual Newton methods, with application to viscous fluid dynamics

Georg Stadler
(New York University)
Abstract

I will discuss modified Newton methods for solving nonlinear systems of PDEs. These methods introduce additional variables before deriving the Newton linearization. These variables can then often be eliminated analytically before solving the Newton system, so that existing solvers can be adapted easily and the computational cost does not increase compared to a standard Newton method. The resulting algorithms yield favorable convergence properties. After illustrating the ideas on a simple example, I will show their application to the solution of incompressible Stokes flow problems with a viscoplastic constitutive relation, where the additionally introduced variable is the stress tensor. Such constitutive relations are commonly used in earth science models. This is joint work with Johann Rudi (Argonne) and Melody Shih (NYU).

 

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 10 Jun 2021
14:00
Virtual

53 Matrix Factorizations, generalized Cartan, and Random Matrix Theory

Alan Edelman
(MIT)
Further Information

Joint seminar with the Random Matrix Theory group

Abstract

An insightful exercise might be to ask what is the most important idea in linear algebra. Our first answer would not be eigenvalues or linearity; it would be “matrix factorizations.” We will discuss a blueprint to generate 53 inter-related matrix factorizations (times 2), most of which appear to be new. The underlying mathematics may be traced back to Cartan (1927), Harish-Chandra (1956), and Flensted-Jensen (1978). We will discuss the interesting history. One anecdote is that Eugene Wigner (1968) discovered factorizations such as the SVD in passing, in a way that was buried; only eight authors have referenced that work. Ironically, Wigner referenced Sigurður Helgason (1962) but did not recognize his own results in Helgason's book. This work also extends and completes open problems posed by Mackey² & Tisseur (2003/2005).

Classical results of Random Matrix Theory concern exact formulas from the Hermite, Laguerre, Jacobi, and Circular distributions. Following an insight from Freeman Dyson (1970), Zirnbauer (1996) and Duenez (2004/5) linked some of these classical ensembles to Cartan's theory of Symmetric Spaces. One troubling fact is that symmetric spaces alone do not cover all of the Jacobi ensembles. We present a completed theory based on the generalized Cartan distribution. Furthermore, we show how the matrix factorization obtained by the generalized Cartan decomposition G=K₁AK₂ plays a crucial role in sampling algorithms and the derivation of the joint probability density of A.

Joint work with Sungwoo Jeong

 

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 03 Jun 2021
14:00
Virtual

Distributing points by minimizing energy for constructing approximation formulas with variable transformation

Ken'ichiro Tanaka
(University of Tokyo)
Abstract


In this talk, we present some effective methods for distributing points for approximating analytic functions with prescribed decay on a strip region including the real axis. Such functions appear when we use numerical methods with variable transformations. Typical examples of such methods are provided by single-exponential (SE) or double-exponential (DE) sinc formulas, in which variable transformations yield single- or double-exponential decay of functions on the real axis. It has been known that the formulas are nearly optimal on a Hardy space with a single- or double-exponential weight on the strip region, which is regarded as a space of transformed functions by the variable transformations.

Recently, we have proposed new approximation formulas that outperform the sinc formulas. For constructing them, we use an expression of the error norm (a.k.a. worst-case error) of an n-point interpolation operator in the weighted Hardy space. The expression is closely related to potential theory, and optimal points for interpolation correspond to an equilibrium measure of an energy functional with an external field. Since a discrete version of the energy becomes convex in the points under a mild condition about the weight, we can obtain good points with a standard optimization technique. Furthermore, with the aid of the formulation with the energy, we can find approximate distributions of the points theoretically.

[References]
- K. Tanaka, T. Okayama, M. Sugihara: Potential theoretic approach to design of accurate formulas for function approximation in symmetric weighted Hardy spaces, IMA Journal of Numerical Analysis Vol. 37 (2017), pp. 861-904.

- K. Tanaka, M. Sugihara: Design of accurate formulas for approximating functions in weighted Hardy spaces by discrete energy minimization, IMA Journal of Numerical Analysis Vol. 39 (2019), pp. 1957-1984.

- S. Hayakawa, K. Tanaka: Convergence analysis of approximation formulas for analytic functions via duality for potential energy minimization, arXiv:1906.03133.

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 27 May 2021
14:00
Virtual

Algebraic multigrid methods for GPUs

Ulrike Meier Yang
(Lawrence Livermore National Laboratory)
Abstract

Computational science is facing several major challenges with rapidly changing, highly complex, heterogeneous computer architectures. To meet these challenges and yield fast and efficient performance, solvers need to be easily portable. Algebraic multigrid (AMG) methods have great potential to achieve good performance, since they have shown excellent numerical scalability for a variety of problems. However, their implementation on emerging computer architectures, which favor structure, presents new challenges. To address these difficulties, we have considered the modularization of AMG, that is, breaking AMG components into smaller kernels to improve portability, as well as the development of new algorithms to replace components that are not suitable for GPUs. Another way to achieve performance on accelerators is to increase structure in algorithms. This talk will discuss new algorithmic developments, including a new class of interpolation operators for unstructured AMG that consists of simple matrix operations, and efforts to develop a semi-structured AMG method.

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 20 May 2021
14:00
Virtual

The bubble transform and the de Rham complex

Ragnar Winther
(University of Oslo)
Abstract

The bubble transform was a concept introduced by Richard Falk and me in a paper published in Foundations of Computational Mathematics in 2016. From a simplicial mesh of a bounded domain in $R^n$ we constructed a map which decomposes scalar-valued functions into a sum of local bubbles supported on appropriate macroelements. The construction is done without reference to any finite element space, but has the property that the standard continuous piecewise polynomial spaces are invariant. Furthermore, the transform is bounded in $L^2$ and $H^1$, and as a consequence we obtained a new tool for the understanding of finite element spaces of arbitrary polynomial order. The purpose of this talk is to review the previous results, and to discuss how to generalize the construction to differential forms such that the corresponding properties hold. In particular, the generalized transform will be defined such that it commutes with the exterior derivative.

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 06 May 2021
14:00
Virtual

A proximal quasi-Newton trust-region method for nonsmooth regularized optimization

Dominique Orban
(École Polytechnique Montréal)
Abstract

We develop a trust-region method for minimizing the sum of a smooth term f and a nonsmooth term h, both of which can be nonconvex. Each iteration of our method minimizes a possibly nonconvex model of f+h in a trust region. The model coincides with f+h in value and subdifferential at the center. We establish global convergence to a first-order stationary point when f satisfies a smoothness condition that holds, in particular, when it has Lipschitz-continuous gradient, and h is proper and lower semi-continuous. The model of h is required to be proper, lower semi-continuous and prox-bounded. Under these weak assumptions, we establish a worst-case O(1/ε^2) iteration complexity bound that matches the best known complexity bound of standard trust-region methods for smooth optimization. We detail a special instance in which we use a limited-memory quasi-Newton model of f and compute a step with the proximal gradient method, resulting in a practical proximal quasi-Newton method. We describe our Julia implementations and report numerical results on inverse problems from sparse optimization and signal processing. Our trust-region algorithm exhibits promising performance and compares favorably with linesearch proximal quasi-Newton methods based on convex models.
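To fix ideas about the inner solver: when h = λ‖·‖₁, a proximal gradient step reduces to a gradient step on f followed by soft-thresholding. The sketch below is plain ISTA on a lasso toy problem (data and parameters invented), shown only to illustrate the proximal gradient building block, not the trust-region method of the talk.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam, steps=2000):
    """Proximal gradient (ISTA) for 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # gradient of the smooth term
        x = soft_threshold(x - g / L, lam / L)
    return x

# Sparse recovery toy problem: 3-sparse signal, 40 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true
x = prox_gradient(A, b, lam=0.1)
```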

This is joint work with Aleksandr Aravkin and Robert Baraldi.

-

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 29 Apr 2021
14:00

Regularity, stability and passivity distances for dissipative Hamiltonian systems

Volker Mehrmann
(TU Berlin)
Abstract

Dissipative Hamiltonian systems are an important class of dynamical systems that arise in all areas of science and engineering. They are a special case of port-Hamiltonian control systems. When the system is linearized around a stationary solution, one obtains a linear dissipative Hamiltonian system, typically of differential-algebraic type. Despite the fact that the system looks unstructured at first sight, it has remarkable properties. Stability and passivity are automatic; the spectral structures for purely imaginary eigenvalues, eigenvalues at infinity, and even singular blocks in the Kronecker canonical form are very restricted; and, furthermore, the structure leads to fast and efficient iterative solution methods for associated linear systems. When port-Hamiltonian systems are subject to (structured) perturbations, it is important to determine the smallest allowed perturbations under which these properties are lost. The computation of these structured distances to instability, non-passivity, or non-regularity is typically a very hard non-convex optimization problem. However, in the context of dissipative Hamiltonian systems, the computation becomes much easier and can even be implemented efficiently for large-scale problems in combination with model reduction techniques. We will discuss these distances and the computational methods, and illustrate the results via an industrial problem.

 

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 11 Mar 2021
14:00
Virtual

Structured matrix approximations via tensor decompositions

Arvind Saibaba
(North Carolina State University)
Abstract

We provide a computational framework for approximating a class of structured matrices (e.g., block Toeplitz, block banded). Our approach has three steps: map the structured matrix to tensors, use tensor compression algorithms, and map the compressed tensors back to obtain two different matrix representations: a sum of Kronecker products and a block low-rank format. The use of tensor decompositions enables us to uncover latent structure in the matrices and leads to computationally efficient algorithms. The resulting matrix approximations are memory efficient, easy to compute with, and preserve the error due to the tensor compression in the Frobenius norm. While our framework is quite general, we illustrate the potential of our method on structured matrices from three applications: system identification, space-time covariance matrices, and image deblurring.
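One classical instance of the "sum of Kronecker products" representation is the nearest-Kronecker-product problem, which the Van Loan-Pitsianis rearrangement reduces to a rank-1 SVD. The sketch below illustrates that building block on synthetic data, not the authors' tensor-decomposition framework.

```python
import numpy as np

def nearest_kron(A, bshape, cshape):
    """Best Frobenius-norm approximation A ~ kron(B, C) via the SVD of
    the Van Loan-Pitsianis rearrangement of A."""
    m, n = bshape
    p, q = cshape
    # Stack the vectorized p x q blocks of A as the rows of R.
    R = np.array([A[i*p:(i+1)*p, j*q:(j+1)*q].ravel()
                  for i in range(m) for j in range(n)])
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m, n)
    C = np.sqrt(s[0]) * Vt[0].reshape(p, q)
    return B, C

# If A is exactly a Kronecker product, the rank-1 SVD recovers it.
rng = np.random.default_rng(0)
B0, C0 = rng.standard_normal((3, 4)), rng.standard_normal((5, 2))
A = np.kron(B0, C0)
B, C = nearest_kron(A, (3, 4), (5, 2))
err = np.linalg.norm(A - np.kron(B, C))
```

Keeping more singular triplets of the rearranged matrix yields a sum of Kronecker products, the first of the two formats mentioned above.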

Joint work with Misha Kilmer (Tufts University)

 

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 04 Mar 2021
14:00
Virtual

Optimization on manifolds: introduction and newsflashes

Pierre-Antoine Absil
(UC Louvain)
Abstract

This talk concerns applications of differential geometry in numerical optimization. They arise when the optimization problem can be formulated as finding an optimum of a real-valued cost function defined on a smooth nonlinear search space. Oftentimes, the search space is a "matrix manifold", in the sense that its points admit natural representations in the form of matrices. In most cases, the matrix manifold structure is due either to the presence of certain nonlinear constraints (such as orthogonality or rank constraints), or to invariance properties in the cost function that need to be factored out in order to obtain a nondegenerate optimization problem. Manifolds that come up in practical applications include the rotation group SO(3) (generation of rigid body motions from sample points), the set of fixed-rank matrices (low-rank models, e.g., in collaborative filtering), the set of 3x3 symmetric positive-definite matrices (interpolation of diffusion tensors), and the shape manifold (morphing).

In recent years, the practical importance of optimization problems on manifolds has stimulated the development of geometric optimization algorithms that exploit the differential structure of the manifold search space. In this talk, we give an overview of geometric optimization algorithms and their applications, with an emphasis on the underlying geometric concepts and on the numerical efficiency of the algorithm implementations.
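A minimal example of the geometric ingredients involved (tangent-space projection and retraction): Riemannian gradient descent on the unit sphere for the Rayleigh quotient x'Ax, whose minimizer is an eigenvector for the smallest eigenvalue. This is a textbook scheme sketched with invented parameters, not any particular algorithm from the talk.

```python
import numpy as np

def sphere_rayleigh_descent(A, steps=20000, lr=0.05, seed=0):
    """Riemannian gradient descent on the unit sphere for f(x) = x' A x:
    project the Euclidean gradient onto the tangent space at x, take a
    step, then retract back to the sphere by normalization."""
    n = A.shape[0]
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        g = 2 * A @ x                       # Euclidean gradient
        rg = g - (x @ g) * x                # tangent-space projection
        x = x - lr * rg                     # step along the tangent direction
        x /= np.linalg.norm(x)              # retraction onto the sphere
    return x @ A @ x, x

# Minimizing the Rayleigh quotient yields the smallest eigenvalue.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = (M + M.T) / 2
val, vec = sphere_rayleigh_descent(A)
```

Here the sphere constraint is handled by the manifold structure itself rather than by Lagrange multipliers, which is precisely the appeal of the geometric approach.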

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 25 Feb 2021
14:00
Virtual

Big data is low rank

Madeleine Udell
(Cornell University)
Abstract

Data scientists are often faced with the challenge of understanding a high dimensional data set organized as a table. These tables may have columns of different (sometimes, non-numeric) types, and often have many missing entries. In this talk, we discuss how to use low rank models to analyze these big messy data sets. Low rank models perform well (indeed, suspiciously well) across a wide range of data science applications, including applications in social science, medicine, and machine learning. We introduce the mathematics of low rank models, demonstrate a few surprising applications of low rank models in data science, and present a simple mathematical explanation for their effectiveness.
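The mathematical core of low rank models is the Eckart-Young theorem: the truncated SVD gives the best rank-k approximation, with spectral-norm error equal to the (k+1)-st singular value. A small numerical check, on synthetic data rather than a real messy table:

```python
import numpy as np

def best_rank_k(X, k):
    """Best rank-k approximation of X (Eckart-Young): truncate the SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

# A noisy table that is approximately rank 3.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50)) \
    + 0.01 * rng.standard_normal((200, 50))
X3 = best_rank_k(X, 3)
rel_err = np.linalg.norm(X - X3) / np.linalg.norm(X)   # small: X is ~rank 3
```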

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 18 Feb 2021
14:00
Virtual

The reference map technique for simulating complex materials and multi-body interactions

Chris Rycroft
(Harvard University)
Abstract

Conventional computational methods often create a dilemma for fluid-structure interaction problems. Typically, solids are simulated using a Lagrangian approach with a grid that moves with the material, whereas fluids are simulated using an Eulerian approach with a fixed spatial grid, requiring some type of interfacial coupling between the two different perspectives. Here, a fully Eulerian method for simulating structures immersed in a fluid will be presented. By introducing a reference map variable to model finite-deformation constitutive relations in the structures on the same grid as the fluid, the interfacial coupling problem is highly simplified. The method is particularly well suited for simulating soft, highly deformable materials and many-body contact problems, and several examples will be presented.

 

This is joint work with Ken Kamrin (MIT).

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 11 Feb 2021

14:00 - 15:00
Virtual

From design to numerical analysis of partial differential equations: a unified mathematical framework

Annalisa Buffa
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

Computer-based simulation of partial differential equations (PDEs) involves approximating the unknowns and relies on a suitable description of geometrical entities such as the computational domain and its properties. The Finite Element Method (FEM) is by far the most popular technique for the computer-based simulation of PDEs and hinges on the assumption that the discretized domain and the unknown fields are both represented by piecewise polynomials, on tetrahedral or hexahedral partitions. In reality, the simulation of PDEs is one brick within a workflow where, at the beginning, the geometrical entities are created, described and manipulated with a geometry processor, often through Computer-Aided Design (CAD) systems, and then used for the simulation of the mechanical behaviour of the designed object. This workflow is often repeated many times as part of a shape optimisation loop. Within this loop, the use of FEM on CAD geometries (which are mainly represented through their boundaries) then calls for (re-)meshing and re-interpolation techniques that often require human intervention and result in inaccurate solutions and a lack of robustness in the whole process. In my talk, I will present the mathematical counterpart of this problem: I will discuss the mismatch between the mathematical representations of geometries and of PDE unknowns, and introduce a promising framework in which geometric objects and PDE unknowns are represented in a compatible way. Within this framework, the challenges to be addressed in order to construct robust PDE solvers are many, and I will discuss some of them. Mathematical results will be supported by numerical validation.

Thu, 04 Feb 2021
14:00
Virtual

Modeling composite structures with defects

Anne Reinarz
(University of Durham)
Abstract

Composite materials make up over 50% of recent aircraft constructions. They are manufactured from very thin fibrous layers (~10^-4 m) and even thinner resin interfaces (~10^-5 m). To achieve the required strength, a particular layup sequence of orientations of the anisotropic fibrous layers is used. During manufacturing, small localised defects in the form of misaligned fibrous layers can occur in composite materials, adding an additional level of complexity. After FE discretisation, the model exhibits multiple scales and large spatial variations in model parameters. Thus the resulting linear system of equations can be very ill-conditioned and extremely large. The limitations of commercially available modelling tools for solving these problems have led us to the implementation of a robust and scalable preconditioner called GenEO for parallel Krylov solvers. I will discuss using the GenEO coarse space as an effective multiscale model for the fine-scale displacement and stress fields. For the coarse space construction, GenEO computes generalised eigenvectors of the local stiffness matrices on the overlapping subdomains and builds an approximate coarse space by combining the smallest-energy eigenvectors on each subdomain via a partition of unity.

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 28 Jan 2021
14:00
Virtual

Spontaneous periodic orbits in the Navier-Stokes flow via computer-assisted proofs

Jean-Philippe Lessard
(McGill University)
Abstract
In this talk, we introduce a general method to obtain constructive proofs of existence of periodic orbits in the forced autonomous Navier-Stokes equations on the three-torus. After introducing a zero finding problem posed on a Banach space of geometrically decaying Fourier coefficients, a Newton-Kantorovich theorem is applied to obtain the (computer-assisted) proofs of existence. As applications, we present proofs of existence of spontaneous periodic orbits in the Navier-Stokes equations with Taylor-Green forcing.

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 21 Jan 2021
14:00
Virtual

Domain specific languages for convex optimization

Stephen Boyd
(Stanford University)
Abstract

Specialized languages for describing convex optimization problems, and associated parsers that automatically transform them to canonical form, have greatly increased the use of convex optimization in applications. These systems allow users to rapidly prototype applications based on solving convex optimization problems, as well as generate code suitable for embedded applications. In this talk I will describe the general methods used in such systems.
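As a hand-worked example of the canonicalization such parsers automate (using SciPy's linprog as a back-end solver rather than any particular DSL): minimizing ||Ax - b||_1 becomes a linear program once slack variables t are introduced with -t <= Ax - b <= t. The data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Problem data for: minimize ||Ax - b||_1 over x.
rng = np.random.default_rng(0)
m, n = 30, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Canonical form with decision vector [x; t]: minimize sum(t)
# subject to  A x - t <= b  and  -A x - t <= -b.
c = np.concatenate([np.zeros(n), np.ones(m)])
I = np.eye(m)
A_ub = np.block([[A, -I], [-A, -I]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x = res.x[:n]
```

A DSL performs exactly this kind of epigraph/slack transformation mechanically, for arbitrary compositions of convex atoms, before handing the canonical problem to a solver.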

 

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 03 Dec 2020
14:00
Virtual

Reconstructing Signals with Simple Fourier Transforms

Haim Avron
(Tel Aviv University)
Abstract

Reconstructing continuous signals based on a small number of discrete samples is a fundamental problem across science and engineering. In practice, we are often interested in signals with "simple" Fourier structure -- e.g., those involving frequencies within a bounded range, a small number of frequencies, or a few blocks of frequencies. More broadly, any prior knowledge about a signal's Fourier power spectrum can constrain its complexity. Intuitively, signals with more highly constrained Fourier structure require fewer samples to reconstruct.

We formalize this intuition by showing that, roughly speaking, a continuous signal from a given class can be approximately reconstructed using a number of samples equal to the statistical dimension of the allowed power spectrum of that class. We prove that, in nearly all settings, this natural measure tightly characterizes the sample complexity of signal reconstruction.

Surprisingly, we also show that, up to logarithmic factors, a universal non-uniform sampling strategy can achieve this optimal complexity for any class of signals. We present a simple, efficient, and general algorithm for recovering a signal from the samples taken. For bandlimited and sparse signals, our method matches the state-of-the-art. At the same time, it gives the first computationally and sample efficient solution to a broad range of problems, including multiband signal reconstruction and common kriging and Gaussian process regression tasks.

Our work is based on a novel connection between randomized linear algebra and the problem of reconstructing signals with constrained Fourier structure. We extend tools based on statistical leverage score sampling and column-based matrix reconstruction to the approximation of continuous linear operators that arise in the signal fitting problem. We believe that these extensions are of independent interest and serve as a foundation for tackling a broad range of continuous time problems using randomized methods.

This is joint work with Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker and Amir Zandieh.

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please send email to @email.

Thu, 26 Nov 2020
14:00
Virtual

Why should we care about Steklov eigenproblems?

Nilima Nigam
(Simon Fraser University)
Abstract

Steklov eigenproblems and their variants (where the spectral parameter appears in the boundary condition) arise in a range of useful applications. For instance, understanding some properties of the mixed Steklov-Neumann eigenfunctions tells us why we shouldn't use coffee cups for expensive brandy. 

In this talk I'll present a high-accuracy discretization strategy for computing Steklov eigenpairs. The strategy can be used to study questions in spectral geometry and spectral optimization, and to solve elliptic boundary value problems with Robin boundary conditions.

--

A link for the talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

 

Thu, 19 Nov 2020
14:00
Virtual

A foundation for automated high performance scientific machine learning

Chris Rackauckas
(MIT)
Abstract

Scientific machine learning is a burgeoning discipline for mixing machine learning into scientific simulation. Use cases of this field include automated discovery of physical equations and accelerating physical simulators. However, making the analyses of this field automated will require building a set of tools that handle stiff and ill-conditioned models without requiring user tuning. The purpose of this talk is to demonstrate how the methods and tools of scientific machine learning can be consolidated to give a single high performance and robust software stack. We will start by describing universal differential equations, a flexible mathematical object which is able to represent methodologies for equation discovery, 100-dimensional differential equation solvers, and discretizations of physics-informed neural networks. Then we will showcase how adjoint sensitivity analysis on the universal differential equation solving process gives rise to efficient and stiffly robust training methodologies for a large variety of scientific machine learning problems. With this understanding of differentiable programming we will describe how the Julia SciML Software Organization is utilizing this foundation to provide high performance tools for deploying battery-powered airplanes, improving the energy efficiency of buildings, allowing navigation via the Earth's magnetic field, and more.

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please send email to @email.

Thu, 12 Nov 2020
14:00
Virtual

High order Whitney forms on simplices

Francesca Rapetti
(University of Nice Sophia-Antipolis)
Abstract

Whitney elements on simplices are perhaps the most widely used finite elements in computational electromagnetics. They offer the simplest construction of polynomial discrete differential forms on simplicial complexes. Their associated degrees of freedom (dofs) have a very clear physical meaning and give a recipe for discretizing physical balance laws, e.g., Maxwell’s equations. As interest grew in the use of high order schemes, such as hp-finite element or spectral element methods, higher-order extensions of Whitney forms have become an important computational tool, appreciated for their better convergence and accuracy properties. However, it has remained unclear what kind of cochains such elements should be associated with: Can the corresponding dofs be assigned to precise geometrical elements of the mesh, just as, for instance, a degree of freedom for the space of Whitney 1-forms belongs to a specific edge? We address this localization issue. Why is this an issue? The existing constructions of high order extensions of Whitney elements follow the traditional FEM path of using higher and higher “moments” to define the needed dofs. As a result, such high order finite k-elements in d dimensions include dofs associated to q-simplices, with k < q ≤ d, whose physical interpretation is obscure. The present paper offers an approach based on the so-called “small simplices”, a set of subsimplices obtained by homothetic contractions of the original mesh simplices, centered at mesh nodes (or more generally, when going up in degree, at points of the principal lattice of each original simplex). Degrees of freedom of the high-order Whitney k-forms are then associated with small simplices of dimension k only. We provide an explicit basis for these elements on simplices and we justify this approach from a geometric point of view (in the spirit of Hassler Whitney's approach, still successful 30 years after his death).
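The localization of lowest-order dofs to edges can be checked directly: on the reference triangle, the integral of the tangential component of each Whitney 1-form basis function equals one along its own edge and zero along the others. A minimal numpy sketch of this duality (standard lowest-order Whitney forms only, not the high-order construction of the talk):

```python
import numpy as np

# Reference triangle with vertices v0, v1, v2 and barycentric coordinates
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
grads = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])  # grad lambda_i

def lam(i, p):
    x, y = p
    return [1 - x - y, x, y][i]

def whitney(i, j, p):
    # Whitney 1-form for edge (i, j): lambda_i grad lambda_j - lambda_j grad lambda_i
    return lam(i, p) * grads[j] - lam(j, p) * grads[i]

edges = [(0, 1), (1, 2), (2, 0)]
ts, ws = np.polynomial.legendre.leggauss(4)
ts, ws = 0.5 * (ts + 1), 0.5 * ws       # map Gauss rule to [0, 1]

dof = np.zeros((3, 3))
for a, (i, j) in enumerate(edges):      # edge along which we integrate
    tang = verts[j] - verts[i]
    for b, (k, l) in enumerate(edges):  # basis form being tested
        vals = [whitney(k, l, verts[i] + t * tang) @ tang for t in ts]
        dof[a, b] = np.dot(ws, vals)

print(np.allclose(dof, np.eye(3)))  # edge dofs pick out exactly their own form
```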

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please send email to @email.

Thu, 05 Nov 2020
14:00
Virtual

Modeling and simulation of fluidic surfaces

Maxim Olshanskii
(University of Houston)
Abstract

We briefly review mathematical models of viscous deformable interfaces (such as plasma membranes) leading to fluid equations posed on (evolving) 2D surfaces embedded in $R^3$. We further report on some recent advances in understanding and numerical simulation of the resulting fluid systems using an unfitted finite element method.

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please send email to @email.

 

Thu, 29 Oct 2020
14:00
Virtual

An algorithm for constructing efficient discretizations for integral equations near corners

Kirill Serkh
(University of Toronto)
Abstract

It has long been known that many elliptic partial differential equations can be reformulated as Fredholm integral equations of the second kind on the boundaries of their domains. The kernels of the resulting integral equations are weakly singular, which has historically made their numerical solution somewhat onerous, requiring the construction of detailed and typically sub-optimal quadrature formulas. Recently, a numerical algorithm for constructing generalized Gaussian quadratures was discovered which, given 2n essentially arbitrary functions, constructs a unique n-point quadrature that integrates them to machine precision, solving the longstanding problem posed by singular kernels.

When the domains have corners, the solutions themselves are also singular. In fact, they are known to be representable, to order n, by a linear combination (expansion) of n known singular functions. In order to solve the integral equation accurately, it is necessary to construct a discretization such that the mapping (in the L^2-sense) from the values at the discretization points to the corresponding n expansion coefficients is well-conditioned. In this talk, we present exactly such an algorithm, which is optimal in the sense that, given n essentially arbitrary functions, it produces n discretization points, and for which the resulting interpolation formulas have condition numbers extremely close to one.

---

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please send email to @email.

Thu, 22 Oct 2020
14:00
Virtual

A new block preconditioner for implicit Runge-Kutta methods for parabolic PDE

Victoria Howle
(Texas Tech University)
Abstract

In this talk, we introduce a new preconditioner for the large, structured systems appearing in implicit Runge–Kutta time integration of parabolic partial differential equations. This preconditioner is based on a block LDU factorization with algebraic multigrid subsolves for scalability.

We compare our preconditioner with others in the literature in terms of condition number, eigenvalue distribution, and performance in numerical experiments. In experiments run with implicit Runge–Kutta stages up to s = 7, we find that the new preconditioner outperforms the others, with the improvement becoming more pronounced as the spatial discretization is refined and as the temporal order is increased.

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please send email to @email.

Thu, 15 Oct 2020
14:00
Virtual

Generalized Gaussian quadrature as a tool for discretizing singular integral equations

Jim Bremer
(UC Davis)
Abstract

 

One of the standard methods for the solution of elliptic boundary value problems calls for reformulating them as systems of integral equations.  The integral operators that arise in this fashion typically have singular kernels, and, in many cases of interest, the solutions of these equations are themselves singular.  This makes the accurate discretization of the systems of integral equations arising from elliptic boundary value problems challenging.

Over the last decade, Generalized Gaussian quadrature rules, which are n-point quadrature rules that are exact for a collection of 2n functions, have emerged as one of the most effective tools for discretizing singular integral equations. Among other things, they have been used to accelerate the discretization of singular integral operators on curves, to enable the accurate discretization of singular integral operators on complex surfaces and to greatly reduce the cost of representing the (singular) solutions of integral equations given on planar domains with corners.

We will first briefly outline a standard method for the discretization of integral operators given on curves which is highly amenable to acceleration through generalized Gaussian quadratures. We will then describe a numerical procedure for the construction of Generalized Gaussian quadrature rules.
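The flavour of such a construction can be seen in a toy setting: solving the nonlinear system that forces an n-point rule to be exact for the 2n monomials recovers the classical Gauss–Legendre rule. This is a hedged sketch using a generic root-finder, not the authors' specialised procedure, which handles far more general (and singular) function families.

```python
import numpy as np
from scipy.optimize import fsolve

n = 3  # seek an n-point rule on [-1, 1] exact for 2n monomials

def residual(z):
    x, w = z[:n], z[n:]
    # exactness on 1, t, ..., t^(2n-1); exact moments of t^k on [-1, 1]
    return [w @ x**k - (1 - (-1) ** (k + 1)) / (k + 1) for k in range(2 * n)]

# initial guess: Chebyshev points with equal weights
x0 = np.cos(np.pi * (np.arange(n) + 0.5) / n)
z = fsolve(residual, np.concatenate([x0, np.full(n, 2.0 / n)]))

order = np.argsort(z[:n])
nodes, weights = z[:n][order], z[n:][order]
ref_x, ref_w = np.polynomial.legendre.leggauss(n)
print(np.allclose(nodes, ref_x) and np.allclose(weights, ref_w))
```

Replacing the monomials by 2n singular functions (powers with fractional exponents, logarithms, and so on) is what makes the general construction both useful and delicate.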

Much of this is joint work with Zydrunas Gimbutas (NIST Boulder) and Vladimir Rokhlin (Yale University).

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please send email to @email.

Thu, 11 Jun 2020

14:00 - 15:00

Dense networks that do not synchronize and sparse ones that do.

Alex Townsend
(Cornell)
Abstract

Consider a network of identical phase oscillators with sinusoidal coupling. How likely are the oscillators to globally synchronize, starting from random initial phases? One expects that dense networks have a strong tendency to synchronize and that the basin of attraction for the synchronous state is the whole phase space. But how dense is dense enough? In this (hopefully) entertaining Zoom talk, we use techniques from numerical linear algebra and computational algebraic geometry to derive the densest known networks that do not synchronize and the sparsest networks that do. This is joint work with Steven Strogatz and Mike Stillman.
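The baseline fact can be reproduced in a few lines: identical Kuramoto oscillators on the complete graph synchronize from random initial phases, as measured by the order parameter r = |mean(e^{iθ})|. A minimal forward-Euler simulation with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = np.ones((n, n)) - np.eye(n)       # complete graph: dense, synchronizes
theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases

dt = 0.05
for _ in range(4000):
    # identical oscillators with sinusoidal coupling:
    # dtheta_i/dt = (1/n) * sum_j A_ij sin(theta_j - theta_i)
    dtheta = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n
    theta = theta + dt * dtheta

r = abs(np.exp(1j * theta).mean())    # order parameter: r = 1 means full sync
print(r > 0.99)
```

The talk asks the much harder converse question: how sparse can A be made while keeping this global behaviour, and how dense can a counterexample be?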


[To be added to our seminars mailing list, or to receive a Zoom invitation for a particular seminar, please contact @email.]

Thu, 04 Jun 2020

14:00 - 15:00

Do Galerkin methods converge for the classical 2nd kind boundary integral equations in polyhedra and Lipschitz domains?

Simon Chandler-Wilde
(Reading University)
Abstract

The boundary integral equation method is a popular method for solving elliptic PDEs with constant coefficients, and systems of such PDEs, in bounded and unbounded domains. An attraction of the method is that it reduces solution of the PDE in the domain to solution of a boundary integral equation on the boundary of the domain, reducing the dimensionality of the problem. Second kind integral equations, featuring the double-layer potential operator, have a long history in analysis and numerical analysis. They provided, through C. Neumann, the first existence proof for the Dirichlet problem for Laplace's equation in 3D, have been an important analysis tool for PDEs through the 20th century, and are popular computationally because of their excellent conditioning and convergence properties for large classes of domains. A standard numerical method, in particular for boundary integral equations, is the Galerkin method, and the standard convergence analysis starts with a proof that the relevant operator is coercive, or a compact perturbation of a coercive operator, in the relevant function space. A long-standing open problem is whether this property holds for classical second kind boundary integral equations on general non-smooth domains. In this talk we give an overview of the various concepts and methods involved, reformulating the problem as a question about numerical ranges. We solve this open problem through counterexamples, presenting examples of 2D Lipschitz domains and 3D Lipschitz polyhedra for which coercivity does not hold. This is joint work with Prof Euan Spence, Bath.

 

[To be added to our seminars mailing list, or to receive a Zoom invitation for a particular seminar, please contact @email.]

Thu, 28 May 2020

14:00 - 15:00

Robust preconditioners for non-Newtonian fluids and magnetohydrodynamics

Patrick Farrell
(Oxford University)
Abstract

We discuss two recent extensions of work on Reynolds-robust preconditioners for the Navier-Stokes equations, to non-Newtonian fluids and to the equations of magnetohydrodynamics.  We model non-Newtonian fluids by means of an implicit constitutive relation between stress and strain. This framework is broadly applicable and allows for proofs of convergence under quite general assumptions. Since the stress cannot in general be solved for in terms of the strain, a three-field stress-velocity-pressure formulation is adopted. By combining the augmented Lagrangian approach with a kernel-capturing space decomposition, we derive a preconditioner that is observed to be robust to variations in rheological parameters in both two and three dimensions.  In the case of magnetohydrodynamics, we consider the stationary incompressible resistive Newtonian equations, and solve a four-field formulation for the velocity, pressure, magnetic field and electric field. A structure-preserving discretisation is employed that enforces both div(u) = 0 and div(B) = 0 pointwise. The basic idea of the solver is to split the fluid and electromagnetic parts and to employ our existing Navier-Stokes solver in the Schur complement. We present results in two dimensions that exhibit robustness with respect to both the fluids and magnetic Reynolds numbers, and describe ongoing work to extend the solver to three dimensions.

[To be added to our seminars mailing list, or to receive a Zoom invitation for a particular seminar, please contact @email.]

Thu, 21 May 2020

14:00 - 15:00

System Interpolation with Loewner Pencils: Background, Pseudospectra, and Nonlinear Eigenvalue Problems

Mark Embree
(Virginia Tech)
Abstract

In 2007, Andrew Mayo and Thanos Antoulas proposed a rational interpolation algorithm to solve a basic problem in control theory: given samples of the transfer function of a dynamical system, construct a linear time-invariant system that realizes these samples.  The resulting theory enables a wide range of data-driven modeling, and has seen diverse applications and extensions.  We will introduce these ideas from a numerical analyst's perspective, show how the selection of interpolation points can be guided by a Sylvester equation and pseudospectra of matrix pencils, and mention an application of these ideas to a contour algorithm for the nonlinear eigenvalue problem. (This talk involves collaborations with Michael Brennan (MIT), Serkan Gugercin (Virginia Tech), and Cosmin Ionita (MathWorks).)
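The core Loewner construction can be sketched in a few lines: sample a transfer function at "left" and "right" interpolation points, form the Loewner and shifted Loewner matrices, and read off the system poles as the generalized eigenvalues of the pencil. The transfer function and sample points below are illustrative, and the sketch assumes the data come from a system whose order matches the number of points:

```python
import numpy as np
from scipy.linalg import eig

def H(s):
    # hypothetical order-2 transfer function with poles at -1 and -3
    return 1.0 / (s + 1) + 2.0 / (s + 3)

mu = np.array([0.5, 1.5])    # left interpolation points
lam = np.array([2.5, 3.5])   # right interpolation points
v, w = H(mu), H(lam)

# Loewner and shifted Loewner matrices from the samples
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

# Poles of the interpolating system = eigenvalues of the pencil (Ls, L)
poles = np.sort(eig(Ls, L)[0].real)
print(np.allclose(poles, [-3.0, -1.0]))
```

In practice the data are redundant and noisy, the pencil is truncated via an SVD, and, as the talk discusses, the choice of the points mu and lam matters a great deal.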

[To be added to our seminars mailing list, or to receive a Zoom invitation for a particular seminar, please contact @email.]

Thu, 12 Mar 2020

14:00 - 15:00
L4

The Statistical Finite Element Method

Mark Girolami
(University of Cambridge)
Abstract

The finite element method (FEM) is one of the great triumphs of applied mathematics, numerical analysis and software development. Recent developments in sensor and signalling technologies enable the phenomenological study of systems. The connection between sensor data and FEM is restricted to solving inverse problems placing unwarranted faith in the fidelity of the mathematical description of the system. If one concedes mis-specification between generative reality and the FEM then a framework to systematically characterise this uncertainty is required. This talk will present a statistical construction of the FEM which systematically blends mathematical description with observations.


Thu, 27 Feb 2020

14:00 - 15:00
L4

Randomised algorithms for solving systems of linear equations

Gunnar Martinsson
(University of Texas at Austin)
Abstract

The task of solving large scale linear algebraic problems such as factorising matrices or solving linear systems is of central importance in many areas of scientific computing, as well as in data analysis and computational statistics. The talk will describe how randomisation can be used to design algorithms that in many environments have both better asymptotic complexities and better practical speed than standard deterministic methods.

The talk will in particular focus on randomised algorithms for solving large systems of linear equations. Both direct solution techniques based on fast factorisations of the coefficient matrix, and techniques based on randomised preconditioners, will be covered.
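One recurring idea in this area is sketch-and-precondition for overdetermined least squares: compress the matrix with a small random sketch, and use the QR factor of the sketch as a right preconditioner under which the original system is well-conditioned. A minimal sketch with a plain Gaussian embedding (practical codes use structured random maps; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 50
A = rng.standard_normal((m, n)) @ np.diag(10.0 ** np.linspace(0, 6, n))  # ill-conditioned
b = rng.standard_normal(m)

# Sketch: compress A with a small Gaussian matrix, then QR the sketch
s = 4 * n
S = rng.standard_normal((s, m)) / np.sqrt(s)
_, R = np.linalg.qr(S @ A)

# R is a good right preconditioner: A @ inv(R) is well-conditioned,
# so an iterative solver applied to it converges in a few steps
kappa = np.linalg.cond(A @ np.linalg.inv(R))
print(kappa < 10)
```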

Note: There is a related talk in the Random Matrix Seminar on Tuesday Feb 25, at 15:30 in L4. That talk describes randomised methods for computing low rank approximations to matrices. The two talks are independent, but the Tuesday one introduces some of the analytical framework that supports the methods described here.

Thu, 20 Feb 2020

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Learning with nonlinear Perron eigenvectors

Francesco Tudisco
(Gran Sasso Science Institute GSSI)
Abstract

In this talk I will present a Perron-Frobenius type result for nonlinear eigenvector problems which allows us to compute the global maximum of a class of constrained nonconvex optimization problems involving multihomogeneous functions.

I will structure the talk into three main parts:

First, I will motivate the optimization of homogeneous functions from a graph partitioning point of view, showing an intriguing generalization of the famous Cheeger inequality.

Second, I will define the concept of multihomogeneous function and I will state our main Perron-Frobenius theorem. This theorem exploits the connection between optimization of multihomogeneous functions and nonlinear eigenvectors to provide an optimization scheme that has global convergence guarantees.

Third, I will discuss a few example applications in network science and machine learning that require the optimization of multihomogeneous functions and that can be solved using nonlinear Perron eigenvectors.
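The convergence mechanism can be illustrated with a toy nonlinear power method: for an order-preserving map that is homogeneous of degree 1/2 (hence a strict contraction in the Hilbert projective metric), the normalized iteration converges to a unique positive eigenvector from any positive starting point. The map below is a simple illustration, not one of the applications from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 8)) + 0.1          # entrywise-positive matrix

def nonlinear_power(x, iters=200):
    # G(x) = sqrt(A x): order-preserving and homogeneous of degree 1/2,
    # so the normalized iteration is globally convergent
    for _ in range(iters):
        y = np.sqrt(A @ x)
        x = y / np.linalg.norm(y)
    return x

x1 = nonlinear_power(rng.random(8))
x2 = nonlinear_power(rng.random(8))   # different start, same limit
mu = np.linalg.norm(np.sqrt(A @ x1))  # nonlinear eigenvalue
print(np.allclose(x1, x2) and np.allclose(np.sqrt(A @ x1), mu * x1))
```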

 

 

Thu, 13 Feb 2020

14:00 - 15:00
L4

Numerical real algebraic geometry and applications

Jonathan Hauenstein
(University of Notre Dame)
Abstract

Systems of nonlinear polynomial equations arise in a variety of fields in mathematics, science, and engineering.  Many numerical techniques for solving and analyzing solution sets of polynomial equations over the complex numbers, collectively called numerical algebraic geometry, have been developed over the past several decades.  However, since real solutions are the only solutions of interest in many applications, there is a current emphasis on developing new methods for computing and analyzing real solution sets.  This talk will summarize some numerical real algebraic geometric approaches as well as recent successes of these methods for solving a variety of problems in science and engineering.
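The basic numerical engine behind many of these methods is homotopy continuation: deform a start system with known roots into the target system and track each solution path with a predictor-corrector scheme. A univariate toy sketch (real paths and fixed step size only; production solvers track paths in complex space with adaptive steps and certified endgames):

```python
import numpy as np

f = lambda x: x**2 - 5           # target polynomial
g = lambda x: x**2 - 1           # start system with known roots +1, -1
H = lambda x, t: (1 - t) * g(x) + t * f(x)   # homotopy H(x, 0) = g, H(x, 1) = f
Hx = lambda x, t: 2 * x          # dH/dx
Ht = lambda x, t: f(x) - g(x)    # dH/dt

roots = []
for x in (1.0, -1.0):            # track each start root from t = 0 to t = 1
    t = 0.0
    for _ in range(100):
        dt = 0.01
        x = x - dt * Ht(x, t) / Hx(x, t)     # Euler predictor along the path
        t += dt
        for _ in range(3):                   # Newton corrector back onto the path
            x = x - H(x, t) / Hx(x, t)
    roots.append(x)

print(np.allclose(sorted(roots), [-np.sqrt(5), np.sqrt(5)]))
```

Numerical *real* algebraic geometry, the subject of the talk, adds the considerable extra machinery needed to extract the real solution set from such complex-path computations.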

Thu, 06 Feb 2020

14:00 - 15:00
L4

Quantifying the Estimation Error of Principal Component

Raphael Hauser
(University of Oxford)
Abstract

(Joint work with: Jüri Lember, Heinrich Matzinger, Raul Kangro)

Principal component analysis is an important pattern recognition and dimensionality reduction tool in many applications. Principal components are computed as eigenvectors of a maximum likelihood covariance that approximates a population covariance, and the eigenvectors are often used to extract structural information about the variables (or attributes) of the studied population. Since PCA is based on the eigen-decomposition of the proxy covariance rather than the ground truth, it is important to understand the approximation error in each individual eigenvector as a function of the number of available samples. The combination of recent results of Koltchinskii & Lounici [8] and Yu, Wang & Samworth [11] yields such bounds. In the presented work we sharpen these bounds and show that eigenvectors can often be reconstructed to a required accuracy from a sample of strictly smaller size order.
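The phenomenon being quantified is easy to reproduce: the error in a sample top eigenvector shrinks as the number of samples grows, at a rate governed by the eigenvalue gap. An illustrative simulation with assumed parameters, not the bounds from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
# ground-truth covariance with a well-separated top eigenvalue
evals = np.array([10.0] + [1.0] * (d - 1))
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = Q @ np.diag(evals) @ Q.T
v_true = Q[:, 0]

def top_evec_error(n):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    S = X.T @ X / n                    # sample (proxy) covariance
    _, V = np.linalg.eigh(S)
    v = V[:, -1]                       # sample top eigenvector
    return min(np.linalg.norm(v - v_true), np.linalg.norm(v + v_true))

errs = [top_evec_error(n) for n in (50, 5000)]
print(errs[1] < errs[0])   # more samples, smaller eigenvector error
```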

Thu, 30 Jan 2020

14:00 - 15:00
L4

Using shared and distributed memory in the solution of large sparse systems

Iain Duff
(Rutherford Appleton Laboratory)
Abstract

We discuss the design of algorithms and codes for the solution of large sparse systems of linear equations on extreme scale computers that are characterized by having many nodes with multi-core CPUs or GPUs. We first use two approaches to get good single node performance. For symmetric systems we use task-based algorithms based on an assembly tree representation of the factorization. We then use runtime systems for scheduling the computation on both multicore CPU nodes and GPU nodes [6]. In this work, we are also concerned with the efficient parallel implementation of the solve phase using the computed sparse factors, and we show impressive results relative to other state-of-the-art codes [3]. Our second approach was to design a new parallel threshold Markowitz algorithm [4] based on Luby’s method [7] for obtaining a maximal independent set in an undirected graph. This is a significant extension since our graph model is a directed graph. We then extend the scope of both these approaches to exploit distributed memory parallelism. In the first case, we base our work on the block Cimmino algorithm [1] using the ABCD software package coded by Zenadi in Toulouse [5, 8]. The kernel for this algorithm is the direct factorization of a symmetric indefinite submatrix for which we use the above symmetric code. To extend the unsymmetric code to distributed memory, we use the Zoltan code from Sandia [2] to partition the matrix to singly bordered block diagonal form and then use the above unsymmetric code on the blocks on the diagonal. In both cases, we illustrate the added parallelism obtained from combining the distributed memory parallelism with the high single-node performance and show that our codes out-perform other state-of-the-art codes. This work is joint with a number of people. We developed the algorithms and codes in an EU Horizon 2020 Project, called NLAFET, that finished on 30 April 2019. 
Coworkers in this were: Sebastien Cayrols, Jonathan Hogg, Florent Lopez, and Stojce Nakov. Collaborators in the block Cimmino part of the project were: Philippe Leleux, Daniel Ruiz, and Sukru Torun. Our codes are available on the github repository https://github.com/NLAFET.

References

[1] M. Arioli, I. S. Duff, J. Noailles, and D. Ruiz, A block projection method for sparse matrices, SIAM J. Scientific and Statistical Computing, 13 (1992), pp. 47–70.

[2] E. Boman, K. Devine, L. A. Fisk, R. Heaphy, B. Hendrickson, C. Vaughan, U. Catalyurek, D. Bozdag, W. Mitchell, and J. Teresco, Zoltan 3.0: Parallel Partitioning, Load-balancing, and Data Management Services; User's Guide, Sandia National Laboratories, Albuquerque, NM, 2007. Tech. Report SAND2007-4748W, http://www.cs.sandia.gov/Zoltan/ug_html/ug.html.

[3] S. Cayrols, I. S. Duff, and F. Lopez, Parallelization of the solve phase in a task-based Cholesky solver using a sequential task flow model, Int. J. of High Performance Computing Applications, to appear (2019). NLAFET Working Note 20. RAL-TR-2018-008.

[4] T. A. Davis, I. S. Duff, and S. Nakov, Design and implementation of a parallel Markowitz threshold algorithm, Technical Report RAL-TR-2019-003, Rutherford Appleton Laboratory, Oxfordshire, England, 2019. NLAFET Working Note 22. Submitted to SIMAX.

[5] I. S. Duff, R. Guivarch, D. Ruiz, and M. Zenadi, The augmented block Cimmino distributed method, SIAM J. Scientific Computing, 37 (2015), pp. A1248–A1269.

[6] I. S. Duff, J. Hogg, and F. Lopez, A new sparse symmetric indefinite solver using a posteriori threshold pivoting, SIAM J. Scientific Computing, to appear (2019). NLAFET Working Note 21. RAL-TR-2018-012.

[7] M. Luby, A simple parallel algorithm for the maximal independent set problem, SIAM J. Computing, 15 (1986), pp. 1036–1053.

[8] M. Zenadi, The solution of large sparse linear systems on parallel computers using a hybrid implementation of the block Cimmino method, Thèse de Doctorat, Institut National Polytechnique de Toulouse, Toulouse, France, December 2013.

Thu, 23 Jan 2020

14:00 - 15:00
L4

Computational boundary element methods with Bempp

Timo Betcke
(UCL)
Abstract

Boundary integral equations are an elegant tool to model and simulate a range of physical phenomena in bounded and unbounded domains.

While mathematically well understood, the numerical implementation (e.g. via boundary element methods) still poses a number of computational challenges, from the efficient assembly of the underlying linear systems up to the fast preconditioned solution in complex applications. In this talk we provide an overview of some of these challenges and demonstrate the efficient implementation of boundary element methods on modern CPU and GPU architectures. As part of the talk we will present a number of practical examples using the Bempp-cl boundary element software, our next generation boundary element package, that has been developed in Python and supports modern vectorized CPU instruction sets and a number of GPU types.

Thu, 28 Nov 2019

14:00 - 15:00
L4

Minimizing convex quadratics with variable precision Krylov methods

Philippe Toint
(University of Namur)
Abstract

Iterative algorithms for the solution of convex quadratic optimization problems are investigated, which exploit inaccurate matrix-vector products. Theoretical bounds on the performance of a Conjugate Gradient method are derived, the quantities occurring in these bounds are estimated, and a new practical algorithm is obtained. Numerical experiments suggest that the new method has significant potential, including in the steadily more important context of multi-precision computations.
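The premise can be demonstrated with a toy experiment: standard CG run with matrix-vector products perturbed by a small relative error still drives the solution error down to roughly the level of the perturbation. This is only a sketch with an assumed componentwise noise model, not the variable-precision strategy analysed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD, modestly conditioned
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

def cg(matvec, b, iters=60):
    # textbook conjugate gradients; the matrix enters only through matvec
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# inexact products: each matvec carries a small componentwise relative error
noisy = lambda p: (A @ p) * (1 + 1e-6 * rng.standard_normal(n))
err = np.linalg.norm(cg(noisy, b) - x_true) / np.linalg.norm(x_true)
print(err < 1e-3)
```

The interesting question, addressed by the theoretical bounds in the talk, is how much inaccuracy can be tolerated at each iteration without losing convergence, which is what makes adaptive-precision matvecs profitable.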

Thu, 21 Nov 2019

14:00 - 15:00
L4

Krylov methods for the solution of the nonlinear eigenvalue problem

Karl Meerbergen
(Catholic University of Leuven)
Abstract

Everybody is familiar with the concept of eigenvalues of a matrix. In this talk, we consider the nonlinear eigenvalue problem. These are problems for which the eigenvalue parameter appears in a nonlinear way in the equation. In physics, the Schroedinger equation for determining the bound states in a semiconductor device, introduces terms with square roots of different shifts of the eigenvalue. In mechanical and civil engineering, new materials often have nonlinear damping properties. For the vibration analysis of such materials, this leads to nonlinear functions of the eigenvalue in the system matrix.

One particular example is the sandwich beam problem, where a layer of damping material is sandwiched between two layers of steel. Another example is the stability analysis of the Helmholtz equation with a noise excitation produced by burners in a combustion chamber. The burners lead to a boundary condition with delay terms (exponentials of the eigenvalue).


We often receive the question: “How can we solve a nonlinear eigenvalue problem?” This talk explains the different steps to be taken for using Krylov methods. The general approach works as follows: 1) approximate the nonlinearity by a rational function; 2) rewrite this rational eigenvalue problem as a linear eigenvalue problem and then 3) solve this by a Krylov method. We explain each of the three steps.
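A scalar caricature of these steps: take the nonlinear function F(λ) = λ − e^(−λ) on [0, 1], replace the nonlinearity by a polynomial approximant (step 1), and let numpy's root-finder perform the linearization (step 2) by forming the colleague (companion) matrix of the Chebyshev series and computing its eigenvalues (step 3, with a dense eigensolver here rather than a Krylov method). The true root is the omega constant, approximately 0.5671:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Scalar model problem: find lam in [0, 1] with F(lam) = lam - exp(-lam) = 0.
# Step 1: approximate the nonlinearity by a polynomial (Chebyshev interpolant).
F = Chebyshev.interpolate(lambda t: t - np.exp(-t), deg=10, domain=[0, 1])

# Steps 2 and 3: roots() linearises the series via its colleague matrix
# and returns the eigenvalues, mapped back to the domain [0, 1].
r = F.roots()
mask = (np.abs(r.imag) < 1e-8) & (r.real > 0) & (r.real < 1)
lam = r[mask].real
print(np.allclose(lam, [0.5671432904], atol=1e-6))
```

For genuine matrix-valued nonlinear eigenvalue problems the same three steps apply, with a rational approximation of the nonlinearity and a Krylov method on the (much larger) linearization.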

Thu, 14 Nov 2019

14:00 - 15:00
L4

On the preconditioning of coupled multi-physics problems

Massimiliano Ferronato
(University of Padua)
Abstract

The fully coupled numerical simulation of different physical processes, which can typically occur
at variable time and space scales, is often a very challenging task. A common feature of such models is that
their discretization gives rise to systems of linearized equations with an inherent block structure, which
reflects the properties of the set of governing PDEs. The efficient solution of a sequence of systems with
matrices in a block form is usually one of the most time- and memory-demanding issues in a coupled simulation.
This effort can be carried out by using either iteratively coupled schemes or monolithic approaches, which
tackle the problem of the system solution as a whole.

This talk aims to discuss recent advances in the monolithic solution of coupled multi-physics problems, with
application to poromechanical simulations in fractured porous media. The problem is addressed either by proper
sparse approximations of the Schur complements or special splittings that can partially uncouple the variables
related to different physical processes. The selected approaches can be included in a more general preconditioning
framework that can help accelerate the convergence of Krylov subspace solvers. The generalized preconditioner
relies on approximately decoupling the different processes, so as to address each single-physics problem
independently of the others. The objective is to provide an algebraic framework that can be employed as a
general ``black-box'' tool and can be regarded as a common starting point to be later specialized for the
particular multi-physics problem at hand.

Numerical experiments, taken from real-world examples of poromechanical problems and fractured media, are used to
investigate the behaviour and the performance of the proposed strategies.

Thu, 07 Nov 2019

14:00 - 15:00
L4

A posteriori error analysis for domain decomposition

Simon Tavener
(Colorado State University)
Abstract

Domain decomposition methods are widely employed for the numerical solution of partial differential equations on parallel computers. We develop an adjoint-based a posteriori error analysis for overlapping multiplicative Schwarz domain decomposition and for overlapping additive Schwarz. In both cases the numerical error in a user-specified functional of the solution (quantity of interest) is decomposed into a component that arises due to the spatial discretization and a component that results from the finite iteration between the subdomains. The spatial discretization error can be further decomposed into the errors arising on each subdomain. This decomposition of the total error can then be used as part of a two-stage approach to construct a solution strategy that efficiently reduces the error in the quantity of interest.