Tue, 23 Nov 2021
14:00
L3

Numerical approximation of viscous contact problems in glaciology

Gonzalo Gonzalez
(University of Oxford)
Abstract

Viscous contact problems describe the time evolution of fluid flows in contact with a surface from which they can detach. These types of problems arise in glaciology when, for example, modelling the evolution of the grounding line of a marine ice sheet or the formation of a subglacial cavity. Such problems are generally modelled as a time-dependent viscous Stokes flow with a free boundary and contact boundary conditions. Although these applications are of great importance in glaciology, a systematic study of the numerical approximation of viscous contact problems has not been carried out yet. In this talk, I will present some of the challenges that arise when approximating these problems and some of the ideas we have come up with for overcoming them.

Tue, 12 Oct 2021
14:30
L3

A proposal for the convergence analysis of parallel-in-time algorithms on nonlinear problems

Gian Antonucci
(University of Oxford)
Abstract

Over the last few decades, scientists have conducted extensive research on parallelisation in time, which appears to be a promising way to provide additional parallelism when parallelisation in space saturates before all parallel resources have been used. For the simulations of interest to the Culham Centre for Fusion Energy (CCFE), however, time parallelisation is highly non-trivial, because the exponential divergence of nearby trajectories makes it hard for time-parallel numerical integration to achieve convergence. In this talk we present our results for the convergence analysis of parallel-in-time algorithms on nonlinear problems, focussing on what is widely accepted to be the prototypical parallel-in-time method, the Parareal algorithm. Next, we introduce a new error function to measure convergence based on the maximal Lyapunov exponents, and show how it improves the overall parallel speedup when compared to the traditional check used in the literature. We conclude by mentioning how the above tools can help us design and analyse a novel algorithm for the long-time integration of chaotic systems that uses time-parallel algorithms as a sub-procedure.
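
For orientation, here is a minimal Parareal sketch for a scalar ODE, using forward-Euler coarse and fine propagators. The propagators, step counts and test problem are illustrative choices only, not the setup analysed in the talk.

    import numpy as np

    def parareal(f, u0, t0, t1, n_slices=10, n_fine=100, n_iter=5):
        """Minimal Parareal iteration for du/dt = f(t, u) (illustrative sketch)."""
        T = np.linspace(t0, t1, n_slices + 1)

        def euler(u, ta, tb, steps):                      # forward-Euler propagator
            dt = (tb - ta) / steps
            for i in range(steps):
                u = u + dt * f(ta + i * dt, u)
            return u

        coarse = lambda u, ta, tb: euler(u, ta, tb, 1)       # G: one coarse step per slice
        fine = lambda u, ta, tb: euler(u, ta, tb, n_fine)    # F: many fine steps per slice

        U = np.empty(n_slices + 1)
        U[0] = u0
        for n in range(n_slices):                         # initial guess from G alone
            U[n + 1] = coarse(U[n], T[n], T[n + 1])

        for k in range(n_iter):
            F = [fine(U[n], T[n], T[n + 1]) for n in range(n_slices)]      # parallel across slices in practice
            G_old = [coarse(U[n], T[n], T[n + 1]) for n in range(n_slices)]
            U_new = U.copy()
            for n in range(n_slices):                     # sequential correction sweep
                U_new[n + 1] = coarse(U_new[n], T[n], T[n + 1]) + F[n] - G_old[n]
            U = U_new
        return T, U

    # Toy usage: du/dt = -u, exact solution exp(-t)
    T, U = parareal(lambda t, u: -u, 1.0, 0.0, 5.0)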

Tue, 12 Oct 2021
14:00
L3

Preconditioning for normal equations and least squares

Andy Wathen
(University of Oxford)
Abstract

The solution of systems of linear(ized) equations lies at the heart of many problems in Scientific Computing. In particular for large systems, iterative methods are a primary approach. For many symmetric (or self-adjoint) systems, there are effective solution methods based on the Conjugate Gradient method (for definite problems) or minres (for indefinite problems) in combination with an appropriate preconditioner, which is required in almost all cases. For nonsymmetric systems there are two principal lines of attack: the use of a nonsymmetric iterative method such as gmres, or transformation into a symmetric problem via the normal equations. In either case, an appropriate preconditioner is generally required. We consider the possibilities here, particularly the idea of preconditioning the normal equations via approximations to the original nonsymmetric matrix. We highlight dangers that readily arise in this approach. Our comments also apply in the context of linear least squares problems, as we will explain.
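
As a purely illustrative sketch of that idea (not the specific constructions or the pitfalls analysed in the talk), one can precondition the normal equations A^T A x = A^T b with M^T M, where M = LU comes from an incomplete factorisation of the original nonsymmetric matrix A. The test matrix, tolerances and parameters below are placeholder choices.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 500
    # A nonsymmetric sparse test matrix (placeholder)
    A = sp.diags([-1.0, 2.5, -1.3], [-1, 0, 1], shape=(n, n), format="csc") \
        + sp.random(n, n, density=0.01, format="csc", random_state=0)
    b = np.ones(n)

    ilu = spla.spilu(A, drop_tol=1e-4)            # M = L*U, an approximation to A

    normal_eqns = spla.LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x))

    def apply_preconditioner(r):
        # (M^T M)^{-1} r = M^{-1} (M^{-T} r), applied via the ILU factors
        y = ilu.solve(r, trans="T")
        return ilu.solve(y)

    prec = spla.LinearOperator((n, n), matvec=apply_preconditioner)

    x, info = spla.cg(normal_eqns, A.T @ b, M=prec, maxiter=200)
    print("CG convergence flag:", info)           # 0 means converged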

Thu, 04 Nov 2021
14:00
L4

Rational approximation and beyond, or, What I did during the pandemic

Nick Trefethen
(Mathematical Institute (University of Oxford))
Abstract

The past few years have been an exciting time for my work related to rational approximation.  This talk will present four developments:

1. AAA approximation (2016, with Nakatsukasa & Sète)
2. Root-exponential convergence and tapered exponential clustering (2020, with Nakatsukasa & Weideman)
3. Lightning (2017-2020, with Gopal & Brubeck)
4. Log-lightning (2020-21, with Nakatsukasa & Baddoo)

Two other topics will not be discussed:

X. AAA-Lawson approximation (2018, with Nakatsukasa)
Y. AAA-LS approximation (2021, with Costa)

Thu, 11 Nov 2021
14:00
Virtual

A Fast, Stable QR Algorithm for the Diagonalization of Colleague Matrices

Vladimir Rokhlin
(Yale University)
Abstract

The roots of a function represented by its Chebyshev expansion are known to be the eigenvalues of the so-called colleague matrix, which is a Hessenberg matrix that is the sum of a symmetric tridiagonal matrix and a rank-1 perturbation. The rootfinding problem is thus reformulated as an eigenproblem, making the computation of the eigenvalues of such matrices a subject of significant practical interest. To obtain the roots with the maximum possible accuracy, the eigensolver used must possess a somewhat subtle form of stability.

In this talk, I will discuss a recently constructed algorithm for the diagonalization of colleague matrices, satisfying the relevant stability requirements.  The scheme has CPU time requirements proportional to n^2, with n the dimensionality of the problem; the storage requirements are proportional to n. Furthermore, the actual CPU times (and storage requirements) of the procedure are quite acceptable, making it an approach of choice even for small-scale problems. I will illustrate the performance of the algorithm with several numerical examples.
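
As a small numerical illustration of the colleague-matrix reformulation itself (using NumPy's built-in scaled colleague/companion matrix and a dense eigensolver, not the structured O(n^2) algorithm described in the talk):

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # A Chebyshev series whose roots are known in advance
    true_roots = np.array([-0.8, -0.3, 0.1, 0.6, 0.95])
    coeffs = C.chebfromroots(true_roots)            # Chebyshev coefficients of the polynomial

    # Colleague matrix: tridiagonal plus a rank-one correction (NumPy returns a scaled,
    # nearly symmetric version); its eigenvalues are the roots of the Chebyshev series.
    colleague = C.chebcompanion(coeffs)
    roots_from_eigs = np.sort(np.linalg.eigvals(colleague).real)

    print(np.allclose(roots_from_eigs, true_roots))                  # True
    print(np.allclose(np.sort(C.chebroots(coeffs)), true_roots))     # chebroots does the same internally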

Thu, 02 Dec 2021
14:00
Virtual

Variational and phase-field models of brittle fracture: Past successes and current issues

Blaise Bourdin
(McMaster University)
Abstract

Variational phase-field models of fracture have been at the center of a multidisciplinary effort involving a large community of mathematicians, mechanicians, engineers, and computational scientists over the last 25 years or so.

I will start with a modern interpretation of Griffith's classical criterion as a variational principle for a free discontinuity energy and will recall some of the milestones in its analysis. Then, I will introduce the phase-field approximation per se and describe its numerical implementation. I will illustrate how phase-field models have led to major breakthroughs in the predictive simulation of fracture in complex situations.
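
For orientation, one common form of the phase-field (Ambrosio–Tortorelli, "AT2") regularisation of the Griffith energy is, in notation that may differ from the speaker's,

    E_\ell(u,\alpha) = \int_\Omega (1-\alpha)^2\, W(\varepsilon(u))\, dx
                     + G_c \int_\Omega \left( \frac{\alpha^2}{2\ell} + \frac{\ell}{2}\,|\nabla\alpha|^2 \right) dx,

where u is the displacement, alpha in [0,1] the phase (damage) field, W the elastic energy density, epsilon(u) the symmetrised gradient, G_c the fracture toughness, and ell the regularisation length; as ell tends to zero the second term Gamma-converges to G_c times the surface measure of the crack set.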

I will then turn my attention to current issues, with a specific emphasis on crack nucleation in nominally brittle materials. I will recall the fundamental incompatibility between Griffith’s theory and nucleation criteria based on a stress yield surface: the strength vs. toughness paradox. I will then present several attempts at addressing this issue within the realm of phase-field fracture and discuss their respective strengths and weaknesses.

Thu, 25 Nov 2021
14:00
Virtual

Adaptive multilevel delayed acceptance

Tim Dodwell
(University of Exeter)
Abstract

Uncertainty Quantification through Markov Chain Monte Carlo (MCMC) can be prohibitively expensive for target probability densities with expensive likelihood functions, for instance when each evaluation involves solving a Partial Differential Equation (PDE), as is the case in a wide range of engineering applications. Multilevel Delayed Acceptance (MLDA) with an Adaptive Error Model (AEM) is a novel approach, which alleviates this problem by exploiting a hierarchy of models, with increasing complexity and cost, and correcting the inexpensive models on-the-fly. The method has been integrated within the open-source probabilistic programming package PyMC3 and is available in the latest development version.
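
To fix ideas, here is a minimal two-level delayed-acceptance Metropolis step (in the spirit of Christen and Fox), with a symmetric Gaussian random-walk proposal and user-supplied coarse and fine log-posteriors. It is a bare sketch of the screening mechanism only; the adaptive error model, the multilevel hierarchy and the PyMC3 integration discussed in the talk are not shown, and all names and parameters are illustrative.

    import numpy as np

    def delayed_acceptance(log_post_coarse, log_post_fine, x0, n_samples=5000, step=0.5, rng=None):
        """Two-level delayed acceptance with a symmetric Gaussian random walk."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        lc, lf = log_post_coarse(x), log_post_fine(x)
        chain = []
        for _ in range(n_samples):
            y = x + step * rng.standard_normal(x.shape)
            lc_y = log_post_coarse(y)
            # Stage 1: screen the proposal with the cheap coarse posterior
            if np.log(rng.uniform()) < lc_y - lc:
                lf_y = log_post_fine(y)
                # Stage 2: correct with the expensive fine posterior
                if np.log(rng.uniform()) < (lf_y - lf) - (lc_y - lc):
                    x, lc, lf = y, lc_y, lf_y
            chain.append(x.copy())
        return np.array(chain)

    # Toy usage: coarse and fine posteriors are slightly different Gaussians
    coarse = lambda x: -0.5 * np.sum((x / 1.2) ** 2)
    fine = lambda x: -0.5 * np.sum(x ** 2)
    samples = delayed_acceptance(coarse, fine, x0=[0.0])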

In this talk I will discuss the problems with Multilevel Markov Chain Monte Carlo (Dodwell et al. 2015). In doing so, we will prove detailed balance for Adaptive Multilevel Delayed Acceptance, and show that multilevel variance reduction can be achieved without bias, which is not possible in the original MLMCMC framework.

I will describe our implementation in the latest version of PyMC3 and demonstrate how, for classical inverse problem benchmarks, the AMLDA sampler offers huge computational savings (speed-ups of more than a factor of 100).

Finally, I will talk heuristically about new and future research, in which we seek to develop parallel strategies for this inherently sequential sampler, and point to interesting application areas in which the method is proving particularly effective.

--

This talk will be in person.

Thu, 18 Nov 2021
14:00
L4

Infinite-Dimensional Spectral Computations

Matt Colbrook
(University of Cambridge)
Abstract

Computing spectral properties of operators is fundamental in the sciences, with applications in quantum mechanics, signal processing, fluid mechanics, dynamical systems, etc. However, the infinite-dimensional problem is infamously difficult (common difficulties include spectral pollution and dealing with continuous spectra). This talk introduces classes of practical resolvent-based algorithms that rigorously compute a zoo of spectral properties of operators on Hilbert spaces. We also discuss how these methods form part of a broader programme on the foundations of computation. The focus will be on computing spectra with error control and spectral measures, for general discrete and differential operators. Analogous to eigenvalues and eigenvectors, these objects “diagonalise” operators in infinite dimensions through the spectral theorem. The first is computed by an algorithm that approximates resolvent norms. The second is computed by building convolutions of appropriate rational functions with the measure via the resolvent operator (solving shifted linear systems). The final part of the talk provides purely data-driven algorithms that compute the spectral properties of Koopman operators, with convergence guarantees, from snapshot data. Koopman operators “linearise” nonlinear dynamical systems, the price being a reduction to an infinite-dimensional spectral problem (cf. “Koopmania”, describing their surge in popularity). The talk will end with applications of these new methods in several thousand state-space dimensions.
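
As a toy finite-dimensional caricature of the resolvent idea for spectral measures (smoothing the measure with a Poisson kernel via shifted linear solves; this is not the infinite-dimensional algorithm with error control from the talk, and the operator and vector below are arbitrary choices):

    import numpy as np

    n = 400
    # Finite section of a self-adjoint operator: 1D discrete Laplacian, spectrum in [0, 4]
    A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    v = np.zeros(n)
    v[0] = 1.0                                   # the spectral measure considered is <v, E(dx) v>

    def smoothed_measure(x, eps=1e-2):
        """Poisson-kernel smoothing: (1/pi) * Im <v, (A - x - i*eps)^(-1) v>."""
        u = np.linalg.solve(A - (x + 1j * eps) * np.eye(n), v)   # one shifted solve per point
        return np.imag(np.vdot(v, u)) / np.pi

    xs = np.linspace(-0.5, 4.5, 51)
    density = [smoothed_measure(x) for x in xs]  # approaches the measure's density as eps -> 0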

Thu, 21 Oct 2021
14:00
Virtual

Randomized Methods for Sublinear Time Low-Rank Matrix Approximation

Cameron Musco
(University of Massachusetts)
Abstract

I will discuss recent advances in sampling methods for positive semidefinite (PSD) matrix approximation. In particular, I will show how new techniques based on recursive leverage score sampling yield a surprising algorithmic result: we give a method for computing a near-optimal rank-k approximation to any n x n PSD matrix in O(n * k^2) time. When k is not too large, our algorithm runs in sublinear time -- i.e. it does not need to read all entries of the matrix. This result illustrates the ability of randomized methods to exploit the structure of PSD matrices and go well beyond what is possible with traditional algorithmic techniques. I will discuss a number of current research directions and open questions, focused on applications of randomized methods to sublinear time algorithms for structured matrix problems.
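
To make the flavour of such sampling methods concrete, here is a plain Nystrom sketch that approximates an n x n PSD matrix from k sampled columns, reading only O(n*k) of its entries. The talk's algorithm uses recursive (ridge) leverage score sampling with guarantees; the uniform sampling and toy matrix below are simplifications.

    import numpy as np

    def nystrom_psd(get_column, n, idx, reg=1e-10):
        """Nystrom approximation of an n x n PSD matrix from the columns listed in idx.

        get_column(j) returns column j; only len(idx) columns are ever read.
        Returns C and W_pinv with A ~= C @ W_pinv @ C.T.
        """
        C = np.column_stack([get_column(j) for j in idx])    # n x k block of sampled columns
        W = C[idx, :]                                        # k x k intersection block
        W_pinv = np.linalg.pinv(W + reg * np.eye(len(idx)))  # small regularisation for stability
        return C, W_pinv

    # Toy PSD matrix with decaying spectrum, used only to check the error;
    # the approximation itself never touches most of A.
    rng = np.random.default_rng(0)
    n, k = 1000, 50
    G = rng.standard_normal((n, 60)) * (0.9 ** np.arange(60))
    A = G @ G.T
    idx = rng.choice(n, size=k, replace=False)
    C, W_pinv = nystrom_psd(lambda j: A[:, j], n, idx)
    rel_err = np.linalg.norm(A - C @ W_pinv @ C.T) / np.linalg.norm(A)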

Thu, 14 Oct 2021
14:00
Virtual

What is the role of a neuron?

David Bau
(MIT)
Abstract

One of the great challenges of neural networks is to understand how they work.  For example: does a neuron encode a meaningful signal on its own?  Or is a neuron simply an undistinguished and arbitrary component of a feature vector space?  The tension between the neuron doctrine and the population coding hypothesis is one of the classical debates in neuroscience. It is a difficult debate to settle without an ability to monitor every individual neuron in the brain.

Within artificial neural networks we can examine every neuron. Beginning with the simple proposal that an individual neuron might represent one internal concept, we conduct studies relating deep network neurons to human-understandable concepts in a concrete, quantitative way: Which neurons? Which concepts? Are neurons more meaningful than an arbitrary feature basis? Do neurons play a causal role? We examine both simplified settings and state-of-the-art networks in which neurons learn how to represent meaningful objects within the data without explicit supervision.
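
One simplified, hypothetical way to make such a neuron–concept relationship quantitative, in the spirit of network-dissection-style analyses, is to threshold a unit's activation maps over a dataset and measure the intersection-over-union with a concept's segmentation masks. The sketch assumes activation maps and masks are already extracted as arrays; all names and thresholds are illustrative.

    import numpy as np

    def neuron_concept_iou(activations, concept_masks, quantile=0.995):
        """IoU between a single unit's thresholded activations and a binary concept mask.

        activations   : (num_images, H, W) activation maps for one unit
        concept_masks : (num_images, H, W) binary masks for one concept, same resolution
        """
        threshold = np.quantile(activations, quantile)      # one threshold per unit, over the dataset
        unit_mask = activations > threshold
        inter = np.logical_and(unit_mask, concept_masks).sum()
        union = np.logical_or(unit_mask, concept_masks).sum()
        return inter / union if union > 0 else 0.0

    # Toy usage with random stand-ins for real activation maps and segmentations
    rng = np.random.default_rng(0)
    acts = rng.standard_normal((10, 7, 7))
    masks = rng.random((10, 7, 7)) > 0.9
    score = neuron_concept_iou(acts, masks)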

Following this inquiry in computer vision leads us to insights about the computational structure of practical deep networks that enable several new applications, including semantic manipulation of objects in an image; understanding of the sparse logic of a classifier; and quick, selective editing of generalizable rules within a fully trained generative network.  It also presents an unanswered mathematical question: why is such disentanglement so pervasive?

In the talk, we challenge the notion that the internal calculations of a neural network must be hopelessly opaque. Instead, we propose to tear back the curtain and chart a path through the detailed structure of a deep network by which we can begin to understand its logic.
