Thu, 02 Dec 2021
14:00
Virtual

Variational and phase-field models of brittle fracture: Past successes and current issues

Blaise Bourdin
(McMaster University)
Abstract

Variational phase-field models of fracture have been at the center of a multidisciplinary effort involving a large community of mathematicians, mechanicians, engineers, and computational scientists over the last 25 years or so.

I will start with a modern interpretation of Griffith's classical criterion as a variational principle for a free discontinuity energy and will recall some of the milestones in its analysis. Then, I will introduce the phase-field approximation per se and describe its numerical implementation. I will then illustrate how phase-field models have led to major breakthroughs in the predictive simulation of fracture in complex situations.
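
For orientation, the energies at stake take roughly the following form (one common normalization; constants and sign conventions vary across the literature). The Francfort-Marigo free discontinuity energy and its Ambrosio-Tortorelli-type phase-field regularization are

    % Griffith/Francfort--Marigo: elastic bulk energy plus a surface
    % energy proportional to the area of the unknown crack set Gamma.
    E(u, \Gamma) = \int_{\Omega \setminus \Gamma} W(e(u))\,dx
                 + G_c\,\mathcal{H}^{d-1}(\Gamma)

    % Phase-field approximation: the crack is smeared over a width of
    % order \varepsilon by a field v (v ~ 1 in sound material, v ~ 0 on
    % the crack); E_\varepsilon Gamma-converges to E as \varepsilon -> 0.
    E_\varepsilon(u, v) = \int_\Omega (v^2 + \eta_\varepsilon)\,W(e(u))\,dx
        + G_c \int_\Omega \Big( \frac{(1-v)^2}{4\varepsilon}
        + \varepsilon\,|\nabla v|^2 \Big)\,dx

where W is the elastic energy density, G_c the fracture toughness, and \eta_\varepsilon a small residual stiffness.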

I will then turn my attention to current issues, with a specific emphasis on crack nucleation in nominally brittle materials. I will recall the fundamental incompatibility between Griffith's theory and nucleation criteria based on a stress yield surface: the strength vs. toughness paradox. Finally, I will present several attempts at addressing this issue within the realm of phase-field fracture and discuss their respective strengths and weaknesses.
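
As a standard illustration of this incompatibility: for a crack of half-length a in an infinite plate under remote tension, Griffith's criterion predicts failure at the critical stress

    \sigma_c = \sqrt{\frac{E' G_c}{\pi a}}

where E' is the appropriate elastic modulus. This diverges as a -> 0, so a flaw-free Griffith material never fails at finite stress, whereas real brittle materials have a finite strength; conversely, a pure stress criterion misses the size effect that Griffith captures.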

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 25 Nov 2021
14:00
Virtual

Adaptive multilevel delayed acceptance

Tim Dodwell
(University of Exeter)
Abstract

Uncertainty Quantification through Markov Chain Monte Carlo (MCMC) can be prohibitively expensive for target probability densities with expensive likelihood functions, for instance when each evaluation involves solving a Partial Differential Equation (PDE), as is the case in a wide range of engineering applications. Multilevel Delayed Acceptance (MLDA) with an Adaptive Error Model (AEM) is a novel approach, which alleviates this problem by exploiting a hierarchy of models of increasing complexity and cost, and correcting the inexpensive models on-the-fly. The method has been integrated within the open-source probabilistic programming package PyMC3 and is available in the latest development version.
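
To fix ideas, the core two-level delayed-acceptance step can be sketched in a few lines of Python (a minimal caricature with symmetric random-walk proposals; the full MLDA method adds coarse subchains over a whole model hierarchy, and the AEM adapts a correction to the coarse likelihood on the fly):

    import numpy as np

    def delayed_acceptance(log_post_coarse, log_post_fine, x0, n_steps,
                           step=0.5, seed=None):
        # Two-stage Metropolis (Christen & Fox): proposals are screened
        # by the cheap coarse posterior; only survivors pay for a fine
        # evaluation, and the second acceptance ratio restores detailed
        # balance with respect to the fine posterior.
        rng = np.random.default_rng(seed)
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        lc, lf = log_post_coarse(x), log_post_fine(x)
        chain = [x.copy()]
        for _ in range(n_steps):
            y = x + step * rng.standard_normal(x.shape)
            lc_y = log_post_coarse(y)
            # Stage 1: accept/reject against the coarse model only.
            if np.log(rng.uniform()) < lc_y - lc:
                lf_y = log_post_fine(y)
                # Stage 2: fine-level correction factor.
                if np.log(rng.uniform()) < (lf_y - lf) - (lc_y - lc):
                    x, lc, lf = y, lc_y, lf_y
            chain.append(x.copy())
        return np.asarray(chain)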

In this talk I will discuss the problems with Multilevel Markov Chain Monte Carlo (Dodwell et al. 2015). In doing so, we will prove detailed balance for Adaptive Multilevel Delayed Acceptance, and show that multilevel variance reduction can be achieved without bias, which is not possible in the original MLMCMC framework.

I will talk about our implementation in the latest version of PyMC3, and demonstrate how, for classical inverse-problem benchmarks, the AMLDA sampler offers huge computational savings (speed-ups of more than a factor of 100).
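
For orientation, usage looks roughly like the sketch below. The model and data are illustrative assumptions, and the MLDA step method with its coarse_models/subsampling_rates arguments follows the PyMC3 documentation of the time; check the current release for the exact API and for how to enable the adaptive error model.

    import numpy as np
    import pymc3 as pm

    # Toy inverse problem: a cheap, biased "coarse" likelihood screens
    # proposals for the expensive "fine" one.
    data = np.random.default_rng(0).normal(1.0, 0.1, size=50)

    with pm.Model() as coarse_model:
        theta = pm.Normal("theta", mu=0.0, sigma=10.0)
        pm.Normal("obs", mu=theta, sigma=0.2, observed=data)  # crude noise model

    with pm.Model() as fine_model:
        theta = pm.Normal("theta", mu=0.0, sigma=10.0)
        pm.Normal("obs", mu=theta, sigma=0.1, observed=data)
        step = pm.MLDA(coarse_models=[coarse_model], subsampling_rates=[5])
        trace = pm.sample(draws=2000, step=step, chains=2)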

Finally, I will talk heuristically about new and future research, in which we seek to develop parallel strategies for this inherently sequential sampler, and point to interesting application areas in which the method is proving particularly effective.

--

This talk will be in person.

Thu, 18 Nov 2021
14:00
L4

Infinite-Dimensional Spectral Computations

Matt Colbrook
(University of Cambridge)
Abstract

Computing spectral properties of operators is fundamental in the sciences, with applications in quantum mechanics, signal processing, fluid mechanics, dynamical systems, etc. However, the infinite-dimensional problem is infamously difficult (common difficulties include spectral pollution and dealing with continuous spectra). This talk introduces classes of practical resolvent-based algorithms that rigorously compute a zoo of spectral properties of operators on Hilbert spaces. We also discuss how these methods form part of a broader programme on the foundations of computation. The focus will be on computing spectra with error control and spectral measures, for general discrete and differential operators. Analogous to eigenvalues and eigenvectors, these objects “diagonalise” operators in infinite dimensions through the spectral theorem. The first is computed by an algorithm that approximates resolvent norms. The second is computed by building convolutions of appropriate rational functions with the measure via the resolvent operator (solving shifted linear systems). The final part of the talk presents purely data-driven algorithms that compute the spectral properties of Koopman operators, with convergence guarantees, from snapshot data. Koopman operators “linearise” nonlinear dynamical systems, the price being a reduction to an infinite-dimensional spectral problem (cf. “Koopmania”, describing their surge in popularity). The talk will end with applications of these new methods in several thousand state-space dimensions.
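
As a finite-dimensional caricature of the second ingredient: for a self-adjoint matrix, convolving its spectral measure with the Poisson kernel amounts to shifted linear solves with the resolvent. A hedged NumPy sketch (the function name and toy operator are mine; the talk's algorithms work directly with the infinite-dimensional operator, use higher-order rational kernels, and carry rigorous error control):

    import numpy as np

    def smoothed_spectral_measure(A, v, xs, eps=0.05):
        # rho_eps(x) = (1/pi) * Im <v, (A - (x + i*eps) I)^{-1} v>,
        # i.e. the spectral measure of A w.r.t. v convolved with the
        # Poisson kernel of width eps; it converges weakly as eps -> 0.
        n = A.shape[0]
        v = v / np.linalg.norm(v)
        I = np.eye(n)
        out = np.empty(len(xs))
        for j, x in enumerate(xs):
            r = np.linalg.solve(A - (x + 1j * eps) * I, v)  # shifted solve
            out[j] = np.imag(np.vdot(v, r)) / np.pi
        return out

    # Example: free Jacobi (discrete Laplacian) matrix, spectrum in [-2, 2].
    n = 400
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    xs = np.linspace(-2.5, 2.5, 201)
    rho = smoothed_spectral_measure(A, np.ones(n), xs)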

Thu, 21 Oct 2021
14:00
Virtual

Randomized Methods for Sublinear Time Low-Rank Matrix Approximation

Cameron Musco
(University of Massachusetts)
Abstract

I will discuss recent advances in sampling methods for positive semidefinite (PSD) matrix approximation. In particular, I will show how new techniques based on recursive leverage score sampling yield a surprising algorithmic result: we give a method for computing a near-optimal rank-k approximation to any n x n PSD matrix in O(n * k^2) time. When k is not too large, our algorithm runs in sublinear time -- i.e. it does not need to read all entries of the matrix. This result illustrates the ability of randomized methods to exploit the structure of PSD matrices and go well beyond what is possible with traditional algorithmic techniques. I will discuss a number of current research directions and open questions, focused on applications of randomized methods to sublinear time algorithms for structured matrix problems.
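
As a caricature of the column-sampling idea, the Nystrom approximation below reconstructs a PSD matrix from a subset of its columns. The sublinear-time result replaces the uniform sampling used here with recursive ridge leverage score sampling, so that only a small fraction of the entries is ever read; the toy demo constructs the full matrix only for convenience.

    import numpy as np

    def nystrom(A, k, oversample=10, seed=None):
        # Sample s columns of the PSD matrix A and form the Nystrom
        # approximation A ~= C @ pinv(W) @ C.T, touching only the
        # sampled columns and the small s x s core block.
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        s = min(n, k + oversample)
        idx = rng.choice(n, size=s, replace=False)
        C = A[:, idx]                      # n x s sampled columns
        W = C[idx, :]                      # s x s core block
        return C @ np.linalg.pinv(W) @ C.T

    # Example: Gaussian kernel matrix (PSD) with fast-decaying spectrum.
    X = np.random.default_rng(0).standard_normal((300, 5))
    A = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    err = np.linalg.norm(A - nystrom(A, k=20)) / np.linalg.norm(A)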

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 14 Oct 2021
14:00
Virtual

What is the role of a neuron?

David Bau
(MIT)
Abstract

One of the great challenges of neural networks is to understand how they work.  For example: does a neuron encode a meaningful signal on its own?  Or is a neuron simply an undistinguished and arbitrary component of a feature vector space?  The tension between the neuron doctrine and the population coding hypothesis is one of the classical debates in neuroscience. It is a difficult debate to settle without an ability to monitor every individual neuron in the brain.

Within artificial neural networks we can examine every neuron. Beginning with the simple proposal that an individual neuron might represent one internal concept, we conduct studies relating deep network neurons to human-understandable concepts in a concrete, quantitative way: Which neurons? Which concepts? Are neurons more meaningful than an arbitrary feature basis? Do neurons play a causal role? We examine both simplified settings and state-of-the-art networks in which neurons learn how to represent meaningful objects within the data without explicit supervision.
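
A toy sketch of the kind of quantitative test involved, in the spirit of the speaker's Network Dissection methodology (names and shapes here are illustrative): score one unit against one visual concept by the IoU between its thresholded activation maps and the concept's segmentation masks over a dataset.

    import numpy as np

    def neuron_concept_iou(acts, masks, thresh):
        # acts:  (num_images, H, W) activation maps for one unit
        # masks: (num_images, H, W) binary concept segmentations
        # Returns the intersection-over-union of the thresholded unit
        # with the concept, pooled over all images.
        fired = acts > thresh
        inter = np.logical_and(fired, masks).sum()
        union = np.logical_or(fired, masks).sum()
        return inter / max(union, 1)

A unit is then labelled with the concept (if any) whose IoU exceeds a fixed cutoff, which makes "which neurons, which concepts" an empirical question.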

Following this inquiry in computer vision leads us to insights about the computational structure of practical deep networks that enable several new applications, including semantic manipulation of objects in an image; understanding of the sparse logic of a classifier; and quick, selective editing of generalizable rules within a fully trained generative network.  It also presents an unanswered mathematical question: why is such disentanglement so pervasive?

In the talk, we challenge the notion that the internal calculations of a neural network must be hopelessly opaque. Instead, we propose to tear back the curtain and chart a path through the detailed structure of a deep network by which we can begin to understand its logic.

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.
