Forthcoming events in this series

Thu, 16 Jun 2022

14:00 - 15:00
L5

### Recent results on finite element methods for incompressible flow at high Reynolds number

Erik Burman
(University College London)
Abstract

The design and analysis of finite element methods for high-Reynolds-number flow remains a challenging task, not least because of the difficulties associated with turbulence. In this talk we will first revisit some theoretical results on interior penalty methods using equal-order interpolation for smooth solutions of the Navier–Stokes equations at high Reynolds number, and show some recent computational results for turbulent flows.

Then we will focus on so-called pressure-robust methods, i.e. methods where the smoothness of the pressure does not affect the upper bound of the error estimates for the velocity of the Stokes system. We will discuss how convection can be stabilized for such methods in the high-Reynolds-number regime and, for the lowest-order case, show an interesting connection to turbulence modelling.

Thu, 09 Jun 2022

14:00 - 15:00
Virtual

### Maximizing the Spread of Symmetric Non-Negative Matrices

John Urschel
Abstract

The spread of a matrix is defined as the diameter of its spectrum. In this talk, we consider the problem of maximizing the spread of a symmetric non-negative matrix with bounded entries and discuss a number of recent results. This optimization problem is closely related to a pair of conjectures in spectral graph theory made by Gregory, Kirkland, and Hershkowitz in 2001, which were recently resolved by Breen, Riasanovsky, Tait, and Urschel. This talk will give a light overview of the approach used in this work, with a strong focus on ideas, many of which can be abstracted to more general matrix optimization problems.
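As a small illustration of the quantity being optimized (a toy example, not material from the talk), the spread of a symmetric matrix can be computed directly from its eigenvalues:

```python
import numpy as np

def spread(A):
    """Spread of a matrix: the diameter of its spectrum. For a real
    symmetric matrix the eigenvalues are real, so this is simply
    lambda_max - lambda_min."""
    eigs = np.linalg.eigvalsh(A)
    return eigs[-1] - eigs[0]

# Toy example: adjacency matrix of the path graph on 3 vertices, a
# symmetric non-negative matrix with entries bounded by 1. Its
# eigenvalues are -sqrt(2), 0, sqrt(2), so the spread is 2*sqrt(2).
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
```

The optimization problem in the talk asks which such matrix, over all symmetric non-negative matrices with bounded entries, makes this quantity largest.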

Thu, 02 Jun 2022

14:00 - 15:00
Virtual

### Balanced truncation for Bayesian inference

Elizabeth Qian
(Caltech)
Abstract

We consider the Bayesian inverse problem of inferring the initial condition of a linear dynamical system from noisy output measurements taken after the initial time. In practical applications, the large dimension of the dynamical system state poses a computational obstacle to computing the exact posterior distribution. Balanced truncation is a system-theoretic method for model reduction which obtains an efficient reduced-dimension dynamical system by projecting the system operators onto state directions which simultaneously maximize energies defined by reachability and observability Gramians. We show that in our inference setting, the prior covariance and Fisher information matrices can be naturally interpreted as reachability and observability Gramians, respectively. We use these connections to propose a balancing approach to model reduction for the inference setting. The resulting reduced model then inherits stability properties and error bounds from system theory, and yields an optimal posterior covariance approximation.

Thu, 26 May 2022

14:00 - 15:00
L3

### Propagation and stability of stress-affected transformation fronts in solids

Mikhail Poluektov
(University of Warwick)
Abstract

There is a wide range of problems in continuum mechanics that involve transformation fronts, which are non-stationary interfaces between two different phases in a phase-transforming or a chemically-transforming material. From the mathematical point of view, the considered problems are represented by systems of non-linear PDEs with discontinuities across non-stationary interfaces, kinetics of which depend on the solution of the PDEs. Such problems have a significant industrial relevance – an example of a transformation front is the localised stress-affected chemical reaction in Li-ion batteries with Si-based anodes. Since the kinetics of the transformation fronts depends on the continuum fields, the transformation front propagation can be decelerated and even blocked by the mechanical stresses. This talk will focus on three topics: (1) the stability of the transformation fronts in the vicinity of the equilibrium position for the chemo-mechanical problem, (2) a fictitious-domain finite-element method (CutFEM) for solving non-linear PDEs with transformation fronts and (3) an applied problem of Si lithiation.

Thu, 19 May 2022

14:00 - 15:00
L3

### Single-Shot X-FEL Imaging, Stochastic Tomography, and Optimization on Measure Spaces

Russell Luke
Abstract

Motivated by the problem of reconstructing the electron density of a molecule from pulsed X-ray diffraction images (about $10^9$ per reconstruction), we develop a framework for analyzing the convergence to invariant measures of random fixed point iterations built from mappings that, while expansive, nevertheless possess attractive fixed points. Building on techniques that we have established for determining rates of convergence of numerical methods for inconsistent nonconvex feasibility, we lift the relevant regularities to the setting of probability spaces to arrive at a convergence analysis for noncontractive Markov operators. This approach has many other applications, for instance the analysis of distributed randomized algorithms. We illustrate the approach on the problem of solving linear systems with finite precision arithmetic.

Thu, 12 May 2022

14:00 - 15:00
L3

### Direct solvers for elliptic PDEs

Gunnar Martinsson
(University of Texas at Austin)
Abstract

That the linear systems arising upon the discretization of elliptic PDEs can be solved efficiently is well known, and iterative solvers that often attain linear complexity (multigrid, Krylov methods, etc.) have proven very successful. Interestingly, it has recently been demonstrated that it is often possible to directly compute an approximate inverse to the coefficient matrix in linear (or close to linear) time. The talk will argue that such direct solvers have several compelling qualities, including improved stability and robustness, the ability to solve certain problems that have remained intractable to iterative methods, and dramatic improvements in speed in certain environments.

After a general introduction to the field, particular attention will be paid to a set of recently developed randomized algorithms that construct data-sparse representations of large dense matrices that arise in scientific computations. These algorithms are entirely black box, and interact with the linear operator to be compressed only via matrix-vector multiplication.
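A minimal sketch of the black-box idea described above, in the style of a randomized range finder that touches the operator only through matrix-vector products (the function names and test matrix are illustrative, not the speaker's codes):

```python
import numpy as np

def randomized_lowrank(matvec, rmatvec, n, k, p=10, seed=0):
    """Black-box low-rank approximation: interact with the operator only
    through products A @ X (matvec) and A.T @ X (rmatvec).
    Returns Q, B with A ~= Q @ B, where Q has k + p orthonormal columns."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = matvec(Omega)                         # sample the range of A
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for that range
    B = rmatvec(Q).T                          # B = Q.T @ A, so A ~= Q @ (Q.T @ A)
    return Q, B

# Test operator with rapidly decaying singular values
rng = np.random.default_rng(1)
n = 200
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = (U * 2.0 ** -np.arange(n)) @ U.T
Q, B = randomized_lowrank(lambda X: A @ X, lambda X: A.T @ X, n, k=20)
rel_err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

The point of the black-box formulation is that `A` never needs to be formed explicitly; only the two matvec callbacks are required.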

Thu, 05 May 2022

14:00 - 15:00
L3

### Finite elements for metrics and curvature

Snorre Christiansen
(University of Oslo)
Abstract

In space dimension 2 we present a finite element complex for the deformation operator acting on vector fields and the linearized curvature operator acting on symmetric 2 by 2 matrices. We also present the tools that were used in the construction, namely the BGG diagram chase and the framework of finite element systems. For this general framework we can prove a de Rham theorem on cohomology groups in the flat case and a Bianchi identity in the case with curvature.

Thu, 28 Apr 2022

14:00 - 15:00
L3

### An SDP approach for tensor product approximation of linear operators on matrix spaces

Andre Uschmajew
(Max Planck Institute Leipzig)
Abstract

Tensor structured linear operators play an important role in matrix equations and low-rank modelling. Motivated by this we consider the problem of approximating a matrix by a sum of Kronecker products. It is known that an optimal approximation in Frobenius norm can be obtained from the singular value decomposition of a rearranged matrix, but when the goal is to approximate the matrix as a linear map, an operator norm would be a more appropriate error measure. We present an alternating optimization approach for the corresponding approximation problem in spectral norm that is based on semidefinite programming, and report on its practical performance for small examples.
This is joint work with Venkat Chandrasekaran and Mareike Dressler.
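The SVD-of-a-rearranged-matrix construction mentioned in the abstract (often attributed to Van Loan and Pitsianis) can be sketched as follows for a single Kronecker factor; this illustrates the Frobenius-norm baseline, not the spectral-norm SDP approach of the talk:

```python
import numpy as np

def nearest_kron(A, m, n, p, q):
    """Best Frobenius-norm approximation of A (shape m*p by n*q) by a
    single Kronecker product B (m by n) kron C (p by q), via the SVD of
    the rearranged matrix: each row of R is the vectorized (i, j) block
    of A, and the best rank-1 approximation of R yields vec(B) vec(C)^T."""
    R = A.reshape(m, p, n, q).transpose(0, 2, 1, 3).reshape(m * n, p * q)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m, n)
    C = np.sqrt(s[0]) * Vt[0].reshape(p, q)
    return B, C

# If A is exactly a Kronecker product, it is recovered exactly.
rng = np.random.default_rng(0)
B0 = rng.standard_normal((3, 4))
C0 = rng.standard_normal((5, 2))
A = np.kron(B0, C0)
B, C = nearest_kron(A, 3, 4, 5, 2)
err = np.linalg.norm(A - np.kron(B, C))
```

A sum of several Kronecker products corresponds to keeping more terms of the SVD of the rearranged matrix; the talk's question is what changes when the error is measured in the operator norm instead.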

Thu, 10 Mar 2022

14:00 - 15:00

### Mathematical modelling and partial differential equations in biology and data science

Lisa Maria Kreusser
(University of Bath)
Abstract

The recent, rapid advances in modern biology and data science have opened up a whole range of challenging mathematical problems. In this talk I will discuss a class of interacting particle models with anisotropic repulsive-attractive interaction forces. These models are motivated by the simulation of fingerprint databases, which are required in forensic science and biometric applications. In existing models, the forces are isotropic and particle models lead to non-local aggregation PDEs with radially symmetric potentials. The central novelty in the models I consider is an anisotropy induced by an underlying tensor field. This innovation not only makes it possible to describe real-world phenomena more accurately, but also renders their analysis significantly harder compared to their isotropic counterparts. I will discuss the role of anisotropic interaction in these models, present a stability analysis of line patterns, and show numerical results for the simulation of fingerprints. I will also outline how very similar models can be used in data classification, where it is desirable to assign labels to points in a point cloud, given that a certain number of points is already correctly labeled.

Thu, 03 Mar 2022

14:00 - 15:00
Virtual

### Bayesian approximation error applied to parameter and state dimension reduction in the context of large-scale ice sheet inverse problems

Noémi Petra
(University of California Merced)
Abstract

Solving large-scale Bayesian inverse problems governed by complex models suffers from the twin difficulties of the high dimensionality of the uncertain parameters and computationally expensive forward models. In this talk, we focus on 1. reducing the computational cost when solving these problems (via joint parameter and state dimension reduction) and 2. accounting for the error due to using a reduced order forward model (via Bayesian Approximation Error (BAE)).  To reduce the parameter dimension, we exploit the underlying problem structure (e.g., local sensitivity of the data to parameters, the smoothing properties of the forward model, the fact that the data contain limited information about the (infinite-dimensional) parameter field, and the covariance structure of the prior) and identify a likelihood-informed parameter subspace that shows where the change from prior to posterior is most significant. For the state dimension reduction, we employ a proper orthogonal decomposition (POD) combined with the discrete empirical interpolation method (DEIM) to approximate the nonlinear term in the forward model. We illustrate our approach with a model ice sheet inverse problem governed by the nonlinear Stokes equation for which the basal sliding coefficient field (a parameter that appears in a Robin boundary condition at the base of the geometry) is inferred from the surface ice flow velocity. The results show the potential to make the exploration of the full posterior distribution of the parameter or subsequent predictions more tractable.

This is joint work with Ki-Tae Kim (UC Merced), Benjamin Peherstorfer (NYU) and Tiangang Cui (Monash University).

Thu, 24 Feb 2022
14:00
Virtual

### Paving a Path for Polynomial Preconditioning in Parallel Computing

Jennifer Loe
(Sandia National Laboratories)
Abstract

Polynomial preconditioning for linear solvers is well known but not frequently used in current scientific applications.  Furthermore, polynomial preconditioning has long been touted as well-suited for parallel computing; does this claim still hold in our new world of GPU-dominated machines?  We give details of the GMRES polynomial preconditioner and discuss its simple implementation, its properties such as eigenvalue remapping, and choices such as the starting vector and added roots.  We compare polynomial preconditioned GMRES to related methods such as FGMRES and full GMRES without restarting. We conclude with initial evaluations of the polynomial preconditioner for parallel and GPU computing, further discussing how polynomial preconditioning can become useful to real-world applications.
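As a toy illustration of the eigenvalue-remapping idea (using a simple Neumann-series polynomial, not the GMRES polynomial of the talk): for a matrix with spectrum in (0, 2), applying p(A) maps each eigenvalue λ to 1 − (1 − λ)^(d+1), clustering the spectrum of the preconditioned operator around 1.

```python
import numpy as np

# For A with spectrum in (0, 2), the degree-d Neumann polynomial
#   p(A) = I + (I - A) + ... + (I - A)^d
# satisfies lambda * p(lambda) = 1 - (1 - lambda)^(d+1), so the
# spectrum of p(A) A is clustered in a small interval around 1.
rng = np.random.default_rng(0)
lam = rng.uniform(0.1, 1.9, size=200)              # eigenvalues of an SPD A
d = 8
p_lam = sum((1.0 - lam) ** k for k in range(d + 1))
mapped = lam * p_lam                               # spectrum of p(A) A
# max deviation from 1 is at most 0.9 ** (d + 1), versus 0.9 before
```

Only the spectra are manipulated here; in practice p(A)v is applied with d extra matrix-vector products per iteration, which is the parallel-friendly trade-off the abstract alludes to.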

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 17 Feb 2022
14:00
Virtual

### K-Spectral Sets

Anne Greenbaum
(University of Washington)
Abstract

Let $A$ be an $n$ by $n$ matrix or a bounded linear operator on a complex Hilbert space $(H, \langle \cdot , \cdot \rangle , \| \cdot \|)$. A closed set $\Omega \subset \mathbb{C}$ is a $K$-spectral set for $A$ if the spectrum of $A$ is contained in $\Omega$ and if, for all rational functions $f$ bounded in $\Omega$, the following inequality holds:
$\| f(A) \| \leq K \| f \|_{\Omega} ,$
where $\| \cdot \|$ on the left denotes the norm in $H$ and $\| \cdot \|_{\Omega}$ on the right denotes the $\infty$-norm on $\Omega$. A simple way to obtain a $K$ value for a given set $\Omega$ is to use the Cauchy integral formula and replace the norm of the integral by the integral of the resolvent norm:
$f(A) = \frac{1}{2 \pi i} \int_{\partial \Omega} ( \zeta I - A )^{-1} f( \zeta )\,d \zeta \Rightarrow \| f(A) \| \leq \frac{1}{2 \pi} \left( \int_{\partial \Omega} \| ( \zeta I - A )^{-1} \|~| d \zeta | \right) \| f \|_{\Omega} .$
Thus one can always take
$K = \frac{1}{2 \pi} \int_{\partial \Omega} \| ( \zeta I - A )^{-1} \| | d \zeta | .$
In M. Crouzeix and A. Greenbaum, Spectral sets: numerical range and beyond, SIAM J. Matrix Anal. Appl., 40 (2019), pp. 1087-1101, different bounds on $K$ were derived.  I will show how these compare to the bound from the Cauchy integral formula for a variety of applications.  In the case where $A$ is a matrix and $\Omega$ is simply connected, we can numerically compute what we believe to be the optimal value for $K$ (and which is, at least, a lower bound on $K$).  I will show how these values compare with the proven bounds as well.

(joint with  Michel Crouzeix and Natalie Wellen)
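A minimal numerical sketch of the Cauchy integral bound above, discretizing the contour integral of the resolvent norm with the trapezoid rule (the disk domain and the test matrix are illustrative choices):

```python
import numpy as np

def cauchy_K(A, center, radius, n_quad=400):
    """K = (1/2 pi) * integral over the boundary of Omega of
    ||(zeta I - A)^{-1}|| |d zeta|, for Omega a disk, approximated by
    the trapezoid rule on the circle (|d zeta| = radius * d theta)."""
    n = A.shape[0]
    theta = 2 * np.pi * np.arange(n_quad) / n_quad
    zeta = center + radius * np.exp(1j * theta)
    norms = [np.linalg.norm(np.linalg.inv(z * np.eye(n) - A), 2) for z in zeta]
    return radius * np.mean(norms)

# Sanity check: for a normal matrix the resolvent norm on the circle is
# 1 / dist(zeta, spectrum); for A = 0 and a disk of radius 2 this gives K = 1.
K = cauchy_K(np.zeros((2, 2)), center=0.0, radius=2.0)
```

For non-normal matrices the resolvent norm on the contour can be much larger than the inverse distance to the spectrum, and this bound grows accordingly.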

---

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 10 Feb 2022
14:00
Virtual

### Linear and Sublinear Time Spectral Density Estimation

Chris Musco
(New York University)
Abstract

I will discuss new work on practically popular algorithms, including the kernel polynomial method (KPM) and the moment matching method, for approximating the spectral density (eigenvalue distribution) of an n x n symmetric matrix A. We will see that natural variants of these algorithms achieve strong worst-case approximation guarantees: they can approximate any spectral density to epsilon accuracy in the Wasserstein-1 distance with roughly O(1/epsilon) matrix-vector multiplications with A. Moreover, we will show that the methods are robust to *inaccuracy* in these matrix-vector multiplications, which allows them to be combined with any approximate multiplication algorithm. As an application, we develop a randomized sublinear time algorithm for approximating the spectral density of a normalized graph adjacency or Laplacian matrix. The talk will cover the main tools used in our work, which include random importance sampling methods and stability results for computing orthogonal polynomials via three-term recurrence relations.
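A hedged sketch of the moment-estimation ingredient common to such methods: estimating normalized Chebyshev moments of the spectral density using only matrix-vector products, via Hutchinson trace estimation plus the three-term recurrence mentioned at the end of the abstract. This is a generic illustration, not the speaker's implementation.

```python
import numpy as np

def chebyshev_moments(matvec, n, num_moments, num_probes=20, seed=0):
    """Estimate tau_k = tr(T_k(A)) / n for a symmetric A with spectrum
    in [-1, 1], using only matrix-vector products: Hutchinson trace
    estimation with Rademacher probes, and the three-term recurrence
    T_{k+1}(A) Z = 2 A T_k(A) Z - T_{k-1}(A) Z."""
    rng = np.random.default_rng(seed)
    Z = rng.choice([-1.0, 1.0], size=(n, num_probes))
    T_prev, T_curr = Z, matvec(Z)                  # T_0(A) Z and T_1(A) Z
    taus = [1.0, np.mean(np.sum(Z * T_curr, axis=0)) / n]
    for _ in range(2, num_moments):
        T_prev, T_curr = T_curr, 2 * matvec(T_curr) - T_prev
        taus.append(np.mean(np.sum(Z * T_curr, axis=0)) / n)
    return np.array(taus)

# Example: diagonal test matrix, so the exact moments are known averages
a = np.linspace(-0.9, 0.9, 100)
taus = chebyshev_moments(lambda X: a[:, None] * X, 100, num_moments=6)
```

KPM-style density estimates are then obtained by smoothing a Chebyshev series built from these moments; the robustness results in the talk concern what happens when `matvec` itself is only approximate.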

---

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 03 Feb 2022
14:00
L3

### Multigrid for climate and weather prediction

Eike Mueller
(University of Bath)
Abstract

Climate and weather prediction centres such as the Met Office rely on efficient numerical methods for simulating large scale atmospheric flow. One computational bottleneck in many models is the repeated solution of a large sparse system of linear equations. Preconditioning this system is particularly challenging for state-of-the-art discretisations, such as (mimetic) finite elements or Discontinuous Galerkin (DG) methods. In this talk I will present recent work on developing efficient multigrid preconditioners for practically relevant modelling codes. As reported in a REF2021 Industrial Impact Case Study, multigrid has already led to runtime savings of around 10%-15% for operational global forecasts with the Unified Model. Multigrid also shows superior performance in the Met Office next-generation LFRic model, which is based on a non-trivial finite element discretisation.

Thu, 27 Jan 2022
14:00
Virtual

### Approximation and discretization beyond a basis: theory and applications

Daan Huybrechs
(KU Leuven)
Abstract

Function approximation, as a goal in itself or as an ingredient in scientific computing, typically relies on having a basis. However, in many cases of interest an obvious basis is not known or is not easily found. Even if it is, alternative representations may exist with much fewer degrees of freedom, perhaps by mimicking certain features of the solution into the “basis functions" such as known singularities or phases of oscillation. Unfortunately, such expert knowledge typically doesn’t match well with the mathematical properties of a basis: it leads instead to representations which are either incomplete or overcomplete. In turn, this makes a problem potentially unsolvable or ill-conditioned. We intend to show that overcomplete representations, in spite of inherent ill-conditioning, often work wonderfully well in numerical practice. We explore a theoretical foundation for this phenomenon, use it to devise ground rules for practitioners, and illustrate how the theory and its ramifications manifest themselves in a number of applications.
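A small numerical illustration of the phenomenon described above, in the spirit of Fourier extension (all parameters are illustrative): an overcomplete set of Fourier modes on an enlarged interval is severely ill-conditioned as a discrete system, yet a regularized least-squares solve still fits a non-periodic function to high accuracy.

```python
import numpy as np

# Overcomplete representation: fit f(x) = exp(x) on [-1, 1] using
# Fourier modes that are periodic on the larger interval [-2, 2].
# These modes are not a basis on [-1, 1] (the system is wildly
# ill-conditioned), but a truncated-SVD least-squares solve is fine.
x = np.linspace(-1.0, 1.0, 400)
k = np.arange(-20, 21)
A = np.exp(1j * np.pi * np.outer(x, k) / 2.0)   # modes exp(i k pi x / 2)
f = np.exp(x)                                    # not periodic on [-1, 1]
c, *_ = np.linalg.lstsq(A, f, rcond=1e-12)       # regularization via rcond
resid = np.max(np.abs(A @ c - f))                # small despite huge cond(A)
cond = np.linalg.cond(A)
```

The ill-conditioning reflects non-uniqueness of the coefficients, not an obstruction to approximating the function itself; this is the gap between conditioning of the representation and accuracy of the approximation that the talk's theory addresses.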

---

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 20 Jan 2022
14:00
Virtual

### Eigenvalue Bounds for Double Saddle-Point Systems

Chen Greif
(University of British Columbia)
Abstract

We use energy estimates to derive new bounds on the eigenvalues of a generic form of double saddle-point matrices, with and without regularization terms. Results related to inertia and algebraic multiplicity of eigenvalues are also presented. The analysis includes eigenvalue bounds for preconditioned matrices based on block-diagonal Schur complement-based preconditioners, and it is shown that in this case the eigenvalues are clustered within a few intervals bounded away from zero. The analytical observations are linked to a few multiphysics problems of interest. This is joint work with Susanne Bradley.

---

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 02 Dec 2021
14:00
Virtual

### Variational and phase-field models of brittle fracture: Past successes and current issues

Blaise Bourdin
(McMaster University)
Abstract

Variational phase-field models of fracture have been at the center of a multidisciplinary effort involving a large community of mathematicians, mechanicians, engineers, and computational scientists over the last 25 years or so.

I will start with a modern interpretation of Griffith's classical criterion as a variational principle for a free discontinuity energy and will recall some of the milestones in its analysis. Then, I will introduce the phase-field approximation per se and describe its numerical implementation. I will illustrate how phase-field models have led to major breakthroughs in the predictive simulation of fracture in complex situations.

I will then turn my attention to current issues, with a specific emphasis on crack nucleation in nominally brittle materials. I will recall the fundamental incompatibility between Griffith’s theory and nucleation criteria based on a stress yield surface: the strength vs. toughness paradox. I will then present several attempts at addressing this issue within the realm of phase-field fracture and discuss their respective strengths and weaknesses.

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 25 Nov 2021
14:00
Virtual

Tim Dodwell
(University of Exeter)
Abstract

Uncertainty quantification through Markov Chain Monte Carlo (MCMC) can be prohibitively expensive for target probability densities with expensive likelihood functions, for instance when evaluating the likelihood involves solving a partial differential equation (PDE), as is the case in a wide range of engineering applications. Multilevel Delayed Acceptance (MLDA) with an Adaptive Error Model (AEM) is a novel approach which alleviates this problem by exploiting a hierarchy of models of increasing complexity and cost, and correcting the inexpensive models on the fly. The method has been integrated within the open-source probabilistic programming package PyMC3 and is available in the latest development version.

In this talk I will discuss the problems with Multilevel Markov Chain Monte Carlo (Dodwell et al. 2015). In doing so, we will prove detailed balance for Adaptive Multilevel Delayed Acceptance, and show that multilevel variance reduction can be achieved without bias, which is not possible in the original MLMCMC framework.

I will talk about our implementation in the latest version of PyMC3, and demonstrate how, for classical inverse problem benchmarks, the AMLDA sampler offers huge computational savings (more than a 100-fold speed-up).

Finally, I will talk heuristically about new and future research, in which we seek to develop parallel strategies for this inherently sequential sampler, and point to interesting applied application areas in which the method is proving particularly effective.

--

This talk will be in person.

Thu, 18 Nov 2021
14:00
L4

### Infinite-Dimensional Spectral Computations

Matt Colbrook
(University of Cambridge)
Abstract

Computing spectral properties of operators is fundamental in the sciences, with applications in quantum mechanics, signal processing, fluid mechanics, dynamical systems, etc. However, the infinite-dimensional problem is infamously difficult (common difficulties include spectral pollution and dealing with continuous spectra). This talk introduces classes of practical resolvent-based algorithms that rigorously compute a zoo of spectral properties of operators on Hilbert spaces. We also discuss how these methods form part of a broader programme on the foundations of computation. The focus will be computing spectra with error control and spectral measures, for general discrete and differential operators. Analogous to eigenvalues and eigenvectors, these objects “diagonalise” operators in infinite dimensions through the spectral theorem. The first is computed by an algorithm that approximates resolvent norms. The second is computed by building convolutions of appropriate rational functions with the measure via the resolvent operator (solving shifted linear systems). The final part of the talk provides purely data-driven algorithms that compute the spectral properties of Koopman operators, with convergence guarantees, from snapshot data. Koopman operators “linearise” nonlinear dynamical systems, the price being a reduction to an infinite-dimensional spectral problem (c.f. “Koopmania”, describing their surge in popularity). The talk will end with applications of these new methods in several thousand state-space dimensions.

Thu, 11 Nov 2021
14:00
Virtual

### A Fast, Stable QR Algorithm for the Diagonalization of Colleague Matrices

(Yale University)
Abstract

The roots of a function represented by its Chebyshev expansion are known to be the eigenvalues of the so-called colleague matrix, which is a Hessenberg matrix that is the sum of a symmetric tridiagonal matrix and a rank-1 perturbation. The rootfinding problem is thus reformulated as an eigenproblem, making the computation of the eigenvalues of such matrices a subject of significant practical interest. To obtain the roots with the maximum possible accuracy, the eigensolver used must possess a somewhat subtle form of stability.

In this talk, I will discuss a recently constructed algorithm for the diagonalization of colleague matrices, satisfying the relevant stability requirements.  The scheme has CPU time requirements proportional to n^2, with n the dimensionality of the problem; the storage requirements are proportional to n. Furthermore, the actual CPU times (and storage requirements) of the procedure are quite acceptable, making it an approach of choice even for small-scale problems. I will illustrate the performance of the algorithm with several numerical examples.
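A sketch of the standard colleague-matrix construction described above (this illustrates the reformulation as an eigenproblem, not the speaker's new algorithm, which concerns how to diagonalize this matrix stably in O(n^2) time):

```python
import numpy as np

def colleague_matrix(a):
    """Colleague matrix of p(x) = sum_k a[k] T_k(x) in the Chebyshev
    basis, degree n = len(a) - 1: a Hessenberg matrix that is symmetric
    tridiagonal (from x T_k = (T_{k-1} + T_{k+1}) / 2) plus a rank-1
    modification of the last row. Its eigenvalues are the roots of p."""
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    if n == 1:
        return np.array([[-a[0] / a[1]]])
    C = np.zeros((n, n))
    C[0, 1] = 1.0                        # first row is special: x T_0 = T_1
    for i in range(1, n):
        C[i, i - 1] = 0.5
        if i + 1 < n:
            C[i, i + 1] = 0.5
    C[-1, :] -= a[:n] / (2.0 * a[n])     # rank-1 correction from p
    return C

# Roots of T_3(x) are cos(pi/6), cos(pi/2), cos(5 pi/6)
roots = np.sort(np.linalg.eigvals(colleague_matrix([0, 0, 0, 1.0])).real)
```

A general-purpose Hessenberg eigensolver costs O(n^3) here; exploiting the tridiagonal-plus-rank-1 structure, stably, is what reduces the cost to O(n^2).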

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 04 Nov 2021
14:00
L4

### Rational approximation and beyond, or, What I did during the pandemic

Nick Trefethen
(Mathematical Institute, University of Oxford)
Abstract

The past few years have been an exciting time for my work related to rational approximation.  This talk will present four developments:

1. AAA approximation (2016, with Nakatsukasa & Sète)
2. Root-exponential convergence and tapered exponential clustering (2020, with Nakatsukasa & Weideman)
3. Lightning (2017-2020, with Gopal & Brubeck)
4. Log-lightning (2020-21, with Nakatsukasa & Baddoo)

Two other topics will not be discussed:

X. AAA-Lawson approximation (2018, with Nakatsukasa)
Y. AAA-LS approximation (2021, with Costa)

Thu, 28 Oct 2021
14:00
Virtual

### Randomized FEAST Algorithm for Generalized Hermitian Eigenvalue Problems with Probabilistic Error Analysis

Agnieszka Międlar
(University of Kansas)
Further Information

This talk is hosted by the Computational Mathematics Group of the Rutherford Appleton Laboratory.

Abstract

Randomized numerical linear algebra (NLA) methods have recently gained popularity because of their easy implementation, computational efficiency, and numerical robustness. We propose a randomized version of the well-established FEAST eigenvalue algorithm that enables computing the eigenvalues of the Hermitian matrix pencil $(\textbf{A},\textbf{B})$ located in a given real interval $\mathcal{I} \subset [\lambda_{min}, \lambda_{max}]$. In this talk, we will present deterministic as well as probabilistic error analysis of the accuracy of the approximate eigenpairs and subspaces obtained using the randomized FEAST algorithm. First, we derive bounds for the canonical angles between the exact and the approximate eigenspaces corresponding to the eigenvalues contained in the interval $\mathcal{I}$. Then, we present bounds for the accuracy of the eigenvalues and the corresponding eigenvectors. This part of the analysis is independent of the particular distribution of the initial subspace, and therefore we refer to it as deterministic. In the case of the starting guess being a Gaussian random matrix, we provide more informative, probabilistic error bounds. Finally, we will illustrate numerically the effectiveness of all the proposed error bounds.

---

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 21 Oct 2021
14:00
Virtual

### Randomized Methods for Sublinear Time Low-Rank Matrix Approximation

Cameron Musco
(University of Massachusetts)
Abstract

I will discuss recent advances in sampling methods for positive semidefinite (PSD) matrix approximation. In particular, I will show how new techniques based on recursive leverage score sampling yield a surprising algorithmic result: we give a method for computing a near optimal k-rank approximation to any n x n PSD matrix in O(n * k^2) time. When k is not too large, our algorithm runs in sublinear time -- i.e. it does not need to read all entries of the matrix. This result illustrates the ability of randomized methods to exploit the structure of PSD matrices and go well beyond what is possible with traditional algorithmic techniques. I will discuss a number of current research directions and open questions, focused on applications of randomized methods to sublinear time algorithms for structured matrix problems.
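For context, the basic Nyström construction that such column-sampling methods build on can be sketched as follows (with plain uniform sampling here; the recursive leverage-score scheme of the talk chooses columns far more carefully, which is what yields the near-optimality and sublinear-time guarantees):

```python
import numpy as np

def nystrom(A, idx):
    """Nystrom approximation of a PSD matrix from a subset of columns:
    A ~= C @ pinv(W) @ C.T, with C = A[:, idx] and W = A[idx, idx].
    Only |idx| columns of A are ever read."""
    C = A[:, idx]
    W = A[np.ix_(idx, idx)]
    # rcond guards against inverting the tiny spurious singular values
    # of W that appear in floating point when A is numerically low-rank
    return C @ np.linalg.pinv(W, rcond=1e-10) @ C.T

# Exactly rank-3 PSD matrix: sampling a few columns recovers it.
rng = np.random.default_rng(0)
G = rng.standard_normal((100, 3))
A = G @ G.T
A_hat = nystrom(A, rng.choice(100, size=10, replace=False))
err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

For a PSD matrix this approximation only reads the sampled columns, which is the structural fact that makes sublinear-time algorithms conceivable at all.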

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 14 Oct 2021
14:00
Virtual

### What is the role of a neuron?

David Bau
(MIT)
Abstract

One of the great challenges of neural networks is to understand how they work.  For example: does a neuron encode a meaningful signal on its own?  Or is a neuron simply an undistinguished and arbitrary component of a feature vector space?  The tension between the neuron doctrine and the population coding hypothesis is one of the classical debates in neuroscience. It is a difficult debate to settle without an ability to monitor every individual neuron in the brain.

Within artificial neural networks we can examine every neuron. Beginning with the simple proposal that an individual neuron might represent one internal concept, we conduct studies relating deep network neurons to human-understandable concepts in a concrete, quantitative way: Which neurons? Which concepts? Are neurons more meaningful than an arbitrary feature basis? Do neurons play a causal role? We examine both simplified settings and state-of-the-art networks in which neurons learn how to represent meaningful objects within the data without explicit supervision.

Following this inquiry in computer vision leads us to insights about the computational structure of practical deep networks that enable several new applications, including semantic manipulation of objects in an image; understanding of the sparse logic of a classifier; and quick, selective editing of generalizable rules within a fully trained generative network.  It also presents an unanswered mathematical question: why is such disentanglement so pervasive?

In the talk, we challenge the notion that the internal calculations of a neural network must be hopelessly opaque. Instead, we propose to tear back the curtain and chart a path through the detailed structure of a deep network by which we can begin to understand its logic.

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 17 Jun 2021

14:00 - 15:00
Virtual