How to deal with resistance? This is the headline question these days with regard to COVID vaccines. But it is also an important question in cancer therapy. Over the past century, oncology has come a long way, but all too often cancers still recur due to the emergence of drug-resistant tumour cells. How to tackle these cells is one of the key questions in cancer research. The main strategy so far has been the development of new drugs to which the resistant cells are still sensitive.

Fri, 05 Mar 2021
16:00
Virtual

Global Anomalies on the Hilbert space

Diego Delmastro
(Perimeter Institute)
Abstract

I will review our recent article arXiv:2101.02218, where we propose a simple method for detecting global (a.k.a. non-perturbative) anomalies for generic quantum field theories. The basic idea is to study how the symmetries are realized on the Hilbert space of the theory. I will present several elementary examples where everything can be solved explicitly. We will then use these results to make non-trivial predictions about strongly interacting theories.

Thu, 29 Apr 2021

16:00 - 17:00
Virtual

Nonlinear Independent Component Analysis: Identifiability, Self-Supervised Learning, and Likelihood

Aapo Hyvärinen
(University of Helsinki)
Abstract

Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been successfully used in the linear case, especially in the form of independent component analysis (ICA). However, extending ICA to the nonlinear case has proven extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data. Recently, we have shown that this problem can be solved by using additional information, in particular in the form of temporal structure or some additional observed variable. Our methods were originally based on the "self-supervised" learning techniques increasingly used in deep learning, but in more recent work we have provided likelihood-based approaches. In particular, we have developed computational methods for efficient maximization of the likelihood for two variants of the model, based on variational inference and Riemannian relative gradients, respectively.
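For reference, the linear case that the abstract contrasts with is identifiable: independent non-Gaussian sources can be recovered from linear mixtures up to permutation and scale. A minimal sketch of this baseline (assuming NumPy and scikit-learn; an illustration only, not the speaker's nonlinear method):

```python
# Linear ICA baseline: mix two independent non-Gaussian sources,
# then recover them with FastICA (identifiable up to permutation/scale/sign).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
# Independent, non-Gaussian sources (Laplace and uniform).
s = np.column_stack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])   # unknown mixing matrix
x = s @ A.T                              # observed linear mixtures

s_hat = FastICA(n_components=2, random_state=0).fit_transform(x)

# Cross-correlations between true and estimated components should be close
# to a (signed) permutation matrix, confirming recovery.
print(np.corrcoef(s.T, s_hat.T)[:2, 2:].round(2))
```

A naive nonlinear analogue of this experiment fails: infinitely many nonlinear transformations of the data yield independent components, which is exactly the unidentifiability the talk addresses.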

Tue, 01 Jun 2021
14:30
Virtual

Order-preserving mixed-precision Runge-Kutta methods

Matteo Croci
(Mathematical Institute (University of Oxford))
Abstract

Mixed-precision algorithms combine low- and high-precision computations in order to benefit from the performance gains of reduced precision while retaining good accuracy. In this talk we focus on explicit stabilised Runge-Kutta (ESRK) methods for parabolic PDEs, as they are especially amenable to a mixed-precision treatment. However, some of the concepts we present extend to Runge-Kutta (RK) methods in general.

Consider the problem $y' = f(t,y)$ and let $u$ be the roundoff unit of the low precision used. Standard mixed-precision schemes perform all evaluations of $f$ in reduced precision to improve efficiency. We show that while this approach has many benefits, it harms the convergence order of the method, leading to a limiting accuracy of $O(u)$.
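To see the limiting accuracy concretely, here is a toy sketch (ours, not from the talk) applying forward Euler to $y' = -y$ with every evaluation of $f$ rounded to half precision; refining $\Delta t$ eventually stops helping, and the error stalls near the fp16 roundoff unit $u \approx 4.9 \times 10^{-4}$:

```python
# Forward Euler for y' = -y, y(0) = 1, with f rounded to fp16 to model a
# low-precision function evaluation. The error decays like O(dt) at first,
# then stalls at a limiting accuracy of roughly O(u).
import numpy as np

def euler(dt, t_end=1.0):
    y = 1.0
    for _ in range(int(round(t_end / dt))):
        f = float(np.float16(-y))   # low-precision evaluation of f(t, y) = -y
        y += dt * f
    return y

exact = np.exp(-1.0)
for dt in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"dt={dt:.0e}  error = {abs(euler(dt) - exact):.1e}")
```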

In this talk we present a more accurate alternative: a scheme, which we call $q$-order-preserving, that is unaffected by this limiting behaviour. The idea is simple: by using $q$ high-precision evaluations of $f$ we can hope to retain a limiting convergence order of $O(\Delta t^{q})$. However, the practical design of these order-preserving schemes is less straightforward.
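One way to realise the idea in a toy setting (our own construction for illustration; the talk's actual schemes are designed more carefully) is to spend a single high-precision evaluation on $f(y_n)$ and round only the $O(\Delta t)$ stage increment, so the rounding error enters multiplied by $\Delta t$:

```python
# Heun's method for y' = -y with two variants: all stages in fp16 (error
# floor ~O(u)), versus one fp64 evaluation of f plus an fp16-rounded
# increment f(y + dt*k1) - f(y), whose rounding error is O(u*dt).
import numpy as np

f = lambda y: -y

def heun(dt, mode, t_end=1.0):
    y = 1.0
    for _ in range(int(round(t_end / dt))):
        if mode == "low":
            k1 = float(np.float16(f(y)))
            k2 = float(np.float16(f(y + dt * k1)))
        else:  # one high-precision evaluation per step
            k1 = f(y)                                      # fp64
            dk = float(np.float16(f(y + dt * k1) - f(y)))  # small increment, fp16
            k2 = k1 + dk
        y += 0.5 * dt * (k1 + k2)
    return y

exact = np.exp(-1.0)
for dt in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"dt={dt:.0e}  all-low: {abs(heun(dt, 'low') - exact):.1e}"
          f"  one-high: {abs(heun(dt, 'hi') - exact):.1e}")
```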

We specifically focus on ESRK schemes as these are low-order schemes that employ a much larger number of stages than dictated by their convergence order so as to maximise stability. As such, these methods require most of the computational effort to be spent on stability rather than on accuracy. We present new $s$-stage order $1$ and $2$ RK-Chebyshev and RK-Legendre methods that are provably full-order-preserving. These methods are essentially as cheap as their fully low-precision equivalents, and they are as accurate and (almost) as stable as their high-precision counterparts.
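For context on why stages are cheap stability currency here (standard Chebyshev-method theory, not new material from the talk): the order-1 Chebyshev stability polynomial $R_s(z) = T_s(1 + z/s^2)$ satisfies $|R_s(z)| \le 1$ on $[-2s^2, 0]$, so the real stability interval grows quadratically in the stage count $s$:

```python
# Verify numerically that |T_s(1 + z/s^2)| <= 1 for z in [-2*s^2, 0],
# i.e. the stability interval of the order-1 Chebyshev method is ~2*s^2.
import numpy as np

def R(s, z):
    # T_s(x) = cos(s * arccos(x)) for |x| <= 1; clip guards fp overshoot.
    return np.cos(s * np.arccos(np.clip(1 + z / s**2, -1.0, 1.0)))

for s in [2, 5, 10, 20]:
    z = np.linspace(-2 * s**2, 0, 10001)
    print(f"s = {s:2d}  interval = [-{2 * s**2}, 0]  max|R_s| = {np.abs(R(s, z)).max():.3f}")
```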

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Tue, 01 Jun 2021
14:00
Virtual

Why are numerical algorithms accurate at large scale and low precisions?

Theo Mary
(Sorbonne Université)
Abstract

Standard worst-case rounding error bounds for most numerical linear algebra algorithms grow linearly with the problem size and the machine precision. These bounds suggest that numerical algorithms could be inaccurate at large scale and/or at low precisions, but fortunately they are pessimistic. We will review recent advances in probabilistic rounding error analyses, which have attracted renewed interest due to the emergence of low precisions on modern hardware as well as the rise of stochastic rounding.
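A toy model of why the probabilistic bounds are so much smaller (our illustration, not code from the talk): if each addition in an $n$-term sum incurs an independent random relative perturbation of size at most $u$, the accumulated error concentrates around $\sqrt{n}\,u$ rather than the worst-case $nu$:

```python
# Recursive summation where each addition is perturbed by a random relative
# error in [-u, u], modelling rounding. The observed error tracks sqrt(n)*u
# (the probabilistic bound) rather than n*u (the worst case).
import numpy as np

rng = np.random.default_rng(0)
u = 2.0**-11   # unit roundoff of a hypothetical low precision (fp16-like)
for n in [10**3, 10**4, 10**5, 10**6]:
    x = rng.random(n)
    s = 0.0
    for xi in x:
        s = (s + xi) * (1 + rng.uniform(-u, u))   # model one rounded addition
    rel = abs(s - x.sum()) / x.sum()
    print(f"n = {n:7d}  rel. error = {rel:.1e}  n*u = {n*u:.1e}  sqrt(n)*u = {np.sqrt(n)*u:.1e}")
```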

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Fri, 26 Feb 2021
16:00
Virtual

Fermionic CFTs

Philip Boyle Smith
(Cambridge)
Abstract

There has been a recent uptick in interest in fermionic CFTs. These mildly generalise the usual notion of CFT to allow dependence on a background spin structure. I will discuss how this generalisation manifests itself in the symmetries, anomalies, and boundary conditions of the theory, using the series of unitary Virasoro minimal models as an example.

Take a mathematician with an endless curiosity about the world around him & the capacity of his subject to interpret it, & you have Series 3 of our #WhatsonYourMind films: a Sam Howison Special featuring geometry, flying spiders, tennis, rain, Pascal's mystic hexagram &, of course, Professor Pointyhead.

Editor's note: #WhatsonYourMind is the opportunity for Oxford Mathematicians to let it all out in 58 seconds (2 seconds for credits).

Thu, 17 Jun 2021

13:00 - 14:00
Virtual

Modulation of synchronization in neural networks by a slowly varying ionic current

Sue Ann Campbell
(University of Waterloo)

Abstract

Synchronized activity of neurons is important for many aspects of brain function. Synchronization is affected by both network-level parameters, such as connectivity between neurons, and neuron-level parameters, such as firing rate. Many of these parameters are not static but may vary slowly in time. In this talk we focus on neuron-level parameters. Our work centres on the neurotransmitter acetylcholine, which has been shown to modulate the firing properties of several types of neurons through its effect on potassium currents such as the muscarine-sensitive M-current. In the brain, levels of acetylcholine change with activity. For example, acetylcholine is higher during waking and REM sleep and lower during slow-wave sleep. We will show how the M-current affects the bifurcation structure of a generic conductance-based neural model and how this determines the synchronization properties of the model. We then use phase-model analysis to study the effect of a slowly varying M-current on synchronization. This is joint work with Victoria Booth, Xueying Wang and Isam Al-Darbasah.
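As a cartoon of the phase-model viewpoint (a toy sketch, not the speakers' conductance-based model): two coupled phase oscillators whose coupling strength drifts slowly, standing in for a slowly varying ionic current, move in and out of synchrony as the locking condition $2K(t) \ge |\Delta\omega|$ is gained and lost:

```python
# Phase difference phi = theta1 - theta2 of two Kuramoto oscillators obeys
# phi' = domega - 2*K(t)*sin(phi); it locks whenever 2*K(t) >= |domega|.
# Here K(t) varies slowly, so the pair alternates between locked and drifting.
import numpy as np

dt, T = 1e-3, 200.0
t = np.arange(0.0, T, dt)
domega = 0.5                                     # frequency mismatch
K = 0.2 + 0.4 * (1 + np.sin(2 * np.pi * t / T))  # slow drift: 2K in [0.4, 2.0]
phi = np.zeros_like(t)
for k in range(len(t) - 1):
    phi[k + 1] = phi[k] + dt * (domega - 2 * K[k] * np.sin(phi[k]))

locked = np.abs(np.sin(phi)) < 0.5               # crude locking indicator
print(f"fraction of time near phase-locking: {locked.mean():.2f}")
```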

Thu, 10 Jun 2021
14:00
Virtual

53 Matrix Factorizations, generalized Cartan, and Random Matrix Theory

Alan Edelman
(MIT)
Further Information

Joint seminar with the Random Matrix Theory group

Abstract

An insightful exercise might be to ask what is the most important idea in linear algebra. Our first answer would not be eigenvalues or linearity; it would be “matrix factorizations.” We will discuss a blueprint to generate 53 inter-related matrix factorizations (times 2), most of which appear to be new. The underlying mathematics may be traced back to Cartan (1927), Harish-Chandra (1956), and Flensted-Jensen (1978). We will discuss the interesting history. One anecdote is that Eugene Wigner (1968) discovered factorizations such as the SVD in passing, in a way that was buried, and only eight authors have referenced that work. Ironically, Wigner referenced Sigurður Helgason (1962) but did not recognize his results in Helgason's book. This work also extends and completes open problems posed by Mackey² & Tisseur (2003/2005).

Classical results of Random Matrix Theory concern exact formulas for the Hermite, Laguerre, Jacobi, and Circular distributions. Following an insight from Freeman Dyson (1970), Zirnbauer (1996) and Duenez (2004/5) linked some of these classical ensembles to Cartan's theory of Symmetric Spaces. One troubling fact is that symmetric spaces alone do not cover all of the Jacobi ensembles. We present a completed theory based on the generalized Cartan distribution. Furthermore, we show how the matrix factorization obtained by the generalized Cartan decomposition $G = K_1 A K_2$ plays a crucial role in sampling algorithms and in the derivation of the joint probability density of $A$.
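To make the factorization concrete in the orthogonal case (our own illustration, not code from the talk): for $G = O(n)$ with $K_1, K_2 \in O(p) \times O(q)$, the decomposition $G = K_1 A K_2$ is the CS decomposition, and applying it to a Haar-random orthogonal matrix exposes the principal angles carried by $A$. A sketch assuming SciPy's scipy.linalg.cossin (SciPy ≥ 1.5):

```python
# Sample a Haar-distributed orthogonal matrix and compute its CS
# decomposition Q = K1 @ A @ K2, a concrete instance of G = K1 A K2.
import numpy as np
from scipy.linalg import cossin
from scipy.stats import ortho_group

n, p = 6, 3
Q = ortho_group.rvs(n, random_state=0)   # Haar-random orthogonal matrix
K1, A, K2 = cossin(Q, p=p, q=n - p)      # K1, K2 block-diagonal orthogonal
print(np.allclose(K1 @ A @ K2, Q))       # True: the factorization holds
# The leading diagonal of A holds cosines of the principal angles,
# the random quantities whose joint density is derived from the decomposition.
print(np.round(np.diag(A)[:p], 3))
```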

Joint work with Sungwoo Jeong.


--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.
