Thu, 29 Apr 2021

16:00 - 17:00
Virtual

Nonlinear Independent Component Analysis: Identifiability, Self-Supervised Learning, and Likelihood

Aapo Hyvärinen
(University of Helsinki)
Abstract

Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been successfully used in the linear case, especially in the form of independent component analysis (ICA). However, extending ICA to the nonlinear case has proven to be extremely difficult: a straightforward extension is unidentifiable, i.e., it is not possible to recover the latent components that actually generated the data. Recently, we have shown that this problem can be solved by using additional information, in particular in the form of temporal structure or some additional observed variable. Our methods were originally based on the "self-supervised" learning increasingly used in deep learning, but in more recent work we have provided likelihood-based approaches. In particular, we have developed computational methods for efficient maximization of the likelihood for two variants of the model, based on variational inference and Riemannian relative gradients, respectively.
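
As a point of reference, the linear case is easy to demonstrate: the sketch below (a toy using scikit-learn's FastICA, with sources and a mixing matrix chosen here purely for illustration) recovers linearly mixed independent sources up to permutation and scaling, exactly the identifiability guarantee that breaks down for an unconstrained nonlinear mixture.

    import numpy as np
    from scipy.signal import sawtooth
    from sklearn.decomposition import FastICA

    # Toy linear ICA: mix two independent sources, then recover them.
    # Sources and mixing matrix are illustrative choices, not from the talk.
    t = np.linspace(0, 8, 2000)
    S = np.c_[np.sin(2 * t), sawtooth(3 * t)]   # independent sources
    A = np.array([[1.0, 0.5], [0.4, 1.2]])      # "unknown" mixing matrix
    X = S @ A.T                                 # observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)                # recovered up to permutation/scale

    # Each recovered component should correlate strongly with exactly one
    # true source; no such guarantee exists for a nonlinear mixture without
    # extra structure such as temporal dependencies or auxiliary variables.
    C = np.corrcoef(S.T, S_hat.T)[:2, 2:]
    print(np.round(np.abs(C), 2))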

Tue, 01 Jun 2021
14:30
Virtual

Order-preserving mixed-precision Runge-Kutta methods

Matteo Croci
(Mathematical Institute (University of Oxford))
Abstract

Mixed-precision algorithms combine low- and high-precision computations in order to benefit from the performance gains of reduced precision while retaining good accuracy. In this talk we focus on explicit stabilised Runge-Kutta (ESRK) methods for parabolic PDEs, as they are especially amenable to a mixed-precision treatment; however, some of the concepts we present extend to Runge-Kutta (RK) methods in general.

Consider the problem $y' = f(t,y)$ and let $u$ be the roundoff unit of the low precision used. Standard mixed-precision schemes perform all evaluations of $f$ in reduced precision to improve efficiency. We show that while this approach has many benefits, it harms the convergence order of the method, leading to a limiting accuracy of $O(u)$.
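
The limiting-accuracy phenomenon is easy to reproduce in a toy setting. The sketch below (forward Euler on $y' = -y$, with the low precision simulated by rounding every evaluation of $f$ through float16; an illustration of the phenomenon, not of the ESRK schemes from the talk) shows the error decreasing with $\Delta t$ until it stagnates near the half-precision roundoff unit $u \approx 4.9 \times 10^{-4}$.

    import numpy as np

    def f_low(t, y):
        # Right-hand side of y' = -y, evaluated in simulated half precision.
        return float(np.float16(-y))

    def f_high(t, y):
        return -y

    def euler(f, y0, T, n):
        # Forward Euler; the update itself is kept in double precision.
        y, dt = y0, T / n
        for k in range(n):
            y = y + dt * f(k * dt, y)
        return y

    exact = np.exp(-1.0)
    for n in [10, 100, 1_000, 10_000, 100_000]:
        err_mixed = abs(euler(f_low, 1.0, 1.0, n) - exact)
        err_double = abs(euler(f_high, 1.0, 1.0, n) - exact)
        print(f"n={n:6d}  mixed: {err_mixed:.2e}  double: {err_double:.2e}")
    # The mixed-precision error stops improving around u ~ 5e-4, while the
    # all-double error keeps converging at first order.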

In this talk we present a more accurate alternative: a scheme, which we call $q$-order-preserving, that is unaffected by this limiting behaviour. The idea is simple: by using $q$ high-precision evaluations of $f$ we can hope to retain a limiting convergence order of $O(\Delta t^{q})$. However, the practical design of these order-preserving schemes is less straightforward.

We specifically focus on ESRK schemes, as these are low-order schemes that employ a much larger number of stages than dictated by their convergence order so as to maximise stability. As such, these methods require most of the computational effort to be spent on stability rather than accuracy. We present new $s$-stage order-$1$ and order-$2$ RK-Chebyshev and RK-Legendre methods that are provably full-order-preserving. These methods are essentially as cheap as their fully low-precision equivalents, and they are as accurate and (almost) as stable as their high-precision counterparts.
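
For readers unfamiliar with stabilised methods, the following sketch implements the classical undamped $s$-stage, order-$1$ Chebyshev method in uniform double precision (the mixed-precision variants from the talk involve more machinery). Its update polynomial is $T_s(1 + z/s^2)$, giving a real stability interval of $[-2s^2, 0]$ from only $s$ evaluations of $f$: most stages buy stability, not accuracy.

    import numpy as np

    def chebyshev_step(f, t, y, dt, s):
        # One step of the classical (undamped) s-stage, first-order
        # Runge-Kutta-Chebyshev method. The three-term recurrence makes the
        # update polynomial T_s(1 + dt*lam/s^2) for f(t, y) = lam*y.
        g_prev = y
        g = y + (dt / s**2) * f(t, y)
        for _ in range(2, s + 1):
            g_prev, g = g, 2 * g - g_prev + (2 * dt / s**2) * f(t, g)
        return g

    # Heat-equation toy: y' = L y, with L the 1-D finite-difference Laplacian.
    N = 100
    L = ((np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
          + np.diag(np.ones(N - 1), -1)) * (N + 1) ** 2)
    f = lambda t, y: L @ y
    y = np.sin(np.pi * np.linspace(0, 1, N + 2)[1:-1])

    dt, s = 1e-4, 10        # dt exceeds the forward-Euler limit (~5e-5)
    for _ in range(100):    # but satisfies dt*|lam_max| <= 2*s^2
        y = chebyshev_step(f, 0.0, y, dt, s)
    print(np.max(np.abs(y)))  # the solution decays smoothly; no blow-up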

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Tue, 01 Jun 2021
14:00
Virtual

Why are numerical algorithms accurate at large scale and low precisions?

Theo Mary
(Sorbonne Université)
Abstract

Standard worst-case rounding error bounds of most numerical linear algebra algorithms grow linearly with the problem size and the machine precision. These bounds suggest that numerical algorithms could be inaccurate at large scale and/or at low precisions, but fortunately these bounds are pessimistic. We will review recent advances in probabilistic rounding error analyses, which have attracted renewed interest due to the emergence of low precisions on modern hardware as well as the rise of stochastic rounding.
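
To give a flavour of the gap between the two kinds of bound, the sketch below (a generic illustration, not taken from the talk) sums $n$ random single-precision numbers by recursive summation and compares the observed relative error with the worst-case estimate of order $nu$ and the probabilistic estimate of order $\sqrt{n}u$, where $u$ is the fp32 unit roundoff.

    import numpy as np

    rng = np.random.default_rng(0)
    u = np.finfo(np.float32).eps / 2      # fp32 unit roundoff, ~6.0e-8

    for n in [10_000, 100_000, 1_000_000]:
        x = rng.random(n).astype(np.float32)
        s = np.float32(0.0)
        for xi in x:                      # recursive (left-to-right) fp32 sum
            s = np.float32(s + xi)
        exact = float(np.sum(x, dtype=np.float64))
        rel = abs(float(s) - exact) / exact
        print(f"n={n:8d}  observed {rel:.1e}  "
              f"worst case ~{n * u:.1e}  probabilistic ~{np.sqrt(n) * u:.1e}")
    # The observed error tracks the sqrt(n)*u estimate and sits far below n*u.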

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Fri, 26 Feb 2021
16:00
Virtual

Fermionic CFTs

Philip Boyle Smith
(Cambridge)
Abstract

There has been a recent uptick in interest in fermionic CFTs. These mildly generalise the usual notion of CFT to allow dependence on a background spin structure. I will discuss how this generalisation manifests itself in the symmetries, anomalies, and boundary conditions of the theory, using the series of unitary Virasoro minimal models as an example.

Thu, 17 Jun 2021

13:00 - 14:00
Virtual

Modulation of synchronization in neural networks by a slowly varying ionic current

Sue Ann Campbell
(University of Waterloo)
Abstract

Synchronized activity of neurons is important for many aspects of brain function. Synchronization is affected by both network-level parameters, such as connectivity between neurons, and neuron-level parameters, such as firing rate. Many of these parameters are not static but may vary slowly in time. In this talk we focus on neuron-level parameters. Our work centres on the neurotransmitter acetylcholine, which has been shown to modulate the firing properties of several types of neurons through its effect on potassium currents such as the muscarine-sensitive M-current. In the brain, levels of acetylcholine change with activity: for example, acetylcholine is higher during waking and REM sleep and lower during slow-wave sleep. We will show how the M-current affects the bifurcation structure of a generic conductance-based neural model and how this determines the synchronization properties of the model. We then use phase-model analysis to study the effect of a slowly varying M-current on synchronization. This is joint work with Victoria Booth, Xueying Wang and Isam Al-Darbasah.
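
The flavour of the phase-model analysis can be conveyed by a minimal sketch: two coupled phase oscillators whose coupling strength drifts slowly in time, a hypothetical stand-in for the slowly varying M-current (all parameters below are illustrative, not taken from the talk).

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.005                                # slow modulation time scale
    def rhs(t, th):
        K = 0.25 + 0.2 * np.sin(eps * t)       # slowly varying coupling
        return [1.0 + K * np.sin(th[1] - th[0]),
                1.4 + K * np.sin(th[0] - th[1])]

    sol = solve_ivp(rhs, (0.0, 4000.0), [0.0, 1.0], max_step=0.1)

    # For the phase difference phi = th1 - th0, the reduced equation is
    # dphi/dt = 0.4 - 2*K(t)*sin(phi): the pair phase-locks while
    # 2*K(t) >= 0.4 and drifts while 2*K(t) < 0.4, so the slow parameter
    # switches synchronization on and off over each modulation cycle.
    dphi = np.gradient(np.unwrap(sol.y[1] - sol.y[0]), sol.t)
    print(f"fraction of time phase-locked: {(np.abs(dphi) < 0.05).mean():.2f}")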

Thu, 10 Jun 2021
14:00
Virtual

53 Matrix Factorizations, generalized Cartan, and Random Matrix Theory

Alan Edelman
(MIT)
Further Information

Joint seminar with the Random Matrix Theory group

Abstract

An insightful exercise might be to ask what is the most important idea in linear algebra. Our first answer would not be eigenvalues or linearity; it would be “matrix factorizations.” We will discuss a blueprint to generate 53 inter-related matrix factorizations (times 2), most of which appear to be new. The underlying mathematics may be traced back to Cartan (1927), Harish-Chandra (1956), and Flensted-Jensen (1978). We will discuss the interesting history. One anecdote is that Eugene Wigner (1968) discovered factorizations such as the SVD in passing, in a way that was buried, and only eight authors have referenced that work. Ironically, Wigner referenced Sigurður Helgason (1962) but did not recognize his results in Helgason's book. This work also extends the work of Mackey² & Tisseur (2003/2005) and resolves open problems they posed.

Classical results of Random Matrix Theory concern exact formulas from the Hermite, Laguerre, Jacobi, and Circular distributions. Following an insight from Freeman Dyson (1970), Zirnbauer (1996) and Duenez (2004/5) linked some of these classical ensembles to Cartan's theory of Symmetric Spaces. One troubling fact is that symmetric spaces alone do not cover all of the Jacobi ensembles. We present a completed theory based on the generalized Cartan distribution. Furthermore, we show how the matrix factorization obtained by the generalized Cartan decomposition G=K₁AK₂ plays a crucial role in sampling algorithms and the derivation of the joint probability density of A.
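
One familiar instance ties these threads together: the SVD $A = U \Sigma V^T$ is exactly a $K_1 A K_2$ decomposition of $GL(n, \mathbb{R})$ with orthogonal factors, and the squared singular values of a Gaussian matrix are a draw from the classical Laguerre (Wishart) ensemble. The sketch below checks one simple consequence numerically; it is a toy illustration, not the generalized Cartan theory of the talk.

    import numpy as np

    # The SVD A = U @ diag(s) @ V.T is the K1*A*K2 (Cartan) decomposition
    # of GL(n, R): K1, K2 orthogonal, the diagonal factor in the role of A.
    rng = np.random.default_rng(0)
    n, trials = 4, 20_000
    sq = np.array([np.linalg.svd(rng.standard_normal((n, n)),
                                 compute_uv=False) ** 2
                   for _ in range(trials)])

    # Squared singular values of an n-by-n Gaussian matrix follow the
    # Laguerre (Wishart) ensemble; their joint density arises from the
    # Jacobian of the K1*A*K2 map. Sanity check: the expected sum of
    # squared singular values is E[||A||_F^2] = n^2.
    print(sq.sum(axis=1).mean(), n ** 2)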

Joint work with Sungwoo Jeong.

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 03 Jun 2021
14:00
Virtual

Distributing points by minimizing energy for constructing approximation formulas with variable transformation

Ken'ichiro Tanaka
(University of Tokyo)
Abstract

In this talk, we present some effective methods for distributing points for approximating analytic functions with prescribed decay on a strip region including the real axis. Such functions appear when we use numerical methods with variable transformations. Typical examples of such methods are provided by single-exponential (SE) or double-exponential (DE) sinc formulas, in which variable transformations yield single- or double-exponential decay of functions on the real axis. The formulas are known to be nearly optimal on a Hardy space with a single- or double-exponential weight on the strip region, which is regarded as the space of functions transformed by the variable transformations.
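
A minimal sketch of the DE-sinc idea (mesh size and truncation level chosen by hand for this toy, not by the optimal constructions discussed in the talk): transplant a function on $(-1,1)$ to the real line with the double-exponential map $\psi(u) = \tanh(\frac{\pi}{2}\sinh u)$ and apply the standard sinc interpolation formula there.

    import numpy as np

    psi = lambda u: np.tanh(0.5 * np.pi * np.sinh(u))          # DE transform
    psi_inv = lambda x: np.arcsinh(np.arctanh(x) * 2 / np.pi)
    g = lambda x: 1.0 / (1.0 + 25 * x**2)                      # toy target on (-1, 1)

    def de_sinc(x, n=20, h=0.35):
        # Sinc interpolation of the transplanted function g(psi(u)) on the
        # uniform grid u = k*h, evaluated at u = psi^{-1}(x).
        # Note np.sinc(t) = sin(pi*t)/(pi*t).
        k = np.arange(-n, n + 1)
        u = psi_inv(x)
        return np.sum(g(psi(k * h)) * np.sinc((u - k * h) / h))

    for x in [-0.9, -0.5, 0.0, 0.5, 0.9]:
        print(f"x={x:+.1f}  error={abs(de_sinc(x) - g(x)):.1e}")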

Recently, we have proposed new approximation formulas that outperform the sinc formulas. To construct them, we use an expression for the error norm (a.k.a. worst-case error) of an n-point interpolation operator in the weighted Hardy space. The expression is closely related to potential theory, and the optimal points for interpolation correspond to an equilibrium measure of an energy functional with an external field. Since a discrete version of the energy becomes convex in the points under a mild condition on the weight, we can obtain good points with a standard optimization technique. Furthermore, with the aid of this energy formulation, we can derive approximate distributions of the points theoretically.
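
The convexity observation makes the computation concrete: choose points by minimizing a discrete energy with pairwise logarithmic repulsion plus an external field. The sketch below uses a generic confining field $Q(x) = x^2$ rather than the weighted-Hardy-space functional of the referenced papers, purely to show the mechanics.

    import numpy as np
    from scipy.optimize import minimize

    n = 20

    def energy(x):
        # Discrete energy: -sum_{i<j} log|x_i - x_j| + n * sum_i Q(x_i),
        # with the illustrative external field Q(x) = x^2.
        i, j = np.triu_indices(n, k=1)
        return -np.sum(np.log(np.abs(x[i] - x[j]))) + n * np.sum(x ** 2)

    x0 = np.sort(np.random.default_rng(0).uniform(-1.0, 1.0, n))
    res = minimize(energy, x0, method="L-BFGS-B")
    print(np.round(np.sort(res.x), 3))  # points follow the equilibrium measure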

[References]
- K. Tanaka, T. Okayama, M. Sugihara: Potential theoretic approach to design of accurate formulas for function approximation in symmetric weighted Hardy spaces, IMA Journal of Numerical Analysis Vol. 37 (2017), pp. 861-904.

- K. Tanaka, M. Sugihara: Design of accurate formulas for approximating functions in weighted Hardy spaces by discrete energy minimization, IMA Journal of Numerical Analysis Vol. 39 (2019), pp. 1957-1984.

- S. Hayakawa, K. Tanaka: Convergence analysis of approximation formulas for analytic functions via duality for potential energy minimization, arXiv:1906.03133.

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Fri, 12 Mar 2021

12:00 - 13:00

The Metric is All You Need (for Disentangling)

David Pfau
(DeepMind)
Abstract

Learning a representation from data that disentangles different factors of variation is hypothesized to be a critical ingredient for unsupervised learning. Defining disentangling is challenging: a "symmetry-based" definition was provided by Higgins et al. (2018), but no prescription was given for how to learn such a representation. We present a novel nonparametric algorithm, the Geometric Manifold Component Estimator (GEOMANCER), which partially answers the question of how to implement symmetry-based disentangling. We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known and each factor manifold has nontrivial holonomy, for example, rotation in 3D. Our algorithm works by estimating the subspaces that are invariant under random walk diffusion, giving an approximation to the de Rham decomposition from differential geometry. We demonstrate the efficacy of GEOMANCER on several complex synthetic manifolds. Our work reduces the question of whether unsupervised disentangling is possible to the question of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning.