Fri, 26 Feb 2021
16:00
Virtual

Fermionic CFTs

Philip Boyle Smith
(Cambridge)
Abstract

There has been a recent uptick in interest in fermionic CFTs. These mildly generalise the usual notion of CFT to allow dependence on a background spin structure. I will discuss how this generalisation manifests itself in the symmetries, anomalies, and boundary conditions of the theory, using the series of unitary Virasoro minimal models as an example.


Thu, 17 Jun 2021

13:00 - 14:00
Virtual

Modulation of synchronization in neural networks by a slowly varying ionic current

Sue Ann Campbell
(University of Waterloo)

Abstract

Synchronized activity of neurons is important for many aspects of brain function. Synchronization is affected by both network-level parameters, such as connectivity between neurons, and neuron-level parameters, such as firing rate. Many of these parameters are not static but may vary slowly in time. In this talk we focus on neuron-level parameters. Our work centres on the neurotransmitter acetylcholine, which has been shown to modulate the firing properties of several types of neurons through its effect on potassium currents such as the muscarine-sensitive M-current. In the brain, levels of acetylcholine change with activity. For example, acetylcholine is higher during waking and REM sleep and lower during slow-wave sleep. We will show how the M-current affects the bifurcation structure of a generic conductance-based neural model and how this determines the synchronization properties of the model. We then use phase-model analysis to study the effect of a slowly varying M-current on synchronization. This is joint work with Victoria Booth, Xueying Wang and Isam Al-Darbasah.
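
For readers who want something concrete to experiment with, here is a minimal Python/NumPy sketch, not the model from the talk and with purely illustrative parameter values, of a conductance-based neuron carrying a slow, muscarine-sensitive M-type potassium current. Lowering g_M mimics acetylcholine suppressing the M-current and raises the firing rate.

    import numpy as np

    # Illustrative sketch only: an exponential integrate-and-fire style neuron with
    # an M-type potassium current I_M = g_M * z * (V - E_K).  All values are placeholders.
    def simulate(g_M=2.0, I_app=20.0, T=500.0, dt=0.01):
        C, g_L, E_L, E_K = 1.0, 1.0, -65.0, -90.0              # capacitance, leak, reversals
        V_T, Delta_T, V_peak, V_reset = -50.0, 2.0, 0.0, -60.0
        tau_z = 100.0                                          # slow M-current gating time constant
        z_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

        V, z, spikes = E_L, 0.0, []
        for step in range(int(T / dt)):
            I_ion = (-g_L * (V - E_L)
                     + g_L * Delta_T * np.exp((V - V_T) / Delta_T)   # spike-generating current
                     - g_M * z * (V - E_K))                          # M-current
            V += dt * (I_ion + I_app) / C
            if V >= V_peak:                                    # spike: record time and reset
                spikes.append(step * dt)
                V = V_reset
            z += dt * (z_inf(V) - z) / tau_z                   # slow gating variable
        return np.array(spikes)

    # A strong M-current slows firing relative to no M-current.
    print(len(simulate(g_M=2.0)), len(simulate(g_M=0.0)))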

Thu, 10 Jun 2021
14:00
Virtual

53 Matrix Factorizations, generalized Cartan, and Random Matrix Theory

Alan Edelman
(MIT)
Further Information

Joint seminar with the Random Matrix Theory group

Abstract

An insightful exercise might be to ask what is the most important idea in linear algebra. Our first answer would not be eigenvalues or linearity; it would be “matrix factorizations.” We will discuss a blueprint to generate 53 inter-related matrix factorizations (times 2), most of which appear to be new. The underlying mathematics may be traced back to Cartan (1927), Harish-Chandra (1956), and Flensted-Jensen (1978). We will discuss the interesting history. One anecdote is that Eugene Wigner (1968) discovered factorizations such as the SVD in passing, in a way that was buried; only eight authors have referenced that work. Ironically, Wigner referenced Sigurður Helgason (1962) but did not recognize his results in Helgason's book. This work also extends and completes open problems posed by Mackey² & Tisseur (2003/2005).

Classical results of Random Matrix Theory concern exact formulas from the Hermite, Laguerre, Jacobi, and Circular distributions. Following an insight from Freeman Dyson (1970), Zirnbauer (1996) and Duenez (2004/5) linked some of these classical ensembles to Cartan's theory of Symmetric Spaces. One troubling fact is that symmetric spaces alone do not cover all of the Jacobi ensembles. We present a completed theory based on the generalized Cartan distribution. Furthermore, we show how the matrix factorization obtained by the generalized Cartan decomposition G=K₁AK₂ plays a crucial role in sampling algorithms and the derivation of the joint probability density of A.
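
As a down-to-earth illustration, not taken from the talk, the familiar SVD already has the shape G = K₁AK₂: the two orthogonal factors live in compact groups and the diagonal factor sits in the abelian part. A short NumPy check:

    import numpy as np

    # The SVD G = U * diag(s) * Vt viewed as a K1 * A * K2 factorization,
    # with K1, K2 orthogonal and A a nonnegative diagonal matrix.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((5, 5))

    K1, s, K2 = np.linalg.svd(G)          # np.linalg.svd returns the second factor already transposed
    A = np.diag(s)

    assert np.allclose(K1 @ A @ K2, G)                      # G = K1 A K2
    assert np.allclose(K1.T @ K1, np.eye(5))                # K1 orthogonal
    assert np.allclose(K2 @ K2.T, np.eye(5))                # K2 orthogonal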

Joint work with Sungwoo Jeong

 


A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 03 Jun 2021
14:00
Virtual

Distributing points by minimizing energy for constructing approximation formulas with variable transformation

Ken'ichiro Tanaka
(University of Tokyo)
Abstract


In this talk, we present some effective methods for distributing points for approximating analytic functions with prescribed decay on a strip region including the real axis. Such functions appear when we use numerical methods with variable transformations. Typical examples of such methods are provided by single-exponential (SE) or double-exponential (DE) sinc formulas, in which variable transformations yield single- or double-exponential decay of functions on the real axis. It has been known that the formulas are nearly optimal on a Hardy space with a single- or double-exponential weight on the strip region, which is regarded as a space of transformed functions by the variable transformations.
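
For orientation, the following NumPy sketch implements the textbook SE-sinc interpolation formula (not the new formulas of the talk): a function that decays single-exponentially on the real axis is approximated by its samples on an equispaced grid combined with sinc cardinal functions.

    import numpy as np

    def sinc_approx(f, x, N, h):
        # f(x) ~ sum_{k=-N}^{N} f(k*h) * sinc((x - k*h)/h),  sinc(t) = sin(pi*t)/(pi*t)
        nodes = np.arange(-N, N + 1) * h
        return np.sinc((x[:, None] - nodes[None, :]) / h) @ f(nodes)

    # Example: f(z) = 1/cosh(z) decays like exp(-|x|) (beta = 1) and is analytic in a
    # strip of half-width d = pi/2, suggesting the SE step size h = sqrt(pi*d/(beta*N)).
    f = lambda x: 1.0 / np.cosh(x)
    N = 30
    h = np.sqrt(np.pi * (np.pi / 2) / N)
    x = np.linspace(-3.0, 3.0, 201)
    print(np.max(np.abs(sinc_approx(f, x, N, h) - f(x))))   # error decays roughly like exp(-c*sqrt(N))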

Recently, we have proposed new approximation formulas that outperform the sinc formulas. For constructing them, we use an expression of the error norm (a.k.a. worst-case error) of an n-point interpolation operator in the weighted Hardy space. The expression is closely related to potential theory, and optimal points for interpolation correspond to an equilibrium measure of an energy functional with an external field. Since a discrete version of the energy becomes convex in the points under a mild condition about the weight, we can obtain good points with a standard optimization technique. Furthermore, with the aid of the formulation with the energy, we can find approximate distributions of the points theoretically.
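
As a rough sketch of the idea, and not the exact energy functional from the references below, one can place nodes by minimizing a discrete logarithmic energy with an external field using a standard optimizer:

    import numpy as np
    from scipy.optimize import minimize

    # Discrete energy with logarithmic repulsion and an external field Q(x) = beta*|x|
    # (mimicking a single-exponential weight).  Illustrative choice of constants only.
    beta, n = 1.0, 21

    def energy(x):
        iu = np.triu_indices(n, k=1)
        gaps = np.abs(x[:, None] - x[None, :])[iu]
        return beta * np.abs(x).sum() - (2.0 / (n - 1)) * np.log(gaps).sum()

    x0 = np.linspace(-5.0, 5.0, n)                 # distinct starting nodes
    res = minimize(energy, x0, method="Nelder-Mead",
                   options={"maxiter": 50000, "maxfev": 50000, "xatol": 1e-8, "fatol": 1e-10})
    nodes = np.sort(res.x)                         # approximation nodes for the weighted problem
    print(nodes)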

[References]
- K. Tanaka, T. Okayama, M. Sugihara: Potential theoretic approach to design of accurate formulas for function approximation in symmetric weighted Hardy spaces, IMA Journal of Numerical Analysis Vol. 37 (2017), pp. 861-904.

- K. Tanaka, M. Sugihara: Design of accurate formulas for approximating functions in weighted Hardy spaces by discrete energy minimization, IMA Journal of Numerical Analysis Vol. 39 (2019), pp. 1957-1984.

- S. Hayakawa, K. Tanaka: Convergence analysis of approximation formulas for analytic functions via duality for potential energy minimization, arXiv:1906.03133.

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Fri, 12 Mar 2021

12:00 - 13:00

The Metric is All You Need (for Disentangling)

David Pfau
(DeepMind)
Abstract

Learning a representation from data that disentangles different factors of variation is hypothesized to be a critical ingredient for unsupervised learning. Defining disentangling is challenging: a "symmetry-based" definition was provided by Higgins et al. (2018), but no prescription was given for how to learn such a representation. We present a novel nonparametric algorithm, the Geometric Manifold Component Estimator (GEOMANCER), which partially answers the question of how to implement symmetry-based disentangling. We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known and each factor manifold has nontrivial holonomy, for example rotation in 3D. Our algorithm works by estimating the subspaces that are invariant under random walk diffusion, giving an approximation to the de Rham decomposition from differential geometry. We demonstrate the efficacy of GEOMANCER on several complex synthetic manifolds. Our work reduces the question of whether unsupervised disentangling is possible to the question of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning.
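
The basic object behind the algorithm can be sketched directly (this is only the random-walk diffusion operator on a point cloud, not the GEOMANCER algorithm itself):

    import numpy as np

    def random_walk_diffusion(X, eps, t):
        # Gaussian affinities on a point cloud, row-normalised to a random-walk
        # (Markov) matrix P, then diffused for t steps.
        D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        K = np.exp(-D2 / eps)
        P = K / K.sum(axis=1, keepdims=True)
        return np.linalg.matrix_power(P, t)

    # Example point cloud from the product manifold S^1 x S^1 embedded in R^4,
    # the kind of factored structure a disentangling method should recover.
    rng = np.random.default_rng(0)
    theta, phi = rng.uniform(0.0, 2.0 * np.pi, size=(2, 300))
    X = np.stack([np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)], axis=1)
    Pt = random_walk_diffusion(X, eps=0.3, t=5)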

 

Fri, 05 Mar 2021

12:00 - 13:00

Linear convergence of an alternating polar decomposition method for low rank orthogonal tensor approximations

Ke Ye
(Chinese Academy of Sciences)
Abstract

Low rank orthogonal tensor approximation (LROTA) is an important problem in tensor computations and their applications. A classical and widely used algorithm is the alternating polar decomposition method (APD). In this talk, I will first give a very brief introduction to tensors and their decompositions. After that, an improved version of the classical APD, named iAPD, will be proposed, and the following four fundamental properties of iAPD will be discussed: (i) the algorithm converges globally and the whole sequence converges to a KKT point without any assumption; (ii) it exhibits an overall sublinear convergence with an explicit rate which is sharper than the usual O(1/k) for first order methods in optimization; (iii) more importantly, it converges R-linearly for a generic tensor without any assumption; (iv) for almost all LROTA problems, iAPD reduces to APD after finitely many iterations if it converges to a local minimizer. If time permits, I will also present some numerical experiments.
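
For orientation, here is a hedged NumPy sketch of the classical APD iteration for a third-order tensor; the improved iAPD analysed in the talk adds ingredients not shown here. Each factor matrix is updated by the orthonormal polar factor of a contracted matrix.

    import numpy as np

    def apd_lrota(T, r, iters=200, seed=0):
        # Rank-r orthogonal approximation  T ~ sum_i lam_i * u_i (x) v_i (x) w_i,
        # where the columns of U, V, W are orthonormal.  Classical APD sketch.
        rng = np.random.default_rng(seed)
        n1, n2, n3 = T.shape

        def orth(n):
            q, _ = np.linalg.qr(rng.standard_normal((n, r)))
            return q

        def polar(M):
            # Orthonormal polar factor of a tall matrix via its thin SVD.
            P, _, Qt = np.linalg.svd(M, full_matrices=False)
            return P @ Qt

        U, V, W = orth(n1), orth(n2), orth(n3)
        for _ in range(iters):
            U = polar(np.einsum('abc,bi,ci->ai', T, V, W))   # contract T with columns of V, W
            V = polar(np.einsum('abc,ai,ci->bi', T, U, W))
            W = polar(np.einsum('abc,ai,bi->ci', T, U, V))
        lam = np.einsum('abc,ai,bi,ci->i', T, U, V, W)
        return lam, U, V, W

    # Example on a random 10 x 10 x 10 tensor with target rank 3.
    lam, U, V, W = apd_lrota(np.random.default_rng(1).standard_normal((10, 10, 10)), r=3)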

Fri, 26 Feb 2021

12:00 - 13:00

The magnitude of point-cloud data (cancelled)

Nina Otter
(UCLA)
Abstract

Magnitude is an isometric invariant of metric spaces that was introduced by Tom Leinster in 2010, and is currently the object of intense research, since it has been shown to encode many invariants of a metric space such as volume, dimension, and capacity.
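
For a finite metric space the definition is concrete: form the similarity matrix Z with Z_ij = exp(-d(x_i, x_j)); when Z is invertible, a weighting w solves Zw = 1 and the magnitude is the sum of the weights. A short NumPy sketch:

    import numpy as np

    def magnitude(D):
        # D is the distance matrix of a finite metric space; assumes Z is invertible.
        Z = np.exp(-D)
        w = np.linalg.solve(Z, np.ones(len(D)))   # the weighting
        return w.sum()

    # Example: a two-point space at distance t has magnitude 1 + tanh(t/2).
    t = 1.0
    D = np.array([[0.0, t], [t, 0.0]])
    print(magnitude(D), 1.0 + np.tanh(t / 2.0))   # these agree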

Magnitude homology is a homology theory for metric spaces that was introduced by Hepworth-Willerton and Leinster-Shulman; it categorifies magnitude in the same way that the singular homology of a topological space categorifies its Euler characteristic.

In this talk I will first introduce magnitude and magnitude homology. I will then give an overview of existing results and current research in this area, explain how magnitude homology is related to persistent homology, and finally discuss new stability results for magnitude and how it can be used to study point cloud data.

This talk is based on joint work in progress with Miguel O’Malley and Sara Kalisnik, as well as the preprint https://arxiv.org/abs/1807.01540.
