Fri, 24 May 2019

14:00 - 15:30
L6

Diabatic vortices: a simple model of tropical cyclones and the martian polar vortex

Prof. Richard Scott
(University of St Andrews)
Abstract

In this talk, we will consider how two very different atmospheric phenomena, the terrestrial tropical cyclone and the martian polar vortex, can be described within a single simplified dynamical framework based on the forced shallow water equations. Dynamical forcings include angular momentum transport by secondary (transverse) circulations and local heating due to latent heat release. The forcings act in very different ways in the two systems but in both cases lead to distinct annular distributions of potential vorticity, with a local vorticity maximum at a finite radius surrounding a central minimum.  In both systems, the resulting vorticity distributions are subject to shear instability and the degree of eddy growth versus annular persistence can be examined explicitly under different forcing scenarios.
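
For reference, a standard form of the forced rotating shallow water equations, with a momentum forcing $\mathbf{F}$ (representing, schematically, the angular momentum transport by the secondary circulation) and a mass source $S$ (representing the local heating), is

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\hat{\mathbf{z}}\times\mathbf{u} = -g\nabla h + \mathbf{F}, \qquad \frac{\partial h}{\partial t} + \nabla\cdot(h\mathbf{u}) = S,$$

with potential vorticity $q = (\zeta + f)/h$, where $\zeta$ is the relative vorticity. The annular structures described above are distributions of $q$ with a maximum at finite radius; the specific forms of $\mathbf{F}$ and $S$ appropriate to each system are the subject of the talk.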

Fri, 10 May 2019

14:00 - 15:30
L6

Scattering of inertia-gravity waves in geostrophic turbulence

Prof. Jacques Vanneste
(University of Edinburgh)
Abstract

Inertia-gravity waves (IGWs) are ubiquitous in the ocean and the atmosphere. Once generated (by tides, topography, convection and other processes), they propagate and scatter in the large-scale, geostrophically balanced background flow. I will discuss models of this scattering which represent the background flow as a random field with known statistics. Without assuming spatial scale separation between waves and flow, the scattering is described by a kinetic equation involving a scattering cross section determined by the energy spectrum of the flow. In the limit of small-scale waves, this equation reduces to a diffusion equation in wavenumber space. This predicts, in particular, IGW energy spectra scaling as $k^{-2}$, consistent with observations in the atmosphere and ocean, lending some support to recent claims that (sub)mesoscale spectra can be attributed to almost linear IGWs. The theoretical predictions are checked against numerical simulations of the three-dimensional Boussinesq equations.
(Joint work with Miles Savva and Hossein Kafiabad.)
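
For context, inertia-gravity waves in a rotating stratified Boussinesq fluid with Coriolis parameter $f$ and buoyancy frequency $N$ obey the dispersion relation

$$\omega^2 = \frac{f^2 m^2 + N^2 k_h^2}{k_h^2 + m^2},$$

where $k_h$ and $m$ are the horizontal and vertical wavenumbers. The small-scale limit mentioned above takes, schematically, the form of a diffusion equation $\partial_t a = \nabla_{\mathbf{k}}\cdot(\mathsf{D}\,\nabla_{\mathbf{k}} a)$ for the wave energy density $a(\mathbf{k},t)$ in wavenumber space, with a diffusivity $\mathsf{D}$ determined by the energy spectrum of the flow; the $k^{-2}$ scaling arises as a stationary solution of this equation.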

It's Valentine's Day this Thursday (14th February, in case you've forgotten) and Love AND Maths are in the air. For the first time, at 10am, Oxford Mathematics will be LIVE STREAMING a 1st Year undergraduate lecture. In addition, we will film (not live) a real tutorial based on that lecture.

The details:
LIVE Oxford Mathematics Student Lecture - James Sparks: 1st Year Undergraduate lecture on 'Dynamics', the mathematics of how things change with time
14th February, 10am-11am UK time

Fri, 08 Mar 2019

12:00 - 13:00
L4

Programmatically Structured Representations for Robust Autonomy in Robots

Subramanian Ramamoorthy
(University of Edinburgh and FiveAI)
Abstract

A defining feature of robotics today is the use of learning and autonomy in the inner loop of systems that are actually being deployed in the real world, e.g., in autonomous driving or medical robotics. While it is clear that useful autonomous systems must learn to cope with a dynamic environment, requiring architectures that address the richness of the worlds in which such robots must operate, it is equally clear that ensuring the safety of such systems is the single biggest obstacle to scaling these solutions up. I will discuss an approach to system design that aims to address this problem by incorporating programmatic structure into the network architectures used for policy learning. I will discuss results from two projects in this direction.

Firstly, I will present the perceptor gradients algorithm – a novel approach to learning symbolic representations based on the idea of decomposing an agent’s policy into (i) a perceptor network extracting symbols from raw observation data and (ii) a task-encoding program which maps the input symbols to output actions. We show that the proposed algorithm is able to learn representations that can be directly fed into a Linear-Quadratic Regulator (LQR) or a general-purpose A* planner. Our experimental results confirm that the perceptor gradients algorithm is able to efficiently learn transferable symbolic representations as well as generate new observations according to a semantically meaningful specification.
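
As a rough illustration of this decomposition (the names, the linear perceptor, and the double-integrator task model below are illustrative assumptions, not the paper's code), the policy factors as program(perceptor(observation)), e.g. with an LQR controller as the task-encoding program:

```python
# Sketch: policy(obs) = program(perceptor(obs)), with LQR as the "program".
import numpy as np
from scipy.linalg import solve_discrete_are

def perceptor(obs, W):
    # hypothetical (pre-trained) perceptor mapping raw observations to symbols
    return W @ obs

def lqr_gain(A, B, Q, R):
    # standard infinite-horizon discrete-time LQR gain
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator dynamics (assumed)
B = np.array([[0.0], [dt]])
K = lqr_gain(A, B, np.eye(2), np.array([[0.1]]))

W = np.eye(2)                                  # stand-in for learned perceptor weights
symbols = perceptor(np.array([1.0, 0.0]), W)   # extracted symbolic state
action = -K @ symbols                          # the program maps symbols to actions
```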

Next, I will describe work on learning from demonstration where the task representation is that of hybrid control systems, with emphasis on extracting models that are explicitly verifiable and easily interpreted by robot operators. Through an architecture that spans from the sensorimotor level, where a sequence of controllers is fitted by sequential importance sampling under a generative switching proportional-controller task model, to higher-level modules that can induce a program for a visuomotor reaching task involving loops and conditionals from a single demonstration, we show how a robot can learn tasks such as tower building in a manner that is interpretable and eventually verifiable.
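
A toy sketch of the sensorimotor-level step (all quantities here, including the demonstration itself, are illustrative assumptions): fitting a switching proportional-controller model to a 1-D demonstration with a simple sequential-importance-sampling particle scheme.

```python
# Toy particle scheme over switching proportional-controller hypotheses.
import numpy as np

rng = np.random.default_rng(0)
demo = np.concatenate([np.linspace(0, 1, 25), np.linspace(1, -0.5, 25)])  # fake demo

n, k_p, dt, noise = 200, 2.0, 0.1, 0.05
goals = rng.uniform(-1, 1, n)        # each particle: current goal hypothesis
x = np.zeros(n)                      # simulated state per particle
logw = np.zeros(n)                   # log importance weights

for t in range(1, len(demo)):
    switch = rng.random(n) < 0.05                        # occasional controller switch
    goals = np.where(switch, rng.uniform(-1, 1, n), goals)
    x = x + dt * k_p * (goals - x)                       # proportional control step
    logw += -0.5 * ((demo[t] - x) / noise) ** 2          # weight by fit to the demo
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < n / 2:                     # resample on weight collapse
        idx = rng.choice(n, n, p=w)
        goals, x, logw = goals[idx], x[idx], np.zeros(n)
```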


References:

1. S.V. Penkov, S. Ramamoorthy, Learning programmatically structured representations with perceptor gradients, In Proc. International Conference on Learning Representations (ICLR), 2019. http://rad.inf.ed.ac.uk/data/publications/2019/penkov2019learning.pdf

2. M. Burke, S.V. Penkov, S. Ramamoorthy, From explanation to synthesis: Compositional program induction for learning from demonstration, https://arxiv.org/abs/1902.10657

Fri, 01 Mar 2019

12:00 - 13:00
L4

Modular, Infinite, and Other Deep Generative Models of Data

Charles Sutton
(University of Edinburgh)
Abstract

Deep generative models provide powerful tools for fitting difficult distributions, such as those of natural images. But many of these methods, including variational autoencoders (VAEs) and generative adversarial networks (GANs), are notoriously difficult to train.

One well-known problem is mode collapse, which means that models can learn to characterize only a few modes of the true distribution. To address this, we introduce VEEGAN, which features a reconstructor network, reversing the action of the generator by mapping from data to noise. Our training objective retains the original asymptotic consistency guarantee of GANs, and can be interpreted as a novel autoencoder loss over the noise.
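
A minimal sketch of the reconstructor idea (network shapes and sizes below are placeholders): the reconstructor tries to invert the generator, giving an autoencoder loss in noise space that is added to the usual adversarial objective.

```python
# Sketch of the VEEGAN-style noise-space autoencoder term, E||z - F(G(z))||^2.
import torch
import torch.nn as nn

dim_z, dim_x = 8, 32
G = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, dim_x))  # generator
F = nn.Sequential(nn.Linear(dim_x, 64), nn.ReLU(), nn.Linear(64, dim_z))  # reconstructor

z = torch.randn(128, dim_z)
recon_loss = ((z - F(G(z))) ** 2).mean()  # autoencoder loss over the noise
recon_loss.backward()                     # combined with the GAN loss in practice
```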

Second, maximum mean discrepancy networks (MMD-nets) avoid some of the pathologies of GANs, but have not been able to match their performance. We present a new method of training MMD-nets based on mapping the data into a lower-dimensional space, in which MMD training can be more effective. We call these networks Ratio-based MMD Nets, and show that, somewhat mysteriously, they perform dramatically better than standard MMD-nets.
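
To illustrate the basic ingredient (the random projection below is only a stand-in for whatever learned low-dimensional map the method actually uses), squared MMD with a Gaussian kernel can be estimated in the original and in the projected space:

```python
# V-statistic estimate of squared MMD with a Gaussian kernel, computed after
# projecting the data to a lower-dimensional space (projection is a placeholder).
import numpy as np

def mmd2(X, Y, sigma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))                 # "data"
Y = rng.normal(size=(100, 32)) + 0.5           # "generator samples"
P = rng.normal(size=(32, 4)) / np.sqrt(32)     # stand-in for a learned projection
print(mmd2(X, Y), mmd2(X @ P, Y @ P))          # MMD in full vs. projected space
```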

A final problem is deciding how many latent components are necessary for a deep generative model to fit a certain data set. We present a nonparametric Bayesian approach to this problem, based on defining a (potentially) infinitely wide deep generative model. Fitting this model is possible by combining variational inference with a Monte Carlo method from statistical physics called Russian roulette sampling. Perhaps surprisingly, we find that this modification helps with the mode collapse problem as well.
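
Russian roulette sampling gives an unbiased estimate of an infinite sum by truncating it at a random point and reweighting the surviving terms; a toy version (the geometric continuation probability is an arbitrary choice here):

```python
# Unbiased random-truncation estimate of sum_{n>=0} a(n).
import numpy as np

rng = np.random.default_rng(0)

def russian_roulette(a, p_continue=0.7):
    total, n, survival = 0.0, 0, 1.0
    while True:
        total += a(n) / survival   # reweight by the probability of reaching term n
        if rng.random() > p_continue:
            return total
        survival *= p_continue
        n += 1

# sanity check: sum_{n>=0} 2^{-(n+1)} = 1
print(np.mean([russian_roulette(lambda n: 0.5 ** (n + 1)) for _ in range(10000)]))
```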


Fri, 22 Feb 2019

12:00 - 13:00
L4

The Maximum Mean Discrepancy for Training Generative Adversarial Networks

Arthur Gretton
(UCL Gatsby Computational Neuroscience Unit)
Abstract

Generative adversarial networks (GANs) use neural networks as generative models, creating realistic samples that mimic real-life reference samples (for instance, images of faces, bedrooms, and more). These networks require an adaptive critic function while training, to teach the networks how to improve their samples to better match the reference data. I will describe a kernel divergence measure, the maximum mean discrepancy (MMD), which represents one such critic function. With gradient regularisation, the MMD is used to obtain current state-of-the-art performance on challenging image generation tasks, including 160 × 160 CelebA and 64 × 64 ImageNet. In addition to adversarial network training, I'll discuss issues of gradient bias for GANs based on integral probability metrics, and mechanisms for benchmarking GAN performance.
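
For reference, for a reproducing kernel Hilbert space $\mathcal{H}$ with kernel $k$ and mean embeddings $\mu_P, \mu_Q$, the maximum mean discrepancy between distributions $P$ and $Q$ is

$$\mathrm{MMD}(P, Q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \left( \mathbb{E}_{x \sim P} f(x) - \mathbb{E}_{y \sim Q} f(y) \right) = \|\mu_P - \mu_Q\|_{\mathcal{H}},$$

and its square admits the unbiased estimator

$$\widehat{\mathrm{MMD}}^2 = \frac{1}{m(m-1)} \sum_{i \neq j} k(x_i, x_j) + \frac{1}{n(n-1)} \sum_{i \neq j} k(y_i, y_j) - \frac{2}{mn} \sum_{i,j} k(x_i, y_j),$$

which, with a kernel defined on learned features, can serve as the critic during training.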

Fri, 15 Feb 2019

12:00 - 13:00
L4

Some optimisation problems in the Data Science Division at the National Physical Laboratory

Stephane Chretien
(National Physical Laboratory)
Abstract

Data science has become a topic of great interest lately and has triggered wide-scale new research activity around efficient first-order methods for optimisation and Bayesian sampling. The National Physical Laboratory is addressing some of these challenges with a particular focus on robustness of, and confidence in, the solution. In this talk, I will present some problems and recent results concerning (i) robust learning in the presence of outliers, based on the Median of Means (MoM) principle, and (ii) stability of the solution in super-resolution (joint work with A. Thompson and B. Toader).
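
The Median-of-Means idea itself is simple: split the sample into blocks, average within each block, and take the median of the block means, so a single gross outlier can corrupt at most one block. A minimal sketch:

```python
import numpy as np

def median_of_means(x, n_blocks=10, seed=0):
    idx = np.random.default_rng(seed).permutation(len(x))
    blocks = np.array_split(x[idx], n_blocks)         # random, roughly equal blocks
    return np.median([b.mean() for b in blocks])      # median of block means

x = np.random.default_rng(1).normal(size=1000)
x[0] = 1e6                                  # one gross outlier
print(x.mean(), median_of_means(x))         # plain mean is ruined; MoM is not
```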

Tue, 26 Feb 2019

14:30 - 15:30
L6

Graphons with minimum clique density

Maryam Sharifzadeh
Abstract

Among all graphs of given order and size, we determine the asymptotic structure of graphs which minimise the number of $r$-cliques, for each fixed $r$. In fact, this is achieved by characterising all graphons with given density which minimise the $K_r$-density. The case $r=3$ was proved in 2016 by Pikhurko and Razborov.
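
For reference, the $K_r$-density of a graphon $W \colon [0,1]^2 \to [0,1]$ is

$$t(K_r, W) = \int_{[0,1]^r} \prod_{1 \le i < j \le r} W(x_i, x_j) \, dx_1 \cdots dx_r,$$

and the problem is to minimise $t(K_r, W)$ over graphons with the edge density $t(K_2, W)$ held fixed.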


This is joint work with H. Liu, J. Kim, and O. Pikhurko.
