Past Seminars

15 October 2021
14:00
Dr Vicky Neale
Abstract

We'll discuss what mathematicians are looking for in written solutions.  How can you set out your ideas clearly, and what are the standard mathematical conventions?

This session is likely to be most relevant for first-year undergraduates, but all are welcome.

15 October 2021
14:00
Prof Veronica Ciocanel
Abstract

Actin filaments are polymers that interact with myosin motor proteins and play important roles in cell motility, shape, and development. Depending on its function, this dynamic network of interacting proteins reshapes and organizes into a variety of structures, including bundles, clusters, and contractile rings. Motivated by observations from the reproductive system of the roundworm C. elegans, we use an agent-based modeling framework to simulate interactions between actin filaments and myosin motor proteins inside cells. We also develop tools based on topological data analysis to understand time-series data extracted from these filament network interactions. We use these tools to compare the filament organization resulting from myosin motors with different properties. We have also recently studied how myosin motor regulation may control actin network architectures during cell cycle progression. This work also raises questions about how to assess the significance of topological features in common topological summary visualizations.
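
As a rough illustration of the topological-data-analysis side of such a pipeline (a sketch, not the speaker's actual code), one can compute persistence diagrams from point-cloud snapshots of simulated filament positions and track the lifetime of ring-like features over time; the toy data generator and all parameters below are hypothetical:

```python
# Sketch: persistent homology of simulated filament-position snapshots.
# The snapshot() generator is a toy stand-in for agent-based model output.
import numpy as np
from ripser import ripser  # pip install ripser

rng = np.random.default_rng(0)

def snapshot(t, n=200):
    # Toy point cloud: points arranged in a slowly loosening ring,
    # a cartoon of a contractile ring; not output of the actual model.
    theta = rng.uniform(0, 2 * np.pi, n)
    r = 1.0 + 0.05 * t + 0.1 * rng.standard_normal(n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Track the most persistent 1-dimensional feature (the ring) over time.
for t in range(5):
    h1 = ripser(snapshot(t), maxdim=1)['dgms'][1]
    lifetimes = h1[:, 1] - h1[:, 0] if len(h1) else np.array([0.0])
    print(f"t={t}: max H1 persistence = {lifetimes.max():.3f}")
```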
 

  • Mathematical Biology and Ecology Seminar
14 October 2021
16:00
George Wynne

Further Information: 

www.datasig.ac.uk/events

Abstract

Kernel-based statistical algorithms have found wide success in statistical machine learning over the past ten years as a non-parametric, easily computable engine for reasoning with probability measures. The main idea is to use a kernel to map probability measures, the objects of interest, into well-behaved spaces where calculations can be carried out. This methodology has found wide application, for example in two-sample testing, independence testing, goodness-of-fit testing, parameter inference, and MCMC thinning. Most theoretical investigations and practical applications have focused on Euclidean data. This talk will outline work that adapts the kernel-based methodology to data in an arbitrary Hilbert space, which opens the door to applications involving functional data, where a single data sample is a discretely observed function, for example a time series or a random surface. Such data are becoming increasingly prominent within the statistical community and in machine learning. Emphasis shall be given to the two-sample and goodness-of-fit testing problems.
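
To make the mean-embedding idea concrete, here is a minimal sketch of a kernel two-sample test: an unbiased estimate of the squared maximum mean discrepancy (MMD) with a Gaussian kernel, calibrated by a permutation test. The kernel choice, bandwidth, and data are illustrative assumptions, not details from the talk:

```python
# Sketch: kernel two-sample test via maximum mean discrepancy (MMD).
# Gaussian kernel and fixed bandwidth are illustrative choices only.
import numpy as np

def gaussian_gram(X, Y, sigma):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma):
    Kxx = gaussian_gram(X, X, sigma); np.fill_diagonal(Kxx, 0)
    Kyy = gaussian_gram(Y, Y, sigma); np.fill_diagonal(Kyy, 0)
    Kxy = gaussian_gram(X, Y, sigma)
    m, n = len(X), len(Y)
    return Kxx.sum() / (m * (m - 1)) + Kyy.sum() / (n * (n - 1)) - 2 * Kxy.mean()

def permutation_test(X, Y, sigma, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    Z, m = np.vstack([X, Y]), len(X)
    observed = mmd2_unbiased(X, Y, sigma)
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(len(Z))
        exceed += mmd2_unbiased(Z[p[:m]], Z[p[m:]], sigma) >= observed
    return observed, (exceed + 1) / (n_perm + 1)  # MMD^2 estimate and p-value

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(100, 3))   # sample from P
Y = rng.normal(0.5, 1.0, size=(100, 3))   # sample from Q (shifted mean)
print(permutation_test(X, Y, sigma=1.0))
```

For the functional-data setting of the talk, each row of X and Y would instead be a function observed on a discrete grid, with the kernel defined on the ambient Hilbert space.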

14 October 2021
14:00
David Bau
Abstract

 

One of the great challenges of neural networks is to understand how they work.  For example: does a neuron encode a meaningful signal on its own?  Or is a neuron simply an undistinguished and arbitrary component of a feature vector space?  The tension between the neuron doctrine and the population coding hypothesis is one of the classical debates in neuroscience. It is a difficult debate to settle without an ability to monitor every individual neuron in the brain.

 

Within artificial neural networks we can examine every neuron. Beginning with the simple proposal that an individual neuron might represent one internal concept, we conduct studies relating deep network neurons to human-understandable concepts in a concrete, quantitative way: Which neurons? Which concepts? Are neurons more meaningful than an arbitrary feature basis? Do neurons play a causal role? We examine both simplified settings and state-of-the-art networks in which neurons learn how to represent meaningful objects within the data without explicit supervision.
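
One simple way to make "which neurons, which concepts" quantitative, in the spirit of network-dissection-style analyses (a sketch over placeholder data, not the speaker's exact procedure), is to threshold a neuron's spatial activation maps and score them against binary concept masks by intersection-over-union:

```python
# Sketch: score one neuron against one visual concept via IoU.
# Activations and concept masks below are random placeholders.
import numpy as np

def neuron_concept_iou(activations, concept_masks, quantile=0.995):
    """activations: (n_images, H, W) maps for one neuron;
    concept_masks: (n_images, H, W) boolean masks for one concept."""
    tau = np.quantile(activations, quantile)  # high global threshold
    fired = activations > tau
    inter = np.logical_and(fired, concept_masks).sum()
    union = np.logical_or(fired, concept_masks).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(0)
acts = rng.random((32, 7, 7))          # placeholder activation maps
masks = rng.random((32, 7, 7)) > 0.9   # placeholder concept masks
print(f"IoU = {neuron_concept_iou(acts, masks):.3f}")
```

Comparing such scores for actual neurons against random directions in feature space is one way to probe the "neurons versus arbitrary basis" question the abstract raises.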

 

Following this inquiry in computer vision leads us to insights about the computational structure of practical deep networks that enable several new applications, including semantic manipulation of objects in an image; understanding of the sparse logic of a classifier; and quick, selective editing of generalizable rules within a fully trained generative network.  It also presents an unanswered mathematical question: why is such disentanglement so pervasive?
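
The "selective editing" mentioned above can be pictured, very schematically, as a low-rank update to one layer's weights. The toy rank-one edit below (an assumed illustration, not the authors' algorithm) forces a chosen key direction to map to a new value while leaving orthogonal directions untouched:

```python
# Sketch: rank-one edit of a linear layer W so that key k maps to v_star.
# Directions orthogonal to k are unchanged. Illustrative only.
import numpy as np

def rank_one_edit(W, k, v_star):
    k = k / np.linalg.norm(k)
    # W' k = v_star, and W' u = W u for any u orthogonal to k.
    return W + np.outer(v_star - W @ k, k)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
k = rng.standard_normal(3)
v_star = rng.standard_normal(4)
W_new = rank_one_edit(W, k, v_star)
print(np.allclose(W_new @ (k / np.linalg.norm(k)), v_star))  # True
```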

 

In the talk, we challenge the notion that the internal calculations of a neural network must be hopelessly opaque. Instead, we propose to tear back the curtain and chart a path through the detailed structure of a deep network by which we can begin to understand its logic.

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact trefethen@maths.ox.ac.uk.

  • Computational Mathematics and Applications Seminar
  • Data Science Seminar
14 October 2021
12:00
Oliver O'Reilly

Further Information: 

Oliver M. O’Reilly is a professor in the Department of Mechanical Engineering and Interim Vice Provost for Undergraduate Education at the University of California at Berkeley. 

Research interests:

Dynamics, Vibrations, Continuum Mechanics

Key publications:

To view a list of Professor O’Reilly’s publications, please visit the Dynamics Lab website.

Abstract

In this talk, I will discuss a wide range of mechanical systems, including Hoberman's sphere, Euler's disk, a sliding cylinder, the Dynabee, BB-8, and Littlewood's hoop, and the research they inspired. Studies of the dynamics of the cylinder ultimately led to a startup company, while studying Euler's disk led to sponsored research with a well-known motorcycle company.

This talk is primarily based on research performed with a number of former students over the past three decades, including Prithvi Akella, Antonio Bronars, Christopher Daily-Diamond, Evan Hemingway, Theresa Honein, Patrick Kessler, Nathaniel Goldberg, Christine Gregg, Alyssa Novelia, and Peter Varadi.

  • Industrial and Applied Mathematics Seminar
14 October 2021
11:30
Abstract

Sela proved in 2006 that (non-abelian) free groups are stable. This implies the existence of a well-behaved forking independence relation and raises the natural question of giving an algebraic description of this model-theoretic notion in the free group. In joint work with Rizos Sklinos, we give such a description (in a standard finitely generated model F, over any set A of parameters) in terms of the JSJ decomposition of F over A, a geometric group-theoretic tool giving a presentation of F in terms of a graph of groups which encodes much information about its automorphism group relative to A. The main result states that two tuples of elements of F are forking independent over A if and only if they live in essentially disjoint parts of such a JSJ decomposition.

13 October 2021
16:00
Monika Kudlinska
Abstract

Given an arbitrary group presentation, often very little can be deduced about the underlying group. It is thus something of a miracle that many properties of one-relator groups can simply be read off from the defining relator. In this talk, I will discuss some of the classical results in the theory of one-relator groups, as well as the key trick used in many of their proofs. Time permitting, I'll also discuss more recent work on this subject, including some open problems.

  • Junior Topology and Group Theory Seminar
13 October 2021
14:00
Abstract

4d N=2 SCFTs are extremely important structures. In the first minitalk we will introduce them; then we will show three areas of mathematics with which this area of physics interacts. The minitalks are independent. The talk will be hybrid, with the Teams link below.

The junior Geometry and Physics seminar aims to bring together people from both areas, giving talks which are interesting and understandable to both.

Website: https://sites.google.com/view/oxfordpandg/physics-and-geometry-seminar

Teams link: https://www.google.com/url?q=https%3A%2F%2Fteams.microsoft.com%2Fl%2Fmee...

  • Junior Physics and Geometry Seminar
12 October 2021
15:30
Abstract

Free fermion chains are particularly simple exactly solvable models. Despite this, one can typically find closed expressions for physically important correlators only in certain asymptotic limits. For a particular class of chains, I will show that we can apply Day's formula and Gorodetsky's formula for Toeplitz determinants with rational generating functions. This leads to simple closed expressions for determinantal order parameters and the characteristic polynomial of the correlation matrix. The latter result allows us to prove that the ground state of the chain has an exact matrix-product state representation.
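
As a small numerical illustration of the objects involved (not of the formulas in the talk), one can assemble a Toeplitz matrix from the Fourier coefficients of a symbol and evaluate its determinant directly; the symbol below is a made-up placeholder:

```python
# Sketch: numerical Toeplitz determinant of T_n(f), with entries f_{j-k}
# given by Fourier coefficients of a symbol f. Placeholder symbol only.
import numpy as np
from scipy.linalg import toeplitz

def fourier_coeff(f, k, n_grid=4096):
    theta = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    return np.mean(f(theta) * np.exp(-1j * k * theta))

def toeplitz_det(f, n):
    col = np.array([fourier_coeff(f, k) for k in range(n)])   # f_0, f_1, ...
    row = np.array([fourier_coeff(f, -k) for k in range(n)])  # f_0, f_{-1}, ...
    return np.linalg.det(toeplitz(col, row))

f = lambda theta: 2 + np.cos(theta)  # placeholder symbol
for n in (4, 8, 16):
    print(n, toeplitz_det(f, n).real)
```

Formulas like Day's give such determinants in closed form for rational symbols, which a direct numerical evaluation of this kind can sanity-check.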

  • Random Matrix Theory Seminars
