Tue, 09 Jun 2020
14:00
Virtual

Markov Chains for Programmable Active Matter

Dana Randall
(Georgia Tech)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

Active matter describes ensembles of self-organizing agents, or particles, interacting with their local environments so that their micro-scale behavior determines macro-scale characteristics of the ensemble. While there has been a surge of activity exploring the physics underlying such systems, less attention has been paid to questions of how to program them to achieve desired outcomes. We will present some recent results designing programmable active matter for specific tasks, including aggregation, dispersion, speciation, and locomotion, building on insights from stochastic algorithms and statistical physics.

Tue, 02 Jun 2020
15:30
Virtual

Scaling exponents of step-reinforced random walks

Jean Bertoin
(University of Zurich)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

Let $X_1, X_2, \ldots$ be i.i.d. copies of some real random variable $X$. For any $\varepsilon_2, \varepsilon_3, \ldots$ in $\{0,1\}$, a basic algorithm introduced by H.A. Simon yields a reinforced sequence $\hat{X}_1, \hat{X}_2, \ldots$ as follows. If $\varepsilon_n=0$, then $\hat{X}_n$ is a uniform random sample from $\hat{X}_1, \ldots, \hat{X}_{n-1}$; otherwise $\hat{X}_n$ is a new independent copy of $X$. The purpose of this talk is to compare the scaling exponent of the usual random walk $S(n)=X_1 +\ldots + X_n$ with that of its step-reinforced version $\hat{S}(n)=\hat{X}_1+\ldots + \hat{X}_n$. Depending on the tail of $X$ and on the asymptotic behaviour of the sequence $\varepsilon_j$, we show that step reinforcement may speed up the walk, slow it down, or leave the scaling exponent unchanged. Our motivation partly stems from the study of random walks with memory, notably the so-called elephant random walk and its variations.
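Simon's reinforcement scheme as described above is easy to simulate. The following is a minimal sketch, not taken from the talk; the function name `step_reinforced_walk` and its interface are illustrative choices.

```python
import random

def step_reinforced_walk(n, step_dist, eps):
    """Simulate Simon's reinforcement scheme and the walk it drives.

    step_dist: a callable returning an independent copy of X.
    eps: the sequence epsilon_2, epsilon_3, ... in {0, 1}; a 0 means
    the next step repeats a uniformly chosen earlier step, a 1 means
    it is a fresh independent copy of X.
    Returns the partial sums S-hat(1), ..., S-hat(n).
    """
    steps = [step_dist()]                        # X-hat_1 is always a new copy
    for e in eps[: n - 1]:
        if e == 0:
            steps.append(random.choice(steps))   # reinforcement: repeat a past step
        else:
            steps.append(step_dist())            # innovation: fresh copy of X
    sums, s = [], 0.0
    for x in steps:
        s += x
        sums.append(s)
    return sums
```

With Rademacher steps (`step_dist = lambda: random.choice([-1.0, 1.0])`) and i.i.d. Bernoulli $\varepsilon_j$, this recovers a walk in the spirit of the elephant random walk mentioned at the end of the abstract.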

Tue, 02 Jun 2020
14:00
Virtual

An entropy proof of the Erdős-Kleitman-Rothschild theorem

Wojciech Samotij
(Tel Aviv)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

We say that a graph $G$ is $H$-free if $G$ does not contain $H$ as a (not necessarily induced) subgraph. For a positive integer $n$, denote by $\text{ex}(n,H)$ the largest number of edges in an $H$-free graph with $n$ vertices (the Turán number of $H$). The classical theorem of Erdős, Kleitman, and Rothschild states that, for every $r\geq3$, there are $2^{\text{ex}(n,K_r)+o(n^2)}$ many $K_r$-free graphs with vertex set $\{1,\ldots, n\}$. There exist (at least) three different derivations of this estimate in the literature: an inductive argument based on the Kővári-Sós-Turán theorem (and its generalisation to hypergraphs due to Erdős), a proof based on Szemerédi's regularity lemma, and an argument based on the hypergraph container theorems. In this talk, we present yet another proof of this bound that exploits connections between entropy and independence. This argument is an adaptation of a method developed in a joint work with Gady Kozma, Tom Meyerovitch, and Ron Peled that studied random metric spaces.
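The statement being proved can be sanity-checked by brute force for very small $n$ (this illustrates the count itself, not the entropy method). A minimal sketch; `ex_triangle` implements Mantel's theorem $\text{ex}(n,K_3)=\lfloor n^2/4\rfloor$, and the function names are illustrative.

```python
from itertools import combinations, product

def ex_triangle(n):
    """Mantel's theorem: ex(n, K_3) = floor(n^2 / 4)."""
    return n * n // 4

def count_Kr_free(n, r=3):
    """Brute-force count of K_r-free graphs on vertex set {0, ..., n-1}.

    Feasible only for tiny n: there are 2^{n(n-1)/2} graphs to scan.
    """
    edges = list(combinations(range(n), 2))
    count = 0
    for mask in product([0, 1], repeat=len(edges)):
        present = {e for e, bit in zip(edges, mask) if bit}
        # the graph is K_r-free iff no r-subset of vertices spans a clique
        has_clique = any(
            all(pair in present for pair in combinations(subset, 2))
            for subset in combinations(range(n), r)
        )
        if not has_clique:
            count += 1
    return count
```

For $n=4$, $r=3$ this yields 41 triangle-free graphs, squeezed between $2^{\text{ex}(4,K_3)} = 2^4 = 16$ and the total $2^6 = 64$, consistent with the $2^{\text{ex}(n,K_r)+o(n^2)}$ asymptotics.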

Tue, 26 May 2020
11:00
Virtual

Subgraph densities in a surface

David Wood
(Monash)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

We study the following question at the intersection of extremal and structural graph theory. Given a fixed graph $H$ that embeds in a fixed surface $\Sigma$, what is the maximum number of copies of $H$ in an $n$-vertex graph that embeds in $\Sigma$? Exact answers to this question are known for specific graphs $H$ when $\Sigma$ is the sphere. We aim for more general, albeit less precise, results. We show that the answer to the above question is $\Theta(n^{f(H)})$, where $f(H)$ is a graph invariant called the `flap-number' of $H$, which is independent of $\Sigma$. This simultaneously answers two open problems posed by Eppstein (1993). When $H$ is a complete graph we give more precise answers. This is joint work with Tony Huynh and Gwenaël Joret [https://arxiv.org/abs/2003.13777].

Tue, 26 May 2020
09:30
Virtual

The small subgraph conditioning method and hypergraphs

Catherine Greenhill
(UNSW)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

The small subgraph conditioning method is an analysis of variance technique which was introduced by Robinson and Wormald in 1992, in their proof that almost all cubic graphs are Hamiltonian. The method has been used to prove many structural results about random regular graphs, mostly to show that a certain substructure is present with high probability. I will discuss some applications of the small subgraph conditioning method to hypergraphs, and describe a subtle issue which is absent in the graph setting.

Mon, 15 Jun 2020

15:45 - 16:45
Virtual

Smooth Open-Closed Field Theories from Gerbes and D-Branes

Severin Bunk
(University of Hamburg)
Abstract

In this talk I will present results from an ongoing joint research program with Konrad Waldorf. Its main goal is to understand the relation between gerbes on a manifold M and open-closed smooth field theories on M. Gerbes can be viewed as categorified line bundles, and we will see how gerbes with connections on M and their sections give rise to smooth open-closed field theories on M. If time permits, we will see that the field theories arising in this way have several characteristic properties, such as invariance under thin homotopies, and that they carry positive reflection structures. From a physical perspective, our construction formalises the WZW amplitude as part of a smooth bordism-type field theory.

Fri, 12 Jun 2020

15:00 - 16:00
Virtual

Contagion Maps for Manifold Learning

Barbara Mahler
(University of Oxford)
Abstract

Contagion maps are a family of maps that send the nodes of a network to points in a high-dimensional space, based on the activation times in a threshold contagion on the network. A point cloud that is the image of such a map reflects both the structure underlying the network and the spreading behaviour of the contagion on it. Intuitively, such a point cloud exhibits features of the network's underlying structure if the contagion spreads along that structure, an observation which suggests contagion maps as a viable manifold-learning technique. We test contagion maps as a manifold-learning tool on several different data sets, and compare their performance to that of Isomap, one of the best-known manifold-learning algorithms. We find that, under certain conditions, contagion maps are able to reliably detect underlying manifold structure in noisy data when Isomap is prone to noise-induced error. This consolidates contagion maps as a technique for manifold learning.
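The construction in the abstract (one threshold contagion per seed, with each node mapped to its vector of activation times) can be sketched in a few lines. This is a minimal, hypothetical implementation, assuming synchronous Watts-style threshold dynamics, seeding at each node's closed neighbourhood, and an arbitrary cap of 2n on the activation time of nodes the contagion never reaches; the name `contagion_map` and the interface are illustrative.

```python
def contagion_map(adj, threshold):
    """Map each node to the vector of its activation times across
    contagions seeded at every node's closed neighbourhood.

    adj: dict mapping each node to a set of neighbours.
    threshold: a node activates once more than this fraction of its
    neighbours is active (synchronous threshold dynamics).
    """
    nodes = sorted(adj)
    n = len(nodes)
    points = {v: [0] * n for v in nodes}
    for k, seed in enumerate(nodes):
        active = {seed} | adj[seed]          # seed a whole neighbourhood cluster
        t = {v: (0 if v in active else None) for v in nodes}
        step, changed = 0, True
        while changed:
            step += 1
            newly = {
                v for v in nodes
                if t[v] is None and adj[v]
                and len(adj[v] & active) / len(adj[v]) > threshold
            }
            changed = bool(newly)
            for v in newly:
                t[v] = step
            active |= newly
        for v in nodes:                      # k-th coordinate of node v's point
            points[v][k] = t[v] if t[v] is not None else 2 * n
    return points
```

On a 6-cycle with threshold 0.4, for example, the contagion seeded at node 0 reaches the antipodal node 3 in two synchronous steps, so geodesic distances along the cycle are reflected in the resulting coordinates.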
