Tue, 26 May 2020
09:30
Virtual

The small subgraph conditioning method and hypergraphs

Catherine Greenhill
(UNSW)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

The small subgraph conditioning method is an analysis of variance technique which was introduced by Robinson and Wormald in 1992, in their proof that almost all cubic graphs are Hamiltonian. The method has been used to prove many structural results about random regular graphs, mostly to show that a certain substructure is present with high probability. I will discuss some applications of the small subgraph conditioning method to hypergraphs, and describe a subtle issue which is absent in the graph setting.

Mon, 15 Jun 2020

15:45 - 16:45
Virtual

Smooth Open-Closed Field Theories from Gerbes and D-Branes

Severin Bunk
(University of Hamburg)
Abstract

In this talk I will present results from an ongoing joint research program with Konrad Waldorf. Its main goal is to understand the relation between gerbes on a manifold M and open-closed smooth field theories on M. Gerbes can be viewed as categorified line bundles, and we will see how gerbes with connections on M and their sections give rise to smooth open-closed field theories on M. If time permits, we will see that the field theories arising in this way have several characteristic properties, such as invariance under thin homotopies, and that they carry positive reflection structures. From a physical perspective, our construction formalises the WZW amplitude as part of a smooth bordism-type field theory.

Fri, 12 Jun 2020

15:00 - 16:00
Virtual

Contagion Maps for Manifold Learning

Barbara Mahler
(University of Oxford)
Abstract

Contagion maps are a family of maps that send the nodes of a network to points in a high-dimensional space, based on the activation times in a threshold contagion on the network. A point cloud that is the image of such a map reflects both the structure underlying the network and the spreading behaviour of the contagion on it. Intuitively, such a point cloud exhibits features of the network's underlying structure if the contagion spreads along that structure, an observation which suggests contagion maps as a viable manifold-learning technique. We test contagion maps as a manifold-learning tool on several different data sets, and compare their performance to that of Isomap, one of the best-known manifold-learning algorithms. We find that, under certain conditions, contagion maps reliably detect underlying manifold structure in noisy data where Isomap is prone to noise-induced error. This consolidates contagion maps as a technique for manifold learning.
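The construction sketched in the abstract can be made concrete. The following is a minimal illustration (not the speaker's implementation) of a threshold contagion map, assuming a synchronous fractional-threshold update rule and one contagion seeded at each node together with its neighbours; all function names are mine.

```python
def run_contagion(adj, seeds, threshold):
    """Synchronous threshold contagion on a graph given as adjacency lists:
    an inactive node activates once the fraction of its active neighbours
    reaches `threshold`. Returns each node's activation time (inf if never)."""
    times = {v: float('inf') for v in adj}
    active = set(seeds)
    for s in seeds:
        times[s] = 0
    for t in range(1, len(adj) + 1):
        newly = {v for v in adj if v not in active and adj[v]
                 and sum(u in active for u in adj[v]) / len(adj[v]) >= threshold}
        if not newly:
            break
        for v in newly:
            times[v] = t
        active |= newly
    return times

def contagion_map(adj, threshold):
    """Run one contagion per node, seeded at that node and its neighbours;
    each node is mapped to its vector of activation times across contagions."""
    nodes = sorted(adj)
    runs = [run_contagion(adj, {s} | set(adj[s]), threshold) for s in nodes]
    return {v: [r[v] for r in runs] for v in nodes}

# On a 6-cycle the activation-time vectors recover the cyclic geometry.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
cloud = contagion_map(adj, 0.5)
```

With threshold 0.5 on the cycle, each contagion spreads one step per time unit in both directions, so a node's coordinates encode its graph distance to each seed cluster.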

Fri, 19 Jun 2020

15:00 - 16:00
Virtual

Of monks, lawyers and airports: a unified framework for equivalences in social networks

Nina Otter
(UCLA)
Abstract

One of the main concerns in social network science is the study of positions and roles. By "position" social scientists usually mean a collection of actors who have similar ties to other actors, while a "role" is a specific pattern of ties among actors or positions. Since the 1970s much research has been devoted to developing these concepts rigorously. An open question in the field is whether it is possible to perform role and positional analysis simultaneously. In joint work in progress with Mason Porter, we explore this question by proposing a framework that relies on the principle of functoriality in category theory. In this talk I will introduce role and positional analysis, present some well-studied examples from social network science, and discuss what new insights this framework might give us.

Fri, 29 May 2020

15:00 - 16:00
Virtual

Persistent Homology with Random Graph Laplacians

Tadas Temcinas
(University of Oxford)
Abstract


Eigenvalue-eigenvector pairs of combinatorial graph Laplacians are extensively used in graph theory and network analysis. It is well known that the spectrum of the Laplacian L of a given graph G encodes aspects of the geometry of G: the multiplicity of the eigenvalue 0 counts the number of connected components, while the second-smallest eigenvalue (called the Fiedler eigenvalue) quantifies the well-connectedness of G. In network analysis, one uses Laplacian eigenvectors associated with small eigenvalues to perform spectral clustering. In graph signal processing, graph Fourier transforms are defined in terms of an orthonormal eigenbasis of L. Eigenvectors of L also play a central role in graph neural networks.
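These spectral facts are easy to verify numerically. A minimal NumPy sketch (illustrative, not from the talk): for two disjoint triangles, the Laplacian eigenvalue 0 has multiplicity 2, one per connected component, so the Fiedler eigenvalue is also 0.

```python
import numpy as np

def laplacian_spectrum(A):
    """Eigen-decomposition of the combinatorial Laplacian L = D - A for a
    symmetric 0/1 adjacency matrix A; eigh returns eigenvalues in ascending order."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigh(L)

# Two disjoint triangles on vertices {0,1,2} and {3,4,5}.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0

vals, vecs = laplacian_spectrum(A)
num_components = int(np.sum(np.isclose(vals, 0.0)))  # multiplicity of eigenvalue 0
fiedler_value = vals[1]                              # second-smallest eigenvalue
```

Each triangle contributes Laplacian eigenvalues {0, 3, 3}, so the full spectrum is {0, 0, 3, 3, 3, 3}: two zeros for two components, and a zero Fiedler value signalling a disconnected graph.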

Motivated by this, we study eigenvalue-eigenvector pairs of Laplacians of random graphs and their potential use in topological data analysis (TDA). I will present simulation results on what persistent homology barcodes of Bernoulli random graphs G(n, p) look like when we use Laplacian eigenvectors as filter functions. I will also discuss the conjectures suggested by these simulations, as well as the challenges that arise when trying to prove them. This is work in progress.
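A hedged sketch of the kind of pipeline the abstract describes (my own illustration, not the speaker's code): sample a Bernoulli random graph, take the Fiedler eigenvector of its Laplacian as a vertex filter, and compute the degree-0 barcode of the sublevel-set filtration with a union-find elder rule. All function names are assumptions.

```python
import random
import numpy as np

def bernoulli_graph(n, p, rng):
    """Adjacency matrix of G(n, p): each edge present independently with prob. p."""
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                A[i, j] = A[j, i] = 1.0
    return A

def fiedler_filter(A):
    """Entries of the Laplacian's second eigenvector, used as vertex filter values."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1]

def sublevel_barcode_dim0(filter_vals, edges):
    """Degree-0 persistence of the sublevel-set filtration: vertex v enters at
    filter_vals[v], an edge at the max over its endpoints; when two components
    merge, the younger one dies (elder rule)."""
    parent = list(range(len(filter_vals)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    birth = list(filter_vals)
    bars = []
    for u, v in sorted(edges, key=lambda e: max(filter_vals[e[0]], filter_vals[e[1]])):
        t = max(filter_vals[u], filter_vals[v])
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if birth[ru] > birth[rv]:    # keep the older component's root
            ru, rv = rv, ru
        bars.append((birth[rv], t))  # the younger component dies at time t
        parent[rv] = ru
    for v in range(len(filter_vals)):  # one essential bar per component
        if find(v) == v:
            bars.append((birth[v], float('inf')))
    return bars

# A small G(n, p) sample, filtered by its Fiedler eigenvector.
rng = random.Random(0)
A = bernoulli_graph(30, 0.2, rng)
edges = [(i, j) for i in range(30) for j in range(i + 1, 30) if A[i, j]]
barcode = sublevel_barcode_dim0(fiedler_filter(A), edges)
```

On a path 0-1-2-3 with filter values [0.0, 2.0, 1.0, 3.0], this produces one finite bar (1.0, 2.0), from the component born at vertex 2 that merges when the edge (1, 2) appears, plus one essential bar (0.0, inf) for the single component.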
 
