Tue, 03 Nov 2020

14:00 - 15:00
Virtual

FFTA: A bi-directional approach to comparing the modular structure of networks

Mattie Landman
(Mathematical Institute)
Abstract

Here we propose a new method to compare the modular structure of a pair of node-aligned networks. The majority of current methods, such as normalized mutual information, compare two node partitions derived from a community detection algorithm yet ignore the respective underlying network topologies. Addressing this gap, our method deploys a community detection quality function to assess the fit of each node partition with respect to the other network's connectivity structure. Specifically, for two networks A and B, we project the node partition of B onto the connectivity structure of A. By evaluating the fit of B's partition relative to A's own partition on network A (using a standard quality function), we quantify how well network A describes the modular structure of B. Repeating this in the other direction, we obtain a two-dimensional distance measure, the bi-directional (BiDir) distance. The advantages of our methodology are threefold. First, it is adaptable to a wide class of community detection algorithms that seek to optimize an objective function. Second, it takes into account the network structure, specifically the strength of the connections within and between communities, and can thus capture differences between networks with similar partitions but where one might have a more defined or robust community structure. Third, it can also identify cases in which dissimilar optimal partitions hide the fact that the underlying community structure of both networks is relatively similar. We illustrate our method for a variety of community detection algorithms, including multi-resolution approaches, and a range of both simulated and real-world networks.
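The projection step admits a compact sketch. Assuming modularity as the quality function (the abstract allows any objective-based quality function), one direction of the BiDir distance is the modularity gap on A between A's own partition and B's projected partition; the function below is illustrative, not the authors' code:

```python
import networkx as nx
from networkx.algorithms.community import modularity

def bidir_distance(G_A, G_B, part_A, part_B):
    """Sketch of the bi-directional (BiDir) distance.

    G_A, G_B are node-aligned graphs; part_A, part_B are lists of node
    sets partitioning the shared vertex set."""
    # How much worse does B's partition fit A's topology than A's own?
    d_AB = modularity(G_A, part_A) - modularity(G_A, part_B)
    # ...and the same question in the other direction.
    d_BA = modularity(G_B, part_B) - modularity(G_B, part_A)
    return d_AB, d_BA
```

As a sanity check, comparing a network with itself under its own partition gives the zero vector (0.0, 0.0), and a large value in only one coordinate signals the asymmetric situations the abstract describes.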

Tue, 17 Nov 2020

14:00 - 15:00
Virtual

FFTA: Causal Network Motifs: Identifying Heterogeneous Spillover Effects in A/B Tests

Yuan Yuan and Kristen M. Altenburger
(MIT and Facebook)
Abstract

Randomized experiments, or "A/B" tests, remain the gold standard for evaluating the causal effect of a policy intervention or product change. However, experimental settings such as social networks, where users are interacting and influencing one another, violate the conventional no-interference assumption needed for credible causal inference. Existing solutions include accounting for the fraction or count of treated neighbors in a user's network, among other strategies. Yet, there is often a large number of researcher degrees of freedom in specifying network interference conditions, and most current methods do not account for the local network structure beyond simply counting the number of neighbors. Capturing local network structure is important because it can account for theories such as structural diversity and echo chambers. Our study provides an approach that accounts for both the local structure in a user's social network, via motifs, and the assignment conditions of neighbors. We propose a two-part approach: we first introduce and employ "causal network motifs," i.e., network motifs that characterize the assignment conditions in local ego networks; we then propose a tree-based algorithm for identifying different network interference conditions and estimating their average potential outcomes. We test our method on a real-world experiment on a large-scale network and in a synthetic network setting, which highlights how accounting for local structure can better capture different interference patterns in networks.
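As a rough illustration of the motif-counting idea (a toy sketch, not the authors' implementation — the key names and the restriction to pairs of neighbors are assumptions), each pair of neighbors in an ego network can be labelled by its treatment assignments and by whether the pair closes a triangle with the ego:

```python
from itertools import combinations
import networkx as nx

def ego_motif_counts(G, ego, treated):
    """Toy tally of treatment-labelled motifs in an ego network.

    `treated` is the set of treated nodes; dictionary keys are illustrative."""
    nbrs = list(G.neighbors(ego))
    counts = {"nbr_treated": 0, "nbr_control": 0}
    for kind in ("closed", "open"):
        for lab in ("_both_treated", "_mixed", "_both_control"):
            counts[kind + lab] = 0
    # single-neighbor motifs: treated vs. control neighbors
    for v in nbrs:
        counts["nbr_treated" if v in treated else "nbr_control"] += 1
    # pair motifs: does the pair form a triangle with the ego, and
    # how many of the two neighbors are treated?
    for u, v in combinations(nbrs, 2):
        kind = "closed" if G.has_edge(u, v) else "open"
        lab = {2: "_both_treated", 1: "_mixed",
               0: "_both_control"}[(u in treated) + (v in treated)]
        counts[kind + lab] += 1
    return counts
```

Such count vectors, normalised per ego, could then serve as the features on which a tree-based procedure partitions users into interference conditions.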

Wed, 25 Nov 2020
10:00
Virtual

Veering Triangulations, the Teichmüller Polynomial and the Alexander Polynomial

Anna Parlak
(University of Warwick)
Abstract

Veering triangulations are a special class of ideal triangulations with a rather mysterious combinatorial definition. Their importance follows from a deep connection with pseudo-Anosov flows on 3-manifolds. Recently Landry, Minsky and Taylor introduced a polynomial invariant of veering triangulations called the taut polynomial. It is a generalisation of an older invariant, the Teichmüller polynomial, defined by McMullen in 2002.

The aim of my talk is to demonstrate that veering triangulations provide a convenient setup for computations. More precisely, I will use fairly easy arguments to obtain a fairly strong statement which generalises the results of McMullen relating the Teichmüller polynomial to the Alexander polynomial.

I will not assume any prior knowledge of the Alexander polynomial, the Teichmüller polynomial or veering triangulations.

Wed, 18 Nov 2020
16:00
Virtual

Introduction to left-orderable groups and formal languages

Hang Lu Su
(ICMAT Madrid)
Abstract


I will introduce left-orderable groups and discuss constructions and examples of such groups. I will then motivate studying left-orders under the framework of formal languages and discuss some recent results.
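For orientation, the standard definition the talk starts from (not specific to the speaker's results) is that a group $G$ is left-orderable if it admits a total order invariant under left multiplication:

```latex
% G is left-orderable if there is a total order < on G such that
g < h \;\Longrightarrow\; fg < fh \qquad \text{for all } f, g, h \in G.
% Example: (\mathbb{Z}, +) with its usual order, since g < h implies
% f + g < f + h; more generally, every torsion-free abelian group is
% left-orderable.
```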

Wed, 11 Nov 2020
10:00
Virtual

Extending Leighton's Graph Covering Theorem

Sam Shepherd
(University of Oxford)
Abstract

Leighton's Theorem states that if two finite graphs have a common universal cover then they have a common finite cover. I will explore various ways in which this result can and can't be extended.

Fri, 30 Oct 2020
14:00
Virtual

Classifying Superconformal Defects in Diverse Dimensions

Yifan Wang
(Harvard)
Abstract

We explore general constraints from unitarity, defect superconformal symmetry and locality of bulk-defect couplings to classify possible superconformal defects in superconformal field theories (SCFTs) of spacetime dimension d>2. Despite the general absence of locally conserved currents, the defect CFT contains new distinguished operators with protected quantum numbers that account for the broken bulk symmetries. Consistency with the preserved superconformal symmetry and unitarity requires that such operators arrange into unitary multiplets of the defect superconformal algebra, which in turn leads to nontrivial constraints on what kinds of defects are admissible in a given SCFT. In this talk we will focus on the case of superconformal lines and comment on several interesting implications of our analysis, such as symmetry-enforced defect conformal manifolds, defect RG flows and possible nontrivial one-form symmetries in various SCFTs.

Fri, 20 Nov 2020

12:00 - 13:00

Selection Dynamics for Deep Neural Networks

Peter Markowich
(KAUST)
Abstract

We present a partial differential equation framework for deep residual neural networks and for the associated learning problem. This is done by carrying out the continuum limits of neural networks with respect to width and depth. We study the well-posedness, the large-time solution behavior, and the characterization of the steady states of the forward problem. Several useful time-uniform estimates and stability/instability conditions are presented. We state and prove optimality conditions for the inverse deep learning problem, using standard variational calculus, the Hamilton-Jacobi-Bellman equation and the Pontryagin maximum principle. This serves to establish a mathematical foundation for investigating the algorithmic and theoretical connections between neural networks, PDE theory, variational analysis, optimal control, and deep learning.

This is based on joint work with Hailiang Liu.
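The depth continuum limit can be illustrated numerically: a residual block x_{k+1} = x_k + (1/K) f(x_k, theta_k) is a forward-Euler step for the ODE dx/dt = f(x, theta(t)) on [0, 1], so increasing the depth refines the discretisation of one fixed dynamics. The toy layer map f below is an assumption for illustration only, not the model of the talk:

```python
import numpy as np

def f(x, theta):
    # toy residual map; in the talk f is a general trainable layer (assumption)
    return np.tanh(theta * x)

def resnet_forward(x, theta_fn, depth):
    # residual network x_{k+1} = x_k + (1/depth) * f(x_k, theta_k),
    # i.e. forward-Euler integration of dx/dt = f(x, theta(t)) on [0, 1]
    h = 1.0 / depth
    for k in range(depth):
        x = x + h * f(x, theta_fn(k * h))
    return x

x0 = np.array([0.5])
theta = lambda t: 1.0 + t          # smooth depth-varying weights (illustrative)
deep = resnet_forward(x0, theta, 200)
deeper = resnet_forward(x0, theta, 5000)
# the two outputs approximate the same continuum ODE flow
```

The near-agreement of the two outputs is the discrete shadow of the depth limit studied in the talk.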

Fri, 13 Nov 2020

12:00 - 13:00

Computational Hardness of Hypothesis Testing and Quiet Plantings

Afonso Bandeira
(ETH Zurich)
Abstract

When faced with a data analysis, learning, or statistical inference problem, the amount and quality of data available fundamentally determines whether such tasks can be performed with certain levels of accuracy. With the growing size of datasets, however, it is crucial not only that the underlying statistical task is possible, but also that it is doable by means of efficient algorithms. In this talk we will discuss methods aiming to establish limits of when statistical tasks are possible with computationally efficient methods, or when there is a fundamental "statistical-to-computational gap" in which an inference task is statistically possible but inherently computationally hard. We will focus on hypothesis testing and the "Low Degree Method", and also address hardness of certification via "quiet plantings". Guiding examples will include Sparse PCA, bounds on the Sherrington-Kirkpatrick Hamiltonian, and lower bounds on chromatic numbers of random graphs.
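In rough terms (this is the standard formulation in the low-degree literature, not taken verbatim from the talk), the Low Degree Method compares the planted distribution $P_n$ against the null $Q_n$ through the norm of the degree-$D$ projection of the likelihood ratio:

```latex
\bigl\lVert L_n^{\le D} \bigr\rVert
  \;=\; \max_{\substack{f \,:\; \deg f \le D \\ \mathbb{E}_{Q_n}[f^2] \le 1}}
        \mathbb{E}_{P_n}[f].
```

If this quantity remains $O(1)$ as $n \to \infty$ for $D$ growing mildly (e.g. polylogarithmically in $n$), then no degree-$D$ polynomial test distinguishes $P_n$ from $Q_n$, which is conjectured to indicate hardness for all polynomial-time tests — the statistical-to-computational gap of the abstract.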

Fri, 06 Nov 2020

12:00 - 13:00

Bridging GANs and Stochastic Analysis

Haoyang Cao
(Alan Turing Institute)
Abstract

Generative adversarial networks (GANs) have enjoyed tremendous success in image generation and processing, and have recently attracted growing interest in other fields of application. In this talk we will start by analyzing the connection between GANs and mean field games (MFGs) as well as optimal transport (OT). We will first show a conceptual connection between GANs and MFGs: MFGs have the structure of GANs, and GANs are MFGs under the Pareto optimality criterion. Interpreting MFGs as GANs, on one hand, will enable a GANs-based algorithm (MFGANs) to solve MFGs: one neural network (NN) for the backward Hamilton-Jacobi-Bellman (HJB) equation and one NN for the Fokker-Planck (FP) equation, with the two NNs trained in an adversarial way. Viewing GANs as MFGs, on the other hand, will reveal a new, probabilistic aspect of GANs. This new perspective, moreover, will lead to an analytical connection between GANs and OT problems, and to sufficient conditions for the minimax games of GANs to be reformulated in the framework of OT. Building on the probabilistic view of GANs, we will then establish an approximation of GANs training via stochastic differential equations and demonstrate the convergence of GANs training via invariant measures of SDEs under proper conditions. This stochastic analysis of GANs training can serve as an analytical tool to study its evolution and stability.
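For context, the minimax game in question is the classical GAN objective, and the "SDE approximation of training" refers to modelling the noisy gradient updates of the two networks as a diffusion (the displayed SDE is the generic form of such approximations, not the speaker's precise coupled system):

```latex
% classical GAN minimax objective over generator G and discriminator D
\min_{G}\,\max_{D}\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z}\!\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr].
% stochastic-gradient training of the parameters \theta, with learning
% rate \eta and gradient-noise covariance \Sigma, approximated by an SDE:
d\theta_t \;=\; -\,\nabla J(\theta_t)\,dt
  \;+\; \sqrt{\eta}\,\Sigma(\theta_t)^{1/2}\, dW_t.
```

Convergence of training then corresponds to convergence of the law of $\theta_t$ to an invariant measure of this diffusion.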
