New perspectives for higher-order methods in convex optimisation
This colloquium is the annual Maths-Stats colloquium, held jointly with the Statistics department.
The statistical analysis of data which lies in a non-Euclidean space has become increasingly common over the last decade, starting from the point of view of shape analysis, but also driven by a number of novel application areas. However, while this analysis has taken a number of interesting avenues, particularly around positive definite matrix data and data which lies in function spaces, it has increasingly raised more questions than answers. In this talk, I'll introduce some non-Euclidean data from applications in brain imaging and in linguistics, but spend considerable time asking questions where I hope the interaction of statistics and topological data analysis (understood broadly) could start to bring understanding to the applications themselves.
The discrete Fourier transform is fundamental in modern communication systems. It is used to generate and process (i.e. modulate and demodulate) the signals transmitted in 4G, 5G, and wifi systems, and is always implemented by one of the fast Fourier transform (FFT) algorithms. It is possible to generalize the FFT to work correctly on input vectors with periodic missing values. I will consider whether this has applications, such as more general transmitted signal waveforms, or spectral density estimation for time series with missing data. More speculatively, can we generalize to "recursive" missing values, where the non-missing blocks themselves have gaps? If so, how do we optimally recognize such a pattern in a given time series?
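As a rough illustration of the underlying idea (my own sketch, not the construction discussed in the talk), the following Python snippet uses the standard Cooley–Tukey decimation identity to assemble the full-length DFT from small FFTs of only the observed residue classes modulo p; the function name `dft_with_periodic_gaps` and the zero-fill sanity check are invented for this illustration.

```python
import numpy as np

def dft_with_periodic_gaps(x, present, p):
    """Assemble the N-point DFT of x from small FFTs of the observed
    residue classes modulo p (Cooley-Tukey decimation in time); residue
    classes not listed in `present` are treated as missing (zero)."""
    N = len(x)
    M = N // p                                # length of each residue class
    k = np.arange(N)
    X = np.zeros(N, dtype=complex)
    for r in present:
        Xr = np.fft.fft(x[r::p])              # small FFT on one observed class
        X += np.exp(-2j * np.pi * r * k / N) * Xr[k % M]   # twiddle and recombine
    return X

# Sanity check against zero-filling the missing samples and doing a full FFT.
N, p = 16, 4
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
present = [0, 2, 3]                           # residue class 1 is missing
x_zeroed = x.copy()
x_zeroed[1::p] = 0.0
assert np.allclose(dft_with_periodic_gaps(x, present, p), np.fft.fft(x_zeroed))
```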
Melanie Weber
Title: Geometric Methods for Machine Learning and Optimization
Abstract: A key challenge in machine learning and optimization is the identification of geometric structure in high-dimensional data. Such structural understanding is of great value for the design of efficient algorithms and for developing fundamental guarantees for their performance. Motivated by the observation that many applications involve non-Euclidean data, such as graphs, strings, or matrices, we discuss how Riemannian geometry can be exploited in machine learning and optimization. First, we consider the task of learning a classifier in hyperbolic space. Such spaces have received a surge of interest for representing large-scale, hierarchical data, since they achieve better representation accuracy with fewer dimensions. Second, we consider the problem of optimizing a function on a Riemannian manifold. Specifically, we will consider classes of optimization problems where exploiting Riemannian geometry can deliver algorithms that are computationally superior to standard (Euclidean) approaches.
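To make the Riemannian optimization idea concrete, here is a minimal Python sketch (my own illustration, not code from the talk) of Riemannian gradient descent on the unit sphere: it minimises the Rayleigh quotient x^T A x by projecting the Euclidean gradient onto the tangent space and retracting back onto the sphere. The function `riemannian_gd_sphere` and its parameters are hypothetical names.

```python
import numpy as np

def riemannian_gd_sphere(A, x0, iters=3000):
    """Minimise f(x) = x^T A x over the unit sphere by Riemannian gradient
    descent: project the Euclidean gradient onto the tangent space at x,
    take a step, then retract back onto the sphere by normalisation."""
    step = 0.5 / np.linalg.norm(A, 2)         # conservative step from the smoothness of f
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2.0 * A @ x                   # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x       # tangent-space projection
        x = x - step * rgrad
        x = x / np.linalg.norm(x)             # retraction onto the sphere
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T                                   # symmetric positive semidefinite
x = riemannian_gd_sphere(A, rng.standard_normal(5))
print(x @ A @ x, np.linalg.eigvalsh(A)[0])    # the two values should be close
```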
Francesca Panero
Title: A general overview of the different projects explored during my DPhil in Statistics.
Abstract: In the first half of the talk, I will present my work on statistical models for complex networks. I will propose a model for sparse spatial random graphs underpinned by Bayesian nonparametric theory, and present asymptotic properties of a more general class of these models concerning sparsity, degree distribution and clustering coefficients.
The second half will be devoted to the statistical quantification of the risk of disclosure, a quantity used to evaluate the level of privacy that can be achieved by publishing a microdata file without modifications. I propose two ways to estimate the risk of disclosure, using both frequentist and Bayesian nonparametric statistics.
*Note the different room location (L2) from the usual Fridays@4 sessions*
This week is Mental Health Awareness Week. To mark this, Rebecca Reed from Siendo will deliver a session on mental health and wellbeing. The session will cover the following:
- The importance of finding a balance between achievement and managing stress and pressure.
- Coping mechanisms for working with stresses at work in a positive way (not seeing all stress as bad).
- The difficulties faced in the HE environment, such as the uncertainty felt within jobs and research, combined with the high expectations and workload.
Please join us to celebrate International Women’s Day on Tuesday the 8th of March.
To address this year’s theme - Break the Bias - we will be hosting two sessions in Lecture Theatre 2:
1-2.30pm - How Women Rise in Professional Services, a focus on gender equality from the perspective of Professional Services Staff
2.45-5pm - A screening of 'Picture A Scientist' and panel discussion
5pm – Drinks reception
Holographic correlation functions are under good analytic control when none of the single trace operators live in long multiplets. This is famously the case for SCFTs with sixteen supercharges, but it is also possible to construct examples with eight supercharges by exploiting space-filling branes in AdS. In particular, one can study 4d N=2 theories which are related to each other by an S-fold, in much the same way that N=3 theories are related to N=4 Super Yang-Mills. I will describe how modern methods provide a window into their correlation functions, with an emphasis on anomalous dimensions. To compare the different S-folds we will need to go to one loop, and to go to one loop we will need to account for operator mixing. This provides an example of extracting anomalous dimensions by resolving the degeneracy associated with operator mixing.
I will review aspects of the theory of symmetry-protected topological phases, focusing on the case of one-dimensional quantum chains. Important concepts include the bulk-boundary correspondence, with bulk topological invariants leading to interesting boundary phenomena. I will discuss topological invariants and associated boundary phenomena in the case that the system is gapless and described at low energies by a conformal field theory. Based on work with Ruben Verresen, Ryan Thorngren and Frank Pollmann.
Topological data analysis is starting to establish itself as a powerful and effective framework in machine learning, supporting the analysis of neural networks, but also driving the development of novel algorithms that incorporate topological characteristics. As a problem class, graph representation learning is of particular interest here, since graphs are inherently amenable to a topological description in terms of their connected components and cycles. This talk will provide an overview of how to address graph learning tasks using machine learning techniques, with a specific focus on how to make such techniques 'topology-aware.' We will discuss how to learn filtrations for graphs and how to incorporate topological information into modern graph neural networks, resulting in provably more expressive algorithms. This talk aims to be accessible to an audience of TDA enthusiasts; prior knowledge of machine learning is helpful but not required.
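As a small, self-contained illustration of what a graph filtration looks like (not taken from the talk), the sketch below computes 0-dimensional persistence pairs for a degree-based sublevel filtration of a toy graph, using a union-find and the elder rule; the function `zero_dim_persistence` and the toy graph are invented for illustration.

```python
def zero_dim_persistence(vertices, edges, value):
    """0-dimensional persistence of a sublevel filtration on a graph:
    a vertex enters at value[v], an edge enters when its later endpoint
    does, and a union-find with the elder rule records when components
    merge (the essential component is not reported)."""
    parent = {v: v for v in vertices}
    birth = dict(value)                       # birth time of each component root

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]     # path compression
            v = parent[v]
        return v

    pairs = []
    for u, v in sorted(edges, key=lambda e: max(value[e[0]], value[e[1]])):
        t = max(value[u], value[v])           # time at which the edge appears
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                          # edge closes a cycle: no 0-dim event
        if birth[ru] > birth[rv]:
            ru, rv = rv, ru                   # elder rule: the younger root dies
        pairs.append((birth[rv], t))
        parent[rv] = ru
    return pairs

# Toy example: a path 0-1-2-3 with a pendant vertex 4 attached to 2,
# filtered by vertex degree.
edges = [(0, 1), (1, 2), (2, 3), (2, 4)]
vertices = range(5)
degree = {v: sum(v in e for e in edges) for v in vertices}
print(zero_dim_persistence(vertices, edges, degree))
```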