15:00
Automata and algebraic structures
LMS-NZMS Aitken Lecture 2019
Abstract
Automatic structures are algebraic structures, such as graphs, groups
and partial orders, that can be presented by automata. By varying the
classes of automata (e.g. finite automata, tree automata, omega-automata)
one varies the classes of automatic structures. The class of all automatic
structures is robust in the sense that it is closed under many natural
algebraic and model-theoretic operations.
In this talk, we give formal definitions of
automatic structures, motivate their study, present many examples, and
explain several fundamental theorems. Some results in the area
are deeply connected with algebra, additive combinatorics, set theory,
and complexity theory.
We then motivate and pose several important unresolved questions in the
area.
On the circulation structures in traditional Chinese algorithms
Abstract
It is unnecessary to emphasize the important place of algorithms in computer science. Many efficient and convenient algorithms have been designed by borrowing from or revising ancient mathematical algorithms and methods: for example, the recursive method, exhaustive search, the greedy method, “divide and conquer”, dynamic programming, iteration, and circulation (loop) algorithms, among others.
From the perspective of the history of computer science, it is necessary to study the history of the algorithms used in computer computations. This history is naturally regarded as a sub-field of the history of mathematics. But historians of mathematics, at least those who study the history of mathematics in China, have not recognized its importance: they have paid little attention to these studies, largely because they have not considered this research angle. Relevant research is therefore insufficient in the field of the history of mathematics.
The mechanization and algorithmization characteristic of traditional Chinese (and therefore East Asian) mathematics, however, coincides with that of computer science. Traditional Chinese algorithms therefore have important historical significance for computer science. It is necessary and important to survey traditional algorithms anew from the point of view of computer science; this also offers another angle for understanding traditional Chinese mathematics.
There are many questions in this field that need to be researched. For example: when and how were these algorithms designed? What was their mathematical background? How were they applied in their ancient mathematical context? What are the complexity and efficiency of these ancient algorithms?
In the present paper, we study the circulation (loop) structures in traditional Chinese mathematical algorithms. Loop structures are of great importance in computer science: most algorithms are designed by means of one or more loops. Ancient Chinese mathematicians were familiar with loop structures and skilled in their application, and they designed many such structures to obtain the desired results in mathematical computations. The loop structures of a dozen ancient algorithms will be analyzed, selected from mathematical and astronomical treatises, with one also taken from the Yijing (Book of Changes), the oldest of the Chinese classics.
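As a concrete illustration (an example of ours, not necessarily one of the paper's case studies), one of the best-known loop structures in the Chinese mathematical tradition is the mutual-subtraction rule from the Nine Chapters on the Mathematical Art, used to reduce fractions; a minimal Python sketch:

```python
def mutual_subtraction(a: int, b: int) -> int:
    """Greatest common divisor by repeated mutual subtraction:
    the loop at the heart of the ancient fraction-reduction rule."""
    while a != b:
        if a > b:
            a -= b          # subtract the smaller from the larger
        else:
            b -= a
    return a                # the "equal number" divides both inputs

# Reducing 49/91: the common divisor is 7, so the fraction becomes 7/13.
print(mutual_subtraction(91, 49))  # 7
```

The single while loop is the whole algorithm, which is what makes such rules natural examples of the circulation structures discussed above.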
Global analytic geometry and Hodge theory
Abstract
In this talk I will describe how to make sense of the function $(1+t)^x$ over the integers. I will explain how different rings of analytic functions can be defined over the integers, and how this leads to global analytic geometry and global Hodge theory. If time permits I will also describe an analytic version of lambda-rings and how this can be used to define a cohomology theory for schemes over Z. This is joint work with Federico Bambozzi and Adam Topaz.
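As a hint at what "making sense of $(1+t)^x$ over the integers" can mean (a standard observation, not necessarily the construction of the talk), recall the binomial series and the integrality of its coefficients at integer exponents:

```latex
(1+t)^x \;=\; \sum_{k \ge 0} \binom{x}{k}\, t^k,
\qquad
\binom{x}{k} \;=\; \frac{x(x-1)\cdots(x-k+1)}{k!} \;\in\; \mathbb{Z}
\quad \text{for } x \in \mathbb{Z}.
```

The coefficients are integer-valued polynomials in $x$, and one may then ask which rings of analytic functions over the integers accommodate such series, which is where the rings mentioned in the abstract enter.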
CRICKET MATCH - Mathematical Institute
John Bush - Walking on water: from biolocomotion to quantum foundations
In this lecture John Bush will present seemingly disparate research topics which are in fact united by a common theme and underlaid by a common mathematical framework.
First there is the ingenuity of the natural world where living creatures use surface tension to support themselves on the water surface and propel
themselves along it. Then there is a system discovered by Yves Couder only fifteen years ago, in which a small droplet bounces along the surface of a vibrating liquid bath, guided or 'piloted' by its own wave field. Its ability to reproduce many features previously thought to be exclusive to quantum systems has launched the field of hydrodynamic quantum analogs, and motivated a critical revisitation of the philosophical foundations of quantum mechanics.
John Bush is a Professor of Applied Mathematics in the Department of Mathematics at MIT specialising in fluid dynamics.
5.00pm-6.00pm, Mathematical Institute, Oxford
Please email @email to register
Watch live:
https://facebook.com/OxfordMathematics
https://livestream.com/oxuni/bush
Oxford Mathematics Public Lectures are generously supported by XTX Markets.
15:45
Derived modular functors
Abstract
For a semisimple modular tensor category the Reshetikhin-Turaev construction yields an extended three-dimensional topological field theory and hence, by restriction, a modular functor. By work of Lyubashenko-Majid, the construction of a modular functor from a modular tensor category remains possible in the non-semisimple case. We explain that the latter construction is the shadow of a derived modular functor featuring homotopy coherent mapping class group actions on chain complex valued conformal blocks and a version of factorization and self-sewing via homotopy coends. On the torus we find a derived version of the Verlinde algebra, an algebra over the little disk operad (or more generally a little bundles algebra in the case of equivariant field theories). The concepts will be illustrated for modules over the Drinfeld double of a finite group in finite characteristic. This is joint work with Christoph Schweigert (Hamburg).
Higher Segal spaces and lax A-infinity structure
Abstract
The notion of a higher Segal object was introduced by Dyckerhoff and Kapranov as a general framework for studying (higher) associativity inherent in a wide range of mathematical objects. Most of the examples are related to Hall algebra type constructions, which include quantum groups. We describe a construction that assigns to a simplicial object S a datum H(S) which is naturally interpreted as a "d-lax A-infinity algebra" precisely when S is a (d+1)-Segal object. This extends the extensively studied d=2 case.
OCIAM @ 30 years - PROGRAMME RELEASED
Please register here
OCIAM was created in 1989, when Alan Tayler, the first director, moved with a group of applied mathematicians into the annex of the Mathematical Institute in Dartington House.
To celebrate our 30th anniversary we have invited twenty speakers, all of whom have spent time in OCIAM, to talk on some of the many aspects of work generated by the group.
This programme will build on the success of 'Mathematics in the Spirit of Joe Keller', hosted by the Isaac Newton Institute, Cambridge, in 2017.
Programme
The scientific talks commence on Monday 24th June and finish early afternoon on Tuesday 25th June, with lunch served on both days.
There will be a conference dinner on Monday evening at Somerville College, and on Tuesday afternoon the Mathematical Institute cricket match and BBQ at Merton College Pavilion, to which everyone is invited.
North meets South colloquium
Abstract
Aden Forrow
Optimal transport and cell differentiation
Abstract
Optimal transport is a rich theory for comparing distributions, with both deep mathematics and applications ranging from 18th century fortification planning to computer graphics. I will tie its mathematical story to a biological one, on the differentiation of cells from pluripotency to specialized functional types. First, the mathematics can support the biology: optimal transport is an apt tool for linking experimental samples across a developmental time course. Then the biology can inspire new mathematics: based on the branching structure expected in differentiation pathways, we can find a regularization method that dramatically improves the statistical performance of optimal transport.
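As a toy illustration of comparing distributions (our example, not the speaker's): in one dimension, with equal-size samples and squared-distance cost, the optimal coupling simply matches points in sorted order, so the transport cost reduces to a sort:

```python
import numpy as np

def ot_cost_1d(x, y):
    """Squared-distance optimal transport cost between two equal-size
    1-D samples: in 1-D the optimal coupling matches sorted order."""
    x, y = np.sort(x), np.sort(y)
    return float(np.mean((x - y) ** 2))

rng = np.random.default_rng(0)
early = rng.normal(0.0, 1.0, size=1000)   # sample at an early time point
late = rng.normal(2.0, 1.0, size=1000)    # sample at a later time point
print(ot_cost_1d(early, late))            # roughly the squared mean shift
```

In higher dimensions the coupling is no longer given by sorting and one solves a linear program instead, which is where the deeper mathematics alluded to above begins.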
Paul Ziegler
Geometry and Arithmetic
Abstract
For a family of polynomials in several variables with integral coefficients, the Weil conjectures give a surprising relationship between the geometry of the complex-valued roots of these polynomials and the number of roots of these polynomials "modulo p". I will give an introduction to this circle of results and try to explain how they are used in modern research.
Smoothness of Persistence
Abstract
We can see the simplest setting of persistence from a functional point of view: given a fixed finite simplicial complex, we have the barcode function which, given a filter function over this complex, returns the corresponding persistence diagram. The bottleneck distance induces a topology on the space of persistence diagrams and makes the barcode function a continuous map: this is a consequence of the stability theorem. In this presentation, I will describe ongoing work that seeks to deepen our understanding of the analytic properties of the barcode function, in particular whether it can be said to be smooth. Namely, if we smoothly vary the filter function, do we get smooth changes in the resulting persistence diagram? I will introduce a notion of differentiability/smoothness for barcode-valued maps, and then explain why the barcode function is smooth (but not everywhere) with respect to the choice of filter function. I will finally explain why these notions are of interest in practical optimisation/learning situations.
Outlier Robust Subsampling Techniques for Persistent Homology
Abstract
The amount and complexity of biological data have increased rapidly in recent years with the availability of improved biological tools. When applying persistent homology to large data sets, however, many of the currently available algorithms fail due to computational complexity, preventing many interesting biological applications. De Silva and Carlsson (2004) introduced the so-called Witness Complex, which reduces computational complexity by building simplicial complexes on a small subset of landmark points selected from the original data set. The landmark points are chosen from the data either at random or using the so-called maxmin algorithm. These approaches are not ideal, as random selection tends to favour dense areas of the point cloud while the maxmin algorithm often selects outliers as landmarks. Both of these problems need to be addressed in order to make the method more applicable to biological data. We study new ways of selecting landmarks from a large data set that are robust to outliers. We further examine the effects of the different subselection methods on the persistent homology of the data.
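For concreteness, the maxmin landmark selection mentioned above can be sketched as follows (an illustrative implementation of ours, with a synthetic outlier demonstrating the drawback the abstract describes):

```python
import numpy as np

def maxmin_landmarks(points, n_landmarks, seed=0):
    """Greedy maxmin landmark selection: start from a random point, then
    repeatedly add the point farthest from the landmarks chosen so far."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(points)))]
    # distance from every point to its nearest chosen landmark
    d = np.linalg.norm(points - points[idx[0]], axis=1)
    for _ in range(n_landmarks - 1):
        nxt = int(np.argmax(d))          # farthest remaining point
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return idx

pts = np.vstack([np.random.default_rng(1).normal(size=(50, 2)),
                 [[10.0, 10.0]]])        # one artificial outlier (index 50)
print(maxmin_landmarks(pts, 3))
```

Because the greedy step maximises distance to the chosen set, the artificial outlier at index 50 is selected almost immediately, which is precisely why outlier-robust alternatives are needed.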
Personalised predictive modelling for transcatheter mitral valve replacement
Abstract
Mitral regurgitation is one of the most common valve diseases in the UK and contributes to 50% of the transcatheter mitral valve replacement (TMVR) procedures with bioprosthetic valves. TMVR is generally performed in frailer, older patients unlikely to tolerate open-heart surgery or further interventions. One of the side effects of implanting a bioprosthetic valve is a condition known as left ventricular outflow obstruction, whereby the implanted device can partially obstruct the outflow of blood from the left ventricle, causing high flow resistance. The ventricle then has to pump more vigorously to provide adequate blood supply to the circulatory system and becomes hypertrophic. This ultimately results in poor contractility and heart failure.
We developed personalised image-based models to characterise the complex relationship between anatomy, blood flow, and ventricular function both before and after TMVR. The model prediction provides key information to match individual patient and device size, such as postoperative changes in intraventricular pressure gradients and blood residence time. Our pilot data from a cohort of 7 TMVR patients identified a correlation between the degree of outflow obstruction and the deterioration of ventricular function: when approximately one third of the outflow was obstructed as a result of the device implantation, significant increases in the flow resistance and the average time spent by the blood inside the ventricle were observed, which are in turn associated with hypertrophic ventricular remodelling and blood stagnation, respectively. Currently, preprocedural planning for TMVR relies largely on anecdotal experience and standard anatomical evaluations. The haemodynamic knowledge derived from the models has the potential to significantly enhance preprocedural planning and, in the long term, help develop a personalised risk scoring system specifically designed for TMVR patients.
Dynamically consistent parameterization of mesoscale eddies
Abstract
This work aims at developing a new approach to parameterizing mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. These effects are often modelled as a diffusion process or a stochastic forcing, and the proposed approach is implicitly related to the latter category. The idea is to approximate the transient eddy flux divergence in a simple way, to find its actual dynamical footprints by solving a simplified but dynamically relevant problem, and to relate the ensemble of footprints to the large-scale flow properties.
Explicit Non-Abelian Chabauty via Motivic Periods
Abstract
We report on a line of work initiated by Dan-Cohen and Wewers and continued by Dan-Cohen and the speaker to explicitly compute the zero loci arising in Kim's non-abelian Chabauty's method. We explain how this works, an important step of which is to compute bases of a certain motivic Hopf algebra in low degrees. We will summarize recent work by Dan-Cohen and the speaker, extending previous computations to $\mathbb{Z}[1/3]$ and proposing a general algorithm for solving the unit equation. Many of the methods in the more recent work are inspired by recent ideas of Francis Brown. Finally, we indicate future work, in which we hope to use elliptic motivic periods to explicitly compute points on punctured elliptic curves and beyond.
16:00
What is Arakelov Geometry?
Abstract
Arakelov geometry studies schemes X over ℤ, together with the Hermitian complex geometry of X(ℂ).
Most notably, it has been used to give a proof of Mordell's conjecture (Faltings's Theorem) by Paul Vojta; curves of genus greater than 1 have at most finitely many rational points.
In this talk, we'll introduce some of the ideas behind Arakelov theory, and show how many results in Arakelov theory are analogous—with additional structure—to classical results such as intersection theory and Riemann-Roch.
A generic construction for high order approximation schemes of semigroups using random grids
Abstract
Our aim is to construct high order approximation schemes for general semigroups of linear operators $P_{t}, t \ge 0$. In order to do so, we fix a time horizon $T$ and the discretization steps $h_{l}=\frac{T}{n^{l}}, l\in \mathbb{N}$, and we suppose that we have at hand some short time approximation operators $Q_{l}$ such that $P_{h_{l}}=Q_{l}+O(h_{l}^{1+\alpha })$ for some $\alpha >0$. Then we consider random time grids $\Pi (\omega )=\{t_0(\omega )=0<t_{1}(\omega )<\dots<t_{m}(\omega )=T\}$ such that for all $1\le k\le m$, $t_{k}(\omega )-t_{k-1}(\omega )=h_{l_{k}}$ for some $l_{k}\in \mathbb{N}$, and we associate the approximation discrete semigroup $P_{T}^{\Pi (\omega )}=Q_{l_{m}}\cdots Q_{l_{1}}$. Our main result is the following: for any approximation order $\nu$, we can construct random grids $\Pi_{i}(\omega )$ and coefficients $c_{i}$, $i=1,\dots,r$, such that $P_{T}f(x)=\sum_{i=1}^{r}c_{i}\,E\big(P_{T}^{\Pi _{i}(\omega )}f(x)\big)+O(n^{-\nu})$, where the expectation is taken over the random grids $\Pi _{i}(\omega )$. Moreover, $\mathrm{Card}(\Pi _{i}(\omega ))=O(n)$ and the complexity of the algorithm is of order $n$, for any order of approximation $\nu$.
The standard example concerns diffusion processes, using the Euler approximation for $Q_l$. In this particular case and under suitable conditions, we are able to gather the terms in order to produce an estimator of $P_Tf$ with finite variance. An important feature of our approach is its universality, in the sense that it works for every general semigroup $P_{t}$ and its approximations. Moreover, approximation schemes sharing the same $\alpha$ lead to the same random grids $\Pi_{i}$ and coefficients $c_{i}$. Numerical illustrations are given for ordinary differential equations, piecewise deterministic Markov processes and diffusions.
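The talk's construction is stochastic, but the order-boosting idea of combining approximations at different step sizes with coefficients $c_i$ has a familiar deterministic analogue, Richardson extrapolation of the Euler scheme; a minimal sketch (illustrative only, not the talk's random-grid algorithm):

```python
import math

def euler(f, x0, T, n):
    """Explicit Euler approximation of x' = f(x) on [0, T] with n steps."""
    x, h = x0, T / n
    for _ in range(n):
        x += h * f(x)
    return x

# Combine coarse and fine first-order approximations with coefficients
# (-1, 2): the leading error terms cancel, raising the order from 1 to 2.
# Test problem: x' = x, x(0) = 1, so x(1) = e.
f = lambda x: x
coarse = euler(f, 1.0, 1.0, 50)
fine = euler(f, 1.0, 1.0, 100)
combined = 2 * fine - coarse
print(abs(coarse - math.e), abs(combined - math.e))
```

The combined error is roughly two orders of magnitude smaller here; the talk's result replaces this fixed pair of grids by expectations over random grids to reach arbitrary order $\nu$ at cost $O(n)$.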
Levitating drops in Leidenfrost state
Abstract
When a liquid drop is deposited on a solid surface whose temperature is sufficiently above the boiling point of the liquid, the drop does not experience nucleate boiling but rather levitates over a thin layer of its own vapor. This is known as the Leidenfrost effect. Whilst highly undesirable in certain cooling applications, because of a drastic decrease in the energy transferred between the solid and the evaporating liquid due to the poor heat conductivity of the vapor, this effect can be of great interest in many other processes profiting from this absence of contact with the surface, which considerably reduces the friction and confers an extreme mobility on the drop. During this presentation, I hope to provide a good overview of some of the knowledge on this subject through some recent studies that we have done. First, I will present a simple fitting-parameter-free theory of the Leidenfrost effect, successfully validated with experiments, covering the full range of stable shapes, i.e., from small quasi-spherical droplets to larger puddles floating on a pocketlike vapor film. Then, I will discuss the end of life of these drops, which appear either to explode or to take off. Finally, I will show that the Leidenfrost effect can also be observed over hot baths of non-volatile liquids. The understanding of the latter situation, compared to the classical Leidenfrost effect on a solid substrate, provides new insights into the phenomenon, whether it concerns levitation or its threshold.
14:00
On integral representations of symmetric groups
Abstract
As is well known, every rational representation of a finite group $G$ can be realized over $\mathbb{Z}$, that is, the corresponding $\mathbb{Q}G$-module $V$ admits a $\mathbb{Z}$-form. Although $\mathbb{Z}$-forms are usually far from being unique, the famous Jordan--Zassenhaus Theorem shows that there are only finitely many $\mathbb{Z}$-forms of any given $\mathbb{Q}G$-module, up to isomorphism. Determining the precise number of these isomorphism classes or even explicit representatives is, however, a hard task in general. In this talk we shall be concerned with the case where $G$ is the symmetric group $\mathfrak{S}_n$ and $V$ is a simple $\mathbb{Q}\mathfrak{S}_n$-module labelled by a hook partition. Building on work of Plesken and Craig, we shall present some results as well as open problems concerning the construction of the integral forms of these modules. This is joint work with Tommy Hofmann from Kaiserslautern.
Overcoming the curse of dimensionality: from nonlinear Monte Carlo to deep artificial neural networks
Abstract
Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and man-made complex systems. For example, stochastic PDEs are a fundamental ingredient in models for nonlinear filtering problems in chemical engineering and weather forecasting, deterministic Schroedinger PDEs describe the wave function in a quantum physical system, deterministic Hamilton-Jacobi-Bellman PDEs are employed in operations research to describe optimal control problems where companies aim to minimise their costs, and deterministic Black-Scholes-type PDEs are widely employed in portfolio optimization models as well as in state-of-the-art pricing and hedging models for financial derivatives. The PDEs appearing in such models are often high-dimensional, as the number of dimensions, roughly speaking, corresponds to the number of all involved interacting substances, particles, resources, agents, or assets in the model. For instance, in the case of the above-mentioned financial engineering models the dimensionality of the PDE often corresponds to the number of financial assets in the involved hedging portfolio. Such PDEs can typically not be solved explicitly, and it is one of the most challenging tasks in applied mathematics to develop approximation algorithms which are able to approximately compute solutions of high-dimensional PDEs. Nearly all approximation algorithms for PDEs in the literature suffer from the so-called "curse of dimensionality" in the sense that the number of computational operations required by the approximation algorithm to achieve a given approximation accuracy grows exponentially in the dimension of the considered PDE. With such algorithms it is impossible to approximately compute solutions of high-dimensional PDEs even when the fastest currently available computers are used.
In the case of linear parabolic PDEs and approximations at a fixed space-time point, the curse of dimensionality can be overcome by means of Monte Carlo approximation algorithms and the Feynman-Kac formula. In this talk we introduce new nonlinear Monte Carlo algorithms for high-dimensional nonlinear PDEs. We prove that such algorithms do indeed overcome the curse of dimensionality in the case of a general class of semilinear parabolic PDEs, and we thereby prove, for the first time, that a general semilinear parabolic PDE with a nonlinearity depending on the PDE solution can be solved approximately without the curse of dimensionality.
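The linear case can be made concrete: by the Feynman-Kac formula, the heat equation $u_t = \frac{1}{2}\Delta u$ with $u(0,\cdot)=g$ satisfies $u(t,x)=E[g(x+W_t)]$, so Monte Carlo approximates $u$ at a single point with no spatial grid, even in dimension 100. A sketch (our illustration, not the speaker's code; the test function $g(y)=|y|^2$ is chosen so the exact value $|x|^2 + dt$ is known):

```python
import numpy as np

def heat_mc(g, x, t, n_paths=100_000, seed=0):
    """Monte Carlo solution of u_t = (1/2) Laplacian(u), u(0, .) = g,
    via the Feynman-Kac representation u(t, x) = E[g(x + W_t)]."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=np.sqrt(t), size=(n_paths, len(x)))
    return float(np.mean(g(x + w)))

d, t = 100, 0.5                       # 100-dimensional heat equation
x = np.zeros(d)
g = lambda y: np.sum(y ** 2, axis=-1)
approx = heat_mc(g, x, t)
exact = d * t                         # u(t, 0) = |x|^2 + d*t = 50
print(approx)                         # close to 50, at Monte Carlo accuracy
```

The cost grows only linearly in $d$; the talk's contribution is extending this kind of dimension-robustness from linear to semilinear PDEs.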
Spectral methods for certain inverse problems on graphs and time series data
Abstract
We study problems that share an important common feature: they can all be solved by exploiting the spectrum of their corresponding graph Laplacian.
We first consider a classic problem in data analysis and machine learning: establishing a statistical ranking of a set of items given a set of inconsistent and incomplete pairwise comparisons. We formulate this problem of ranking with incomplete noisy information as an instance of the group synchronization problem over the group SO(2) of planar rotations, whose least-squares solution can be approximated by either a spectral or a semidefinite programming relaxation, and consider an application to detecting leaders and laggers in financial multivariate time series data. An instance of the group synchronization problem over Z_2 with anchor information is broadly applicable to settings where one has available a sparse signal, such as positive or negative news sentiment for a subset of nodes, and would like to understand how the available measurements propagate to the remaining nodes of the network.
We also present a simple spectral approach to the well-studied constrained clustering problem, which captures constrained clustering as a generalized eigenvalue problem with graph Laplacians. This line of work extends to the setting of clustering signed networks and correlation clustering, where the edge weights between the nodes of the graph may take either positive or negative values; here we provide theoretical guarantees in the setting of a signed stochastic block model and numerical experiments for financial correlation matrices. Finally, we discuss a spectral clustering algorithm for directed graphs based on a complex-valued representation of the adjacency matrix, motivated by the application of extracting cluster-based lead-lag relationships in time series data.
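As a minimal illustration of the Laplacian-spectrum theme (a textbook example of ours, not one of the speaker's algorithms): spectral bi-partitioning splits a graph by the sign pattern of the Fiedler vector, the eigenvector of the second-smallest Laplacian eigenvalue:

```python
import numpy as np

def fiedler_partition(A):
    """Spectral bi-partition of an undirected graph: split nodes by the
    sign of the Fiedler vector (eigenvector of the second-smallest
    eigenvalue of the graph Laplacian L = D - A)."""
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # second-smallest eigenvalue
    return fiedler < 0                      # boolean cluster labels

# Two triangles joined by a single weak edge: nodes 0-2 vs nodes 3-5.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(fiedler_partition(A))
```

The sign of the eigenvector is arbitrary, so only the grouping is meaningful: the two triangles land in opposite clusters. The talk's methods build on this same spectral principle in richer settings (SO(2) and Z_2 synchronization, signed and directed graphs).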
On well posedness of stochastic mass critical NLS
Abstract
We will discuss the similarities and differences between deterministic and stochastic NLS. Different notions (or possible formulations) of local solutions will also be discussed. We will also present a global well-posedness result for stochastic mass-critical NLS. This is joint work with Weijun Xu (Oxford).
From knots to homotopy theory
Note: unusual time!
Abstract
Knots and their groups are a traditional topic of geometric topology. In this talk, I will explain how aspects of the subject can be approached as a homotopy theorist, rephrasing old results and leading to new ones. Part of this reports on joint work with Tyler Lawson.
16:00
The spectrum of simplicial volume
Abstract
Simplicial volume was first introduced by Gromov to study the minimal volume of manifolds. Since then it has emerged as an active research field with a wide range of applications.
I will give an introduction to simplicial volume and describe a recent result with Clara Löh (University of Regensburg), showing that the set of simplicial volumes in higher dimensions is dense in $\mathbb{R}^{+}$.
Noncommutative geometry from generalized Kähler structures
Abstract
After reviewing our recent description of generalized Kähler structures in terms of holomorphic symplectic Morita equivalence, I will describe how this can be used for explicit constructions of toric generalized Kähler metrics. Then I will describe how these ideas, combined with concepts from geometric quantization, provide a new approach to noncommutative algebraic geometry.