On the semantics of the canonical commutation relations
Abstract
Note: joint with Philosophy of Physics.
Venue: Lecture Room, Radcliffe Humanities, ROQ.
In 1953 Roth proved that any subset of the integers of positive density contains a non-trivial three-term arithmetic progression. I will present a recent quantitative improvement of this theorem, give an overview of the main ideas of the proof, and discuss its relation to other recent work in the area. I will also discuss some closely related problems.
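As a concrete illustration of the statement (code and examples ours, not part of the talk), a set contains a non-trivial 3-AP exactly when some pair of its elements has its midpoint in the set:

```python
from itertools import combinations

def has_3ap(s):
    """True if s contains a, a+d, a+2d with d != 0, i.e. some pair
    of elements of s has its midpoint in s as well."""
    s = set(s)
    return any((a + c) % 2 == 0 and (a + c) // 2 in s
               for a, c in combinations(sorted(s), 2))

# The odd numbers below 100 have density 1/2 and, as Roth's theorem
# guarantees for any positive-density set, contain 3-APs:
print(has_3ap(range(1, 100, 2)))   # True (e.g. 1, 3, 5)
print(has_3ap([1, 2, 4, 8, 16]))   # False: no midpoint lies in the set
```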
The Dynamic Dictionary of Mathematical Functions (or DDMF, http://ddmf.msr-inria.inria.fr/) is an interactive website on special functions inspired by reference books such as the NIST Handbook of Mathematical Functions. The originality of the DDMF is that each of its “chapters” is automatically generated from a short mathematical description of the corresponding function.
To make this possible, the DDMF focuses on so-called D-finite (or holonomic) functions, i.e., complex analytic solutions of linear ODEs with polynomial coefficients. D-finite functions include in particular most standard elementary functions (exp, log, sin, sinh, arctan...) as well as many of the classical special functions of mathematical physics (Airy functions, Bessel functions, hypergeometric functions...). A function of this class can be represented by a finite amount of data (a differential equation along with sufficiently many initial values),
and this representation makes it possible to develop a computer algebra framework that deals with the whole class in a unified way, instead of ad hoc algorithms and code for each particular function. The DDMF attempts to put this idea into practice.
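As a minimal illustration of this representation (example ours): for exp, the data is the equation y' - y = 0 together with y(0) = 1, which induces the coefficient recurrence (n+1)·c_{n+1} = c_n and hence arbitrarily many Taylor coefficients:

```python
from fractions import Fraction

def exp_taylor(n_terms):
    """Taylor coefficients of the D-finite function defined by
    y' - y = 0, y(0) = 1 (i.e. exp), computed from the recurrence
    (n + 1) * c_{n+1} = c_n induced by the differential equation."""
    c = [Fraction(1)]          # the single initial value y(0) = 1
    for n in range(n_terms - 1):
        c.append(c[n] / (n + 1))
    return c

print(exp_taylor(6))  # [1, 1, 1/2, 1/6, 1/24, 1/120]
```

The same scheme works uniformly across the class: a linear ODE with polynomial coefficients always translates into a linear recurrence with polynomial coefficients for the Taylor coefficients.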
In this talk, I will present the DDMF, some of the algorithms and software libraries behind it, and ongoing projects based on similar ideas, with an emphasis on symbolic-numeric algorithms.
In 1983 Kerckhoff settled a long-standing conjecture of Nielsen by proving that every finite subgroup of the mapping class group of a compact surface can be realized as a group of diffeomorphisms. An important consequence of this theorem is that one can now study subgroups of the mapping class group by taking the quotient of the surface by these groups of diffeomorphisms. In this talk we will study quotients of surfaces under the action of a finite group to find bounds on the cardinality of such a group.
Modularity is a quality function on partitions of a network which aims to identify highly clustered components. Given a graph G, the modularity of a partition of the vertex set measures the extent to which edge density is higher within parts than between parts; and the modularity q(G) of G is the maximum modularity of a partition of V(G). Knowledge of the maximum modularity of the corresponding random graph is important for determining the statistical significance of a partition in a real network. We provide bounds for the modularity of random regular graphs. Modularity is related to the Hamiltonian of the Potts model from statistical physics. This leads to interest in the modularity of lattices, which we will discuss. This is joint work with Colin McDiarmid.
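To make the definition concrete (code and toy graph ours): the modularity of a partition is the fraction of edges inside parts minus the fraction expected under the degree-preserving null model.

```python
def modularity(edges, partition):
    """q-value of a vertex partition: in-part edge fraction minus the
    expected in-part fraction under the configuration (null) model."""
    m = len(edges)
    part_of = {v: i for i, part in enumerate(partition) for v in part}
    internal = [0] * len(partition)   # edges inside each part
    deg_sum = [0] * len(partition)    # total degree of each part
    for u, v in edges:
        if part_of[u] == part_of[v]:
            internal[part_of[u]] += 1
        deg_sum[part_of[u]] += 1
        deg_sum[part_of[v]] += 1
    return sum(internal[i] / m - (deg_sum[i] / (2 * m)) ** 2
               for i in range(len(partition)))

# Two triangles joined by a bridge: the natural split scores well.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # ~0.357
```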
Topics:
1) Marine Acoustics;
2) Air and water quality discharge and emission modelling;
3) Geospatial mapping, remote sensing and ecosystem services.
The commuting probability of a finite group is defined to be the probability that two randomly chosen group elements commute. Not all rationals between 0 and 1 occur as commuting probabilities. In fact, Keith Joseph conjectured in 1977 that all limit points of the set of commuting probabilities are rational, and moreover that these limit points can only be approached from above. In this talk we'll discuss a structure theorem for commuting probabilities which roughly asserts that commuting probabilities are nearly Egyptian fractions of bounded complexity. Joseph's conjectures are corollaries.
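As a quick illustration of the definition (code ours), the commuting probability of a small group can be computed by exhaustive enumeration; for S_3 it equals the number of conjugacy classes divided by the order, 3/6 = 1/2.

```python
from itertools import permutations

def commuting_probability(elements, op):
    """Fraction of ordered pairs (g, h) with g*h == h*g."""
    n = len(elements)
    commuting = sum(op(g, h) == op(h, g)
                    for g in elements for h in elements)
    return commuting / n ** 2

# S_3 as permutation tuples; compose(g, h) maps i to g[h[i]].
s3 = list(permutations(range(3)))
compose = lambda g, h: tuple(g[h[i]] for i in range(3))
print(commuting_probability(s3, compose))  # 0.5
```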
It is well known that piecewise smooth signals are approximately sparse in a wavelet basis. However, other sparse representations are possible, such as the discrete gradient basis. It turns out that signals drawn from a random piecewise constant model have sparser representations in the discrete gradient basis than in Haar wavelets (with high probability). I will talk about this result and its implications, and also show some numerical experiments in which the use of the gradient basis improves compressive signal reconstruction.
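The claim is easy to probe numerically; here is a rough sketch (code, signal model and parameters ours): a piecewise constant signal with k jumps has about k nonzero gradient coefficients, while each jump spreads over roughly log2(n) Haar coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_coeffs(x):
    """Full orthonormal Haar decomposition of a length-2^k signal."""
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 1:
        out.append((x[0::2] - x[1::2]) / np.sqrt(2))  # details
        x = (x[0::2] + x[1::2]) / np.sqrt(2)          # averages
    out.append(x)
    return np.concatenate(out)

def gradient_coeffs(x):
    """Discrete gradient representation: first sample, then differences."""
    return np.concatenate([[x[0]], np.diff(x)])

# Random piecewise constant signal: 5 jumps on 256 samples.
x = np.zeros(256)
for pos in rng.choice(255, size=5, replace=False) + 1:
    x[pos:] += rng.normal()

tol = 1e-12
print(np.count_nonzero(np.abs(gradient_coeffs(x)) > tol))  # 5
print(np.count_nonzero(np.abs(haar_coeffs(x)) > tol))      # typically far more
```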
Donaldson-Thomas invariants are fundamental deformation invariants of Calabi-Yau threefolds. We describe a recent conjecture of Oberdieck and Pandharipande which predicts that the (three variable) generating function for the Donaldson-Thomas invariants of K3 × E is given by the reciprocal of the Igusa cusp form of weight 10. For each fixed K3 surface of genus g, the conjecture predicts that the corresponding (two variable) generating function is given by a particular meromorphic Jacobi form. We prove the conjecture for K3 surfaces of genus 0 and genus 1. Our computation uses a new technique which mixes motivic and toric methods.
A recurring theme in attempts to understand the quantum theory of gravity is the idea of "Gravity as the square of Yang-Mills". In recent years this idea has been met with renewed energy, principally driven by a string of discoveries uncovering intriguing and powerful identities relating gravity and gauge scattering amplitudes. In an effort to develop this program further, we explore the relationship between both the global and local symmetries of (super)gravity and those of (super) Yang-Mills theories squared. In the context of global symmetries we begin by giving a unified description of D=3 super-Yang-Mills theory with N=1, 2, 4, 8 supersymmetries in terms of the four division algebras: reals, complexes, quaternions and octonions. On taking the product of these multiplets we obtain a set of D=3 supergravity theories with global symmetries (U-dualities) belonging to the Freudenthal magic square: “division algebras squared” = “Yang-Mills squared”! By generalising to D=3,4,6,10 we uncover a magic pyramid of Lie algebras. We then turn our attention to local symmetries. Regarding gravity as the convolution of left and right Yang-Mills theories together with a spectator scalar field in the bi-adjoint representation, we derive in the linearised approximation the gravitational symmetries of general covariance, p-form gauge invariance, local Lorentz invariance and local supersymmetry from the flat-space Yang-Mills symmetries of local gauge invariance and global super-Poincaré. As a concrete example we focus on the new-minimal (12+12, N=1) off-shell version of four-dimensional supergravity, obtained by tensoring the off-shell (super) Yang-Mills multiplets (4+4, N=1) and (3+0, N=0).
When solving Einstein's equations with negative cosmological constant, the natural setting is that of an initial-boundary value problem. Data is specified on the timelike conformal boundary as well as on some initial spacelike (or null) hypersurface. At the PDE level, one finds that the boundary data is typically prescribed on a surface at which the equations become singular and standard energy estimates break down. I will discuss how to handle this singularity by introducing a renormalisation procedure. I will also talk about the consequences of different choices of boundary conditions for solutions of Einstein’s equations with negative cosmological constant.
We will introduce both the class of right-angled Artin groups (RAAGs) and
the Nielsen realisation problem. Then we will discuss some recent progress
towards solving the problem.
In this talk we study the large time behaviour of some semilinear parabolic PDEs by a purely probabilistic approach. For that purpose, we show that the solution of a backward stochastic differential equation (BSDE) in finite horizon $T$, taken at initial time, behaves like a linear term in $T$ shifted by the solution of the associated ergodic BSDE taken at initial time. Moreover, we give an explicit rate of convergence: we show that the next term in the asymptotic expansion has an exponential decay. This is joint work with Ying Hu and Pierre-Yves Meyer from Rennes (IRMAR - France).
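Schematically, writing $Y_0^T$ for the finite-horizon solution at initial time, $\lambda$ for the ergodic constant and $v$ for the solution of the ergodic BSDE (notation introduced here for illustration, not taken from the talk), the expansion reads
\[
Y_0^T = \lambda T + v(x_0) + O(e^{-cT}) \qquad \text{as } T \to \infty, \text{ for some } c > 0.
\]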
Profinite groups are compact totally disconnected groups, or equivalently projective limits of finite groups. This class of groups appears naturally in infinite Galois theory, but they can be studied for their own sake (which will be the case in this talk). We are interested in pro-p groups, i.e. projective limits of finite p-groups. For instance, the group SL(n,Z_p) - and in general any maximal compact subgroup in a Lie group over a local field of residual characteristic p - contains a pro-p group of finite index. The latter groups can be seen as pro-p Sylow subgroups in this situation (they are all conjugate by a non-positive curvature argument).
We will present an a priori non-linear generalization of these examples, arising via automorphism groups of spaces that we will gently introduce: buildings. The main result is the existence of a wide class of automorphism groups of buildings which are simple and whose maximal compact subgroups are virtually finitely generated pro-p groups. This is only the beginning of the study of these groups, where the main questions deal with linearity, and other homology groups.
This is joint work with Inna Capdeboscq (Warwick). We will also discuss related results with I. Capdeboscq and A. Lubotzky on controlling the size of profinite presentations of compact subgroups in some non-Archimedean simple groups.
I will report on recent work on a tropical/symplectic approach to the Horn inequalities. These describe the possible spectra of Hermitian matrices which may be obtained as the sum of two Hermitian matrices with fixed spectra. This is joint work with Anton Alekseev and Maria Podkopaeva.
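For orientation (standard background rather than the talk's new material): with eigenvalues ordered decreasingly, the simplest Horn-type constraints are Weyl's inequalities. If $C = A + B$ with spectra $\nu$, $\lambda$, $\mu$ respectively, then
\[
\nu_{i+j-1} \le \lambda_i + \mu_j \qquad \text{whenever } i + j - 1 \le n,
\]
the case $i = j = 1$ being the familiar $\nu_1 \le \lambda_1 + \mu_1$.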
In this talk, we develop rough integration with jumps, offering a pathwise view on stochastic integration against càdlàg processes. A class of Marcus-like rough paths is introduced, which contains D. Williams’ construction of stochastic area for Lévy processes. We then establish a Lévy–Khintchine type formula for the expected signature, based on Marcus ("canonical") stochastic calculus. This calculus fails for non-Marcus-like Lévy rough paths, and we treat the general case using Hunt's theory of Lie group-valued Lévy processes.
I will describe the computation of the supersymmetric Rényi entropy across an entangling 3-sphere for five-dimensional superconformal field theories. For a class of USp(2N) gauge theories I’ll also construct a holographic dual 1/2 BPS black hole solution of Euclidean Romans F(4) supergravity. The large N limit of the gauge theory results will be shown to agree perfectly with the supergravity computations.
I introduce Stochastic Portfolio Theory (SPT), which is an alternative approach to optimal investment, where the investor aims to beat an index instead of optimising a mean-variance or expected utility criterion. Portfolios which achieve this are called relative arbitrages, and simple and implementable types of such trading strategies have been shown to exist in very general classes of continuous semimartingale market models, with unspecified drift and volatility processes but realistic assumptions on the behaviour of stocks which come from empirical observation. I present some of my recent work on this, namely the so-called diversity-weighted portfolio with negative parameter. This portfolio outperforms the market quite significantly, for which I have found both theoretical and empirical evidence.
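For reference (standard SPT notation, not taken from the abstract): writing $m_i(t)$ for the market weight of stock $i$, the diversity-weighted portfolio with parameter $p$ invests according to
\[
\pi_i(t) = \frac{m_i(t)^p}{\sum_{j=1}^n m_j(t)^p}, \qquad i = 1, \dots, n,
\]
and a negative $p$ tilts the portfolio towards the smallest stocks even more strongly than the market does.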
We’ll discuss applications of big data in financial services, LBG’s assets, architecture and analytics techniques, as well as specific use cases within LBG.
Little is known about C_exp, the complex field with the exponential function. Model-theoretically it is difficult due to the definability of the integers (so its theory is not stable), and due to a lack of clear algebraic structure; for instance, it is not known whether or not pi+e is irrational. In order to study C_exp, Boris Zilber constructed a class of pseudo-exponential fields which satisfy all the properties we desire of C_exp. This class is categorical in every uncountable cardinal, and other more general classes have been defined. I shall define the three main classes of exponential fields that I study, one of which is Zilber's class, and show that they exhibit "stable-like" behaviour modulo the integers by defining a notion of independence for each class. I shall also explicitly apply one of these independence relations to show that in the class of exponential fields ECF, types that are orthogonal to the kernel are exactly the generically stable types.
I will recall basic notions of operator K-theory as a non-commutative (C*-algebra) generalisation of topological K-theory. Twisted crossed products will be introduced as generalisations of group C*-algebras, and a model of Karoubi's K-theory, which makes sense for super-algebras, will be sketched. The motivation comes from physics, through the study of quantum mechanical symmetries, charged free quantum fields, and topological insulators. The relevant theorems, which are interesting in their own right but scattered in the literature, will be consolidated.
We discuss a new method to bound the number of primes in certain very thin sets. The sets $S$ under consideration have the property that if $p\in S$ and $q$ is prime with $q|(p-1)$, then $q\in S$. For each prime $p$, only 1 or 2 residue classes modulo $p$ are omitted, and thus the traditional small sieve furnishes only the bound $O(x/\log^2 x)$ (at best) for the counting function of $S$. Using a different strategy, one related to the theory of prime chains and Pratt trees, we prove that either $S$
contains all primes or $\# \{p\in S : p\le x \} = O(x^{1-c})$ for some positive $c$. Such sets arise, for example, in work on Carmichael's conjecture for Euler's function.
In this talk we consider optimal stopping problems under a class of coherent risk measures which includes such well-known risk measures as weighted AV@R or absolute semi-deviation risk measures. As a matter of fact, the dynamic versions of these risk measures do not have the so-called time-consistency property necessary for the dynamic programming approach, so the standard approaches are not applicable to optimal stopping problems under coherent risk measures. We prove a novel representation, which relates the solution of an optimal stopping problem under a coherent risk measure to a sequence of standard optimal stopping problems, and hence makes the application of the standard dynamic programming-based approaches possible. In particular, we derive the analogue of the dual representations of Rogers and of Haugh and Kogan. Several numerical examples showing the usefulness of the new representation in applications are presented as well.
It is well known that the Navier-Stokes equations of viscous fluid flow do not give good predictions of when a viscous flow is likely to become unstable. When classical linearized theory is used to explore the stability of a viscous flow, the Navier-Stokes equations predict that instability will occur at fluid speeds (Reynolds numbers) far in excess of those actually measured in experiments. In response to this discrepancy, theories have arisen that suggest the eigenvalues computed in classical stability analysis do not give a full account of the behaviour, while others have suggested that fluid instability is a fundamentally non-linear process which is not accessible to linearized stability analyses.
In this talk, an alternative account of fluid instability and turbulence will be explored. It is suggested that the Navier-Stokes equations themselves might not be entirely appropriate to describe the transition to turbulent flow. A slightly more general model allows the possibility that the classical viscous fluid flows predicted by Navier-Stokes theory may become unstable at Reynolds numbers much closer to those seen in experiments, and so might perhaps give an account of the physics underlying turbulent behaviour.
The coefficients in mathematical models of physical processes are often impossible to determine fully or accurately, and are hence subject to uncertainty. It is of great importance to quantify the uncertainty in the model outputs based on the (uncertain) information that is available on the model inputs. This invariably leads to very high dimensional quadrature problems associated with the computation of statistics of quantities of interest, such as the time it takes a pollutant plume in an uncertain subsurface flow problem to reach the boundary of a safety region or the buckling load of an airplane wing. Higher order methods, such as stochastic Galerkin or polynomial chaos methods, suffer from the curse of dimensionality, and when the physical models themselves are complex and computationally costly, they become prohibitively expensive in higher dimensions. Instead, some of the most promising approaches to quantify uncertainties in continuum models are based on Monte Carlo sampling and the “multigrid philosophy”. Multilevel Monte Carlo (MLMC) methods have been introduced recently and successfully applied to many model problems, producing significant gains. In this talk I want to recall the classical MLMC method and then show how the gains can be improved further (significantly) by using quasi-Monte Carlo (QMC) sampling rules. More importantly, the dimension independence and the improved gains can be justified rigorously for an important model problem in subsurface flow. To achieve uniform bounds, independent of the dimension, it is necessary to work in infinite dimensions and to study quadrature in sequence spaces. I will present the elements of this new theory for the case of lognormal random coefficients in a diffusion problem and support the theory with numerical experiments.
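To fix ideas, here is a toy sketch of the classical MLMC estimator (not the QMC-enhanced variant discussed in the talk; the model, parameters and code are ours): consecutive levels are coupled through the same Brownian path, so the level corrections have small variance and need few samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_gbm(dW, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Euler-Maruyama for dX = mu X dt + sigma X dW on the grid
    defined by the vector of Brownian increments dW."""
    dt = T / len(dW)
    x = x0
    for dw in dW:
        x += mu * x * dt + sigma * x * dw
    return x

def mlmc_estimate(max_level, samples_per_level, T=1.0):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    with level l using 2^l time steps and the coarse path driven by
    the SAME Brownian increments (pairwise summed) as the fine path."""
    total = 0.0
    for level, n in zip(range(max_level + 1), samples_per_level):
        steps = 2 ** level
        acc = 0.0
        for _ in range(n):
            dW = rng.normal(0.0, np.sqrt(T / steps), steps)
            fine = euler_gbm(dW, T)
            coarse = euler_gbm(dW[0::2] + dW[1::2], T) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

# Many cheap samples on coarse levels, few on fine ones.
print(mlmc_estimate(4, [4000, 2000, 1000, 500, 250]))  # ~ exp(0.05) ~ 1.05
```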
Rationally and polynomially convex domains in ${\mathbb C}^n$ are fundamental objects of study in the theory of functions of several complex variables. After defining and illustrating these notions, I will explain joint work with Y. Eliashberg giving a complete characterization of the possible topologies of such domains in complex dimension at least three. The proofs are based on recent progress in symplectic topology, most notably the h-principles for loose Legendrian knots and Lagrangian caps.
In this talk, I will introduce an internal, structural
characterisation of certain convergence properties (Fréchet-Urysohn, or
more generally, radiality) and apply this structure to understand when
Stone spaces have these properties. This work can be generalised to
certain Zariski topologies and perhaps to larger classes of spaces
obtained from other structures.
The Dehn function of a group measures the complexity of the group's word problem: it is the least upper bound on the number of relations from a group presentation required to prove that a word in the generators represents the identity element. The Filling Theorem, first stated by Gromov, connects this to the isoperimetric functions of Riemannian manifolds. In this talk, we will see the characterisation of hyperbolic groups as those with a linear Dehn function, and give Bowditch's proof that a subquadratic isoperimetric inequality implies a linear one (which gives the only gap in the "isoperimetric spectrum" of exponents of polynomial Dehn functions).
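For a finite presentation $\langle A \mid R \rangle$ this can be written explicitly (the standard definition, notation ours):
\[
\delta(n) = \max_{\substack{|w| \le n \\ w =_G 1}} \; \min\Bigl\{ N : w = \prod_{i=1}^{N} u_i r_i^{\pm 1} u_i^{-1} \text{ in the free group on } A, \; r_i \in R \Bigr\}.
\]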
The curve graph of a surface has a vertex for each curve on the surface and an edge for each pair of disjoint curves. Although it deals with very simple objects, it has connections with questions in low-dimensional topology, and some properties that encourage people to study it. Yet it is more complicated than it may look from its definition: in particular, what happens if we start following a 'diverging' path along this graph? It turns out that the curves we hit get so complicated that they eventually give rise to a lamination filling up the surface. This can be understood by drawing some train-track-like pictures on the surface. During the talk I will keep away from any issues that are too technical.
Recently several conjectures about l2-invariants of
CW-complexes have been disproved. At the heart of the counterexamples
is a method of computing the spectral measure of an element of the
complex group ring. We show that the same method can be used to
compute the finite field analog of the l2-Betti numbers, the homology
gradient. As an application we point out that (i) the homology
gradient over any field of characteristic different from 2 can be an
irrational number, and (ii) there exists a CW-complex whose homology
gradients over different fields have infinitely many different values.
One general approach to random number generation is to take a uniformly distributed random variable on (0,1) and then invert the cumulative distribution function (CDF) to generate samples from another distribution. This talk follows this approach, approximating the inverse CDF for the Poisson distribution in a way which is particularly efficient for vector execution on NVIDIA GPUs.
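For orientation, here is a scalar reference implementation of the textbook inversion (code ours; the talk's point is an efficient vectorised approximation of this inverse, which this sketch does not attempt). The data-dependent loop length is exactly what makes the naive method awkward on GPUs.

```python
import numpy as np

def poisson_inverse_cdf(u, lam):
    """Sample Poisson(lam) by CDF inversion: accumulate pmf terms
    p(k) = p(k-1) * lam / k until the running CDF exceeds u."""
    k = 0
    pmf = np.exp(-lam)   # p(0)
    cdf = pmf
    while u > cdf:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

rng = np.random.default_rng(0)
samples = [poisson_inverse_cdf(rng.uniform(), 4.0) for _ in range(10_000)]
print(np.mean(samples))  # close to lambda = 4.0
```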