Some physical and mathematical theories have the unfortunate feature that, if one takes them at face value, many quantities of interest appear to be infinite! Various techniques, usually going under the common name of “renormalisation”, have been developed over the years to address this, allowing mathematicians and physicists to tame these infinities. We will dip our toes into some of the mathematical aspects of these techniques and see how they have recently been used to make precise analytical statements about the solutions of some equations whose very meaning was unclear until recently.

# Past Colloquia

Optimization methods for large-scale machine learning must confront a number of challenges that are unique to this discipline. In addition to being scalable, parallelizable and capable of handling nonlinearity (even non-convexity), they must also be good learning algorithms. These challenges have spurred a great amount of research that I will review, paying particular attention to variance reduction methods. I will propose a new algorithm of this kind and illustrate its performance on text and image classification problems.
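The variance-reduction idea mentioned above can be sketched in a few lines. The following is a minimal illustration of one classical method of this kind (SVRG), not the new algorithm proposed in the talk; the one-dimensional quadratic objective and all parameter values are invented for the example.

```python
import random

# Minimize the finite sum (1/n) * sum_i 0.5*(a_i*w - b_i)^2 in 1D with
# SVRG-style variance reduction: cheap stochastic steps corrected by a
# full gradient computed at an occasional "snapshot" point.

random.seed(0)
n = 200
a = [random.uniform(0.5, 1.5) for _ in range(n)]
w_true = 3.0
b = [a_i * w_true for a_i in a]           # so the unique minimizer is w_true

def grad_i(w, i):                          # gradient of the i-th component
    return a[i] * (a[i] * w - b[i])

def full_grad(w):
    return sum(grad_i(w, i) for i in range(n)) / n

w_snap, eta = 0.0, 0.1
for epoch in range(30):
    mu = full_grad(w_snap)                 # full gradient at the snapshot
    w = w_snap
    for _ in range(n):                     # inner loop: cheap stochastic steps
        i = random.randrange(n)
        # variance-reduced estimate: unbiased, with variance that shrinks
        # as both w and the snapshot approach the minimizer
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= eta * g
    w_snap = w                             # take the last iterate as snapshot

print(abs(w_snap - w_true))                # converges to the minimizer
```

Plain SGD with a fixed step size would stall at a noise floor; the correction term drives the gradient noise to zero at the optimum, which is the point of variance reduction.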

Based upon joint work with M. Marcolli, I will introduce some algebraic geometric models in cosmology related to the "boundaries" of space-time: Big Bang, Mixmaster Universe, and Roger Penrose's crossovers between aeons. We suggest modelling the kinematics of the Big Bang using the algebraic geometric (or analytic) blow-up of a point $x$. This creates a boundary which consists of the projective space of tangent directions to $x$ and possibly of the light cone of $x$. We argue that time on the boundary undergoes a Wick rotation and becomes purely imaginary. The Mixmaster (Bianchi IX) model of the early history of the universe is neatly explained in this picture by postulating that the reverse Wick rotation follows a hyperbolic geodesic connecting the imaginary time axis to the real one. Roger Penrose's idea of seeing the Big Bang as a sign of crossover from "the end of the previous aeon" of the expanding and cooling Universe to the "beginning of the next aeon" is interpreted as an identification of a natural boundary of Minkowski space at infinity with the Big Bang boundary.

Quantum Mechanics presents a radically different perspective on physical reality compared with the world of classical physics. In particular, results such as the Bell and Kochen-Specker theorems highlight the essentially non-local and contextual nature of quantum mechanics. The rapidly developing field of quantum information seeks to exploit these non-classical features of quantum physics to transcend classical bounds on information processing tasks.

In this talk, we shall explore the rich mathematical structures underlying these results. The study of non-locality and contextuality can be expressed in a unified and generalised form in the language of sheaves or bundles, in terms of obstructions to global sections. These obstructions can, in many cases, be witnessed by cohomology invariants. There are also strong connections with logic. For example, Bell inequalities, one of the major tools of quantum information and foundations, arise systematically from logical consistency conditions.

These general mathematical characterisations of non-locality and contextuality also allow precise connections to be made with a number of seemingly unrelated topics, in classical computation, logic, and natural language semantics. By varying the semiring in which distributions are valued, the same structures and results can be recognised in databases and constraint satisfaction as in probability models arising from quantum mechanics. A rich field of contextual semantics, applicable to many of the situations where the pervasive phenomenon of contextuality arises, promises to emerge.
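As a toy illustration of "varying the semiring", the hypothetical sketch below runs the same marginalization code over two semirings: with probability-valued entries it computes a marginal distribution, while over the Boolean semiring the same table becomes a database-style relation and marginalization becomes projection. The table and all names are invented for the example.

```python
# A semiring here is just (add, zero); marginalization sums out one
# coordinate using the semiring's addition.
prob_semiring = (lambda x, y: x + y, 0.0)      # probabilities: (+, 0)
bool_semiring = (lambda x, y: x or y, False)   # relations: (or, False)

# Joint "distribution" over two binary outcomes (a PR-box-like support).
table = {('a0', 'b0'): 0.5, ('a0', 'b1'): 0.0,
         ('a1', 'b0'): 0.0, ('a1', 'b1'): 0.5}

def marginal(table, semiring):
    """Sum out the second coordinate using the semiring's addition."""
    add, zero = semiring
    out = {}
    for (x, _), v in table.items():
        out[x] = add(out.get(x, zero), v)
    return out

print(marginal(table, prob_semiring))          # {'a0': 0.5, 'a1': 0.5}

# The same table, valued in the Boolean semiring: its support relation.
bool_table = {k: v > 0 for k, v in table.items()}
print(marginal(bool_table, bool_semiring))     # {'a0': True, 'a1': True}
```

The point is that the marginalization code never mentions which semiring it is given, mirroring how the abstract's structures specialize to probability models or to databases.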

Universal fluctuations are shown to exist when well-known and widely used numerical algorithms are applied with random data. Similar universal behavior is shown in stochastic algorithms and algorithms that model neural computation. The question of whether universality is present in all, or nearly all, computation is raised. (Joint work with G. Menon, S. Olver and T. Trogdon.)

Evolution by natural selection has resulted in a remarkable diversity of organism morphologies. But is it possible for developmental processes to create “any possible shape”? Or are there intrinsic constraints? I will discuss our recent exploration into the shapes of bird beaks. Initially, inspired by the discovery of genes controlling the shapes of the beaks of Darwin's finches, we showed that the morphological diversity of these beaks is quantitatively accounted for by the mathematical group of affine transformations. We have extended this to show that the space of shapes of bird beaks is not large: a large phylogeny (including finches, cardinals, sparrows, etc.) is accurately spanned by only three independent parameters -- the shapes of these bird beaks are all pieces of conic sections. After summarizing the evidence for these conclusions, I will delve into our efforts to create mathematical models that connect these patterns to the developmental mechanism leading to a beak. It turns out that there are simple (but precise) constraints on any mathematical model that reproduces the observed phenomenology, leading to explicit predictions for the time dynamics of beak development in songbirds. Experiments testing these predictions for the development of zebra finch beaks will be presented.

Based on the following papers:

http://www.pnas.org/content/107/8/3356.short

http://www.nature.com/ncomms/2014/140416/ncomms4700/full/ncomms4700.html
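The abstract's two claims (beak diversity captured by affine transformations, and beak profiles being pieces of conic sections) fit together because of a simple geometric fact: the affine image of a conic is again a conic. The sketch below checks this numerically for the unit circle; it is a hypothetical illustration, not the speaker's actual analysis, and the transformation values are invented.

```python
import math

# Affine map (u, v) -> M (u, v) + t, with M = [[a, b], [c, d]].
a, b, c, d = 2.0, 0.5, 0.3, 1.5   # linear part (invertible: det != 0)
e, f = 1.0, -2.0                  # translation
det = a * d - b * c

def affine(u, v):
    return a * u + b * v + e, c * u + d * v + f

def image_conic(x, y):
    """Quadratic form vanishing exactly on the affine image of the
    unit circle u^2 + v^2 = 1, obtained by substituting the inverse
    map into the circle's equation and clearing denominators."""
    p, q = x - e, y - f
    return ((c*c + d*d) * p*p
            - 2 * (a*c + b*d) * p*q
            + (a*a + b*b) * q*q
            - det * det)

# Map sample points of the circle and verify they satisfy the conic.
pts = [affine(math.cos(th), math.sin(th))
       for th in (k * 2 * math.pi / 12 for k in range(12))]
residuals = [abs(image_conic(x, y)) for x, y in pts]
print(max(residuals))   # ~0: every image point lies on the computed conic
```

Since affine maps carry conics to conics, a family of conic-section beak profiles is closed under the group action the abstract describes.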

Plateau's problem, named after the Belgian physicist J. Plateau, is a classic in the calculus of variations and concerns minimizing the area among all surfaces spanning a given contour. Although Plateau's original concern was $2$-dimensional surfaces in $3$-dimensional space, generations of mathematicians have considered the problem in full generality. A successful existence theory, that of integral currents, was developed by De Giorgi in the case of hypersurfaces in the fifties and by Federer and Fleming in the general case in the sixties. When dealing with hypersurfaces, the minimizers found in this way are rather regular: the corresponding regularity theory was the achievement of several mathematicians in the sixties, seventies and eighties (De Giorgi, Fleming, Almgren, Simons, Bombieri, Giusti and Simon, among others).

In codimension higher than one, a phenomenon absent for hypersurfaces, namely branching, causes very serious problems: a famous theorem of Wirtinger and Federer shows that any holomorphic subvariety of $\mathbb C^n$ is indeed an area-minimizing current. A celebrated monograph of Almgren solved the issue at the beginning of the eighties, proving that the singular set of a general area-minimizing (integral) current has (real) codimension at least 2. However, his original (typewritten) manuscript was more than 1700 pages long. In a recent series of works with Emanuele Spadaro we have given a substantially shorter and simpler version of Almgren's theory, building upon large portions of his program but also bringing in some new ideas from partial differential equations, metric analysis and metric geometry. In this talk I will try to give a feeling for the difficulties in the proof and how they can be overcome.

The surface subgroup problem asks whether a given group contains a subgroup that is isomorphic to the fundamental group of a closed surface. In this talk I will survey the role that the surface subgroup problem plays in some important solved and unsolved problems in the theory of 3-manifolds, geometric group theory, and the theory of arithmetic manifolds.

The height of a rational number a/b (with a, b coprime integers) is defined as max(|a|, |b|). A rational number with small (resp. big) height is a simple (resp. complicated) number. Though the notion of height is naive, it has played a fundamental role in number theory. There are important variants of this notion. In 1983, when Faltings proved the Mordell conjecture (formulated in 1921), he first proved the Tate conjecture for abelian varieties (also a great conjecture) by defining heights of abelian varieties, and then deduced the Mordell conjecture from this. The height of an abelian variety tells how complicated the numbers needed to define the abelian variety are. In this talk, after these initial explanations, I will explain how this height generalizes to heights of motives. (A motive is a kind of generalisation of an abelian variety.) This generalisation of height is related to open problems in number theory. If we can prove finiteness of the number of motives of bounded height, we can prove important conjectures in number theory, such as the general Tate conjecture and Mordell–Weil type conjectures, in many cases.
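The opening definition is easy to make concrete. The sketch below computes the height of a rational number and illustrates the basic finiteness property that makes heights useful: only finitely many rationals have height below a given bound. The function names are invented for the example.

```python
from math import gcd

def height(a, b):
    """Height of the rational a/b: reduce to lowest terms, then
    take max(|a|, |b|)."""
    g = gcd(a, b)
    a, b = a // g, b // g
    return max(abs(a), abs(b))

print(height(2, 4))      # 2/4 = 1/2 in lowest terms -> height 2
print(height(355, 113))  # already coprime -> height 355

# Finiteness of bounded height: rationals a/b in (0, 1] with both
# |a| and |b| at most H form a finite set (here the Farey-type count).
H = 5
rats = {(a // gcd(a, b), b // gcd(a, b))
        for b in range(1, H + 1) for a in range(1, b + 1)}
print(len(rats))         # 10 rationals of height at most 5 in (0, 1]
```

This finite-count phenomenon is the baby case of the finiteness statements for heights of abelian varieties and motives discussed in the talk.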

"We introduce some type of generalized Poisson formula which is equivalent

to Langlands' automorphic transfer from an arbitrary reductive group over a

global field to a general linear group."