Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that balance these criteria. This talk offers an invitation to the field of matrix concentration inequalities and their applications.
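To give a concrete taste of what a matrix concentration inequality says (an illustrative sketch of my own, not taken from the talk; the dimensions and matrices are arbitrary choices), the following compares the operator norm of a random series of fixed symmetric matrices with random signs against a matrix Hoeffding-type bound driven by the matrix variance statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 400

# Sum of n independent random matrices X_k = eps_k * A_k, where eps_k = +-1
# are random signs and the A_k are fixed symmetric matrices, so E[S] = 0.
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / np.sqrt(2 * n)  # symmetrise and scale
eps = rng.choice([-1.0, 1.0], size=n)
S = np.einsum('k,kij->ij', eps, A)

# Matrix variance statistic: v = || sum_k A_k^2 ||  (operator norm).
v = np.linalg.norm(sum(a @ a for a in A), ord=2)

# Matrix Hoeffding-type bound on the expected norm: E||S|| <= sqrt(2 v log(2d)).
bound = np.sqrt(2 * v * np.log(2 * d))
print(np.linalg.norm(S, ord=2), bound)  # realised norm sits below the bound
```

The point of the sketch is that the bound involves only the dimension and the variance statistic v, not the detailed structure of the summands.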

# Past Forthcoming Seminars

We will study the l1-homology of the 2-class in one-relator groups. We will see that there are many qualitative and quantitative similarities between the l1-norm of the top-dimensional class and the stable commutator length of the defining relation. As an application, we construct manifolds with small simplicial volume.

This work in progress is joint with Clara Loeh.

Every topological space is metrisable once the symmetry axiom is abandoned and the codomain of the metric is allowed to take values in a suitable structure tailored to fit the topology (and every completely regular space is similarly metrisable while retaining symmetry). This result was popularised in 1988 by Kopperman, who used value semigroups as the codomain for the metric, and restated in 1997 by Flagg, using value quantales. In categorical terms, each of these constructions extends to an equivalence between the category Top and a category of all L-valued metric spaces (where L ranges over either value semigroups or value quantales), with morphisms given by the classical \epsilon-\delta notion of continuous mapping. Thus there are (at least) two metric formalisms for topology, raising the questions: 1) is either of the two actually useful for doing topology? and 2) are the two formalisms equally powerful for the purposes of topology? After reviewing Flagg's machinery, I will attempt to answer the former affirmatively and the latter negatively. In more detail, the two approaches are equipotent when it comes to point-to-point topological considerations, but only Flagg's formalism captures 'higher order' topological aspects correctly, though at a price: there is no notion of product of value quantales. En route to establishing Flagg's formalism as convenient, it will be shown that both fine and coarse variants of homology and homotopy arise as left and right Kan extensions of genuinely metrically constructed functors, and a topologically relevant notion of tensor product of value quantales, a surrogate for the non-existent products, will be described.

The Czech lands were the most industrialised part of the Austro-Hungarian monarchy, which broke up at the end of World War I. As such, Czechoslovakia inherited a developed industry supported by a developed system of tertiary education: Czech and German universities and technical universities, where the first chairs of applied mathematics were established. Close cooperation with the Skoda company led to the establishment of joint research institutes in applied mathematics and spectroscopy in 1929 and 1934, respectively.

The development of industry was followed by a gradual introduction of social insurance, which was intended to help settle social conflicts, combat pauperism, and prevent strikes. Social insurance institutions set up mathematical departments responsible for mathematical and statistical modelling of the financial system in order to ensure its sustainability. During the 1920s and 1930s, Czechoslovakia brought its system of social insurance up to date. This development is connected with Emil Schoenbaum, an internationally renowned expert in insurance (actuarial) mathematics, professor at Charles University, and one of the directors of the General Institute of Pensions in Prague.

After the Nazi occupation in 1939, Czech industry was transformed to serve the armament of the Wehrmacht, and the social system helped the Nazis apply a carrot-and-stick policy to keep weapons production running until early 1945. There was also a strong personnel discontinuity, as Jews and political opponents either fled into exile or were brutally persecuted.

Both in the real and in the p-adic case, I will talk about recent results on C^r-parameterizations and their Diophantine applications. In both cases, the dependence on r of the number of parameterizing C^r maps plays a role. In the non-archimedean case, we obtain as an application new bounds for rational points of bounded height lying on algebraic varieties defined over finite fields, sharpening the bounds by Sedunova and making them uniform in the finite field. In the real case, some results from joint work with Pila and Wilkie, and also beyond this work, will be presented, in relation to several questions raised by Yomdin. The non-archimedean case is joint work with Forey and Loeser. The real case is joint work with Pila and Wilkie, continued by my PhD student S. Van Hille. Some work with Binyamini and Novikov in the non-archimedean context will also be mentioned. The relation to questions of Yomdin is joint work with Friedland and Yomdin.

We present the linearized metrizability problem in the context of parabolic geometries and sub-Riemannian geometry, generalizing the metrizability problem in projective geometry studied by R. Liouville in 1889. We give a general method for linearizability and a classification of all cases with irreducible defining distribution where this method applies. These tools lead to natural sub-Riemannian metrics on generic distributions of interest in geometric control theory.

The last remaining open problem from Erdős and Rényi's original paper on random graphs is the following: for q at least 3, what is the largest d so that the random graph G(n,d/n) is q-colorable with high probability? A lot of interesting work in probabilistic combinatorics has gone into proving better and better bounds on this q-coloring threshold, but the full answer remains elusive. However, a non-rigorous method from the statistical physics of glasses - the cavity method - gives a precise prediction for the threshold. I will give an introduction to the cavity method, with random graph coloring as the running example, and describe recent progress in making parts of the method rigorous, emphasizing the role played by tools from extremal combinatorics. Based on joint work with Amin Coja-Oghlan, Florent Krzakala, and Lenka Zdeborová.
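To make the setting concrete, here is a small illustrative experiment of my own (not from the talk): sample G(n, d/n) and test 3-colorability exactly by backtracking, for one value of d below the predicted threshold (roughly 4.69 for q = 3) and one well above it. The parameters are arbitrary, and n is far too small to exhibit the sharp threshold, but the drop in the colorable fraction is already visible.

```python
import random

def three_colorable(n, edges):
    """Exact 3-colorability test by backtracking (exponential worst case)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n
    order = sorted(range(n), key=lambda v: -len(adj[v]))  # high degree first
    def solve(i):
        if i == n:
            return True
        v = order[i]
        for c in range(3):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if solve(i + 1):
                    return True
        color[v] = -1
        return False
    return solve(0)

def gnp(n, d, rng):
    """Sample the Erdos-Renyi random graph G(n, d/n)."""
    p = d / n
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

rng = random.Random(0)
n, trials = 25, 20
frac = {d: sum(three_colorable(n, gnp(n, d, rng)) for _ in range(trials)) / trials
        for d in (3.0, 9.0)}
print(frac)  # fraction of 3-colorable samples, below and above the threshold
```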

(Joint work with Coralia Cartis) The problem of finding the most extreme value of a function, also known as global optimization, is a challenging task. The difficulty is associated with the exponential increase in the computational time for a linear increase in the dimension. This is known as the "curse of dimensionality". In this talk, we demonstrate that such challenges can be overcome for functions with low effective dimensionality, that is, functions which are constant along certain linear subspaces. Such functions can often be found in applications, for example in hyper-parameter optimization for neural networks, heuristic algorithms for combinatorial optimization problems, and complex engineering simulations.

We propose the use of random subspace embeddings within a(ny) global minimisation algorithm, extending the approach in Wang et al. (2013). We introduce a new framework, called REGO (Random Embeddings for GO), which transforms the high-dimensional optimization problem into a low-dimensional one. In REGO, a new low-dimensional problem is formulated with bound constraints in the reduced space and solved with any GO solver. Using random matrix theory, we provide probabilistic bounds for the success of REGO, which indicate that this is dependent upon the dimension of the embedded subspace and the intrinsic dimension of the function, but independent of the ambient dimension. Numerical results demonstrate that high success rates can be achieved with only one embedding and that rates are for the most part invariant with respect to the ambient dimension of the problem.
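As a minimal sketch of the reduction (my own illustration, not the authors' code), the following builds a test function of effective dimension 2 in an ambient space of dimension 100, draws a random Gaussian embedding, and hands the reduced bound-constrained problem to a generic global solver; all dimensions, bounds, and the test function are arbitrary choices.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
D, d_e, d = 100, 2, 5  # ambient, effective, and embedding dimensions (d >= d_e)

# A test function with low effective dimensionality: it depends on x only
# through its image under a fixed 2 x 100 matrix B.
B = rng.standard_normal((d_e, D)) / np.sqrt(D)
def f(x):
    z = B @ x
    return (z[0] - 1.0) ** 2 + (z[1] + 0.5) ** 2  # global minimum value 0

# REGO-style reduction: draw a random Gaussian embedding A, then minimise the
# low-dimensional function y -> f(A y) over a box with any GO solver.
A = rng.standard_normal((D, d))
res = differential_evolution(lambda y: f(A @ y), bounds=[(-5.0, 5.0)] * d, seed=2)
print(res.fun)  # with high probability, near the true global minimum 0
```

Note that the solver only ever sees a 5-dimensional problem: the ambient dimension 100 enters the cost of each function evaluation but not the size of the search space, which is the point of the embedding.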

Decomposition (aka unital 2-Segal) spaces are simplicial ∞-groupoids with a certain exactness property: they take pushouts of active (end-point preserving) maps along inert (distance preserving) maps in the simplicial category Δ to pullbacks. They encode the information needed for an 'objective' generalisation of the notion of incidence (co)algebra of a poset, and motivating examples include the decomposition spaces for (derived) Hall algebras, the Connes-Kreimer algebra of trees and Schmitt's algebra of graphs. In this talk I will survey recent activity in this area, including some work in progress on a categorification of (Hopf) bialgebroids.

This is joint work with Imma Gálvez and Joachim Kock.
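The classical picture that decomposition spaces generalise can be made concrete in a few lines. Here is a sketch of my own (not from the talk) computing the Möbius function of the divisibility poset from the defining recursion of the incidence algebra, and checking Möbius inversion on it; the bound N and the test function are arbitrary.

```python
from functools import lru_cache

# Divisibility poset on {1, ..., N}: x <= y iff x divides y.
N = 36
divides = lambda x, y: y % x == 0

@lru_cache(maxsize=None)
def mobius(x, y):
    """Mobius function of the poset: mu(x,x) = 1, sum_{x<=z<=y} mu(x,z) = 0 for x < y."""
    if x == y:
        return 1
    return -sum(mobius(x, z) for z in range(1, y) if divides(x, z) and divides(z, y))

# Mobius inversion in the incidence algebra: if g(n) = sum_{d | n} f(d),
# then f(n) = sum_{d | n} mu(d, n) g(d).
f = {n: n * n for n in range(1, N + 1)}          # arbitrary test function
g = {n: sum(f[d] for d in range(1, n + 1) if divides(d, n)) for n in f}
f_rec = {n: sum(mobius(d, n) * g[d] for d in range(1, n + 1) if divides(d, n)) for n in f}
print(f_rec == f)  # True: inversion recovers f from its divisor sums
```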