A graph is quasirandom if its edge distribution is similar (in a well-defined quantitative way) to that of a random graph with the same edge density. Classical results of Thomason and Chung-Graham-Wilson show that a variety of graph properties are equivalent to quasirandomness. On the other hand, in some known proofs the error terms which measure quasirandomness can change quite dramatically when passing from one property to another, which can be problematic in some applications.

Simonovits and Sós proved that a graph in which every induced subgraph contains roughly the expected number of copies of a fixed graph $H$ must be quasirandom. However, their proof relies on the regularity lemma and gives only a very weak estimate. They asked for a new proof of this result with a better estimate. The purpose of this talk is to accomplish this.

Joint work with D. Conlon and J. Fox.

# Past Combinatorial Theory Seminar

It has become possible in recent years to provide unconditional lower bounds on the time needed to perform a number of basic computational operations. I will briefly discuss some of the main techniques involved and show how one in particular, the information transfer method, can be exploited to give time lower bounds for computation on streaming data.

I will then go on to present a simple-looking mathematical conjecture with a probabilistic combinatorics flavour that derives from this work. The conjecture is related to the classic "coin weighing with a spring scale" puzzle but has so far resisted our best efforts at resolution.
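For readers unfamiliar with the puzzle, here is a minimal sketch of its classic single-counterfeit version, in which one spring-scale weighing suffices; the coin weights and stack setup below are illustrative assumptions, not the variant behind the conjecture.

```python
# Classic single-weighing puzzle: n stacks of coins, genuine coins weigh
# 10g, and exactly one stack consists of 11g counterfeits.  Weighing i
# coins from stack i (for i = 1..n) on a spring scale identifies the bad
# stack, since each counterfeit coin adds exactly 1g of excess weight.
def counterfeit_stack(n, measured_grams):
    expected = 10 * (n * (n + 1) // 2)  # reading if every coin were genuine
    return measured_grams - expected     # an excess of k grams => stack k

# Suppose stack 3 of 5 is counterfeit: we weigh 1+2+...+5 = 15 coins,
# three of which are 11g fakes, so the scale reads 150 + 3 = 153g.
print(counterfeit_stack(5, 153))  # -> 3
```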

An edge- (vertex-) coloured graph is rainbow-connected if there is a rainbow path between any two vertices, i.e. a path all of whose edges (internal vertices) carry distinct colours. The rainbow edge (vertex) connectivity of a graph $G$ is the smallest number of colours needed for a rainbow edge (vertex) colouring of $G$. We propose a very simple approach to studying rainbow connectivity in graphs. Using this idea, we give a unified proof of several new and known results, focusing on random regular graphs. This is joint work with Michael Krivelevich and Benny Sudakov.
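To make the definition concrete, here is a minimal brute-force sketch (exponential time, suitable only for small examples; the function names are my own) that checks whether a given edge colouring makes a graph rainbow-connected.

```python
from collections import defaultdict

def is_rainbow_connected(n, colour):
    """Check that an edge-coloured graph on vertices 0..n-1 is
    rainbow-connected: every pair of vertices must be joined by a path
    all of whose edges carry distinct colours.  `colour` maps each edge
    (u, v) with u < v to its colour.  Brute-force DFS over rainbow paths."""
    adj = defaultdict(list)
    for u, v in colour:
        adj[u].append(v)
        adj[v].append(u)

    def rainbow_path(u, target, visited, used):
        if u == target:
            return True
        for w in adj[u]:
            c = colour[min(u, w), max(u, w)]
            if w not in visited and c not in used:
                if rainbow_path(w, target, visited | {w}, used | {c}):
                    return True
        return False

    return all(rainbow_path(s, t, {s}, set())
               for s in range(n) for t in range(s + 1, n))

# A 5-cycle 0-1-2-3-4-0 coloured with 3 colours is rainbow-connected,
# matching the known fact that its rainbow connection number is 3.
print(is_rainbow_connected(5, {(0, 1): 0, (1, 2): 1, (2, 3): 2,
                               (3, 4): 0, (0, 4): 1}))  # -> True
```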

Phylogenies, or evolutionary histories, play a central role in modern biology, illustrating the interrelationships between species and aiding the prediction of structural, physiological, and biochemical properties. Reconstructing the underlying evolutionary history from a set of morphological characters or biomolecular sequences is difficult, since the optimality criteria favored by biologists lead to NP-hard problems and the space of possible answers is huge. Phylogenies are often modeled by trees with $n$ leaves, and the number of possible phylogenetic trees is $(2n-5)!!$. Due to this hardness and the large number of possible answers, clever searching techniques and heuristics are used to estimate the underlying tree.
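To get a feel for the size of this search space, a short computation of $(2n-5)!!$, the number of unrooted binary trees on $n$ labelled leaves (a sketch; the function name is an illustrative choice):

```python
def num_phylogenetic_trees(n):
    """Number of unrooted binary trees with n labelled leaves: (2n-5)!!,
    the product of the odd numbers 1 * 3 * ... * (2n-5)."""
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in (5, 10, 20, 50):
    print(n, num_phylogenetic_trees(n))
# 5  -> 15
# 10 -> 2027025
# 20 -> about 2.2 * 10^20
# 50 -> about 2.8 * 10^74, far beyond any exhaustive search
```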

We explore the combinatorial structure of the underlying space of trees, under different metrics, in particular the nearest-neighbor-interchange (NNI), subtree-prune-and-regraft (SPR), tree-bisection-and-reconnection (TBR), and Robinson-Foulds (RF) distances. Further, we examine the interplay between the metric chosen and the difficulty of the search for the optimal tree.
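As a small concrete illustration of one of these metrics: the Robinson-Foulds distance is the size of the symmetric difference of the two trees' sets of non-trivial splits (the bipartitions of the leaf set induced by internal edges). A minimal sketch, assuming each split is stored canonically as the side of the bipartition not containing a fixed reference leaf:

```python
def rf_distance(splits1, splits2):
    """Robinson-Foulds distance between two unrooted trees on the same
    leaf set: the size of the symmetric difference of their sets of
    non-trivial splits.  Each split is the frozenset of leaves on the
    side of an internal edge NOT containing the reference leaf 'a'."""
    return len(splits1 ^ splits2)

# Tree 1: ((a,b),c,(d,e))  has splits ab|cde and abc|de.
# Tree 2: ((a,c),b,(d,e))  has splits ac|bde and abc|de.
t1 = {frozenset('cde'), frozenset('de')}
t2 = {frozenset('bde'), frozenset('de')}
print(rf_distance(t1, t2))  # -> 2: these trees are one NNI move apart
```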

How many $H$-free graphs are there on $n$ vertices? What is the typical structure of such a graph $G$? And how do these answers change if we restrict the number of edges of $G$? In this talk I will describe some recent progress on these basic and classical questions, focusing on the cases $H=K_{r+1}$ and $H=C_{2k}$. The key tools are the hypergraph container method, the Janson inequalities, and some new "balanced" supersaturation results. The techniques are quite general, and can be used to study similar questions about objects such as sum-free sets, antichains and metric spaces.

I will mention joint work with a number of different coauthors, including Józsi Balogh, Wojciech Samotij, David Saxton, Lutz Warnke and Mauricio Collares Neto.

Let $G(n,d)$ be a random $d$-regular graph on $n$ vertices. In 2004 Kim and Vu showed that if $d$ grows faster than $\log n$ as $n$ tends to infinity, then one can define a joint distribution of $G(n,d)$ and two binomial random graphs $G(n,p_1)$ and $G(n,p_2)$ -- both of which have asymptotic expected degree $d$ -- such that with high probability $G(n,d)$ is a supergraph of $G(n,p_1)$ and a subgraph of $G(n,p_2)$. The motivation for such a coupling is to deduce monotone properties (like Hamiltonicity) of $G(n,d)$ from the simpler model $G(n,p)$. We present our work with A. Dudek, A. Frieze and A. Ruciński on the Kim-Vu conjecture and its hypergraph counterpart.

Graphons and permutons are analytic objects associated with convergent sequences of graphs and permutations, respectively. Problems from extremal combinatorics and theoretical computer science led to a study of graphons and permutons determined by finitely many substructure densities, which are referred to as finitely forcible. The talk will contain several results on finite forcibility, focusing on the relation between finite forcibility of graphons and permutons. We also disprove a conjecture of Lovász and Szegedy about the dimension of the space of typical vertices of finitely forcible graphons. The talk is based on joint work with Roman Glebov, Andrzej Grzesik and Dan Král'.

A fundamental theorem of Wilson states that, for every graph $F$, every sufficiently large $F$-divisible clique has an $F$-decomposition. Here $G$ has an $F$-decomposition if the edges of $G$ can be covered by edge-disjoint copies of $F$, and $F$-divisibility (that is, $e(F)$ divides $e(G)$ and the greatest common divisor of the degrees of $F$ divides every degree of $G$) is a trivial necessary condition for this. We extend Wilson's theorem to graphs which are allowed to be far from complete (joint work with B. Barber, D. Kühn, A. Lo).
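To illustrate the divisibility condition in the simplest case $F=K_3$: every vertex degree of $K_n$ must be even and the number of edges divisible by three, which recovers the classical congruence $n \equiv 1, 3 \pmod 6$ for triangle decompositions. A quick sketch of the check:

```python
def k3_divisible(n):
    """Trivial necessary conditions for K_n to decompose into triangles:
    every vertex degree n-1 is even, and 3 divides the edge count."""
    return (n - 1) % 2 == 0 and (n * (n - 1) // 2) % 3 == 0

print([n for n in range(3, 28) if k3_divisible(n)])
# -> [3, 7, 9, 13, 15, 19, 21, 25, 27], i.e. n = 1 or 3 (mod 6)
```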

I will also discuss some results and open problems on decompositions of dense graphs and hypergraphs into Hamilton cycles and perfect matchings.

Dellamonica, Kohayakawa, Rödl and Ruciński showed that for $p=C(\log n/n)^{1/d}$ the random graph $G(n,p)$ asymptotically almost surely contains all spanning graphs $H$ with maximum degree $d$ as subgraphs. In this talk I will discuss a resilience version of this result, which shows that for the same edge density, even if a $(1/k-\epsilon)$-fraction of the edges at every vertex is deleted adversarially from $G(n,p)$, the resulting graph asymptotically almost surely continues to contain all spanning $k$-colourable $H$ with maximum degree $d$, with sublinear bandwidth and with at least $C \max\{p^{-2},p^{-1}\log n\}$ vertices not in triangles. Neither the restriction on the bandwidth nor the condition that not all vertices are allowed to be in triangles can be removed. The proof uses a sparse version of the Blow-Up Lemma. Joint work with Peter Allen, Julia Ehrenmüller and Anusch Taraz.