We all know that mathematical activity nowadays goes on in a great variety of settings – not just in academia, but also in industry, education, and beyond. This diversity in mathematics is by no means new, and yet the study of the history of mathematics has often failed to capture it.
Heights of random trees
Abstract
A rooted tree $T$ has degree sequence $(d_1,\ldots,d_n)$ if $T$ has vertex set $[n]$ and vertex $i$ has $d_i$ children for each $i$ in $[n]$.
I will describe a line-breaking construction of random rooted trees with given degree sequences, as well as a way of coupling random trees with different degree sequences that also couples their heights to one another.
The construction and the coupling have several consequences, and I'll try to explain some of these in the talk.
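The line-breaking construction itself is specific to the talk, but for readers who want to experiment with random trees with a given degree sequence, here is a minimal Python sketch. It uses the standard Dvoretzky–Motzkin cycle-lemma sampler for plane trees, not the construction from the talk, and the function names are my own.

```python
import random

def tree_height_from_degrees(degrees):
    """Height of the plane tree whose depth-first child counts are `degrees`.
    Assumes sum(degrees) == len(degrees) - 1 and that the walk with steps
    d - 1 stays nonnegative before its final step (a valid DFS sequence)."""
    stack, height = [], 0            # stack = remaining child slots of the ancestors
    for d in degrees:
        height = max(height, len(stack))
        if stack:
            stack[-1] -= 1           # attach the current vertex to its parent
        if d > 0:
            stack.append(d)          # descend into the current vertex
        else:
            while stack and stack[-1] == 0:
                stack.pop()          # subtree finished: climb back towards the root
    return height

def random_tree_height(degrees):
    """Height of a uniformly random plane tree with the given multiset of
    child counts: shuffle the degrees, then rotate at the first minimum of
    the associated walk (cycle lemma) to obtain a valid depth-first sequence."""
    degrees = list(degrees)
    random.shuffle(degrees)
    walk, minimum, cut = 0, 0, 0
    for i, d in enumerate(degrees):
        walk += d - 1
        if walk < minimum:           # first time a new minimum is reached
            minimum, cut = walk, i + 1
    return tree_height_from_degrees(degrees[cut:] + degrees[:cut])
```

For example, `random_tree_height([2] * 500 + [0] * 501)` simulates the height of a random full binary tree on 1001 vertices. (The talk's model assigns $d_i$ children to the labelled vertex $i$; the plane-tree model above is only a convenient stand-in for height simulations.)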
First, let $T$ be a branching process tree with a critical (mean-one) offspring distribution, and let $T_n$ have the law of $T$ conditioned to have size $n$. Then the following both hold.
1) $\operatorname{height}(T_n)/\log(n)$ tends to infinity in probability.
2) If the offspring distribution has infinite variance then $\operatorname{height}(T_n)/n^{1/2}$ tends to $0$ in probability. This result settles a conjecture of Svante Janson.
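To get a feel for the $\sqrt{n}$ scale in 2), one can simulate conditioned critical branching-process trees. The sketch below reuses `random_tree_height` from the block above and relies on the standard fact that the depth-first child counts of the conditioned tree are i.i.d. offspring numbers conditioned to sum to $n-1$, up to a cycle-lemma rotation; the Geometric(1/2) offspring law is just one critical, finite-variance example and is not taken from the talk.

```python
import random, statistics

def critical_geometric():
    # Geometric offspring law on {0, 1, 2, ...} with P(k) = 2^{-(k+1)}:
    # mean 1, so the branching process is critical (and has finite variance).
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def conditioned_gw_height(n, offspring=critical_geometric):
    # Rejection sampling: draw n i.i.d. offspring numbers until they sum to
    # n - 1, then build the tree via random_tree_height (defined above).
    while True:
        degrees = [offspring() for _ in range(n)]
        if sum(degrees) == n - 1:
            return random_tree_height(degrees)

n = 1000
heights = [conditioned_gw_height(n) for _ in range(50)]
print(statistics.mean(heights) / n ** 0.5)
```

With finite offspring variance the printed ratio stays roughly constant as $n$ grows; statement 2) asserts that with infinite variance $\operatorname{height}(T_n)/n^{1/2}$ instead tends to $0$ in probability.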
The next two statements relate to random rooted trees with given degree sequences.
1) For any $\varepsilon > 0$ there is $C > 0$ such that the following holds. If $T$ is a random tree with degree sequence $(d_1,\ldots,d_n)$ and at least $\varepsilon n$ leaves, then $\mathbb{E}(\operatorname{height}(T)) < C \sqrt{n}$.
2) Consider any random tree $T$ with a fixed degree sequence such that $T$ has no vertices with exactly one child. Then $\operatorname{height}(T)$ is stochastically less than $\operatorname{height}(B)$, where $B$ is a random binary tree of the same size as $T$ (or size one greater, if $T$ has even size).
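A quick, purely illustrative Monte Carlo check of 2), again reusing `random_tree_height` from the first sketch: compare the average height of a random full ternary tree (no vertices with exactly one child, an assumed example) with that of a random full binary tree of the same size. The size 1201 is chosen only so that both degree sequences are valid.

```python
import statistics

n = 1201
binary  = [2] * ((n - 1) // 2) + [0] * ((n + 1) // 2)        # 600 internal vertices, 601 leaves
ternary = [3] * ((n - 1) // 3) + [0] * (n - (n - 1) // 3)    # 400 internal vertices, 801 leaves

for name, deg in [("binary", binary), ("ternary", ternary)]:
    heights = [random_tree_height(deg) for _ in range(200)]
    print(name, statistics.mean(heights))
```

The binary tree should come out taller on average; the theorem is of course a stronger, distribution-level (stochastic dominance) statement than this comparison of means.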
This is based on joint work with Serte Donderwinkel and Igor Kortchemski.
16:00
Talks on Talks
Abstract
What makes a good talk? This year, graduate students and postdocs will give a series of talks on how to give talks! There may even be a small prize for the audience’s favourite.
If you’d like to have a go at informing or entertaining, or just have an axe to grind about a particularly bad talk you had to sit through, we’d love to hear from you (you can email Ric Wade or ask any of the organizers).
Error Bound on Singular Value Approximations by Generalized Nyström
Abstract
We consider the problem of approximating the singular values of a matrix when approximations to its leading singular vectors are available. In particular, we focus on the Generalized Nyström (GN) method, a commonly used low-rank approximation, and on the error it incurs when extracting singular values. Like other approaches, the GN approximation can be interpreted as a perturbation of the original matrix. Up to orthogonal transformations, this perturbation has a peculiar structure that we wish to exploit. To this end, we use the Jordan–Wielandt theorem and similarity transformations to generalize a matrix perturbation result on the eigenvalues of a perturbed Hermitian matrix. Combining the above, we derive a bound on the error of the GN singular value approximations. We conclude with preliminary numerical experiments, whose aim is to study heuristically the sharpness of the bound, to give intuition on how the analysis can be used to compare different approaches, and to suggest how the bound could be made computable in practice.
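Since the abstract does not fix notation, the following NumPy sketch assumes the common form of the Generalized Nyström approximation, $\hat{A} = AX\,(Y^{\top}AX)^{+}\,Y^{\top}A$ with independent Gaussian sketches $X$ and an oversampled $Y$; the test matrix, sketch sizes, and oversampling factor are illustrative choices, not taken from the talk. It also checks the Jordan–Wielandt connection: the eigenvalues of the symmetric dilation $\bigl(\begin{smallmatrix} 0 & A \\ A^{\top} & 0 \end{smallmatrix}\bigr)$ are $\pm$ the singular values of $A$, together with $|m-n|$ zeros.

```python
import numpy as np

rng = np.random.default_rng(0)

# Test matrix with geometrically decaying singular values (illustrative choice).
m, n, r = 200, 150, 15
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = (U * s) @ V.T                              # A = U diag(s) V^T, singular values s

# Generalized Nystrom approximation: A_hat = A X (Y^T A X)^+ Y^T A.
X = rng.standard_normal((n, r))                # right sketch, target rank r
Y = rng.standard_normal((m, int(1.5 * r)))     # left sketch, oversampled
AX, YA = A @ X, Y.T @ A
A_hat = AX @ np.linalg.pinv(Y.T @ AX) @ YA

# Error in the leading singular values extracted from the GN approximation.
sv_true = np.linalg.svd(A, compute_uv=False)[:r]
sv_gn = np.linalg.svd(A_hat, compute_uv=False)[:r]
print("max error in leading singular values:", np.max(np.abs(sv_true - sv_gn)))

# Jordan-Wielandt: eigenvalues of [[0, A], [A^T, 0]] are +/- the singular values of A.
J = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])
top_eigs = np.sort(np.linalg.eigvalsh(J))[::-1][:r]
print("dilation check:", np.max(np.abs(top_eigs - sv_true)))
```

Varying $r$, the oversampling, and the singular value decay in experiments of this kind is one heuristic way to probe how sharp a bound of this type can be.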