There is a class $\mathcal{B}$ of analytic Besov functions on a half-plane, with a very simple description. This talk will describe a bounded functional calculus $f \in \mathcal{B} \mapsto f(A)$, where $-A$ is the generator of either a bounded $C_0$-semigroup on a Hilbert space or a bounded analytic semigroup on a Banach space. This calculus captures many known results for such operators in a unified way, and sometimes improves them. A discrete version of the functional calculus was established by Peller in 1983.
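For orientation, one simple description of an analytic Besov class on the right half-plane $\mathbb{C}_+$ (following work of Batty, Gomilko and Tomilov; the exact normalisation used in the talk may differ) is the set of holomorphic functions $f$ on $\mathbb{C}_+$ with

$$\|f\|_{\mathcal{B}} \;=\; \sup_{z \in \mathbb{C}_+} |f(z)| \;+\; \int_0^\infty \sup_{\beta \in \mathbb{R}} \bigl|f'(\alpha + i\beta)\bigr| \, d\alpha \;<\; \infty.$$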


The talk will describe how ideas from random matrix theory can be leveraged to effectively, accurately, and reliably solve important problems that arise in data analytics and large scale matrix computations. We will focus in particular on accelerated techniques for computing low rank approximations to matrices. These techniques rely on randomised embeddings that reduce the effective dimensionality of intermediate steps in the computation. The resulting algorithms are particularly well suited for processing very large data sets.

The algorithms described are supported by rigorous analysis that depends on probabilistic bounds on the singular values of rectangular Gaussian matrices. The talk will briefly review some representative results.
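As a quick numerical illustration of the kind of bound involved (a standard concentration fact, not the talk's specific results): the singular values of an $m\times n$ standard Gaussian matrix concentrate in the interval $[\sqrt{m}-\sqrt{n},\,\sqrt{m}+\sqrt{n}]$. A minimal NumPy check:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 1000, 100, 20

smin, smax = [], []
for _ in range(trials):
    G = rng.standard_normal((m, n))
    s = np.linalg.svd(G, compute_uv=False)   # singular values, descending
    smin.append(s[-1])
    smax.append(s[0])

# Extreme singular values cluster near the classical edges.
print(np.mean(smin), np.sqrt(m) - np.sqrt(n))
print(np.mean(smax), np.sqrt(m) + np.sqrt(n))
```

Bounds of this type underpin the probabilistic error analysis of randomised embeddings.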

Note: There is a related talk in the Computational Mathematics and Applications seminar on Thursday Feb 27, at 14:00 in L4. There, the ideas introduced in this talk will be extended to the problem of solving large systems of linear equations.

Robust principal component analysis (PCA) and low-rank matrix completion are extensions of PCA that allow for outliers and missing entries, respectively. Solving these problems requires low coherence between the low-rank matrix and the canonical basis. However, in both problems the well-posedness issue is even more fundamental: in some cases, both robust PCA and matrix completion can fail to have any solution, because the set of low-rank plus sparse matrices is not closed. Another consequence of this fact is that the lower restricted isometry property (RIP) bound cannot be satisfied for some low-rank plus sparse matrices unless further restrictions are imposed on the constituents. By restricting the energy of one of the components, we close the set and are able to derive the RIP over the set of low-rank plus sparse matrices for operators satisfying concentration of measure inequalities. We show that the RIP of an operator implies that exact recovery of a low-rank plus sparse matrix is possible with computationally tractable algorithms such as convex relaxations or line-search methods. We propose two efficient iterative methods, called Normalized Iterative Hard Thresholding (NIHT) and Normalized Alternating Hard Thresholding (NAHT), that provably recover a low-rank plus sparse matrix from subsampled measurements taken by an operator satisfying the RIP.
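The NIHT/NAHT algorithms and their RIP analysis are specific to the speaker's work; as a minimal sketch of the underlying hard-thresholding operations (in the simpler fully observed setting, not the subsampled-measurement setting of the talk), one can alternate a rank projection and an entrywise sparse projection:

```python
import numpy as np

def project_rank(X, r):
    """Hard threshold to the nearest rank-r matrix (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def project_sparse(X, s):
    """Hard threshold: keep the s largest-magnitude entries, zero the rest."""
    Y = np.zeros_like(X)
    flat = np.argsort(np.abs(X), axis=None)[-s:]
    Y.flat[flat] = X.flat[flat]
    return Y

rng = np.random.default_rng(0)
L = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 50))   # rank 8
S = np.zeros((50, 50))
S.flat[rng.choice(2500, size=30, replace=False)] = 50.0           # 30 spikes
M = L + S

# Alternating hard thresholding on the observed sum M = L + S.
Lk = np.zeros_like(M)
for _ in range(50):
    Sk = project_sparse(M - Lk, 30)
    Lk = project_rank(M - Sk, 8)
print(np.linalg.norm(Lk - L) / np.linalg.norm(L))   # small relative error
```

This toy example works because the components here are incoherent and well separated in magnitude; the point of the talk's RIP analysis is to guarantee such recovery from far fewer, subsampled measurements.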

Positively folded galleries arise as images of retractions of buildings onto a fixed apartment and play a role in many areas of maths (such as in the study of affine Hecke algebras, Macdonald polynomials, MV-polytopes, and affine Deligne-Lusztig varieties). In this talk, we will define positively folded galleries, and then look at how these can be used to study affine flag varieties. We will also look at a new recursive description of the set of end alcoves of folded galleries with respect to alcove-induced orientations, which gives us a combinatorial description of certain double coset intersections in these affine flag varieties. This talk is based on joint work with Elizabeth Milićević, Petra Schwer and Anne Thomas.

Randomized SVD has become an extremely successful approach for efficiently computing a low-rank approximation of matrices. In particular, the paper by Halko, Martinsson (who is speaking twice this week), and Tropp (SIREV 2011) contains an extensive analysis and made the method very popular.

The complexity for $m\times n$ matrices is $O(Nr+(m+n)r^2)$, where $N$ is the cost of a (fast) matrix-vector multiplication, which becomes $O(mn\log n+(m+n)r^2)$ for dense matrices. This work uses classical results in numerical linear algebra to reduce the computational cost to $O(Nr)$ without sacrificing numerical stability. The cost is essentially optimal for many classes of matrices, including $O(mn\log n)$ for dense matrices. The method can also be adapted for updating, downdating and perturbing the matrix, and is especially efficient relative to previous algorithms for such purposes.

For a family $A$ in $\{0,...,k\}^n$, its deletion shadow is the set of vectors obtained by deleting a single coordinate from a member of $A$. Given the size of $A$, how should we choose $A$ to minimise its deletion shadow? And what happens if instead we may delete only a coordinate that is zero? We discuss these problems, and give an exact solution to the second problem.
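To make the two shadow operations concrete, here is a minimal sketch of the definitions (with the small example $k=1$, $n=3$; function names are illustrative):

```python
from itertools import product

def deletion_shadow(A, n):
    """All vectors obtained by deleting one coordinate from a member of A."""
    return {v[:i] + v[i + 1:] for v in A for i in range(n)}

def zero_deletion_shadow(A, n):
    """Variant: only a coordinate equal to zero may be deleted."""
    return {v[:i] + v[i + 1:] for v in A for i in range(n) if v[i] == 0}

# Example: the full cube {0,1}^3 shadows onto all of {0,1}^2 under
# both operations.
A = set(product(range(2), repeat=3))
print(len(deletion_shadow(A, 3)))       # 4
print(len(zero_deletion_shadow(A, 3)))  # 4
```

The extremal question is which families $A$ of a given size make these sets as small as possible.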

Stress perfusion cardiac magnetic resonance (CMR) imaging has been shown to be highly accurate for the detection of coronary artery disease. However, a major limitation is that visual assessment of the images is challenging, and so the accuracy of the diagnosis depends heavily on the training and experience of the reader. Quantitative perfusion CMR, where myocardial blood flow values are inferred directly from the MR images, is an automated and user-independent alternative to visual assessment.

This talk will focus on addressing the main technical challenges which have hampered the adoption of quantitative myocardial perfusion MRI in clinical practice. The talk will cover the problem of respiratory motion in the images and the use of dimension reduction techniques, such as robust principal component analysis, to mitigate this problem. I will then discuss our deep learning-based image processing pipeline that solves the necessary series of computer vision tasks required for the blood flow modelling and introduce the Bayesian inference framework in which the kinetic parameter values are inferred from the imaging data.
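The actual pipeline (deep-learning image processing plus Bayesian kinetic modelling) is specific to the speaker's work; as a toy illustration of the final inference step only, here is a grid-based posterior for a perfusion-like parameter $F$ in a made-up one-compartment model (all model forms, names and numbers are illustrative, not the talk's):

```python
import numpy as np

# Toy kinetic model: tissue curve = F * conv(AIF, exp(-k t)).
t = np.linspace(0, 60, 121)
dt = t[1] - t[0]
aif = np.exp(-(t - 10) ** 2 / 8)          # synthetic arterial input function

def tissue_curve(F, k=0.1):
    return F * np.convolve(aif, np.exp(-k * t))[: len(t)] * dt

rng = np.random.default_rng(0)
F_true = 1.5
data = tissue_curve(F_true) + 0.02 * rng.standard_normal(len(t))

# Grid posterior under a Gaussian noise model and a flat prior on F.
F_grid = np.linspace(0.5, 3.0, 251)
loglik = np.array([-np.sum((data - tissue_curve(F)) ** 2) / (2 * 0.02 ** 2)
                   for F in F_grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()
F_map = F_grid[np.argmax(post)]
print(F_map)   # close to F_true
```

A Bayesian treatment like this yields full posteriors, and hence uncertainty estimates, for the inferred blood flow values rather than point estimates alone.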

A well-known theorem of Choquet-Bruhat and Geroch states that for given smooth initial data for the Einstein equations there exists a unique maximal globally hyperbolic development. In particular, time evolution of globally hyperbolic solutions is unique. This talk investigates whether the same result holds for quasilinear wave equations defined on a fixed background. After recalling the notion of global hyperbolicity, we first present an example of a quasilinear wave equation for which unique time evolution in fact fails and contrast this with the Einstein equations. We then proceed by presenting conditions on quasilinear wave equations which ensure uniqueness. This talk is based on joint work with Harvey Reall and Felicity Eperon.

Multilayer networks are a way to represent dependent connectivity patterns — e.g., time-dependence, multiple types of interactions, or both — that arise in many applications and which are difficult to incorporate into standard network representations. In the study of multilayer networks, it is important to investigate mesoscale (i.e., intermediate-scale) structures, such as communities, to discover features that lie between the microscale and the macroscale. We introduce a framework for the construction of generative models for mesoscale structure in multilayer networks. We model dependency at the level of partitions rather than with respect to edges, and treat the process of generating a multilayer partition separately from the process of generating edges for a given multilayer partition. Our framework can admit many features of empirical multilayer networks and explicitly incorporates a user-specified interlayer dependency structure. We discuss the parameters and some properties of our framework, and illustrate an example of its use with benchmark models for multilayer community-detection tools.
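As a hedged sketch of the partition-level modelling idea (a simple temporal special case of an interlayer dependency structure; parameter names are illustrative, and a full model would then generate edges per layer conditional on the partition):

```python
import numpy as np

def multilayer_partition(n_nodes, n_layers, n_comms, p_copy, rng):
    """Generate a multilayer partition with temporal interlayer dependence:
    with probability p_copy a node keeps its community from the previous
    layer; otherwise it resamples uniformly at random."""
    S = np.empty((n_layers, n_nodes), dtype=int)
    S[0] = rng.integers(0, n_comms, n_nodes)
    for layer in range(1, n_layers):
        keep = rng.random(n_nodes) < p_copy
        S[layer] = np.where(keep, S[layer - 1],
                            rng.integers(0, n_comms, n_nodes))
    return S

rng = np.random.default_rng(0)
S = multilayer_partition(n_nodes=200, n_layers=10, n_comms=4,
                         p_copy=0.9, rng=rng)
# Fraction of node-layer labels unchanged between consecutive layers;
# expected value is p_copy + (1 - p_copy) / n_comms.
persist = np.mean(S[1:] == S[:-1])
print(persist)
```

Separating the partition process from the edge process in this way is what lets the framework plug in different interlayer dependency structures while reusing standard single-layer edge models.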

Spatial navigation in preclinical and clinical Alzheimer’s disease - Relevance for topological data analysis?

Spatial navigation changes are one of the first symptoms of Alzheimer's disease and also lead to significant safeguarding issues in patients after diagnosis. Despite their significant implications, spatial navigation changes in preclinical and clinical Alzheimer's disease are still poorly understood. In the current talk, I will explain the spatial navigation processes in the brain and their relevance to Alzheimer's disease. I will then introduce our Sea Hero Quest project, which created the first global benchmark data for spatial navigation in ~4.5 million people worldwide via a VR-based game. I will present data from the game, which has allowed us to create personalised benchmark data for people at risk of Alzheimer's. The final part of my talk will explore how real-world environment and entropy impact dementia patients getting lost, and how this is relevant for GPS-technology-based safeguarding and car driving in Alzheimer's disease.