Kernel-based Statistical Methods for Functional Data
www.datasig.ac.uk/events
Abstract
Kernel-based statistical algorithms have found wide success in machine learning over the past ten years as a non-parametric, easily computable engine for reasoning with probability measures. The main idea is to use a kernel to map probability measures, the objects of interest, into well-behaved spaces where calculations can be carried out. This methodology has found wide application, for example in two-sample testing, independence testing, goodness-of-fit testing, parameter inference and MCMC thinning. Most theoretical investigations and practical applications have focused on Euclidean data. This talk will outline work that adapts the kernel-based methodology to data in an arbitrary Hilbert space, which opens the door to applications for functional data, where a single data sample is a discretely observed function, for example a time series or a random surface. Such data is becoming increasingly prominent in statistics and machine learning. Emphasis will be given to the two-sample and goodness-of-fit testing problems.
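To make the kernel two-sample test concrete, here is a minimal NumPy sketch of the standard kernel statistic for this problem, the (unbiased) squared maximum mean discrepancy (MMD). This is an illustration of the general methodology, not the speaker's own algorithm; the Gaussian kernel and its bandwidth are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of the squared maximum mean discrepancy:
    it is near zero when X and Y come from the same distribution."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # drop diagonal terms of the within-sample sums for unbiasedness
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y_same = rng.normal(0.0, 1.0, size=(200, 1))  # same distribution as X
Y_diff = rng.normal(1.0, 1.0, size=(200, 1))  # shifted distribution
print(mmd2_unbiased(X, Y_same))  # near zero
print(mmd2_unbiased(X, Y_diff))  # clearly positive
```

In practice the statistic is calibrated by a permutation test; for functional data the Euclidean kernel above would be replaced by one defined on the relevant Hilbert space.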
Fast & Accurate Randomized Algorithms for Linear Systems and Eigenvalue Problems
Abstract
We develop a new class of algorithms for general linear systems and a wide range of eigenvalue problems. These algorithms apply fast randomized sketching to accelerate subspace projection methods. This approach offers great flexibility in designing the basis for the approximation subspace, which can improve scalability in many computational environments. The resulting algorithms outperform the classic methods with minimal loss of accuracy. For model problems, numerical experiments show large advantages over MATLAB’s optimized routines, including a 100x speedup.
Joint work with Joel Tropp (Caltech).
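As a rough illustration of the sketch-and-solve idea for linear systems (a toy version only, not the authors' algorithm), one can build a cheap, non-orthonormal Krylov basis with truncated orthogonalization and then minimize the randomly sketched residual in a small least-squares problem. The parameter choices below are assumptions for the example.

```python
import numpy as np

def sketched_krylov_solve(A, b, k=30, t=2, sketch_dim=None, seed=0):
    """Toy sketched Krylov solver for Ax = b: a Krylov basis with only
    truncated orthogonalization (cheap, not orthonormal), combined with
    minimizing the *sketched* residual ||S(A V y - b)|| over the basis."""
    rng = np.random.default_rng(seed)
    n = len(b)
    s = sketch_dim or 4 * (k + 1)
    # Gaussian sketching matrix: a subspace embedding with high probability
    S = rng.standard_normal((s, n)) / np.sqrt(s)
    V = np.zeros((n, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        # orthogonalize against the last t basis vectors only
        for i in range(max(0, j - t), j):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    AV = A @ V
    # small sketched least-squares problem replaces the full one
    y, *_ = np.linalg.lstsq(S @ AV, S @ b, rcond=None)
    return V @ y

rng = np.random.default_rng(1)
n = 300
A = np.eye(n) + 0.05 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x = sketched_krylov_solve(A, b, k=30)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # small relative residual
```

The flexibility mentioned in the abstract shows up here: because the residual is measured through the sketch, the basis V need not be kept orthonormal, which is where the savings over classic GMRES-type methods come from.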
Randomized algorithms for trace estimation
Abstract
Hutchinson's trace estimator approximates the trace of a large-scale matrix A by averaging quadratic forms x^T A x over random vectors x. Hutch++ is a more efficient trace estimation algorithm that combines this with the randomized singular value decomposition, which obtains a low-rank approximation of A by multiplying the matrix with random vectors. In this talk, we present an improved version of Hutch++ which aims to minimize the computational cost, that is, the number of matrix-vector multiplications with A, needed to achieve a trace estimate with a target accuracy. This is joint work with David Persson and Daniel Kressner.
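A minimal NumPy sketch of the two estimators described above, assuming Rademacher (random sign) probe vectors and an even three-way split of the matrix-vector budget in Hutch++ (both standard choices, not necessarily those of the speakers):

```python
import numpy as np

def hutchinson(A, m, seed=0):
    """Plain Hutchinson estimator: average x^T A x over m random sign
    vectors x; each quadratic form has expectation trace(A)."""
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 1.0], size=(A.shape[0], m))
    return np.einsum('ij,ij->', X, A @ X) / m

def hutch_pp(A, m, seed=0):
    """Hutch++ with a total budget of ~m matvecs: spend a third on a
    randomized range finder, get the trace on that subspace exactly,
    and run Hutchinson only on the (small) deflated remainder."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    k = m // 3
    # randomized range finder: Q captures the dominant subspace of A
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))
    t_low_rank = np.trace(Q.T @ A @ Q)           # exact on the captured part
    X = rng.choice([-1.0, 1.0], size=(n, k))
    X = X - Q @ (Q.T @ X)                        # deflate: project out range(Q)
    t_rest = np.einsum('ij,ij->', X, A @ X) / k  # Hutchinson on the remainder
    return t_low_rank + t_rest

rng = np.random.default_rng(3)
n = 400
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = 1.0 / np.arange(1, n + 1) ** 2            # decaying spectrum
A = (U * eigs) @ U.T
print(eigs.sum(), hutchinson(A, 600, seed=1), hutch_pp(A, 300, seed=1))
```

With a decaying spectrum, as above, the deflated remainder has a tiny Frobenius norm, which is why Hutch++ reaches the same accuracy with far fewer matrix-vector products than plain Hutchinson.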
As the new academic year approaches, we're adding to our catalogue of Oxford Mathematics student lectures on our YouTube Channel.
The latest is a lecture from Vicky Neale on Monotonic Sequences, part of her first year Analysis 1 course. There are 50 more lectures for you to watch on the Channel covering many aspects of the undergraduate degree, including two full courses. We will add more over the coming weeks, including more lectures from the third and fourth years when students get to specialise.