A reminder that the Common Room will be in use for a private event 13:30-17:00 on Tuesday 10th and 13:00-16:30 on Wednesday 11th September. The kitchenettes will remain accessible, but we ask that you please use Reception to move between North and South during those time slots. AV technicians may come into the space to set up in the morning, but they will work around any other users of the space.
Old Bodleian Tour for Postdocs, 18th Sept 11:00 – 12:00
The Radcliffe Science Library (RSL) is offering a tour of the Old Bodleian Library to science postdocs. Escorted by a member of the RSL Subject Librarian Team, you will be shown some of our historic libraries, including Duke Humfrey's Library and the Radcliffe Camera. Spaces are limited and booking is essential.
Submissions are now open for the first year of the prize, which will be awarded in 2025.
The prize will recognise young researchers who apply techniques in artificial intelligence (AI) – such as machine learning, natural language processing, or computer vision – to help the life sciences research community solve important problems and accelerate their work.
Who needs a residual when an approximation will do?
Abstract
The widespread need to solve large-scale linear systems has sparked a growing interest in randomized techniques. One such class of techniques is known as iterative random sketching methods (e.g., Randomized Block Kaczmarz and Randomized Block Coordinate Descent). These methods "sketch" the linear system to generate iterative, easy-to-compute updates to a solution. By working with sketches, these methods can often enable more efficient memory operations, potentially leading to faster performance for large-scale problems. Unfortunately, tracking the progress of these methods still requires computing the full residual of the linear system, an operation that undermines the benefits of the solvers. In practice, this cost is mitigated by occasionally computing the full residual, typically after an epoch. However, this approach sacrifices real-time progress tracking, resulting in wasted computations. In this talk, we use statistical techniques to develop a progress estimation procedure that provides inexpensive, accurate real-time progress estimates at the cost of a small amount of uncertainty that we effectively control.
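To make the idea concrete, here is a minimal sketch of a randomized block Kaczmarz solver with a cheap sampled residual estimate. The problem sizes, block sizes, and the simple row-sampling estimator are illustrative assumptions for this sketch only; they are not the statistical procedure developed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative consistent system (sizes chosen arbitrarily for this sketch).
m, n = 500, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

x = np.zeros(n)
block = 10    # rows used in each Kaczmarz block update
sample = 25   # rows sampled for the cheap progress estimate

for it in range(2000):
    # Randomized block Kaczmarz update: project the current iterate
    # onto the solution set of a random block of equations.
    rows = rng.choice(m, size=block, replace=False)
    As, bs = A[rows], b[rows]
    x += np.linalg.lstsq(As, bs - As @ x, rcond=None)[0]

    # Cheap progress estimate: sample a few rows and rescale, rather
    # than forming the full residual b - A @ x every iteration.
    s = rng.choice(m, size=sample, replace=False)
    est = np.sqrt(m / sample) * np.linalg.norm(b[s] - A[s] @ x)

print(f"estimated residual norm: {est:.2e}")
print(f"true residual norm:      {np.linalg.norm(b - A @ x):.2e}")
```

The sampled estimate is unbiased for the squared residual norm and costs only `sample` inner products per iteration, which is the trade-off the abstract alludes to: a small, controllable amount of uncertainty in exchange for real-time progress tracking.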
Backward error for nonlinear eigenvalue problems
Abstract
Backward error analysis is an important part of perturbation theory and is particularly useful for studying the reliability of numerical methods. We focus on the backward error for nonlinear eigenvalue problems. In this talk, the matrix-valued function is given as a linear combination of scalar functions multiplying matrix coefficients, and the perturbation is done on the coefficients. We provide theoretical results about the backward error of a set of approximate eigenpairs. Indeed, small backward errors for separate eigenpairs do not imply small backward errors for a set of approximate eigenpairs. In this talk, we provide inexpensive upper bounds, and a way to accurately compute the backward error by means of direct computations or through Riemannian optimization. We also discuss how the backward error can be determined when the matrix coefficients of the matrix-valued function have particular structures (such as symmetry, sparsity, or low rank), and the perturbations are required to preserve them. For special cases (such as for symmetric coefficients), explicit and inexpensive formulas to compute the perturbed matrix coefficients are also given. This is joint work with Leonardo Robol (University of Pisa).
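For readers unfamiliar with the setting, one standard way to formalize the backward error for a single approximate eigenpair in this framework is the following (the notation here is a common convention, not taken from the talk). With the matrix-valued function written as a linear combination of scalar functions and matrix coefficients,

\[
F(\lambda) = \sum_{i=1}^{k} f_i(\lambda)\, A_i ,
\]

the backward error of an approximate eigenpair \((\tilde\lambda, \tilde v)\), with perturbations restricted to the coefficients, is

\[
\eta(\tilde\lambda, \tilde v)
= \min \Bigl\{ \varepsilon \;:\;
\sum_{i=1}^{k} f_i(\tilde\lambda)\,(A_i + \Delta A_i)\,\tilde v = 0,\;
\|\Delta A_i\| \le \varepsilon \,\|A_i\|,\; i = 1,\dots,k \Bigr\} .
\]

The set version of the problem asks for a single set of perturbations \(\Delta A_i\) that makes all approximate eigenpairs exact simultaneously, which is why small individual backward errors need not imply a small backward error for the set.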