Who needs a residual when an approximation will do?
Abstract
The widespread need to solve large-scale linear systems has sparked a growing interest in randomized techniques. One such class of techniques is known as iterative random sketching methods (e.g., Randomized Block Kaczmarz and Randomized Block Coordinate Descent). These methods "sketch" the linear system to generate iterative, easy-to-compute updates to a solution. By working with sketches, these methods can often enable more efficient memory operations, potentially leading to faster performance for large-scale problems. Unfortunately, tracking the progress of these methods still requires computing the full residual of the linear system, an operation that undermines the benefits of the solvers. In practice, this cost is mitigated by occasionally computing the full residual, typically after an epoch. However, this approach sacrifices real-time progress tracking, resulting in wasted computations. In this talk, we use statistical techniques to develop a progress estimation procedure that provides inexpensive, accurate real-time progress estimates at the cost of a small amount of uncertainty that we effectively control.
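The idea in the abstract can be illustrated with a small sketch. The function below is a hypothetical implementation, not the speakers' actual procedure: it runs a Randomized Block Kaczmarz iteration and reuses the block residuals it already computes to form an unbiased estimate of the full squared residual norm (under uniform row sampling, `(m/|S|)·||b_S − A_S x||²` has expectation `||b − Ax||²`), smoothed over a sliding window to reduce the variance.

```python
import numpy as np

def block_kaczmarz_with_progress(A, b, n_iters=500, block_size=10,
                                 window=50, seed=0):
    """Randomized Block Kaczmarz with a cheap statistical progress estimate.

    Illustrative sketch only: the block residuals needed for the updates
    are rescaled into unbiased estimates of the full squared residual
    norm and averaged over a sliding window.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    recent = []      # recent scaled block residual norms
    estimates = []   # running estimates of ||b - A x||^2
    for _ in range(n_iters):
        S = rng.choice(m, size=block_size, replace=False)
        r_S = b[S] - A[S] @ x                 # block residual (needed anyway)
        x = x + np.linalg.pinv(A[S]) @ r_S    # block Kaczmarz projection step
        # Unbiased estimate of ||b - A x||^2 under uniform sampling:
        recent.append((m / block_size) * np.sum(r_S ** 2))
        if len(recent) > window:
            recent.pop(0)
        estimates.append(np.mean(recent))
    return x, estimates
```

The estimator costs nothing beyond what the solver computes anyway; the price is the sampling variance, which the windowed average (and, in the talk, sharper statistical techniques) keeps under control.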
Backward error for nonlinear eigenvalue problems
Abstract
Backward error analysis is an important part of perturbation theory and is particularly useful for studying the reliability of numerical methods. We focus on the backward error for nonlinear eigenvalue problems. In this talk, the matrix-valued function is given as a linear combination of scalar functions multiplying matrix coefficients, and the perturbation acts on the coefficients. We provide theoretical results on the backward error of a set of approximate eigenpairs. Indeed, small backward errors for separate eigenpairs do not imply a small backward error for a set of approximate eigenpairs. We provide inexpensive upper bounds, as well as a way to compute the backward error accurately, either by direct computation or through Riemannian optimization. We also discuss how the backward error can be determined when the matrix coefficients of the matrix-valued function have particular structure (such as symmetry, sparsity, or low rank) and the perturbations are required to preserve it. For special cases (such as symmetric coefficients), we also give explicit and inexpensive formulas for the perturbed matrix coefficients. This is joint work with Leonardo Robol (University of Pisa).
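To fix notation (the symbols below are a standard formalization, assumed rather than taken from the abstract), the setting is a matrix-valued function in split form, with perturbations on the coefficients. For a single approximate eigenpair, a classical normwise backward error and its well-known explicit formula read:

```latex
% Matrix-valued function as a linear combination of scalar functions
% f_i multiplying matrix coefficients A_i:
F(\lambda) = \sum_{i=1}^{k} f_i(\lambda)\, A_i .

% Backward error of an approximate eigenpair (\tilde\lambda, \tilde v),
% perturbing only the coefficients A_i:
\eta(\tilde\lambda, \tilde v)
  = \min \Bigl\{ \varepsilon :
      \Bigl( \sum_{i=1}^{k} f_i(\tilde\lambda)\,(A_i + \Delta A_i) \Bigr)
      \tilde v = 0,\quad
      \|\Delta A_i\| \le \varepsilon\, \|A_i\| \Bigr\}
  = \frac{\| F(\tilde\lambda)\, \tilde v \|}
         {\bigl( \sum_{i=1}^{k} |f_i(\tilde\lambda)|\, \|A_i\| \bigr)
          \, \|\tilde v\|} .
```

The talk's point is that requiring one common perturbation $\{\Delta A_i\}$ to account for a whole set of approximate eigenpairs simultaneously is a genuinely harder minimization, so the set-wise backward error can be much larger than the individual ones above.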
14:15
Open Gromov-Witten invariants and mirror symmetry
Abstract
This talk reports on two projects. The first (work in progress, joint with Amanda Hirschi) constructs genus-0 open Gromov-Witten invariants for any Lagrangian submanifold using a global Kuranishi chart construction. As an application, we show that open Gromov-Witten invariants are invariant under Lagrangian cobordisms. I will then describe how open Gromov-Witten invariants fit into mirror symmetry, which brings me to the second project: obtaining open Gromov-Witten invariants from the Fukaya category.