Tue, 15 Oct 2019
14:00
L5

Wilkinson, numerical analysis, and me

Nick Trefethen
(Oxford)
Abstract

The two courses I took from Wilkinson as a graduate student at Stanford influenced me greatly. Along with some reminiscences of those days, this talk will touch upon backward error analysis, Gaussian elimination, and Évariste Galois. It was originally presented at the Wilkinson 100th Birthday conference in Manchester earlier this year.
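For a flavour of the backward error analysis that Wilkinson pioneered, here is a small illustrative Python sketch (not taken from the talk; the matrix size and test setup are arbitrary choices): it factors a random matrix by Gaussian elimination with partial pivoting and checks that the computed factors exactly factor a matrix very close to the original, with the growth factor governing the bound.

import numpy as np
from scipy.linalg import lu

# Backward error view: the computed factors of A are the *exact* factors of a
# nearby matrix, so ||P @ L @ U - A|| is of order unit roundoff times the
# growth factor times ||A||.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
P, L, U = lu(A)                      # SciPy convention: A = P @ L @ U
backward_error = np.linalg.norm(P @ L @ U - A) / np.linalg.norm(A)
growth_factor = np.abs(U).max() / np.abs(A).max()
print(f"relative backward error: {backward_error:.2e}")   # tiny: O(machine eps)
print(f"growth factor: {growth_factor:.1f}")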


Mon, 11 Nov 2019

14:15 - 15:15
L4

Green's function estimates and the Poisson equation

Ovidiu Munteanu
(University of Connecticut)
Abstract

The Green's function of the Laplace operator has been widely studied in geometric analysis. Manifolds admitting a positive Green's function are called nonparabolic. By work of Li and Yau, sharp pointwise decay estimates are known for the Green's function on nonparabolic manifolds of nonnegative Ricci curvature. The situation is more delicate when the curvature is not nonnegative everywhere. While pointwise decay estimates are generally not possible in this case, we have obtained sharp integral decay estimates for the Green's function on manifolds admitting a Poincaré inequality and an appropriate (negative) lower bound on Ricci curvature. This has applications to solving the Poisson equation, and to the study of the structure at infinity of such manifolds.
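For reference, the Li-Yau estimate invoked above can be stated as follows: if M is a complete nonparabolic manifold with nonnegative Ricci curvature, its minimal positive Green's function satisfies the sharp two-sided bound

\[ G(x,y) \asymp \int_{d(x,y)}^{\infty} \frac{t}{\operatorname{vol} B(x,t)} \, dt, \]

where d(x,y) is the geodesic distance and B(x,t) the geodesic ball of radius t about x, with constants depending only on the dimension. It is this pointwise two-sided bound that fails, and is replaced by integral estimates, once the curvature is allowed to be negative somewhere.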

Tue, 03 Dec 2019
14:00
L1

On symmetrizing the ultraspherical spectral method for self-adjoint problems

Mikael Slevinsky
(University of Manitoba)
Abstract

A mechanism is described to symmetrize the ultraspherical spectral method for self-adjoint problems. The resulting discretizations are symmetric and banded. An algorithm is presented for an adaptive spectral decomposition of self-adjoint operators. Several applications are explored to demonstrate the properties of the symmetrizer and the adaptive spectral decomposition.
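The talk builds on the ultraspherical spectral method of Olver and Townsend, in which differentiation and basis-conversion operators are banded. As context, here is a minimal Python sketch of the standard (unsymmetrized) method for u'' = f on [-1,1] with homogeneous Dirichlet conditions; the truncation size n and the test problem are arbitrary illustrative choices, and the symmetrization itself is not reproduced here.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
from numpy.polynomial import chebyshev as cheb

n = 64                  # truncation size (arbitrary for this sketch)
k = np.arange(n)

# Banded second-derivative operator D2 : Chebyshev T -> ultraspherical C^(2),
# with D2[k, k+2] = 2*(k+2).
D2 = sp.diags(2.0 * (k[: n - 2] + 2.0), 2, shape=(n, n))

# Banded conversion operators S0 : T -> C^(1) and S1 : C^(1) -> C^(2).
S0 = sp.diags([np.r_[1.0, 0.5 * np.ones(n - 1)], -0.5 * np.ones(n - 2)], [0, 2])
S1 = sp.diags([1.0 / (k + 1.0), -1.0 / (k[: n - 2] + 3.0)], [0, 2])

# Dirichlet boundary rows: T_k(1) = 1 and T_k(-1) = (-1)^k.
B = np.vstack([np.ones(n), (-1.0) ** k])

# Test problem u'' = f, u(+-1) = 0, with exact solution u = sin(pi x).
fc = cheb.chebinterpolate(lambda x: -np.pi**2 * np.sin(np.pi * x), n - 1)

# Boundary bordering: two dense boundary rows atop the truncated banded operator.
A = sp.vstack([sp.csr_matrix(B), sp.csr_matrix(D2)[: n - 2, :]]).tocsc()
rhs = np.r_[0.0, 0.0, (S1 @ S0 @ fc)[: n - 2]]
uc = spsolve(A, rhs)    # Chebyshev coefficients of the solution

xx = np.linspace(-1.0, 1.0, 7)
print(np.max(np.abs(cheb.chebval(xx, uc) - np.sin(np.pi * xx))))  # ~ machine eps

Note that this boundary-bordered system is banded apart from the two dense boundary rows, and it is not symmetric even though the underlying problem is self-adjoint; obtaining symmetric, banded discretizations in such settings is the subject of the talk.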


Tue, 26 Nov 2019
14:00
L5

Subspace Gauss-Newton for Nonlinear Least-Squares

Constantin Puiu
(Oxford)
Abstract


Subspace methods have the potential to outperform conventional methods because derivatives need only be computed in a lower-dimensional subspace. The subproblem solved at each iteration is also smaller, so the linear algebra cost is lower. However, if the subspace is not selected properly, the progress per iteration can be far lower than that of the equivalent full-space method, making the subspace method more computationally expensive per unit of progress than its full-space alternative. Popular subspace selection methods (such as randomized selection) fall into this category when the objective function has no known, exploitable structure. We provide a simple and effective rule for choosing the subspace in the right way when the objective function has no such structure. We focus on Gauss-Newton for nonlinear least-squares, but the idea can be generalised to other solvers and objective functions. We show theoretically that the cost of this strategy per unit of progress lies between (approximately) 50% and 100% of the cost of full Gauss-Newton, and give an intuition for why, in practice, it should be closer to the favorable end of this range. We confirm these expectations with numerical experiments on the CUTEst test set, and compare the proposed selection rule with randomized subspace selection. We also briefly show that the method is globally convergent and has a 2-step quadratic asymptotic rate of convergence for zero-residual problems.
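To make the general template concrete, here is a hedged Python sketch of subspace Gauss-Newton, with plain randomized subspace selection standing in for the talk's proposed rule (which is not reproduced here); the function names, the subspace dimension s, and the toy problem are illustrative assumptions only.

import numpy as np

def subspace_gauss_newton(residual, jac, x0, s=4, iters=200, tol=1e-12, seed=0):
    # Gauss-Newton restricted to a random s-dimensional subspace per iteration:
    # residual(x) maps R^n -> R^m and jac(x) returns the m-by-n Jacobian.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = x.size
    for _ in range(iters):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        S, _ = np.linalg.qr(rng.standard_normal((n, s)))  # orthonormal subspace basis
        JS = jac(x) @ S                         # derivatives only on the subspace
        d = np.linalg.lstsq(JS, -r, rcond=None)[0]  # small s-dimensional subproblem
        x = x + S @ d
    return x

# Zero-residual toy problem: r_i(x) = x_i^2 - 1, with solution x = (1, ..., 1).
residual = lambda x: x**2 - 1.0
jacobian = lambda x: np.diag(2.0 * x)
x = subspace_gauss_newton(residual, jacobian, x0=2.0 * np.ones(8))
print(np.linalg.norm(residual(x)))   # should be near zero

The per-iteration saving is visible in the two commented lines: only an m-by-s Jacobian slice is formed, and the least-squares solve involves s columns rather than n.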