We develop a novel, fundamental and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update and random fixed point. By varying its two parameters—a positive definite matrix (defining geometry), and a random matrix (sampled in an i.i.d. fashion in each iteration)—we recover a comprehensive array of well-known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method and random Gaussian pursuit. We naturally also obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. However, we also give an exact formula for the evolution of the expected iterates, which allows us to give lower bounds on the convergence rate.
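As a concrete illustration of the sketch-and-project viewpoint: choosing the geometry matrix as the identity and sampling the random matrix as a single coordinate vector recovers the randomized Kaczmarz update. The sketch below assumes a consistent system and uniform row sampling (the function name and the test system are illustrative, not taken from the paper):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Sketch-and-project with B = I and S = e_i (a random coordinate
    vector), which reduces to the randomized Kaczmarz update: project
    the current iterate onto the hyperplane a_i^T x = b_i."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(m)                      # uniform row sampling
        a_i = A[i]
        x = x - (a_i @ x - b[i]) / (a_i @ a_i) * a_i
    return x
```

Importance sampling (e.g. rows drawn with probability proportional to their squared norms) corresponds to a different choice of the random matrix's distribution within the same framework.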

# Past Computational Mathematics and Applications Seminar

When formulated appropriately, the broad families of sequential quadratic programming, augmented Lagrangian and interior-point methods all require the solution of symmetric saddle-point linear systems. When regularization is employed, the systems become symmetric and quasi-definite. The latter are indefinite, but their rich structure and strong relationships with definite systems enable specialized linear algebra, and make them prime candidates for matrix-free implementations of optimization methods. In this talk, I explore various formulations of the step equations in optimization and corresponding iterative methods that exploit their structure.
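A symmetric quasi-definite (SQD) matrix of the kind referred to above has a positive definite (1,1) block and a negative definite (2,2) block. A minimal sketch of the regularized saddle-point structure (the block form and parameter names are illustrative, not specific to the talk):

```python
import numpy as np

def sqd_system(H, A, rho, delta):
    """Assemble the regularized saddle-point matrix
        K = [[H + rho*I,  A.T],
             [A,         -delta*I]].
    For rho, delta > 0 and H symmetric positive semidefinite, K is
    symmetric quasi-definite: indefinite overall, but with a positive
    definite (1,1) block and a negative definite (2,2) block."""
    n, m = H.shape[0], A.shape[0]
    return np.block([[H + rho * np.eye(n), A.T],
                     [A, -delta * np.eye(m)]])
```

A useful property of SQD matrices is that they admit a Cholesky-like LDLᵀ factorization with a diagonal D for any symmetric permutation, which underpins the specialized linear algebra mentioned in the abstract.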

Security Constrained Optimal Power Flow is an increasingly important problem for power systems operation, both in its own right and as a subproblem for more complex problems such as transmission switching or unit commitment.

The structure of the problem resembles stochastic programming problems in that one aims to find a cost-optimal operation schedule that is feasible for all possible equipment outage scenarios (contingencies). Due to the presence of power flow constraints (in their "DC" or "AC" version), the resulting problem is a large-scale linear or nonlinear programming problem.

However, it is known that only a small subset of the contingencies is active at the solution. We show how interior point methods can exploit this structure, both by simplifying the linear algebra operations and by generating the necessary contingencies on the fly and integrating them into the algorithm using IPM warmstarting techniques. The final problem solved by this scheme is significantly smaller than the full contingency-constrained problem, resulting in substantial speed gains.

Numerical and theoretical results of our algorithm will be presented.

Can we extend the FEM to general polytopic, i.e. polygonal and polyhedral, meshes while retaining the ease of implementation and a computational cost comparable to that of standard FEM? In this talk, I present two approaches that achieve just that (and much more): the Virtual Element Method (VEM) and an hp-version discontinuous Galerkin (dG) method.

The Virtual Element spaces are like the usual (polynomial) finite element spaces with the addition of suitable non-polynomial functions. This is far from being a novel idea. The novelty of the VEM approach is that it avoids expensive evaluations of the non-polynomial "virtual" functions by basing all computations solely on the method's carefully chosen degrees of freedom. This way we can easily deal with complicated element geometries and/or higher continuity requirements (like C1, C2, etc.), while maintaining a computational complexity comparable to that of standard finite element computations.

As you might expect, the choice and number of the degrees of freedom depend on such continuity requirements. If mesh flexibility is the goal, and one is ready to give up on global regularity, other approaches can be considered. For instance, dG approaches are naturally suited to dealing with polytopic meshes. Here I present an extension of the classical Interior Penalty dG method which achieves optimal rates of convergence on polytopic meshes even under elemental edge/face degeneration.

The next step is to exploit mesh flexibility in the efficient resolution of problems characterised by complicated geometries and solution features, for instance within the framework of automatic FEM adaptivity. I shall finally introduce ongoing work in this direction.

In this seminar I will present a semi-Lagrangian discretisation of the Monge-Ampère operator, which is of interest in optimal transport and differential geometry as well as in related fields of application.

I will discuss the proof of convergence to viscosity solutions. To address the challenge of uniqueness and convexity we draw upon the classical relationship with Hamilton-Jacobi-Bellman equations, which we extend to the viscosity setting. I will explain that the monotonicity of semi-Lagrangian schemes implies that they possess large stencils, which in turn requires careful treatment of the boundary conditions.

The content of the seminar is based on current work with X Feng from the University of Tennessee.

The Singular Value Decomposition (SVD) of matrices and the related Principal Components Analysis (PCA) express a matrix in terms of singular vectors, which are linear combinations of all the input data and lack an intuitive physical interpretation. Motivated by the application of PCA and SVD in the analysis of population genetics data, we will discuss the notion of leverage scores: a simple statistic that reveals columns/rows of a matrix that lie in the subspace spanned by the top principal components (left/right singular vectors). We will then use the leverage scores to present matrix decompositions that express the structure in a matrix in terms of actual columns (and/or rows) of the matrix. Such decompositions are easier to interpret in applications, since the selected columns and rows are subsets of the data. We will also discuss extensions of the leverage scores to reveal influential entries of a matrix.
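A minimal sketch of the leverage-score statistic described above, assuming the scores are taken with respect to the top-k left singular vectors (the function name is illustrative):

```python
import numpy as np

def leverage_scores(A, k):
    """Leverage score of row i of A with respect to the top-k left
    singular subspace: the squared Euclidean norm of the i-th row of
    U_k. Each score lies in [0, 1], and the scores sum to k."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U[:, :k] ** 2, axis=1)
```

Rows (or, applied to Aᵀ, columns) with large leverage scores are disproportionately influential in the top principal components, making them natural candidates for the column/row-based (CX/CUR-type) decompositions the abstract describes.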

We present a trust region algorithm for solving nonconvex optimization problems that, in the worst case, is able to drive the norm of the gradient of the objective below a prescribed threshold $\epsilon > 0$ after at most ${\cal O}(\epsilon^{-3/2})$ function evaluations, gradient evaluations, or iterations. Our work has been inspired by the recently proposed Adaptive Regularisation framework using Cubics (i.e., the ARC algorithm), which attains the same worst-case complexity bound. Our algorithm is modeled after a traditional trust region algorithm, but employs modified step acceptance criteria and a novel trust region updating mechanism that allows it to achieve this desirable property. Importantly, our method also maintains standard global and fast local convergence guarantees.
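The traditional trust-region skeleton that the abstract's method modifies can be sketched as follows. This is only the baseline framework, with a simple steepest-descent (Cauchy-type) step and the standard ratio test; it does not implement the modified acceptance criteria or updating mechanism of the talk, and all thresholds are illustrative:

```python
import numpy as np

def trust_region(f, grad, x0, delta0=1.0, tol=1e-8, max_iter=500):
    """Baseline trust-region loop: take a steepest-descent step clipped
    to the trust region, accept or reject it by comparing actual to
    predicted decrease, and grow/shrink the radius accordingly."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        s = -min(1.0, delta / gnorm) * g        # step clipped to radius delta
        predicted = gnorm * min(gnorm, delta)   # decrease predicted by linear model
        rho = (f(x) - f(x + s)) / predicted     # actual vs predicted decrease
        if rho > 0.1:                           # standard acceptance test
            x = x + s
        if rho > 0.75:                          # good model fit: expand radius
            delta *= 2.0
        elif rho < 0.25:                        # poor fit: shrink radius
            delta *= 0.5
    return x
```

The ${\cal O}(\epsilon^{-3/2})$ complexity bound of the talk comes precisely from replacing this classical acceptance test and radius update with more carefully designed rules.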

Although Toeplitz matrices are often dense, matrix-vector products with Toeplitz matrices can be quickly performed via circulant embedding and the fast Fourier transform. This makes their solution by preconditioned Krylov subspace methods attractive.

For a wide class of symmetric Toeplitz matrices, symmetric positive definite circulant preconditioners that cluster eigenvalues have been proposed. MINRES or the conjugate gradient method can be applied to these problems and descriptive convergence theory based on eigenvalues guarantees fast convergence.

In contrast, although circulant preconditioners have been proposed for nonsymmetric Toeplitz systems, guarantees of fast convergence are generally only available for CG applied to the normal equations (CGNE). This is somewhat unsatisfactory because CGNE has certain drawbacks, including slow convergence associated with the squared condition number of the normal equations. In this talk we discuss a simple alternative symmetrization of nonsymmetric Toeplitz matrices that allows us to use MINRES to solve the resulting linear system. We show how existing circulant preconditioners for nonsymmetric Toeplitz matrices can be straightforwardly adapted to this situation and give convergence estimates similar to those in the symmetric case.
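The fast matrix-vector product underlying all of these methods can be sketched as follows: an n-by-n Toeplitz matrix is embedded in a 2n-by-2n circulant, which the FFT diagonalizes, giving an O(n log n) product (the function name is illustrative):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Compute T @ x, where T is the n-by-n Toeplitz matrix with first
    column c and first row r (with c[0] == r[0]), via circulant
    embedding: the first column of the 2n-by-2n circulant is
    [c, 0, r[n-1], ..., r[1]], and a circulant matvec is a pointwise
    product in the Fourier domain."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # circulant's first column, length 2n
    xp = np.concatenate([x, np.zeros(n)])        # zero-pad x to length 2n
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real
```

Inside a Krylov method such as MINRES or CG, this routine replaces the dense O(n²) product, which is what makes the preconditioned iterative solution of Toeplitz systems attractive.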

Numerical simulation tools for fluid and solid mechanics are often based on the discretisation of coupled systems of partial differential equations, which can easily be identified in terms of physical conservation laws. In contrast, much physical insight is often gained from the equivalent formulation of the relevant energy or free-energy functional, possibly subject to constraints. Motivated by the nonlinear static and dynamic behaviour of nematic liquid crystals and of magnetosensitive elastomers, we propose a finite-element framework for minimising these free-energy functionals, using Lagrange multipliers to enforce additional constraints. This talk will highlight challenges, limitations, and successes, both in the formulation of these models and their use in numerical simulation.

This is joint work with PhD students Thomas Benson, David Emerson, and Dong Han, and with James Adler, Timothy Atherton, and Luis Dorfmann.

Preconditioning is of significant importance in the solution of large-dimensional systems of linear equations, such as those that arise from the numerical solution of partial differential equation problems. In this talk we will attempt a broad-ranging review of preconditioning.