# Past Computational Mathematics and Applications Seminar

The requirement to compute Jordan blocks for multiple eigenvalues arises in a number of physical problems, for example panel flutter problems in aerodynamic stability, the stability of electrical power systems, and quantum mechanics. We introduce a general method for computing a 2-dimensional Jordan block in a parameter-dependent matrix eigenvalue problem, based on the so-called implicit determinant method. This is joint work with Alastair Spence (Bath).
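As a rough illustration of the underlying idea (for a standard eigenvalue problem, not the parameter-dependent Jordan-block setting of the talk), the implicit determinant method replaces $\det(A - \lambda I)$ by a scalar $f(\lambda)$ defined through a bordered linear system and applies Newton's method to $f$. The bordering vectors `b` and `c` below are arbitrary illustrative choices, and the function name is hypothetical:

```python
import numpy as np

def implicit_determinant_eig(A, lam0, b=None, c=None, tol=1e-12, maxiter=50):
    """Sketch of the implicit determinant idea for a standard eigenproblem.

    f(lam) is defined by the bordered system
        [[A - lam*I, b], [c^T, 0]] [x; f] = [0; 1],
    and vanishes exactly when lam is an eigenvalue of A (provided the
    bordered matrix stays nonsingular).  Newton's method is applied to f.
    """
    n = A.shape[0]
    b = np.ones(n) if b is None else b   # illustrative bordering vectors
    c = np.ones(n) if c is None else c
    lam = lam0
    for _ in range(maxiter):
        M = np.zeros((n + 1, n + 1))
        M[:n, :n] = A - lam * np.eye(n)
        M[:n, n] = b
        M[n, :n] = c
        rhs = np.zeros(n + 1); rhs[n] = 1.0
        sol = np.linalg.solve(M, rhs)        # sol = [x; f(lam)]
        x, f = sol[:n], sol[n]
        # Differentiating the bordered system gives M [x'; f'] = [x; 0],
        # since dM/dlam = [[-I, 0], [0, 0]].
        rhs2 = np.zeros(n + 1); rhs2[:n] = x
        fp = np.linalg.solve(M, rhs2)[n]
        lam_new = lam - f / fp               # Newton step on f
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return lam
```

For example, starting near the smallest eigenvalue of `np.diag([1.0, 2.0, 3.0])`, the iteration converges quadratically to that eigenvalue.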

In many applications we need to find or estimate the $p \ge 1$ largest elements of a matrix, along with their locations. This is required for recommender systems used by Amazon and Netflix, link prediction in graphs, and in finding the most important links in a complex network, for example.

Our algorithm uses only matrix-vector products and is based on a power method for mixed subordinate norms. We have obtained theoretical results on the convergence of this algorithm via a comparison with rook pivoting for the LU decomposition. We have also improved the practicality of the algorithm by producing a blocked version that iterates on $n \times t$ matrices, rather than vectors, where $t$ is a tunable parameter. For $p > 1$ we show how deflation can be used to improve the convergence of the algorithm.
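To see why a power method for mixed subordinate norms is relevant here: the $(1, \infty)$ mixed subordinate norm of $A$ equals $\max_{i,j} |a_{ij}|$, and in this case the power method reduces to an alternating row/column search that mirrors rook pivoting. The following single-entry sketch ($p = 1$, no blocking or deflation; the interface and names are illustrative, not the authors' code) touches $A$ only through products with $A$ and $A^T$ and returns a "rook point", an entry maximal in both its row and its column:

```python
import numpy as np

def _unit(n, k):
    """k-th standard basis vector of length n."""
    e = np.zeros(n); e[k] = 1.0
    return e

def largest_element(matvec, rmatvec, m, n, j0=0, maxiter=100):
    """Estimate max |a_ij| of an m x n matrix and its location,
    using only products with A (matvec) and A^T (rmatvec).

    Alternately finds the largest entry in the current column, then in
    the current row, stopping at a rook point -- the same local search
    that rook pivoting performs on |A|.
    """
    i, j = -1, j0
    row = None
    for _ in range(maxiter):
        col = matvec(_unit(n, j))            # column j of A via one product
        i_new = int(np.argmax(np.abs(col)))  # best row in this column
        row = rmatvec(_unit(m, i_new))       # row i_new of A via one product
        j_new = int(np.argmax(np.abs(row)))  # best column in this row
        if (i_new, j_new) == (i, j):         # rook point reached
            break
        i, j = i_new, j_new
    return abs(row[j]), (i, j)
```

Like rook pivoting, this locates a local maximum; restarting from several columns, blocking, and deflation are what turn the idea into a practical estimator.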

Finally, numerical experiments on both randomly generated matrices and real-life datasets (the latter for $A^TA$ and $e^A$) show how our algorithms can reliably estimate the largest elements of a matrix whilst obtaining considerable speedups when compared to forming the matrix explicitly: over 1000x in some cases.

For many years, sum-factored algorithms for finite elements in rectangular reference geometry have combined low complexity with the mathematical power of high-order approximation. However, such algorithms rely heavily on the tensor product structure inherent in the geometry and basis functions, and similar algorithms for simplicial geometry have proven elusive.

Bernstein polynomials are totally nonnegative, rotationally symmetric, and geometrically decomposed bases with many other remarkable properties that lead to optimal-complexity algorithms for element-wise finite element computations. They also form natural building blocks for the finite element exterior calculus bases for the de Rham complex, so that H(div) and H(curl) bases have efficient representations as well. We will also discuss their relevance for explicit discontinuous Galerkin methods, where the element mass matrix requires special attention.
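For readers unfamiliar with the basis, here is a minimal one-dimensional sketch (the simplicial, exterior-calculus bases of the talk go well beyond this): the degree-$n$ Bernstein polynomials $B_{k,n}(x) = \binom{n}{k} x^k (1-x)^{n-k}$ are nonnegative on $[0,1]$ and sum to one.

```python
import numpy as np
from math import comb

def bernstein_basis_1d(n, x):
    """Evaluate all degree-n Bernstein polynomials at points x in [0, 1].

    B_{k,n}(x) = C(n, k) * x^k * (1 - x)^(n - k), k = 0, ..., n.
    Returns an array of shape (n + 1, len(x)).
    """
    x = np.asarray(x, dtype=float)
    return np.array([comb(n, k) * x**k * (1.0 - x)**(n - k)
                     for k in range(n + 1)])
```

The nonnegativity and partition-of-unity properties are immediate from the formula; on a simplex the same construction works with barycentric coordinates in place of $x$ and $1-x$.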

We will review the basic building blocks of iterative solvers, i.e. sparse matrix-vector multiplication, in the context of GPU devices such as the cards by NVIDIA; we will then discuss some techniques in preconditioning by approximate inverses, and we will conclude with an application to an image processing problem from the biomedical field.
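As a point of reference for that building block, here is a serial sketch of sparse matrix-vector multiplication in the common CSR (compressed sparse row) format; on a GPU the outer loop over rows is what gets parallelized, typically with one thread (or warp) per row. This is an illustrative reference version, not GPU code:

```python
import numpy as np

def csr_spmv(data, indices, indptr, x):
    """Compute y = A @ x for A stored in CSR format.

    data    -- nonzero values, row by row
    indices -- column index of each nonzero
    indptr  -- indptr[i]:indptr[i+1] delimits the nonzeros of row i
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):          # on a GPU, each row is independent work
        for p in range(indptr[i], indptr[i + 1]):
            y[i] += data[p] * x[indices[p]]
    return y
```

The rows are fully independent, which is why this kernel maps so naturally onto massively parallel hardware; the main performance subtleties (memory coalescing, load imbalance across rows) come from the irregular sparsity pattern.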

The aim of this talk is to design an efficient multigrid method for constrained convex optimization problems arising from discretization of some underlying infinite-dimensional problems. Since this approach is problem dependent, we consider only bound constraints with (possibly) a linear equality constraint. As our aim is to target large-scale problems, we want to avoid computation of second derivatives of the objective function, thus excluding Newton-like methods. We propose a smoothing operator that uses only first-order information and study the computational efficiency of the resulting method. In the second part, we consider the application of multigrid techniques to more general optimization problems, in particular the topology design problem.
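To give a concrete flavour of a first-order smoother in this setting (a sketch under the assumption of bound constraints only, with no equality constraint; the names and the fixed step size are illustrative, not the talk's actual operator), a few projected-gradient sweeps use only gradients and respect the bounds:

```python
import numpy as np

def projected_gradient_smoother(grad, x, lo, hi, step, nsweeps=3):
    """A few projected-gradient sweeps as a first-order smoother.

    grad   -- callable returning the gradient of the objective at x
    lo, hi -- bound constraints (scalars or arrays), enforced by projection
    step   -- fixed step size (illustrative; in practice chosen per level)
    """
    for _ in range(nsweeps):
        x = np.clip(x - step * grad(x), lo, hi)  # gradient step, then project
    return x
```

In a multigrid cycle such a smoother would be applied on each level, with the coarse levels handling the smooth error components that the projected-gradient iteration reduces only slowly.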

Quadrature is the term for the numerical evaluation of integrals. It's a beautiful subject because it's so accessible, yet full of conceptual surprises and challenges. This talk will review ten of these, with plenty of history and numerical demonstrations. Some are old if not well known, some are new, and two are subjects of my current research.
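The accessibility is easy to demonstrate: a quadrature rule is just a weighted sum of function samples. For instance (an illustration of the subject, not an example from the talk), a 5-point Gauss-Legendre rule already integrates $e^x$ over $[-1, 1]$ to nearly machine precision:

```python
import numpy as np

# 5-point Gauss-Legendre rule on [-1, 1]: exact for polynomials up to degree 9
nodes, weights = np.polynomial.legendre.leggauss(5)

# Approximate the integral of exp(x) over [-1, 1]; the exact value is e - 1/e.
integral = weights @ np.exp(nodes)
```

That five samples suffice for roughly nine-digit accuracy here is itself one of the subject's conceptual surprises.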