# Past Numerical Analysis Group Internal Seminar

The antitriangular factorisation of real symmetric indefinite matrices recently proposed by Mastronardi and van Dooren has several pleasing properties. It is backward stable, preserves eigenvalues and reveals the inertia, that is, the number of positive, zero and negative eigenvalues.

In this talk we show that the antitriangular factorisation simplifies for saddle point matrices, and that solving a saddle point system in antitriangular form is equivalent to applying the well-known nullspace method. We obtain eigenvalue bounds for the saddle point matrix and discuss the role of the factorisation in preconditioning.
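As a toy illustration of what "revealing the inertia" means, the sketch below reads the inertia of a small saddle point matrix directly off its eigenvalues (the factorisation itself obtains the same triple without an eigendecomposition; the matrices here are an illustrative choice, not taken from the talk):

```python
import numpy as np

def inertia(A, tol=1e-12):
    """Return (n_plus, n_zero, n_minus) for a real symmetric matrix A.

    Here the inertia is read off the eigenvalues directly; an
    inertia-revealing factorisation delivers the same triple
    without computing eigenvalues."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)),
            int(np.sum(np.abs(w) <= tol)),
            int(np.sum(w < -tol)))

# A 3x3 saddle point matrix [[H, B^T], [B, 0]] with H symmetric
# positive definite (2x2) and B (1x2) of full row rank:
H = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0, 1.0]])
A = np.block([[H, B.T], [B, np.zeros((1, 1))]])
print(inertia(A))  # (2, 0, 1): n positives, m negatives, no zeros
```

For a saddle point matrix with H positive definite and B of full rank, the inertia is always (n, 0, m), which is what the example prints.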

From cryptography to the proof of Fermat's Last Theorem, elliptic curves (those curves of the form y^2 = x^3 + ax + b) are ubiquitous in modern number theory. In particular, much activity is focused on developing techniques to discover rational points on these curves. It turns out that finding a rational point on an elliptic curve is very much like finding the proverbial needle in a haystack -- in fact, there is currently no algorithm known to completely determine the group of rational points on an arbitrary elliptic curve.

I'll introduce the "real" picture of elliptic curves and discuss why the ambient real points of these curves seem to tell us little about finding rational points. I'll summarize some of the story of elliptic curves over finite and p-adic fields and tell you about how I study integral points on (hyper)elliptic curves via p-adic integration, which relies on doing a bit of p-adic linear algebra. Time permitting, I'll also give a short demo of some code we have to carry out these algorithms in the Sage Math Cloud.
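To see why naive search is hopeless in general, here is a brute-force hunt for small integral points on a Mordell curve (the curve y^2 = x^3 - 2 is my illustrative choice, not one from the talk); nothing like this scales, which is one motivation for the p-adic methods above:

```python
def small_integral_points(a, b, bound):
    """Brute-force search for integral points on y^2 = x^3 + a*x + b
    with |x| <= bound. Illustrative only: exhaustive search cannot
    decide anything about the full set of rational points."""
    points = []
    for x in range(-bound, bound + 1):
        rhs = x**3 + a * x + b
        if rhs < 0:
            continue
        y = round(rhs ** 0.5)
        if y * y == rhs:          # integer square check
            points.append((x, y))
            if y != 0:
                points.append((x, -y))
    return points

# Fermat's classical example: the only integral points on
# y^2 = x^3 - 2 are (3, +/-5).
print(small_integral_points(0, -2, 100))  # [(3, 5), (3, -5)]
```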

In this talk we explore continuous analogues of matrix factorizations. The analogues we develop involve bivariate functions, quasimatrices (matrices whose columns are 1D functions), and a notion of triangularity in the continuous setting. We also describe why direct matrix algorithms must become iterative, pivoted algorithms when applied to functions. New applications arise for function factorizations because of the underlying assumption of continuity. One application is central to Chebfun2.
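A discrete sketch of the iterative-with-pivoting idea: run Gaussian elimination with complete pivoting on a sampled bivariate function, subtracting one rank-1 "cross" per step until the residual is negligible (the grid, tolerance, and test function below are my assumptions, and the continuous algorithms work on functions rather than samples):

```python
import numpy as np

def low_rank_by_pivoted_ge(A, tol=1e-10, max_rank=20):
    """Iterative Gaussian elimination with complete pivoting:
    at each step subtract the rank-1 cross through the largest
    remaining entry of the residual."""
    R = A.copy()
    cols, rows = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if abs(R[i, j]) < tol:
            break                      # residual is negligible
        cols.append(R[:, j] / R[i, j])
        rows.append(R[i, :].copy())
        R = R - np.outer(cols[-1], rows[-1])
    return np.array(cols).T, np.array(rows)

# Sample a smooth, numerically low-rank function f(x, y) = cos(x*y):
x = np.linspace(-1, 1, 200)
y = np.linspace(-1, 1, 200)
F = np.cos(np.outer(y, x))

C, Rr = low_rank_by_pivoted_ge(F)
err = np.max(np.abs(F - C @ Rr))
print(C.shape[1], err)  # a small rank, error at the tolerance level
```

Because cos(xy) is smooth, only a handful of pivoting steps are needed before the residual falls below the tolerance.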

Crouzeix's conjecture is an exasperating problem of linear algebra that has been open since 2004: the norm of p(A) is bounded by twice the maximum value of p on the field of values of A, where A is a square matrix and p is a polynomial (or, more generally, an analytic function). I'll say a few words about the conjecture and show Pearcy's beautiful 1966 proof of a special case, based on a vector-valued barycentric interpolation formula.
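The conjectured bound is easy to probe numerically. The sketch below approximates the boundary of the field of values by rotating A and taking the top eigenvector of the Hermitian part (Johnson's algorithm), then checks the bound for one random matrix and one arbitrary polynomial of my choosing (a single instance, of course, proves nothing):

```python
import numpy as np

def fov_boundary(A, n_angles=360):
    """Approximate boundary points of the field of values W(A):
    for each rotation angle, the top eigenvector of the Hermitian
    part of exp(i*theta)*A yields a boundary point v* A v."""
    pts = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        _, V = np.linalg.eigh(H)          # eigenvalues ascending
        v = V[:, -1]
        pts.append(v.conj() @ A @ v)
    return np.array(pts)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)).astype(complex)

# Test polynomial p(z) = z^3 - 2z + 1 (an arbitrary choice):
pA = np.linalg.matrix_power(A, 3) - 2 * A + np.eye(5)
lhs = np.linalg.norm(pA, 2)               # ||p(A)||
z = fov_boundary(A)
rhs = 2 * np.max(np.abs(z**3 - 2 * z + 1))  # 2 * max |p| on W(A)
print(lhs <= rhs)
```

(The constant 2 is only conjectured; what is proved is 1 + sqrt(2), due to Crouzeix and Palencia.)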

Hessians of functionals of PDE solutions have important applications in PDE-constrained optimisation (Newton methods) and uncertainty quantification (for accelerating high-dimensional Bayesian inference). With current techniques, a typical cost for one Hessian-vector product is 4-11 times the cost of the forward PDE solve: such high costs generally make their use in large-scale computations infeasible, as a Hessian solve or eigendecomposition would cost hundreds of PDE solves.

In this talk, we demonstrate that it is possible to exploit the common structure of the adjoint, tangent linear and second-order adjoint equations to greatly accelerate the computation of Hessian-vector products, by trading a large amount of computation for a large amount of storage. In some cases of practical interest, the cost of a Hessian-vector product is reduced to a small fraction of the forward solve, making it feasible to employ sophisticated algorithms which depend on them.
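For intuition on what a Hessian-vector product is (quite apart from the adjoint machinery of the talk), here is the most naive recipe: difference the gradient along the direction v. Everything here is an illustrative toy of my own, with finite differences standing in for the adjoint and tangent linear solves:

```python
import numpy as np

def grad(f, u, eps=1e-6):
    """Central-difference gradient of a scalar function f at u
    (a toy stand-in for an adjoint solve)."""
    g = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u)
        e[i] = eps
        g[i] = (f(u + e) - f(u - e)) / (2 * eps)
    return g

def hess_vec(f, u, v, eps=1e-5):
    """Hessian-vector product H(u) v by differencing the gradient
    along v: H v ~ (g(u + eps v) - g(u - eps v)) / (2 eps)."""
    return (grad(f, u + eps * v) - grad(f, u - eps * v)) / (2 * eps)

# Quadratic functional f(u) = 0.5 u^T M u, whose Hessian is exactly M:
M = np.array([[4.0, 1.0], [1.0, 3.0]])
f = lambda u: 0.5 * u @ M @ u
u = np.array([1.0, -2.0])
v = np.array([0.5, 1.0])
print(hess_vec(f, u, v))  # close to M @ v = [3.0, 3.5]
```

Note the cost: each `hess_vec` call here triggers two gradient evaluations, each of which costs many function evaluations; this is exactly the kind of multiple-of-the-forward-solve cost the talk's storage-for-computation trade seeks to reduce.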

We propose a new algorithm for the approximate solution of large-scale, high-dimensional tensor-structured linear systems. It can be applied to high-dimensional differential equations that admit a low-parametric approximation of the multilevel matrix, right-hand side and solution in a tensor product format. We apply the standard one-site tensor optimisation algorithm (ALS), but expand the tensor manifolds using classical iterative schemes (e.g. steepest descent). We obtain a rank-adaptive algorithm with a theoretical convergence estimate no worse than that of steepest descent, and fast practical convergence, comparable to or even better than that of the more expensive two-site optimisation algorithm (DMRG).
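ALS is easiest to see in its simplest matrix incarnation: fit A by a product U V^T by alternately solving two linear least-squares problems, one factor at a time. The talk's setting is tensor trains with linear systems rather than plain approximation, so the following is only a hedged toy of the "one-site" idea (ranks, sweeps and the test matrix are my choices):

```python
import numpy as np

def als_low_rank(A, rank, n_sweeps=20):
    """Alternating least squares for A ~ U @ V.T: fix V, solve the
    least-squares problem for U; then fix U, solve for V. A matrix
    toy for the one-site optimisation the tensor algorithms use."""
    m, n = A.shape
    rng = np.random.default_rng(1)
    V = rng.standard_normal((n, rank))
    for _ in range(n_sweeps):
        # min_U ||A - U V^T||_F  <=>  lstsq(V, A.T) gives U^T
        U = np.linalg.lstsq(V, A.T, rcond=None)[0].T
        # min_V ||A - U V^T||_F  <=>  lstsq(U, A) gives V^T
        V = np.linalg.lstsq(U, A, rcond=None)[0].T
    return U, V

# An exactly rank-2 matrix is recovered (up to rounding) with rank=2:
B = (np.outer([1.0, 2, 3], [1.0, 0, 1, 2])
     + np.outer([0.0, 1, -1], [2.0, 1, 0, 1]))
U, V = als_low_rank(B, 2)
print(np.linalg.norm(B - U @ V.T))  # essentially zero
```

Because each sweep fixes one factor, every subproblem is linear; the rank-adaptivity of the talk comes from enriching the factors with steepest-descent directions between such sweeps.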

The method is successfully applied to a high-dimensional problem of quantum chemistry, namely the NMR simulation of a large peptide.

This is joint work with S. Dolgov (Max Planck Institute, Leipzig, Germany), supported by RFBR and EPSRC grants.

Keywords: high-dimensional problems, tensor train format, ALS, DMRG, steepest descent, convergence rate, superfast algorithms, NMR.

We study a system of partial differential equations describing the steady flow of an incompressible generalized Newtonian fluid in which the Cauchy stress depends on concentration. Namely, we consider a coupled system of the generalized Navier-Stokes equations (viscosity of power-law type with a concentration-dependent power index) and a convection-diffusion equation with non-linear diffusivity. We focus on the existence analysis of a weak solution for a certain class of models by using a generalization of the monotone operator theory that fits into the framework of generalized Sobolev spaces with variable exponent (the class of Sobolev-Orlicz spaces). These results are then adapted to a suitable FEM approximation, for which the main tool of proof is a generalization of the Lipschitz approximation method.
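A representative constitutive relation in this class (an illustrative choice on my part, not necessarily the exact model of the talk) is a power-law viscosity whose exponent varies with concentration:

```latex
\[
  \mathbb{S}(c, \mathbf{D}u)
  \;=\; \nu_0 \,\bigl(\kappa + |\mathbf{D}u|^2\bigr)^{\frac{p(c)-2}{2}}
  \mathbf{D}u,
  \qquad \nu_0 > 0,\ \kappa \ge 0,
\]
```

where Du is the symmetric velocity gradient and p(c) is the concentration-dependent power-law index; it is precisely this dependence of the exponent on the unknown c that forces the analysis out of classical Sobolev spaces into the variable-exponent (Sobolev-Orlicz) framework.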