Quadrature is the term for the numerical evaluation of integrals. It's a beautiful subject because it's so accessible, yet full of conceptual surprises and challenges. This talk will review ten of these, with plenty of history and numerical demonstrations. Some are old if not well known, some are new, and two are subjects of my current research.

# Past Computational Mathematics and Applications Seminar

Partial differential equations with more than three coordinates arise naturally if the model features certain kinds of stochasticity. Typical examples are the Schrödinger, Fokker-Planck and master equations in quantum mechanics or cell biology, as well as in uncertainty quantification.

The principal difficulty of a straightforward numerical solution of such equations is the 'curse of dimensionality': the storage cost of the discrete solution grows exponentially with the number of coordinates (dimensions).
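To make the exponential growth concrete (illustrative numbers, not taken from the talk): even a modest grid exceeds any conceivable memory once the dimension grows.

```python
# Storage for a d-dimensional tensor with n grid points per coordinate is n**d.
# Illustrative numbers, not from the talk:
n, d = 100, 10
entries = n ** d             # 10**20 grid values
bytes_needed = 8 * entries   # double precision
print(bytes_needed / 10**18) # 800.0 exabytes
```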

One way to reduce the complexity is the low-rank separation of variables. One can see all discrete data (such as the solution) as multi-index arrays, or tensors. These large tensors are never stored directly.

We approximate them by a sum of products of smaller factors, each carrying only one of the original variables. I will present one of the simplest yet most powerful such representations, the Tensor Train (TT) decomposition. The TT decomposition generalizes the approximation of a given matrix by a low-rank matrix to the tensor case. Many interesting models have been found to admit such approximations with a significant reduction in storage demands.
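As a minimal illustration of the idea (a sketch with names of my own choosing, not the talk's implementation), the TT cores of a tensor can be computed by sequential truncated SVDs of its unfoldings:

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """TT-SVD sketch: split a d-way tensor into a train of 3-way cores
    by sequential truncated SVDs, with each TT rank capped at max_rank.
    Storage becomes sum_k r_{k-1}*n_k*r_k entries instead of prod_k n_k."""
    dims = tensor.shape
    cores = []
    r_prev = 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # carry the remainder to the next unfolding
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(-1, 0))
    return out.reshape([c.shape[1] for c in cores])
```

For a tensor that is exactly a product of one-dimensional factors, rank-1 truncation reproduces it to machine precision.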

A workhorse approach to computations with the TT and other tensor product decompositions is the alternating optimization of the factors. Its simplest realization is, however, prone to convergence issues.

I will show some of the recent improvements that are indispensable for truly high-dimensional problems, or for the solution of linear systems with non-symmetric or indefinite matrices.

To face the advent of multicore processors and the ever-increasing complexity of hardware architectures, programming models based on DAG parallelism have regained popularity in the high-performance scientific computing community. Modern runtime systems offer a programming interface that complies with this paradigm and powerful engines for scheduling the tasks into which the application is decomposed. These tools have already proved their effectiveness on a number of dense linear algebra applications.

In this talk we present the design of task-based sparse direct solvers on top of runtime systems. In the context of the qr_mumps solver, we demonstrate the usability and effectiveness of our approach with the implementation of a sparse multifrontal factorization based on a Sequential Task Flow parallel programming model. Using this programming model, we developed features such as the integration of dense 2D communication-avoiding algorithms in the multifrontal method, allowing for better scalability compared to the original approach used in qr_mumps.

Following this approach, we move to heterogeneous architectures where task granularity and scheduling strategies are critical to achieve performance. We present, for the multifrontal method, a hierarchical strategy for data partitioning and a scheduling algorithm capable of handling the heterogeneity of resources. Finally we introduce a memory-aware algorithm to control the memory behavior of our solver and show, in the context of multicore architectures, an important reduction of the memory footprint for the multifrontal QR factorization with a small impact on performance.

Functions are usually approximated numerically in a basis, a non-redundant and complete set of functions that spans a certain space. In this talk we highlight a number of benefits of using overcomplete sets, in particular using the more general notion of a "frame". The main benefit is that frames are easily constructed even for functions of several variables on domains with irregular shapes. On the other hand, allowing for possible linear dependencies naturally leads to ill-conditioning of approximation algorithms, and the ill-conditioning is potentially severe. We give some useful examples of frames, and we first address the numerical stability of best approximations in a frame. Next, we briefly describe special point sets in which interpolation turns out to be stable. Finally, we review so-called Fourier extensions and an efficient algorithm to approximate functions with spectral accuracy on domains without structure.
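As a small illustration of the last point (a sketch with illustrative parameters, not the efficient algorithm of the talk): a Fourier extension approximates a non-periodic function on [-1, 1] by a Fourier series that is periodic on the larger interval [-2, 2]. The resulting redundant least-squares system is ill-conditioned, and a regularizing singular-value cutoff keeps the fit stable:

```python
import numpy as np

def fourier_extension(f, n_modes=20, n_pts=200):
    """Fit f on [-1, 1] with Fourier modes exp(i*pi*k*x/2), periodic on [-2, 2].
    The rcond cutoff in lstsq discards tiny singular values of the
    ill-conditioned frame matrix (truncated-SVD regularization)."""
    x = np.linspace(-1.0, 1.0, n_pts)              # collocation points
    k = np.arange(-n_modes, n_modes + 1)
    A = np.exp(1j * np.pi * np.outer(x, k) / 2.0)  # redundant Fourier frame
    coeffs, *_ = np.linalg.lstsq(A, f(x), rcond=1e-12)
    return x, A @ coeffs
```

Despite the redundancy, the fit of a smooth non-periodic function such as exp(x) is accurate to many digits on the domain of interest.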

When assigned the task of extracting information from given image data, the first challenge one faces is the derivation of a faithful model for both the information and the data. Such a model can be determined by a-priori knowledge about the image (the information), the data, and their relation to each other. This knowledge comes either from our understanding of the type of images we want to reconstruct and of the physics behind the acquisition of the data, or from parametric models that we strive to learn from the data itself. The common question arises: how can we customise our model choice for a particular application? Or better, how can we make our model adaptive to the given data?

Starting from the first modelling strategy, this talk will lead us from nonlinear diffusion equations and subdifferential inclusions of total variation type functionals, as the most successful image model today, to non-smooth second- and third-order variational models, with data models for Gaussian and Poisson distributed data as well as impulse noise. These models exhibit solution-dependent adaptivity in the form of nonlinearities or non-smooth terms in the PDE or the variational problem, respectively. Applications to image denoising, inpainting and surface reconstruction are given. After a critical discussion of these different image and data models, we will turn towards the second modelling strategy and propose to combine it with the first one using a PDE-constrained optimisation method that customises a parametrised form of the model by learning from examples. In particular, we will consider optimal parameter derivation for total variation denoising with multiple noise distributions, and optimising total generalised variation regularisation for its application in photography.
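To make the first modelling strategy concrete (a toy sketch with illustrative parameters; the talk's models are genuinely non-smooth and of higher order, whereas this uses a smoothed total variation for simplicity), gradient descent on a regularized ROF-type denoising energy looks as follows:

```python
import numpy as np

def tv_energy(u, f, lam=0.1, eps=0.1):
    """Smoothed ROF energy: 0.5*||u-f||^2 + lam*sum sqrt(|grad u|^2 + eps^2),
    with periodic forward differences."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    return 0.5 * np.sum((u - f) ** 2) + lam * np.sum(np.sqrt(ux**2 + uy**2 + eps**2))

def tv_denoise(f, lam=0.1, eps=0.1, tau=0.1, n_iter=500):
    """Gradient descent on the smoothed energy above (a sketch only)."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # discrete divergence: negative adjoint of the forward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - tau * ((u - f) - lam * div)
    return u
```

The smoothing parameter eps is what the non-smooth models of the talk avoid; here it merely keeps the gradient well-defined so that plain descent decreases the energy.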

Equations of quantum mechanics in the semiclassical regime present an enduring challenge for numerical analysts, because their solution is highly oscillatory and evolves on two scales. Standard computational approaches to the semiclassical Schrödinger equation do not allow for long time integration as required, for example, in quantum control of atoms by short laser bursts. This has motivated our approach of asymptotic splittings. Combining techniques from Lie-algebra theory and numerical algebra, we present a new computational paradigm of symmetric Zassenhaus splittings, which lends itself to a very precise discretisation over long time intervals, at very little cost. We will illustrate our talk with examples of quantum phenomena – quantum tunnelling and quantum scattering – and their computation and, time allowing, discuss an extension of this methodology to time-dependent semiclassical systems using Magnus expansions.
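For orientation (a schematic only; the precise exponents of the talk's scheme differ), splittings approximate the flow of a sum of operators $A + B$. The classical symmetric example is the Strang splitting,

$$ e^{h(A+B)} = e^{\frac{h}{2}A}\, e^{hB}\, e^{\frac{h}{2}A} + \mathcal{O}(h^3), $$

and the symmetric Zassenhaus approach generalises this fixed pattern to a palindromic product

$$ e^{h(A+B)} \approx e^{\frac{1}{2}W_0}\, e^{\frac{1}{2}W_1} \cdots e^{W_s} \cdots e^{\frac{1}{2}W_1}\, e^{\frac{1}{2}W_0}, $$

whose exponents $W_k$ are derived by Lie-algebraic computation and, in the semiclassical setting, rapidly decrease in magnitude with the small parameter, which is the source of the high accuracy at little cost.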

This talk focuses on the direct search method, arguably one of the simplest optimization algorithms. The algorithm minimizes an objective function by iteratively evaluating it along a number of (polling) directions, which are typically taken from so-called positive spanning sets. It does not use derivatives.
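As an illustration of the basic mechanism (a minimal sketch with illustrative parameters, not the variants analysed in the talk), direct search with the coordinate directions and their negatives as the positive spanning set reads:

```python
import numpy as np

def direct_search(f, x0, alpha=1.0, tol=1e-8, max_iter=10000):
    """Minimal direct search: poll the 2n directions +/- e_i; accept any
    simple decrease, otherwise halve the step size. No derivatives used."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])  # positive spanning set
    fx = f(x)
    for _ in range(max_iter):
        if alpha < tol:
            break
        improved = False
        for d in directions:
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx:          # simple decrease: accept and re-poll
                x, fx = trial, ft
                improved = True
                break
        if not improved:
            alpha *= 0.5         # unsuccessful poll: shrink the step
    return x, fx
```

On a smooth convex quadratic this converges to the minimizer; the complexity results of the talk quantify how many function evaluations such a loop needs in the worst case.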

We first introduce the worst-case complexity theory of direct search, and discuss how to choose the positive spanning set to minimize the complexity bound. The discussion leads us to a long-standing open problem in discrete geometry. A recent result on this problem enables us to establish the optimal order for the worst-case complexity of direct search.

We then show how to achieve an even lower complexity bound by using random polling directions. It turns out that polling along two random directions at each iteration is sufficient to guarantee the convergence of direct search in any dimension, and the resulting algorithm enjoys lower complexity both in theory and in practice.

The last part of the talk is devoted to direct search based on inaccurate function values. We address three questions:

i) What kind of solution can we obtain by direct search if the function values are inaccurate?

ii) What is the worst-case complexity of attaining such a solution?

iii) Given the inaccuracy in the function values, when should we stop the algorithm in order to guarantee the quality of the solution and also avoid “over-optimization”?

This talk is based on joint work with F. Delbos, M. Dodangeh, S. Gratton, B. Pauwels, C. W. Royer, and L. N. Vicente.

I will review some recent work on the problem of reliable automatic detection of blow-up behaviour for nonlinear parabolic PDEs. The adaptive algorithms developed are based on rigorous conditional a posteriori error bounds. The use of space-time adaptivity is crucial in making the problem computationally tractable. The results presented are applicable to quite general spatial operators, rendering the approach potentially useful in informing respective PDE theory. The new adaptive algorithm is shown to accurately estimate the blow-up time of a number of problems, including ones exhibiting regional blow-up.