A posteriori error estimators are a key tool for assessing the quality of a given finite element approximation to an unknown PDE solution, as well as for driving adaptive techniques. Typically, the estimators are equivalent to the error up to an additive term, the so-called oscillation. It is a common belief that this is the price for the 'computability' of the estimator and that the oscillation is of higher order than the error. Cohen, DeVore, and Nochetto [CoDeNo:2012], however, presented an example in which the error vanishes at the generic optimal rate while the oscillation does not. Interestingly, in this example the local $H^{-1}$-norms are assumed to be computed exactly, so the computability of the estimator cannot be the reason for the asymptotic overestimation. In particular, this proves both beliefs wrong in general. In this talk, we present a new approach to a posteriori error analysis in which the oscillation is dominated by the error. The crucial step is a new splitting of the data into oscillation and oscillation-free data. Moreover, the estimator is computable provided the discrete linear system can essentially be assembled exactly.
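
For orientation, in the classical residual-based setting for the Poisson problem $-\Delta u = f$ with a piecewise affine finite element approximation $u_h$ (a standard textbook formulation, not the new splitting proposed in the talk), the element estimator and oscillation read

$$
\eta^2(T) \;=\; h_T^2\,\|f\|_{L^2(T)}^2 \;+\; h_T\,\big\|[\![\nabla u_h\cdot n]\!]\big\|_{L^2(\partial T\setminus\partial\Omega)}^2,
\qquad
\mathrm{osc}^2(T) \;=\; h_T^2\,\|f-f_T\|_{L^2(T)}^2,
$$

where $f_T$ is a polynomial approximation (e.g., the $L^2$-projection) of $f$ on $T$. The classical upper bound controls the error by $\big(\sum_T \eta^2(T)\big)^{1/2}$, while the lower bound holds only up to $\big(\sum_T \mathrm{osc}^2(T)\big)^{1/2}$.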

# Past Computational Mathematics and Applications Seminar

The FEniCS system [1] allows the description of finite element discretisations of partial differential equations using a high-level syntax, and the automated conversion of these representations into working code via code generation. In previous work, described in [2], the high-level representation is processed automatically to derive discrete tangent-linear and adjoint models. Processing the model code at this high level eases the technical difficulty associated with the management of data in adjoint calculations, allowing the use of optimal data management strategies [3].
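
For a flavour of the high-level syntax, here is a minimal sketch of a Poisson solve in the legacy DOLFIN Python interface of FEniCS; the mesh, function space, and data are illustrative choices, not taken from the talk.

```python
from dolfin import *

# Poisson problem -div(grad(u)) = f on the unit square, u = 0 on the boundary.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "Lagrange", 1)

u = TrialFunction(V)
v = TestFunction(V)
f = Expression("sin(pi*x[0])*sin(pi*x[1])", degree=2)

# The variational form is written essentially as it appears on paper;
# FEniCS generates and compiles the assembly code automatically.
a = inner(grad(u), grad(v)) * dx
L = f * v * dx

bc = DirichletBC(V, Constant(0.0), "on_boundary")
u_h = Function(V)
solve(a == L, u_h, bc)
```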

This previous methodology is extended to enable the calculation of higher-order partial differential equation constrained derivative information. The key additional step is to treat tangent-linear equations on an equal footing with the originating forward equations, and in particular to treat them in a manner such that they can themselves be further processed, enabling the derivation of associated adjoint information and of higher-order tangent-linear equations, to arbitrary order. This enables the calculation of higher-order derivative information -- specifically the contraction of a Kth-order derivative against (K - 1) directions -- while still making use of optimal data management strategies. Specific applications making use of Hessian information associated with models written using the FEniCS system are presented.
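
The key observation can be stated compactly. If the forward model is $F(u, m) = 0$ with solution $u(m)$ (generic notation for illustration, not taken from the talk), then differentiating in a direction $\delta m$ gives the tangent-linear system

$$
\frac{\partial F}{\partial u}\,\delta u \;=\; -\,\frac{\partial F}{\partial m}\,\delta m,
$$

which is itself a well-defined forward problem for $\delta u$. Recording it in the same high-level form as the original equation means it can in turn be differentiated, yielding adjoints of tangent-linear systems (and hence Hessian actions for $K = 2$) and higher-order tangent-linear systems by recursion.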

[1] "Automated solution of differential equations by the finite element method: The FEniCS book", A. Logg, K.-A. Mardal, and G. N. Wells (editors), Springer, 2012

[2] P. E. Farrell, D. A. Ham, S. W. Funke, and M. E. Rognes, "Automated derivation of the adjoint of high-level transient finite element programs", SIAM Journal on Scientific Computing 35(4), C369--C393, 2013

[3] A. Griewank and A. Walther, "Algorithm 799: Revolve: An implementation of checkpointing for the reverse or adjoint mode of computational differentiation", ACM Transactions on Mathematical Software 26(1), 19--45, 2000


Ill-posed problems arise often in the context of scientific applications in which one cannot directly observe the object or quantity of interest. However, indirect observations or measurements can be made, and the observable data $y$ can be represented as the sought quantity $x$ acted upon by an operator $\mathcal{A}$. Thus we want to solve the operator equation \begin{equation} \mathcal{A} x = y, \tag{1} \end{equation} often formulated in some Hilbert space $H$ with $\mathcal{A}:H\rightarrow H$ and $x,y\in H$. The difficulty is that these problems are generally ill-posed, and thus $x$ does not depend continuously on the right-hand side. As $y$ is often derived from measurements, one has instead a perturbed $y^{\delta}$ such that $\left\|y - y^{\delta}\right\|_{H}<\delta$. Due to the ill-posedness, solving (1) with $y^{\delta}$ is therefore not guaranteed to produce a meaningful solution. One class of techniques for treating such problems is Tikhonov regularization: in reconstructing the solution, one balances fidelity to the data against the size of some functional of the reconstruction (e.g., its norm) in order to mitigate the effects of the ill-posedness. For some $\lambda>0$, we solve \begin{equation} x_{\lambda} = \operatorname{argmin}_{\widetilde{x}\in H}\left\lbrace \left\|y^{\delta} - \mathcal{A}\widetilde{x} \right\|_{H}^{2} + \lambda \left\|\widetilde{x}\right\|_{H}^{2} \right\rbrace. \end{equation} In this talk, we discuss some new strategies for treating *discretized* versions of this problem. Here, we consider a discretized, finite-dimensional version of (1), \begin{equation} Ax = b \mbox{ with } A\in \mathbb{R}^{n\times n}\mbox{ and } b\in\mathbb{R}^{n}, \end{equation} which inherits a discrete version of the ill-conditioning of (1). We propose methods built on top of the Arnoldi-Tikhonov method of Lewis and Reichel, whereby one builds the Krylov subspace

\begin{equation}
\mathcal{K}_{j}(A, w) = \operatorname{span}\left\{ w,\, Aw,\, A^{2}w,\, \ldots,\, A^{j-1}w \right\} \quad \mbox{where } w\in\left\{ b,\, Ab\right\},
\end{equation}

and solves the discretized Tikhonov minimization problem projected onto that subspace. We propose to extend this strategy to the setting of augmented Krylov subspace methods: we project onto a sum of subspaces of the form $\mathcal{U} + \mathcal{K}_{j}$, where $\mathcal{U}$ is a fixed subspace and $\mathcal{K}_{j}$ is a Krylov subspace. It turns out that there are multiple ways to do this, leading to different algorithms. We will explain how these different methods arise mathematically and demonstrate their effectiveness on a few example problems. Along the way, some new mathematical properties of the Arnoldi-Tikhonov method are also proven.
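
For concreteness, here is a minimal NumPy sketch of the basic (non-augmented) Arnoldi-Tikhonov step with $w = b$ and a fixed regularization parameter; the function names are illustrative, and the augmented variants discussed in the talk are not shown.

```python
import numpy as np

def arnoldi(A, w, j):
    """Arnoldi process: V has orthonormal columns spanning K_{j+1}(A, w)
    and H is (j+1) x j upper Hessenberg with A V[:, :j] = V H."""
    n = w.shape[0]
    V = np.zeros((n, j + 1))
    H = np.zeros((j + 1, j))
    V[:, 0] = w / np.linalg.norm(w)
    for k in range(j):
        v = A @ V[:, k]
        for i in range(k + 1):               # modified Gram-Schmidt
            H[i, k] = V[:, i] @ v
            v = v - H[i, k] * V[:, i]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] < 1e-14:              # lucky breakdown
            return V[:, : k + 1], H[: k + 1, :k]
        V[:, k + 1] = v / H[k + 1, k]
    return V, H

def arnoldi_tikhonov(A, b, j, lam):
    """Minimize ||b - A x||^2 + lam ||x||^2 over x in K_j(A, b)."""
    V, H = arnoldi(A, b, j)
    rows, cols = H.shape
    # Since b = ||b|| V[:, 0] and A V[:, :cols] = V H, the projected
    # problem is  min || H y - ||b|| e_1 ||^2 + lam ||y||^2,
    # solved here as a stacked least-squares problem.
    M = np.vstack([H, np.sqrt(lam) * np.eye(cols)])
    rhs = np.zeros(rows + cols)
    rhs[0] = np.linalg.norm(b)
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return V[:, :cols] @ y
```

In practice $\lambda$ would be chosen adaptively, e.g., via the discrepancy principle using the noise level $\delta$.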

Since the legendary 1972 encounter of H. Montgomery and F. Dyson at tea time in Princeton, a statistical correspondence of the non-trivial zeros of the Riemann zeta function with eigenvalues of high-dimensional random matrices has emerged. Surrounded by many deep conjectures, there is a striking analogy to the energy levels of a quantum billiard system with chaotic dynamics. Thanks to extensive calculation of Riemann zeros by A. Odlyzko, overwhelming numerical evidence has been found for the quantum analogy. The statistical accuracy provided by an enormous dataset of more than one billion zeros reveals distinctive finite-size effects. Using the physical analogy, a precise prediction of these effects was recently accomplished through the numerical evaluation of operator determinants and their perturbation series (joint work with P. Forrester and A. Mays, Melbourne).
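
As a back-of-the-envelope illustration of the random-matrix side of this correspondence (a standard GUE sampling experiment, not the operator-determinant computations of the talk): suitably unfolded spacings of high Riemann zeros are conjectured to follow GUE spacing statistics, whose bulk behaviour is well approximated by the Wigner surmise $p(s) = \tfrac{32}{\pi^{2}} s^{2} e^{-4 s^{2}/\pi}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def gue_spacings(n=400, trials=100):
    """Normalized nearest-neighbour eigenvalue spacings from the GUE bulk."""
    out = []
    for _ in range(trials):
        X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        H = (X + X.conj().T) / 2                # Hermitian, GUE up to scaling
        ev = np.linalg.eigvalsh(H)
        s = np.diff(ev[n // 4 : 3 * n // 4])    # stay away from spectrum edges
        out.append(s / s.mean())                # unfold to unit mean spacing
    return np.concatenate(out)

s = gue_spacings()
hist, _ = np.histogram(s, bins=np.linspace(0, 3, 13), density=True)
mids = np.linspace(0.125, 2.875, 12)
wigner = (32 / np.pi**2) * mids**2 * np.exp(-4 * mids**2 / np.pi)
for m, h, w in zip(mids, hist, wigner):
    print(f"s = {m:.3f}   empirical {h:.3f}   surmise {w:.3f}")
```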

The present work concerns the approximation of the solution map associated to the parametric Helmholtz boundary value problem, i.e., the map which associates to each (real) wavenumber belonging to a given interval of interest the corresponding solution of the Helmholtz equation. We introduce a single-point Least Squares (LS) rational Padé-type approximation technique applicable to any meromorphic Hilbert space-valued univariate map, and we prove the uniform convergence of the Padé approximation error on any compact subset of the interval of interest that excludes any pole. We also present a simplified and more efficient version, named Fast LS-Padé, applicable to Helmholtz-type parametric equations with normal operators.

The LS-Padé techniques are then employed to approximate the frequency response map associated to various parametric time-harmonic wave problems, namely, a transmission/reflection problem, a scattering problem, and a problem in the high-frequency regime. In all cases we establish the meromorphy of the frequency response map. The Helmholtz equation with stochastic wavenumber is also considered. In particular, for Lipschitz functionals of the solution and their corresponding probability measures, we establish weak convergence of the measure derived from the LS-Padé approximant to the true one. Two-dimensional numerical tests are performed, which confirm the effectiveness of the approximation method.

Joint work with: Francesca Bonizzoni and Ilaria Perugia (Uni. Vienna), Davide Pradovera (EPFL)
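
By way of a generic illustration of single-point Padé-type approximation of a meromorphic function (classical scalar Padé built from Taylor coefficients via SciPy; this is not the Hilbert space-valued LS-Padé method of the talk):

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of tan(z) at z = 0 up to degree 7:
# tan z = z + z^3/3 + 2 z^5/15 + 17 z^7/315 + ...
taylor = [0.0, 1.0, 0.0, 1/3, 0.0, 2/15, 0.0, 17/315]

# Classical [4/3] Pade approximant from the Taylor data alone.
p, q = pade(taylor, 3)

z = 1.2   # inside the region of meromorphy, away from the pole at pi/2
print(p(z) / q(z), np.tan(z))   # approximant vs. exact value

# Roots of the denominator approximate the nearest poles of tan at +/- pi/2
# (possibly alongside spurious roots, a well-known Pade phenomenon).
print(q.roots)
```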

Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that balance these criteria. This talk offers an invitation to the field of matrix concentration inequalities and their applications.
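
A representative member of this family, stated for orientation (the matrix Bernstein inequality in one standard form; see, e.g., Tropp's user-friendly matrix concentration bounds): if $X_1,\ldots,X_n$ are independent, centred, self-adjoint random $d\times d$ matrices with $\|X_k\|\le L$ almost surely and $\sigma^{2} = \big\|\sum_{k}\mathbb{E}[X_k^{2}]\big\|$, then

$$
\mathbb{P}\left\{ \left\| \sum_{k=1}^{n} X_k \right\| \ge t \right\}
\;\le\; 2d \, \exp\!\left( \frac{-t^{2}/2}{\sigma^{2} + L t / 3} \right).
$$

The dimensional factor $d$ is the characteristic feature distinguishing such bounds from their scalar counterparts.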

We first consider multilevel Monte Carlo and stochastic collocation methods for determining statistical information about an output of interest that depends on the solution of a PDE with inputs that depend on random parameters. In our context, these methods connect a hierarchy of spatial grids to the amount of sampling done for a given grid, resulting in dramatic acceleration in the convergence of approximations. We then consider multifidelity methods for the same purpose which feature a variety of models that have different fidelities. For example, we could have coarser grid discretizations, reduced-order models, simplified physics, surrogates such as interpolants, and, in principle, even experimental data. No assumptions are made about the fidelity of the models relative to the “truth” model of interest so that unlike multilevel methods, there is no a priori model hierarchy available. However, our approach can still greatly accelerate the convergence of approximations.
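
The multilevel idea rests on the telescoping identity $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L}\mathbb{E}[P_\ell - P_{\ell-1}]$: the level corrections have small variance and may be sampled far less often than the coarse term. A minimal generic sketch follows (the toy level function and the sample allocation are illustrative placeholders, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlmc(sample_diff, n_per_level):
    """Multilevel Monte Carlo estimate of E[P_L].

    sample_diff(l) returns one sample of Y_l, where Y_0 = P_0 and
    Y_l = P_l - P_{l-1} is computed from the SAME random input on two
    consecutive levels (the coupling that makes Var[Y_l] small).
    """
    return sum(np.mean([sample_diff(l) for _ in range(n)])
               for l, n in enumerate(n_per_level))

# Toy quantity of interest: E_u[ integral_0^1 exp(u x) dx ], u ~ U(0, 1),
# with "level l" the midpoint rule on 2**l cells standing in for a PDE grid.
def sample_diff(l):
    u = rng.random()
    def P(level):
        m = 2 ** level
        x = (np.arange(m) + 0.5) / m        # midpoint quadrature nodes
        return np.mean(np.exp(u * x))
    return P(l) if l == 0 else P(l) - P(l - 1)

# Geometrically decreasing sample counts across levels.
print(mlmc(sample_diff, n_per_level=[4000, 1000, 250, 60]))
```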

In the past few decades, power grids across the world have become dependent on markets that aim to efficiently match supply with demand at all times via a variety of pricing and auction mechanisms. These markets are based on models that capture interactions between producers, transmission and consumers. Energy producers typically maximize profits by optimally allocating and scheduling resources over time. A dynamic equilibrium aims to determine prices and dispatches that can be transmitted over the electricity grid to satisfy evolving consumer requirements for energy at different locations and times. Computation allows large scale practical implementations of socially optimal models to be solved as part of the market operation, and regulations can be imposed that aim to ensure competitive behaviour of market participants.
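
As a toy illustration of the socially optimal dispatch problem that underlies such markets (a two-generator, single-bus economic dispatch linear program; the offer data are invented for illustration and the formulation is a textbook simplification, not the talk's market models):

```python
import numpy as np
from scipy.optimize import linprog

# Least-cost dispatch: min cost @ g  s.t.  sum(g) == demand, 0 <= g <= cap.
cost = np.array([20.0, 50.0])       # $/MWh offers of two generators
cap = [(0.0, 80.0), (0.0, 100.0)]   # MW output bounds
demand = 120.0                      # MW load

def solve(d):
    return linprog(c=cost, A_eq=np.ones((1, 2)), b_eq=[d],
                   bounds=cap, method="highs")

res = solve(demand)
print("dispatch (MW):", res.x)      # cheap unit at capacity: [80, 40]

# The market-clearing price is the shadow price of the energy balance,
# estimated here by re-solving with one extra MW of demand.
print("price ($/MWh):", solve(demand + 1.0).fun - res.fun)   # 50.0
```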

Questions remain that will be outlined in this presentation.

Firstly, the recent explosion in the use of renewable supply such as wind, solar and hydro has led to increased volatility in this system. We demonstrate how risk can impose significant costs on the system that are not modelled in the context of socially optimal power system markets, and highlight the use of contracts to reduce or recover these costs. We also outline how battery storage can be used as an effective hedging instrument.

Secondly, how do we guarantee continued operation in rarely occurring situations and when failures occur, and how do we price this robustness?

Thirdly, how do we guarantee appropriate participant behaviour? Specifically, is it possible for participants to develop strategies that move the system to operating points that are not socially optimal?

Fourthly, how do we ensure enough transmission (and generator) capacity in the long term, and how do we recover the costs of this enhanced infrastructure?

Advances in manufacturing technologies, most prominently in additive manufacturing or 3D printing, are making it possible to fabricate highly optimised products with increasing geometric and hierarchical complexity. This talk will introduce our ongoing work on design optimisation that combines CAD-compatible geometry representations, multiresolution geometry processing techniques and immersed finite elements with classical shape and topology calculus. As example applications, the shape optimisation of mechanical structures and electromechanical components, and the topology optimisation of lattice-skin structures will be discussed.