Since the legendary 1972 tea-time encounter of H. Montgomery and F. Dyson in Princeton, a statistical correspondence has emerged between the non-trivial zeros of the Riemann zeta function and the eigenvalues of high-dimensional random matrices. Surrounded by many deep conjectures, there is a striking analogy to the energy levels of a quantum billiard system with chaotic dynamics. Thanks to extensive calculations of Riemann zeros by A. Odlyzko, overwhelming numerical evidence has been found for the quantum analogy. The statistical accuracy provided by an enormous dataset of more than one billion zeros reveals distinctive finite-size effects. Using the physical analogy, a precise prediction of these effects was recently accomplished through the numerical evaluation of operator determinants and their perturbation series (joint work with P. Forrester and A. Mays, Melbourne).

# Past Computational Mathematics and Applications Seminar

The present work concerns the approximation of the solution map associated to the parametric Helmholtz boundary value problem, i.e., the map which associates to each (real) wavenumber belonging to a given interval of interest the corresponding solution of the Helmholtz equation. We introduce a single-point Least Squares (LS) rational Padé-type approximation technique applicable to any meromorphic Hilbert space-valued univariate map, and we prove the uniform convergence of the Padé approximation error on any compact subset of the interval of interest that excludes any pole. We also present a simplified and more efficient version, named Fast LS-Padé, applicable to Helmholtz-type parametric equations with normal operators.
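
To make the single-point construction concrete, here is a minimal scalar sketch of a classical Padé approximant built from Taylor coefficients. It is an illustrative analogue only, not the LS, Hilbert space-valued variant introduced in the talk; the model function f(k) = 1/(2 − k) and all names are hypothetical.

```python
import numpy as np

def pade(c, m, n):
    """Classical [m/n] Pade approximant from Taylor coefficients c[0..m+n].

    Solves the linear system for the denominator b (with b[0] = 1), then
    recovers the numerator a by convolution.
    """
    # Denominator: sum_{j=0}^{n} b[j] * c[m+k-j] = 0 for k = 1..n
    C = np.array([[c[m + k - j] for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, rhs)))
    # Numerator: a[i] = sum_{j=0}^{min(i,n)} b[j] * c[i-j]
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

# Model map with a pole at k = 2: f(k) = 1/(2 - k), Taylor coefficients
# 2^{-(i+1)} at the expansion point k = 0
coeffs = [2.0 ** -(i + 1) for i in range(4)]
a, b = pade(coeffs, 2, 1)
pole = -b[0] / b[1]   # root of the linear denominator
```

Even this low-order approximant locates the pole at k = 2 exactly, which is the mechanism the convergence result exploits: rational approximants can "see" poles far outside the Taylor disc of convergence.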

The LS-Padé techniques are then employed to approximate the frequency response map associated with various parametric time-harmonic wave problems, namely a transmission/reflection problem, a scattering problem, and a problem in the high-frequency regime. In all cases we establish the meromorphy of the frequency response map. The Helmholtz equation with stochastic wavenumber is also considered. In particular, for Lipschitz functionals of the solution and their corresponding probability measures, we establish weak convergence of the measure derived from the LS-Padé approximant to the true one. Two-dimensional numerical tests are performed, which confirm the effectiveness of the approximation method.

Joint work with: Francesca Bonizzoni and Ilaria Perugia (Uni. Vienna), Davide Pradovera (EPFL)

Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that balance these criteria. This talk offers an invitation to the field of matrix concentration inequalities and their applications.
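
As a taste of what these inequalities control, the following sketch compares the empirical spectral norm of a random matrix series against a matrix Bernstein-type expectation bound in the commonly stated form E‖Σᵢ Xᵢ‖ ≤ √(2v log 2d) + (L/3) log 2d. The choice of summands (Rademacher-weighted rank-one projectors) is a hypothetical illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 50, 200, 100

# Independent centered Hermitian summands X_i = eps_i * u_i u_i^T with
# Rademacher signs eps_i and random unit vectors u_i.
norms = []
for _ in range(trials):
    U = rng.standard_normal((n, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    eps = rng.choice([-1.0, 1.0], size=n)
    Z = (U * eps[:, None]).T @ U          # sum_i eps_i u_i u_i^T
    norms.append(np.linalg.norm(Z, 2))

# Matrix Bernstein: E||Z|| <= sqrt(2 v log(2d)) + (L/3) log(2d),
# with v = ||sum_i E X_i^2|| = ||sum_i u_i u_i^T|| and L = max ||X_i|| = 1.
# v concentrates across draws, so one draw suffices for this illustration.
v = np.linalg.norm(U.T @ U, 2)
L = 1.0
bound = np.sqrt(2 * v * np.log(2 * d)) + L * np.log(2 * d) / 3
mean_norm = float(np.mean(norms))
```

The empirical average sits comfortably below the bound, which is sharp up to the logarithmic factor.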

We first consider multilevel Monte Carlo and stochastic collocation methods for determining statistical information about an output of interest that depends on the solution of a PDE with inputs that depend on random parameters. In our context, these methods connect a hierarchy of spatial grids to the amount of sampling done for a given grid, resulting in dramatic acceleration in the convergence of approximations. We then consider multifidelity methods for the same purpose which feature a variety of models that have different fidelities. For example, we could have coarser grid discretizations, reduced-order models, simplified physics, surrogates such as interpolants, and, in principle, even experimental data. No assumptions are made about the fidelity of the models relative to the “truth” model of interest so that unlike multilevel methods, there is no a priori model hierarchy available. However, our approach can still greatly accelerate the convergence of approximations.
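
A minimal sketch of the multilevel idea, with a one-dimensional quadrature standing in (hypothetically) for a PDE solve on a spatial grid: coarse levels absorb most of the sampling, while fine levels only correct the bias via the telescoping sum E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}].

```python
import numpy as np

rng = np.random.default_rng(1)

def P(z, level):
    """Quantity of interest at grid level `level`: trapezoid approximation
    of integral_0^1 sin(pi*z*x) dx with 2**level subintervals (a toy model
    standing in for a PDE solve on a spatial grid)."""
    x = np.linspace(0.0, 1.0, 2 ** level + 1)
    y = np.sin(np.pi * z * x)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Many cheap samples on coarse levels, few expensive ones on fine levels.
levels, N = [0, 1, 2, 3, 4], [4000, 1000, 250, 60, 15]
estimate = 0.0
for l, n in zip(levels, N):
    z = rng.uniform(0.0, 1.0, size=n)
    if l == 0:
        estimate += np.mean([P(zi, 0) for zi in z])
    else:
        estimate += np.mean([P(zi, l) - P(zi, l - 1) for zi in z])
```

Because the level corrections P_l − P_{l−1} have rapidly decaying variance, the sample counts can shrink geometrically with level, which is the source of the acceleration.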

In the past few decades, power grids across the world have become dependent on markets that aim to efficiently match supply with demand at all times via a variety of pricing and auction mechanisms. These markets are based on models that capture interactions between producers, transmission and consumers. Energy producers typically maximize profits by optimally allocating and scheduling resources over time. A dynamic equilibrium aims to determine prices and dispatches that can be transmitted over the electricity grid to satisfy evolving consumer requirements for energy at different locations and times. Computation allows large-scale practical implementations of socially optimal models to be solved as part of the market operation, and regulations can be imposed that aim to ensure competitive behaviour of market participants.
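
A deliberately minimal, hypothetical sketch of price formation by merit order (cheapest offers dispatched first, with the marginal unit setting the price); real markets add network constraints, unit commitment, ramping and much more.

```python
def merit_order_dispatch(offers, demand):
    """Clear a single-period market: dispatch the cheapest offers first.

    offers: list of (name, capacity_MW, cost_per_MWh); demand in MW.
    Returns (dispatch dict, clearing price = cost of the marginal unit).
    """
    dispatch, remaining, price = {}, demand, 0.0
    for name, cap, cost in sorted(offers, key=lambda o: o[2]):
        q = min(cap, remaining)
        if q > 0:
            dispatch[name] = q
            price = cost          # marginal (price-setting) unit so far
            remaining -= q
    if remaining > 0:
        raise ValueError("demand exceeds total offered capacity")
    return dispatch, price

# Illustrative offers only: zero-marginal-cost wind, then thermal units
offers = [("wind", 40.0, 0.0), ("coal", 60.0, 35.0), ("gas", 50.0, 50.0)]
dispatch, price = merit_order_dispatch(offers, 80.0)
```

In this toy clearing, wind is fully dispatched, coal covers the remainder and sets the price, and the gas unit stays out of merit.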

Questions remain that will be outlined in this presentation.

Firstly, the recent explosion in the use of renewable supply such as wind, solar and hydro has led to increased volatility in this system. We demonstrate how risk can impose significant costs on the system that are not modeled in the context of socially optimal power system markets and highlight the use of contracts to reduce or recover these costs. We also outline how battery storage can be used as an effective hedging instrument.

Secondly, how do we guarantee continued operation in rarely occurring situations or when failures occur, and how do we price this robustness?

Thirdly, how do we guarantee appropriate participant behaviour? Specifically, is it possible for participants to develop strategies that move the system to operating points that are not socially optimal?

Fourthly, how do we ensure enough transmission (and generator) capacity in the long term, and how do we recover the costs of this enhanced infrastructure?

Advances in manufacturing technologies, most prominently in additive manufacturing or 3D printing, are making it possible to fabricate highly optimised products with increasing geometric and hierarchical complexity. This talk will introduce our ongoing work on design optimisation that combines CAD-compatible geometry representations, multiresolution geometry processing techniques and immersed finite elements with classical shape and topology calculus. As example applications, the shape optimisation of mechanical structures and electromechanical components, and the topology optimisation of lattice-skin structures will be discussed.

The development of reduced order models for complex applications, offering the promise for rapid and accurate evaluation of the output of complex models under parameterized variation, remains a very active research area. Applications are found in problems which require many evaluations, sampled over a potentially large parameter space, such as in optimization, control, uncertainty quantification and applications where near real-time response is needed.

However, many challenges remain to secure the flexibility, robustness, and efficiency needed for general large-scale applications, in particular for nonlinear and/or time-dependent problems.

After giving a brief general introduction to reduced order models, we discuss developments in two different directions. In the first part, we discuss recent developments of reduced methods that conserve chosen invariants for nonlinear time-dependent problems. We pay particular attention to the development of reduced models for Hamiltonian problems and propose a greedy approach to build the basis. As we shall demonstrate, attention to the construction of the basis must be paid not only to ensure accuracy but also to ensure stability of the reduced model. Time permitting, we shall also briefly discuss how to extend the approach to include more general dissipative problems through the notion of port-Hamiltonians, resulting in reduced models that remain stable even in the limit of vanishing viscosity and also touch on extensions to Euler and Navier-Stokes equations.

The second part of the talk discusses the combination of reduced order modeling for nonlinear problems with the use of neural networks to overcome known problems of on-line efficiency for general nonlinear problems. We discuss the general idea in which training of the neural network becomes part of the offline part and demonstrate its potential through a number of examples, including for the incompressible Navier-Stokes equations with geometric variations.
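
A minimal, hypothetical sketch of the offline-training idea: a tiny numpy network learns the map from a parameter μ to a reduced coefficient (here faked by sin(2πμ) in place of an actual reduced solve), so that the online stage becomes a cheap forward pass.

```python
import numpy as np

rng = np.random.default_rng(3)

# Offline data: pretend the reduced solver maps a parameter mu to a
# reduced coefficient c(mu); sin(2*pi*mu) stands in for that map here.
mu = np.linspace(0.0, 1.0, 64)[:, None]
c = np.sin(2 * np.pi * mu)

# One-hidden-layer tanh network trained by full-batch gradient descent.
W1 = rng.standard_normal((1, 32)); b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.1; b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, y0 = forward(mu)
mse0 = float(np.mean((y0 - c) ** 2))
for _ in range(5000):
    h, y = forward(mu)
    e = (y - c) / len(mu)                 # d(MSE)/dy up to a constant factor
    gW2 = h.T @ e; gb2 = e.sum(0)
    dh = (e @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = mu.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
_, y = forward(mu)
mse = float(np.mean((y - c) ** 2))
```

All the expensive work (snapshot generation and training) happens once, offline; the online cost per new parameter is just the forward pass, independent of the full model's dimension.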

This work has been done in collaboration with B.F. Afkram (EPFL, CH), N. Ripamonti (EPFL, CH) and S. Ubbiali (USI, CH).

In this talk we will introduce and analyse a class of robust numerical methods for nonlocal, possibly nonlinear, diffusion and convection-diffusion equations. Diffusion and convection-diffusion models are popular in physics, chemistry, engineering, and economics, and in many models the diffusion is anomalous or nonlocal. This means that the underlying “particle” distributions are not Gaussian, but rather follow more general Lévy distributions, distributions that need not have second moments and can satisfy (generalised) central limit theorems. We will focus on models with nonlinear, possibly degenerate, diffusions like fractional Porous Medium Equations, Fast Diffusion Equations, and Stefan (phase transition) Problems, with or without convection. The solutions of these problems can be very irregular and even possess shock discontinuities. The combination of nonlinear problems and irregular solutions makes these problems challenging to solve numerically.

The methods we will discuss are monotone finite difference quadrature methods that are robust in the sense that they “always” converge. By that we mean that under very weak assumptions, they converge to the correct, possibly discontinuous, generalised solution. In some cases we can also obtain error estimates. The plan of the talk is: 1. to give a short introduction to the models, 2. explain the numerical methods, 3. give results and elements of the analysis for pure diffusion equations, and 4. give results and ideas of the analysis for convection-diffusion equations.
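
A hedged sketch of such a scheme for u_t + (−Δ)^s φ(u) = 0 with φ(u) = u², on a periodic grid: the fractional Laplacian is approximated by a positive-weight quadrature, so the explicit scheme is monotone under a CFL condition, preserves nonnegativity and the maximum principle, and conserves mass. The weights and parameters below are illustrative, not the exact quadrature of the talk.

```python
import numpy as np

# Periodic grid on [-1, 1); phi(u) = u**2 gives a porous-medium nonlinearity
N, s, h = 128, 0.5, 2.0 / 128
x = -1.0 + h * np.arange(N)
u = np.maximum(0.0, 1.0 - 8.0 * x ** 2)        # nonnegative initial bump
phi = lambda v: v ** 2

# Quadrature weights for the (truncated, periodic) fractional Laplacian:
# omega_j ~ h * |j h|^{-(1+2s)}; symmetric and positive -> monotone scheme
j = np.arange(1, N // 2 + 1)
omega = h * (j * h) ** -(1.0 + 2.0 * s)

# CFL: dt * phi'(max u) * 2 * sum(omega) <= 0.4 < 1 keeps the scheme monotone
dt = 0.4 / (2.0 * np.sum(omega) * 2.0 * np.max(u))
mass0, umax0 = u.sum() * h, u.max()
for _ in range(200):
    p = phi(u)
    Lp = sum(w * (np.roll(p, k) + np.roll(p, -k) - 2.0 * p)
             for k, w in zip(j, omega))
    u = u + dt * Lp
```

Monotonicity is what makes the scheme robust: each updated value is a nondecreasing function of the neighbouring old values, so comparison principles and L∞ bounds carry over to the discrete level, and the scheme cannot produce spurious oscillations even when the solution develops discontinuities.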

A very common problem in science is that we have some data observations and we are interested in either approximating the function underlying the data or computing some quantity of interest about this function. This talk will discuss what the best algorithms for such tasks are and how we can evaluate the performance of any such algorithm.
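
One classical illustration of how algorithm choice matters for such tasks: interpolating Runge's function at equispaced versus Chebyshev nodes, with the maximum error over the interval as the performance measure. This is a hypothetical mini-benchmark, not the framework of the talk.

```python
import numpy as np

def interp(nodes, f, xs):
    """Barycentric polynomial interpolation of f at `nodes`, evaluated at xs."""
    w = np.array([1.0 / np.prod(nodes[k] - np.delete(nodes, k))
                  for k in range(len(nodes))])
    fx = f(nodes)
    out = np.empty_like(xs)
    for i, xv in enumerate(xs):
        d = xv - nodes
        if np.any(d == 0):
            out[i] = fx[np.argmin(np.abs(d))]   # evaluation at a node
        else:
            out[i] = np.sum(w * fx / d) / np.sum(w / d)
    return out

f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)       # Runge's function
n = 20
xs = np.linspace(-1.0, 1.0, 2001)
equi = np.linspace(-1.0, 1.0, n + 1)
cheb = np.cos(np.pi * np.arange(n + 1) / n)      # Chebyshev (extreme) points
err_equi = float(np.max(np.abs(interp(equi, f, xs) - f(xs))))
err_cheb = float(np.max(np.abs(interp(cheb, f, xs) - f(xs))))
```

With the same number of samples, equispaced interpolation diverges near the endpoints (the Runge phenomenon) while Chebyshev interpolation converges geometrically; the "best algorithm" question is precisely about understanding and quantifying such gaps.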