How would we get a powerful AI to align itself with human preferences? What are human preferences anyway? And how can you code all this?

It turns out that mathematics gives you the grounding to answer these fascinating and vital questions.

# Forthcoming Seminars

Please note that the list below only shows forthcoming events, which may not include regular events that have not yet been entered for the forthcoming term. Please see the past events page for a list of all seminar series that the department has on offer.

We discuss the design of algorithms and codes for the solution of large sparse systems of linear equations on extreme-scale computers that are characterized by having many nodes with multi-core CPUs or GPUs. We first use two approaches to get good single-node performance. For symmetric systems we use task-based algorithms based on an assembly-tree representation of the factorization. We then use runtime systems for scheduling the computation on both multicore CPU nodes and GPU nodes [6]. In this work, we are also concerned with the efficient parallel implementation of the solve phase using the computed sparse factors, and we show impressive results relative to other state-of-the-art codes [3]. Our second approach was to design a new parallel threshold Markowitz algorithm [4] based on Luby's method [7] for obtaining a maximal independent set in an undirected graph. This is a significant extension since our graph model is a directed graph.

We then extend the scope of both approaches to exploit distributed-memory parallelism. In the first case, we base our work on the block Cimmino algorithm [1] using the ABCD software package coded by Zenadi in Toulouse [5, 8]. The kernel for this algorithm is the direct factorization of a symmetric indefinite submatrix, for which we use the above symmetric code. To extend the unsymmetric code to distributed memory, we use the Zoltan code from Sandia [2] to partition the matrix to singly bordered block-diagonal form, and then use the above unsymmetric code on the blocks on the diagonal. In both cases, we illustrate the added parallelism obtained from combining distributed-memory parallelism with high single-node performance, and show that our codes outperform other state-of-the-art codes.

This work is joint with a number of people. We developed the algorithms and codes in an EU Horizon 2020 project, called NLAFET, that finished on 30 April 2019.
Coworkers in this were: Sebastien Cayrols, Jonathan Hogg, Florent Lopez, and Stojce Nakov. Collaborators in the block Cimmino part of the project were: Philippe Leleux, Daniel Ruiz, and Sukru Torun. Our codes are available in the GitHub repository https://github.com/NLAFET.
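The abstract's second approach builds on Luby's method [7] for computing a maximal independent set. A minimal sketch of Luby's randomized algorithm for an undirected graph follows; the talk's version is a significant extension to directed graphs driven by Markowitz counts, which this toy omits.

```python
import random

def luby_mis(adj):
    """Luby's randomized maximal independent set (MIS) algorithm.

    adj: dict mapping each vertex to a set of neighbours (undirected graph).
    Returns a maximal independent set as a Python set.
    """
    active = set(adj)          # vertices still undecided
    mis = set()
    while active:
        # Each active vertex draws a random priority.
        priority = {v: random.random() for v in active}
        # A vertex joins the MIS if it beats all of its active neighbours.
        winners = {v for v in active
                   if all(priority[v] > priority[u]
                          for u in adj[v] if u in active)}
        mis |= winners
        # Winners and their neighbours drop out; the rest try again.
        active -= winners | {u for v in winners for u in adj[v]}
    return mis
```

Each round decides the locally highest-priority vertices and their neighbourhoods independently, which is what makes the method parallelizable.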

References

[1] M. Arioli, I. S. Duff, J. Noailles, and D. Ruiz, A block projection method for sparse matrices, SIAM J. Scientific and Statistical Computing, 13 (1992), pp. 47–70.

[2] E. Boman, K. Devine, L. A. Fisk, R. Heaphy, B. Hendrickson, C. Vaughan, U. Catalyurek, D. Bozdag, W. Mitchell, and J. Teresco, Zoltan 3.0: Parallel Partitioning, Load-balancing, and Data Management Services; User's Guide, Sandia National Laboratories, Albuquerque, NM, 2007. Tech. Report SAND2007-4748W, http://www.cs.sandia.gov/Zoltan/ug_html/ug.html.

[3] S. Cayrols, I. S. Duff, and F. Lopez, Parallelization of the solve phase in a task-based Cholesky solver using a sequential task flow model, Int. J. of High Performance Computing Applications, to appear (2019). NLAFET Working Note 20. RAL-TR-2018-008.

[4] T. A. Davis, I. S. Duff, and S. Nakov, Design and implementation of a parallel Markowitz threshold algorithm, Technical Report RAL-TR-2019-003, Rutherford Appleton Laboratory, Oxfordshire, England, 2019. NLAFET Working Note 22. Submitted to SIMAX.

[5] I. S. Duff, R. Guivarch, D. Ruiz, and M. Zenadi, The augmented block Cimmino distributed method, SIAM J. Scientific Computing, 37 (2015), pp. A1248–A1269.

[6] I. S. Duff, J. Hogg, and F. Lopez, A new sparse symmetric indefinite solver using a posteriori threshold pivoting, SIAM J. Scientific Computing, to appear (2019). NLAFET Working Note 21. RAL-TR-2018-012.

[7] M. Luby, A simple parallel algorithm for the maximal independent set problem, SIAM J. Computing, 15 (1986), pp. 1036–1053.

[8] M. Zenadi, The solution of large sparse linear systems on parallel computers using a hybrid implementation of the block Cimmino method, PhD thesis, Institut National Polytechnique de Toulouse, Toulouse, France, December 2013.

This paper studies the spread of losses and defaults in financial networks with two important features: collateral requirements and alternative contract termination rules in bankruptcy. When collateral is committed to a firm’s counterparties, a solvent firm may default if it lacks sufficient liquid assets to meet its payment obligations. Collateral requirements can thus increase defaults and payment shortfalls. Moreover, one firm may benefit from the failure of another if the failure frees collateral committed by the surviving firm, giving it additional resources to make other payments. Contract termination at default may also improve the ability of other firms to meet their obligations. As a consequence of these features, the timing of payments and collateral liquidation must be carefully specified, and establishing the existence of payments that clear the network becomes more complex. Using this framework, we study the consequences of illiquid collateral for the spread of losses through fire sales; we compare networks with and without selective contract termination; and we analyze the impact of alternative bankruptcy stay rules that limit the seizure of collateral at default. Under an upper bound on derivatives leverage, full termination reduces payment shortfalls compared with selective termination.
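Models of this kind build on the classical clearing-payment framework of Eisenberg and Noe (not cited in the abstract; collateral and termination rules are the paper's additions on top of it). A minimal sketch of the baseline fixed-point computation, without collateral:

```python
import numpy as np

def clearing_vector(L, e, tol=1e-10, max_iter=1000):
    """Eisenberg-Noe clearing payments by fixed-point iteration.

    L[i, j]: nominal liability of firm i to firm j.
    e[i]   : external (operating) cash flow of firm i.
    Returns the vector of total payments each firm makes.
    """
    p_bar = L.sum(axis=1)                                   # total nominal obligations
    with np.errstate(invalid="ignore", divide="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)  # relative liabilities
    p = p_bar.copy()
    for _ in range(max_iter):
        # Each firm pays the lesser of what it owes and what it has:
        # external assets plus payments received from other firms.
        p_new = np.minimum(p_bar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```

Starting from full payment and iterating downward yields the greatest clearing vector; the paper's point is precisely that collateral and termination rules make this existence and ordering argument more delicate.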

The flow of a thin film down an inclined plane is an important physical phenomenon appearing in many industrial applications, such as coating (where it is desirable to maintain the fluid interface flat) or heat transfer (where a larger interfacial area is beneficial). These applications create the need to reliably manipulate the flow in order to obtain a desired interfacial shape. The interface of such thin films can be described by a number of models, each of them exhibiting instabilities for certain parameter regimes. In this talk, I will propose a feedback control methodology based on same-fluid blowing and suction. I use the Kuramoto–Sivashinsky (KS) equation to model interface perturbations and to derive the controls. I will show that one can use a finite number of point-actuated controls based on observations of the interface to stabilise both the flat solution and any chosen nontrivial solution of the KS equation. Furthermore, I will investigate the robustness of the designed controls to uncertain observations and parameter values, and study the effect of the controls across a hierarchy of models for the interface, which include the KS equation, (nonlinear) long-wave models and the full Navier–Stokes equations.
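As a simplified illustration of the control idea (the talk derives point-actuated controls from observations; here, purely as a sketch, proportional feedback f = -αu acts at every grid point), a semi-implicit Fourier pseudospectral solver for the controlled KS equation u_t = -u u_x - u_xx - u_xxxx + f:

```python
import numpy as np

def ks_feedback(n=128, L=32 * np.pi, dt=0.05, steps=400, alpha=1.0, seed=0):
    """Damp interface perturbations in the Kuramoto-Sivashinsky equation
    with proportional feedback f = -alpha * u.

    Linear terms (including feedback) are treated implicitly in Fourier
    space; the nonlinear term is explicit. Returns (initial, final) L2 norms.
    """
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal(n)            # small random perturbation
    k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)  # angular wavenumbers
    lin = k**2 - k**4 - alpha                    # linear growth rate + damping
    norm0 = np.linalg.norm(u)
    for _ in range(steps):
        ux = np.fft.irfft(1j * k * np.fft.rfft(u), n)
        nonlin_hat = np.fft.rfft(-u * ux)        # -u u_x in Fourier space
        u_hat = (np.fft.rfft(u) + dt * nonlin_hat) / (1 - dt * lin)
        u = np.fft.irfft(u_hat, n)
    return norm0, np.linalg.norm(u)
```

The uncontrolled growth rate k² - k⁴ peaks at 1/4, so any α > 1/4 renders every Fourier mode of the flat state linearly stable; with α = 1 the perturbation norm decays rapidly.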

Let E be an elliptic curve over the rationals and p a prime such that E admits a rational p-isogeny satisfying some assumptions. In joint work with J. Lee and C. Skinner, we prove the anticyclotomic Iwasawa main conjecture for E/K for a suitable imaginary quadratic field K. I will explain our strategy and how this, combined with complex and p-adic Gross–Zagier formulae, allows us to prove that if E has rank one, then the p-part of the Birch and Swinnerton-Dyer formula for E/Q holds true.
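For context, the rank-one Birch and Swinnerton-Dyer formula in question is the following standard statement (the talk proves that the p-adic valuations of both sides agree):

```latex
% Rank-one BSD formula; \Sha requires a suitable font package.
\[
  \frac{L'(E,1)}{\Omega_E \,\mathrm{Reg}(E/\mathbb{Q})}
  \;=\;
  \frac{\#\Sha(E/\mathbb{Q}) \cdot \prod_{\ell} c_\ell}
       {\bigl(\#E(\mathbb{Q})_{\mathrm{tors}}\bigr)^{2}},
\]
```

where Ω_E is the real period, Reg the regulator, Ш the Tate–Shafarevich group, and c_ℓ the Tamagawa numbers.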

## Further Information:

This lecture is about mathematical visualization: how to make accurate, effective, and beautiful pictures, models, and experiences of mathematical concepts. What is it that makes a visualization compelling?

Henry will show examples in the medium of 3D printing, as well as his work in virtual reality and spherical video. He will also discuss his experiences in teaching a project-based class on 3D printing for mathematics students.

Henry Segerman is an Associate Professor in the Department of Mathematics at Oklahoma State University.

Please email external-relations@maths.ox.ac.uk to register.

Watch live:

https://www.facebook.com/OxfordMathematics/

https://livestream.com/oxuni/Segerman

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

**Background**: The traditional business models for B2B freight and distribution are struggling with underutilised transport capacity, resulting in higher costs, excessive environmental damage and unnecessary congestion. The scale of the problem is captured by the European Environment Agency: only 63% of journeys carry a useful load, and average vehicle utilisation is under 60% (by weight or volume). Decarbonisation of vehicles would address only part of the problem. That is why leading sector researchers estimate that freight collaboration (co-shipment) will deliver a step-change improvement in vehicle fill and thus remove unproductive journeys, delivering over 20% cost savings and more than 25% reduction in environmental footprint. However, these benefits can only be achieved at a scale that involves hundreds of players collaborating at a national or pan-regional level. Such scale and complexity create a massive optimisation challenge that current market solutions are unable to handle (modern route-planning solutions optimise deliveries only within the "four walls" of a single business).

**Maths challenge**: The optimisation challenge described above can be expressed as an extended version of the TSP, but with multiple optimisation objectives (other than distance). Moreover, besides the scale and the multi-agent setup (many shippers, carriers and recipients engaged simultaneously), the model would have to handle a number of variables and constraints which, in addition to the obvious ones, also include: time (despatch/delivery dates/slots and journey durations), volume (items to be delivered), and transport equipment with the respective rate-cards from different carriers, among others. With the possible variability of despatch locations (when clients have a multi-warehouse setup), this potentially creates a very large non-convex optimisation problem that would require the development of new, much faster algorithms and approaches. Such an algorithm should be capable of finding "local" optima and subsequently improving them within a very short window, i.e. in minutes, which would be required to drive and manage effective inter-company collaboration across the many parties involved. We tried a few different approaches, e.g. the Gurobi solver, which even with clustering was still too slow and lacked scalability, only to realise that we need to build such an algorithm in-house.

**Ask**: We have started to investigate other approaches, such as Simulated Annealing and Gravitational Emulation Local Search, but this work is preliminary, and new and better ideas are of interest. So, in support of our Technical Feasibility study, we are looking for help in identifying the best approach and designing the actual algorithm that we will use in the development of our Proof of Concept.
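To make the Simulated Annealing direction concrete, here is a minimal 2-opt annealer for a plain Euclidean TSP — a single vehicle with a distance-only objective, far simpler than the multi-objective, multi-agent problem described above:

```python
import math
import random

def anneal_tsp(points, iters=20000, t0=1.0, cooling=0.9995, seed=0):
    """Simulated annealing for a Euclidean TSP using 2-opt reversal moves.

    points: list of (x, y) city coordinates. Returns (tour, length).
    In the co-shipment setting the objective would instead be a weighted
    sum of distance, time-window and capacity penalties.
    """
    rng = random.Random(seed)
    n = len(points)

    def length(tour):
        return sum(math.dist(points[tour[i]], points[tour[(i + 1) % n]])
                   for i in range(n))

    tour = list(range(n))
    best, best_len = tour[:], length(tour)
    cur_len, t = best_len, t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        cand_len = length(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling              # geometric cooling schedule
    return best, best_len
```

The Boltzmann acceptance of occasionally worse tours is what lets the search escape the local optima that a pure 2-opt descent gets stuck in.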

I will present numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces. We focus on two types of problems. The first is optimization problems, such as matrix and tensor completion, the solution of linear systems, and eigenvalue problems. Such problems can be solved by numerical optimization on manifolds, using so-called Riemannian optimization methods. We will explain the basic elements of differential geometry needed to apply such methods efficiently to rank-constrained matrix and tensor spaces. The second type is ordinary differential equations defined on matrix and tensor spaces. We show how their solution can be approximated by the dynamical low-rank principle, and discuss several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors. Based on joint work with André Uschmajew (MPI MiS Leipzig).
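As a simplified illustration (not the speaker's Riemannian machinery), matrix completion can be attacked by a projected-gradient iteration in which the map back onto the rank-r set is a truncated SVD:

```python
import numpy as np

def lowrank_complete(M_obs, mask, rank, step=1.0, iters=500):
    """Matrix completion by projected gradient descent.

    Take a gradient step on f(X) = 0.5 * ||mask * (M_obs - X)||_F^2,
    then truncate back to rank r via SVD -- a crude stand-in for the
    retractions used in Riemannian optimization.
    """
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        G = mask * (M_obs - X)                      # negative gradient of f
        U, s, Vt = np.linalg.svd(X + step * G, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # best rank-r approximation
    return X
```

A genuine Riemannian method would move along the tangent space of the fixed-rank manifold instead of forming and truncating a full SVD each step, which is exactly where the geometry pays off computationally.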

Angiogenesis, the process of the creation of new blood vessels from the existing vasculature, is a necessary step in tumor progression. Consequently, anti-angiogenic treatments have become of particular interest in cancer treatment. Despite the initial enthusiasm, there have been many conflicting results concerning the efficacy of anti-angiogenic treatments. Hence, the benefits of such treatments remain under debate. The dynamics associated with treating cancer with anti-angiogenic drugs are complex. These dynamics must be understood in order to maximize the benefits of such a therapy. We use mathematical modeling as a strategy to quantify the dynamics of the interactions between tumor growth, vasculature generation and anti-angiogenic treatment. We have developed a non-linear, mixed-effect ODE model of tumor growth and treatment of colorectal cancer. Model development is guided by preclinical data from colorectal-tumor-bearing mice treated with sunitinib (an anti-angiogenic). Parameters are estimated in a mixed-effect fashion (i.e. parameter values for both the population and each individual are estimated) using the SAEM (Stochastic Approximation of Expectation Maximization) algorithm. This model accurately predicts tumor growth dynamics of individual subjects and allows us to study the multifaceted effects of anti-angiogenic treatment. This study will thus help in the development of evidence-based treatment protocols designed to optimize the effectiveness of anti-angiogenics, and eventually their combination with other cancer therapies.
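To give a flavour of such models (the speaker's actual equations and fitted parameters are not given in the abstract), here is a toy ODE in which the anti-angiogenic drug degrades the vasculature-driven carrying capacity; all parameter values are made up for illustration:

```python
def tumor_volume(days=30.0, dt=0.01, dose=0.0, growth=0.3, b=0.1, kill=0.15):
    """Toy tumor-growth ODE in the spirit of angiogenesis models:

        dV/dt = growth * V * (1 - V / K)      (logistic tumor growth)
        dK/dt = b * V - kill * dose * K       (vasculature / carrying capacity)

    All parameters are hypothetical, not the speaker's fitted values.
    Forward-Euler integration; returns the final tumor volume V.
    """
    V, K = 100.0, 1000.0        # initial tumor volume and carrying capacity
    for _ in range(int(days / dt)):
        dV = growth * V * (1 - V / K)
        dK = b * V - kill * dose * K
        V += dt * dV
        K += dt * dK
        K = max(K, 1.0)         # keep carrying capacity positive
    return V
```

Without treatment (dose = 0) the tumor keeps raising its own carrying capacity and grows; with treatment the drug shrinks the vasculature term and the tumor regresses. A mixed-effect fit would estimate population-level parameters plus per-mouse deviations from data of this shape.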