In this talk we will discuss a problem that was worked on during MISGSA 2020, a Study Group held in January at The University of Zululand, South Africa.

We look at a communication network with two types of users, primary users (PUs) and secondary users (SUs), and reduce the network to a set of overlapping sub-graphs, each consisting of the SUs indexed by a specific PU. Within any given sub-graph, the PU may be communicating at a certain fixed frequency F. The respective SUs also wish to communicate at frequency F, but not at the expense of interfering with the PU signal; therefore, if the PU is active, the SUs do not communicate.

To increase information throughput in the network, we instead allow the SUs to communicate at a different frequency G, which may or may not interfere with the PU of a different sub-graph in the network, leading to a multi-objective optimisation problem.
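To make the trade-off concrete, here is a toy sketch of the frequency-assignment problem on a two-PU network. The network layout, the rule that an SU interferes only when transmitting on the frequency of an active PU in its own sub-graph, and the throughput/interference objectives are all illustrative assumptions, not the study group's formulation.

```python
from itertools import product

# Each PU indexes a sub-graph of SUs; SU "b" belongs to both sub-graphs.
# (Toy network: layout and frequencies are illustrative assumptions.)
subgraphs = {"PU1": {"a", "b"}, "PU2": {"b", "c"}}
pu_freq = {"PU1": "F", "PU2": "G"}   # fixed frequency of each active PU

def evaluate(assignment):
    """Score one SU -> frequency map as (throughput, interference)."""
    throughput = sum(1 for f in assignment.values() if f is not None)
    interference = sum(1 for pu, members in subgraphs.items()
                       for su in members if assignment[su] == pu_freq[pu])
    return throughput, interference

sus = sorted({su for members in subgraphs.values() for su in members})
# Enumerate every assignment (silent, F or G) and keep the Pareto frontier
# of the two competing objectives.
scores = [evaluate(dict(zip(sus, choice)))
          for choice in product([None, "F", "G"], repeat=len(sus))]
pareto = sorted({s for s in scores
                 if not any((t[0] >= s[0] and t[1] < s[1]) or
                            (t[0] > s[0] and t[1] <= s[1]) for t in scores)})
print(pareto)
```

Even this toy network has no single best answer: silencing the shared SU gives interference-free throughput, while full throughput costs one interfering link, which is exactly the multi-objective tension described above.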

We will discuss not only the problem formulation and possible approaches for solving it, but also the pitfalls that are easy to fall into during study groups.

# Past Junior Applied Mathematics Seminar

Stress perfusion cardiac magnetic resonance (CMR) imaging has been shown to be highly accurate for the detection of coronary artery disease. A major limitation, however, is that visual assessment of the images is challenging, and so the accuracy of the diagnosis depends heavily on the training and experience of the reader. Quantitative perfusion CMR, in which myocardial blood flow values are inferred directly from the MR images, is an automated and user-independent alternative to visual assessment.

This talk will focus on addressing the main technical challenges which have hampered the adoption of quantitative myocardial perfusion MRI in clinical practice. The talk will cover the problem of respiratory motion in the images and the use of dimension reduction techniques, such as robust principal component analysis, to mitigate this problem. I will then discuss our deep learning-based image processing pipeline that solves the necessary series of computer vision tasks required for the blood flow modelling and introduce the Bayesian inference framework in which the kinetic parameter values are inferred from the imaging data.
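As a concrete illustration of the dimension-reduction idea, the sketch below separates a synthetic "image series" matrix into a low-rank part plus a sparse part using a basic inexact augmented-Lagrangian principal component pursuit. The synthetic data, the parameter choices and the solver are illustrative; this is not the pipeline from the talk.

```python
# Robust PCA sketch: split M into low-rank L and sparse S.
# Synthetic data and parameters are illustrative only.
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    """Inexact augmented-Lagrangian principal component pursuit: M ~ L + S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.abs(M).max() / lam)   # dual variable
    mu, rho = 1.25 / norm_two, 1.5
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(max_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt       # singular-value thresholding
        S = shrink(M - L + Y / mu, lam / mu)       # entrywise shrinkage
        Z = M - L - S
        if np.linalg.norm(Z) <= tol * np.linalg.norm(M):
            break
        Y = Y + mu * Z
        mu *= rho
    return L, S

rng = np.random.default_rng(0)
background = np.outer(rng.standard_normal(60), rng.standard_normal(40))  # rank 1
motion = np.zeros((60, 40))
motion[rng.random((60, 40)) < 0.05] = 5.0                                # sparse
L, S = rpca(background + motion)
```

In the imaging context, the columns of M would be vectorised frames: the low-rank part captures the quasi-static anatomy, while the sparse part absorbs motion and contrast changes.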

Many manufacturing processes require the use of robots to transport parts around a factory line. Some parts that are very thin (e.g. car doors) are prone to elastic deformations as they are moved around by a robot, and these deformations should be avoided at all costs. A problem recently raised by F.E.E. (Fleischmann Elektrotech Engineering) at the ESGI 158 study group in Barcelona was to determine the stresses in a part undergoing a prescribed motion by a robot. We present a simple model using the Kirchhoff-Love theory of flat plates and show how it can be adapted. We then outline how the solutions of the model can be used to determine the stresses.

Numerous mathematical models have been proposed for modelling cancerous tumour invasion (Gatenby and Gawlinski 1996), angiogenesis (Owen et al 2008), growth kinetics (Wang et al 2009), response to irradiation (Gao et al 2013) and metastasis (Qiam and Akcay 2018). In this study, we attempt to model the qualitative behavior of growth, invasion, angiogenesis and fragmentation of tumours at the tissue level in an explicitly spatial and continuous manner in two dimensions. We simulate the effectiveness of radiation therapy on a growing tumour in comparison with immunotherapy and propose a novel framework based on vector fields for modelling the impact of interstitial flow on tumour morphology. The results of this model demonstrate the effectiveness of employing a system of partial differential equations along with vector fields for simulating tumour fragmentation and that immunotherapy, when applicable, is substantially more effective than radiation therapy.
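The simplest spatially explicit, continuous building block of such tissue-level models is a reaction-diffusion equation. The sketch below evolves Fisher-KPP logistic growth plus diffusion from a small seed on a periodic grid; the parameters, grid and boundary conditions are illustrative, not the study's model.

```python
# Minimal 2-D reaction-diffusion sketch: u_t = D*laplacian(u) + r*u*(1 - u).
# All parameters are illustrative choices.
import numpy as np

n, D, r = 64, 0.1, 1.0               # grid size, diffusivity, growth rate
dx, dt = 1.0, 0.2                    # dt < dx**2 / (4*D) for explicit stability
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0              # small initial "tumour" seed

def step(u):
    """One explicit Euler step; periodic boundaries via np.roll."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx ** 2
    return u + dt * (D * lap + r * u * (1.0 - u))

for _ in range(200):                 # evolve to t = 40: a spreading plateau
    u = step(u)
print(u[n // 2, n // 2], u.max())
```

The density saturates at the carrying capacity behind an outward-travelling front; fragmentation, angiogenesis and therapy terms would enter as additional coupled equations and vector fields.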

Topology optimisation finds the optimal material distribution of a fluid or solid in a domain, subject to PDE and volume constraints. There are many formulations and we opt for the density approach which results in a PDE, volume and inequality constrained, non-convex, infinite-dimensional optimisation problem without a priori knowledge of a good initial guess. Such problems can exhibit many local minima or even no minima. In practice, heuristics are used to obtain the global minimum, but these can fail even in the simplest of cases. In this talk, we will present an algorithm that solves such problems and systematically discovers as many of these local minima as possible along the way.
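One systematic way to discover several solutions, in the spirit of the algorithm above, is deflation: once a solution is found, the residual is divided by a factor that blows up near it, so Newton's method is driven towards a different solution. The toy below deflates the roots of a cubic; the choice of deflation factor and the scalar setting are illustrative only, not the talk's PDE-constrained algorithm.

```python
# Deflation toy: find all three roots of f(x) = x^3 - x from one initial guess.

def newton_fd(g, x, h=1e-7, tol=1e-12, max_iter=100):
    """Newton's method with a finite-difference derivative."""
    for _ in range(max_iter):
        dg = (g(x + h) - g(x - h)) / (2.0 * h)
        step = g(x) / dg
        x -= step
        if abs(step) < tol:
            break
    return x

def deflate(f, known_roots):
    """Residual multiplied by factors that blow up near known roots."""
    def g(x):
        m = 1.0
        for r in known_roots:
            m *= 1.0 / abs(x - r) + 1.0   # deflation factor (illustrative form)
        return f(x) * m
    return g

def f(x):
    return x ** 3 - x                     # three roots: -1, 0, 1

roots = []
for _ in range(3):
    roots.append(round(newton_fd(deflate(f, list(roots)), x=0.5), 6))
print(sorted(roots))
```

Each sweep starts from the same initial guess, yet lands on a new root, because the deflated residual cannot vanish at roots already found.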

Recent advances in experimental imaging techniques have allowed us to observe the fine details of how droplets behave upon impact onto a substrate. However, these are highly non-linear, multiscale phenomena and are thus a formidable challenge to model. In addition, when the substrate is deformable, such as an elastic sheet, the fluid-structure interaction introduces an extra layer of complexity.

We present two modeling approaches for droplet impact onto deformable substrates: matched asymptotics and direct numerical simulations. In the former, we use Wagner's theory of impact to derive analytical expressions which approximate the behavior during the early time of impact. In the latter, we use the open source volume-of-fluid code Basilisk to conduct simulations designed to give insight into the later times of impact.

We conclude by showing how these methods are complementary, and a combination of both can give a thorough understanding of the droplet impact across timescales.

We consider the problem of global minimization with bound constraints. The problem is known to be intractable for large dimensions due to the exponential increase in the computational time for a linear increase in the dimension (also known as the “curse of dimensionality”). In this talk, we demonstrate that such challenges can be overcome for functions with low effective dimensionality — functions which are constant along certain linear subspaces. Such functions can often be found in applications, for example, in hyper-parameter optimization for neural networks, heuristic algorithms for combinatorial optimization problems and complex engineering simulations.

Extending the idea of random subspace embeddings in Wang et al. (2013), we introduce a new framework (called REGO) compatible with any global minimization algorithm. Within REGO, a new low-dimensional problem is formulated with bound constraints in the reduced space. We provide probabilistic bounds for the success of REGO; these results indicate that the success is dependent upon the dimension of the embedded subspace and the intrinsic dimension of the function, but independent of the ambient dimension. Numerical results show that high success rates can be achieved with only one embedding and that rates are independent of the ambient dimension of the problem.
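The reduction can be sketched in a few lines: draw a random Gaussian matrix A and minimize the reduced objective g(y) = f(Ay) over the low-dimensional y. The toy objective, the dimensions and the plain gradient-descent solver below are all illustrative choices, not REGO itself.

```python
# Random-embedding sketch: a 100-dimensional problem with effective
# dimension 2 is solved in a 5-dimensional subspace. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
D, d = 100, 5                        # ambient and embedded dimensions

def f(x):
    """Objective with low effective dimensionality: varies only in x[0], x[1]."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

A = rng.standard_normal((D, d))      # random Gaussian embedding

def g(y):
    """Reduced problem over R^d: evaluate f along the random subspace."""
    return f(A @ y)

# Any solver can now work in d dimensions; plain gradient descent
# (chain rule through A) suffices for this smooth toy objective.
y = np.zeros(d)
for _ in range(5000):
    x = A @ y
    grad_x = np.zeros(D)
    grad_x[0] = 2.0 * (x[0] - 1.0)
    grad_x[1] = 2.0 * (x[1] + 2.0)
    y -= 0.005 * (A.T @ grad_x)
print(g(y))
```

With probability one, the random 5-dimensional subspace intersects the 2-dimensional effective subspace well enough for the reduced problem to reach the global minimum, independently of the ambient dimension D.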

We introduce cheap function proxies for quickly producing approximate random numbers, show convergence of the modified numerical schemes, and analyse the coupling between approximation and discretisation errors. We bound the cumulative roundoff error introduced by floating-point calculations, with results valid for 16-bit half precision (FP16). We combine approximate distributions and reduced precision into a nested simulation framework (via multilevel Monte Carlo), demonstrating performance improvements achieved without losing accuracy; the resulting simulations perform most of their calculations in very low precision. We will highlight the motivations and design choices appropriate for SVE- and FP16-capable hardware, and present numerical results on Arm, Intel, and NVIDIA based hardware.
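The nesting idea can be caricatured with a two-level Monte Carlo estimator: take many samples of a cheap, biased proxy and only a few samples of the (expensive minus proxy) correction, so most of the work happens at the cheap level. Here a truncated series stands in for a reduced-precision sampler; all choices are illustrative.

```python
# Two-level Monte Carlo sketch of the cheap-proxy + correction idea.
import math
import random

random.seed(0)

def accurate(z):
    """Stand-in for an expensive full-precision payoff."""
    return math.exp(z)

def cheap(z):
    """Stand-in for a cheap reduced-precision proxy (biased)."""
    return 1.0 + z + 0.5 * z * z      # truncated series for exp(z)

N0, N1 = 200_000, 2_000               # many cheap samples, few corrections

level0 = sum(cheap(random.gauss(0.0, 1.0)) for _ in range(N0)) / N0

correction = 0.0
for _ in range(N1):
    z = random.gauss(0.0, 1.0)
    correction += accurate(z) - cheap(z)
estimate = level0 + correction / N1   # unbiased despite the cheap proxy

print(estimate)                       # true value: E[exp(Z)] = exp(0.5) ~ 1.6487
```

Because the correction has small variance, the 2,000 expensive evaluations cost little, yet remove the proxy's bias; this is the same mechanism that lets the full framework run predominantly in low precision.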

In robust decision making, we are pessimistic when the probability measure is unknown: we optimise our decision under the worst-case scenario (e.g. via value at risk or expected shortfall). On the other hand, most theories in reinforcement learning (e.g. UCB or epsilon-greedy algorithms) tell us to be more optimistic in order to encourage learning. These two approaches produce an apparent contradiction in decision making, which raises a natural question: how should we make decisions, given that they will affect both our short-term outcomes and the information available in the future?

In this talk, I will discuss this phenomenon through the classical multi-armed bandit problem, which can be solved via Gittins' index theory under the setting of risk (i.e. when the probability measure is fixed). By extending this result to the uncertainty setting, we show that it is possible to take both uncertainty and learning for future benefit into account at the same time. This is done by extending a consistent nonlinear expectation (i.e. a nonlinear expectation with the tower property) through multiple filtrations.

At the end of the talk, I will present numerical results which illustrate how we can control our level of exploration and exploitation in our decision based on some parameters.
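For contrast with the pessimistic robust approach, the optimistic side can be illustrated with the textbook UCB1 rule on a three-armed Bernoulli bandit. The arm means and horizon are illustrative, and this is the standard algorithm, not the talk's nonlinear-expectation method.

```python
# UCB1 on a toy Bernoulli bandit: optimism via an exploration bonus.
import math
import random

random.seed(0)
means = [0.3, 0.5, 0.7]              # true (unknown) Bernoulli arm means
counts = [0, 0, 0]
totals = [0.0, 0.0, 0.0]

def ucb_index(i, t):
    """Empirical mean plus exploration bonus (optimism under uncertainty)."""
    if counts[i] == 0:
        return float("inf")          # force one initial pull of each arm
    return totals[i] / counts[i] + math.sqrt(2.0 * math.log(t) / counts[i])

for t in range(1, 5001):
    arm = max(range(3), key=lambda i: ucb_index(i, t))
    reward = 1.0 if random.random() < means[arm] else 0.0
    counts[arm] += 1
    totals[arm] += reward

print(counts)                        # the best arm ends up pulled most often
```

The bonus term shrinks as an arm is sampled, so exploration concentrates on arms that are either promising or poorly understood; a robust agent would instead inflate the downside of poorly understood arms.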

Multiple scales analysis is a powerful asymptotic technique for problems where the solution depends on two scales of widely different sizes. Standard multiple scales involves the introduction of a macroscale and a microscale, which are assumed to be independent. A common (and usually acceptable) assumption is that, when considering behaviour on the microscale, the macroscale variable can be taken as constant; however, there are instances where this assumption is not valid. In this talk, I will explain one such situation, namely conductive-radiative thermal transfer within a solid matrix with spherical perforations, and discuss the appropriate measures when converting the radiative boundary condition into multiple-scales form.
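The standard two-scale setup referred to above can be summarised as follows (a generic sketch, with ε the ratio of micro- to macro-lengthscale; the talk's radiative boundary condition is not shown):

```latex
% Two-scale ansatz: macroscale x and microscale X = x/\varepsilon treated as independent
u(x) \sim u_0(x, X) + \varepsilon\, u_1(x, X) + \varepsilon^2 u_2(x, X) + \cdots,
\qquad X = x/\varepsilon,
% so that derivatives transform by the chain rule
\frac{\mathrm{d}}{\mathrm{d}x} \;\to\; \frac{\partial}{\partial x}
  + \frac{1}{\varepsilon}\,\frac{\partial}{\partial X},
% and terms are matched at successive powers of \varepsilon. The "constant
% macroscale" assumption enters when the O(1) cell problem is solved in X alone.
```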