Topology optimisation finds the optimal material distribution of a fluid or solid in a domain, subject to PDE and volume constraints. There are many formulations; we opt for the density approach, which results in a PDE-, volume- and inequality-constrained, non-convex, infinite-dimensional optimisation problem without a priori knowledge of a good initial guess. Such problems can exhibit many local minima, or even none at all. In practice, heuristics are used to obtain the global minimum, but these can fail even in the simplest of cases. In this talk, we will present an algorithm that solves such problems and systematically discovers as many of these local minima as possible along the way.
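Schematically, the density-based problem described above can be written as follows (the notation is illustrative: $\rho$ is the material density field, $u$ the PDE state, $J$ a cost functional and $\gamma$ the prescribed volume fraction):

```latex
\min_{\rho,\,u}\; J(\rho,u)
\quad\text{subject to}\quad
\begin{cases}
E(\rho,u)=0 & \text{(state PDE)},\\[2pt]
\int_\Omega \rho \,\mathrm{d}x \le \gamma\,|\Omega| & \text{(volume constraint)},\\[2pt]
0 \le \rho \le 1 \ \text{a.e.\ in } \Omega & \text{(inequality constraint)}.
\end{cases}
```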

# Past Junior Applied Mathematics Seminar

Recent advances in experimental imaging techniques have allowed us to observe the fine details of how droplets behave upon impact onto a substrate. However, these are highly non-linear, multiscale phenomena and are thus a formidable challenge to model. In addition, when the substrate is deformable, such as an elastic sheet, the fluid-structure interaction introduces an extra layer of complexity.

We present two modeling approaches for droplet impact onto deformable substrates: matched asymptotics and direct numerical simulations. In the former, we use Wagner's theory of impact to derive analytical expressions which approximate the behavior during the early time of impact. In the latter, we use the open source volume-of-fluid code Basilisk to conduct simulations designed to give insight into the later times of impact.

We conclude by showing how these methods are complementary, and a combination of both can give a thorough understanding of the droplet impact across timescales.

We consider the problem of global minimization with bound constraints. The problem is known to be intractable for large dimensions due to the exponential increase in the computational time for a linear increase in the dimension (also known as the “curse of dimensionality”). In this talk, we demonstrate that such challenges can be overcome for functions with low effective dimensionality — functions which are constant along certain linear subspaces. Such functions can often be found in applications, for example, in hyper-parameter optimization for neural networks, heuristic algorithms for combinatorial optimization problems and complex engineering simulations.

Extending the idea of random subspace embeddings in Wang et al. (2013), we introduce a new framework (called REGO) compatible with any global minimization algorithm. Within REGO, a new low-dimensional problem is formulated with bound constraints in the reduced space. We provide probabilistic bounds for the success of REGO; these results indicate that the success is dependent upon the dimension of the embedded subspace and the intrinsic dimension of the function, but independent of the ambient dimension. Numerical results show that high success rates can be achieved with only one embedding and that rates are independent of the ambient dimension of the problem.
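The random-embedding idea can be sketched in a few lines. This is a minimal illustration, not the REGO algorithm itself: the objective, dimensions and the seeded random search standing in for "any global minimizer" are all illustrative choices.

```python
import numpy as np

# Toy objective with low effective dimensionality: it depends only on the
# first two coordinates of the ambient D-dimensional input.
D = 100          # ambient dimension
d = 3            # dimension of the random embedded subspace (illustrative)

def f(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2   # global minimum is 0

rng = np.random.default_rng(0)
A = rng.standard_normal((D, d))      # random Gaussian embedding, x = A @ y

# Reduced problem: minimise g(y) = f(clip(A y)) over a low-dimensional box.
# A simple seeded random search stands in for any global minimizer here.
best_val = np.inf
for _ in range(20000):
    y = rng.uniform(-2.0, 2.0, size=d)
    x = np.clip(A @ y, -1.0, 1.0)    # map back and enforce ambient bounds
    val = f(x)
    if val < best_val:
        best_val = val

print(best_val)   # close to the true minimum despite D = 100
```

The point of the sketch is that the search runs entirely in the d-dimensional reduced space, so its cost does not grow with the ambient dimension D.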

Introducing cheap function proxies that quickly produce approximate random numbers, we show convergence of the modified numerical schemes and the coupling between approximation and discretisation errors. We bound the cumulative roundoff error introduced by floating-point calculations, with results valid for 16-bit half precision (FP16). We combine approximate distributions and reduced precisions into a nested simulation framework (via multilevel Monte Carlo), demonstrating performance improvements achieved without losing accuracy. These simulations perform the vast majority of their calculations in very low precision. We will highlight the motivations and design choices appropriate for SVE- and FP16-capable hardware, and present numerical results on Arm, Intel, and NVIDIA based hardware.
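A minimal multilevel Monte Carlo sketch conveys the coupling idea: coarse and fine paths on each level share Brownian increments, and the cheap coarsest level can run in reduced precision. All parameters are illustrative, and `float32` stands in for the FP16 arithmetic of the actual work.

```python
import numpy as np

# MLMC estimate of E[S_T] for geometric Brownian motion
# dS = r S dt + sigma S dW, discretised by Euler-Maruyama.
S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0
rng = np.random.default_rng(1)

def euler(dW, dt, dtype=np.float64):
    """Simulate n Euler paths given increments dW of shape (n, steps)."""
    S = np.full(dW.shape[0], S0, dtype=dtype)
    for k in range(dW.shape[1]):
        S = S + r * S * dt + sigma * S * dW[:, k].astype(dtype)
    return S

L, n = 3, 20000
estimate = 0.0
for l in range(L + 1):
    Nf = 2 ** l                    # fine steps on level l
    dt = T / Nf
    dW = rng.standard_normal((n, Nf)) * np.sqrt(dt)
    if l == 0:
        # coarsest level: plain Monte Carlo, in reduced precision
        estimate += euler(dW, dt, dtype=np.float32).mean()
    else:
        fine = euler(dW, dt)
        # coarse path reuses the SAME randomness: summed pairs of increments
        dWc = dW[:, 0::2] + dW[:, 1::2]
        coarse = euler(dWc, 2 * dt)
        estimate += (fine - coarse).mean()   # telescoping correction term

exact = S0 * np.exp(r * T)         # known mean of GBM at time T
print(estimate, exact)
```

Because the correction terms have small variance, few samples would be needed on the expensive fine levels in a production estimator; here a fixed sample count keeps the sketch short.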

In robust decision making, we act pessimistically when the probability measure is unknown. In particular, we optimise our decision under the worst-case scenario (e.g. via value at risk or expected shortfall). On the other hand, most theories in reinforcement learning (e.g. the UCB or epsilon-greedy algorithms) tell us to be more optimistic in order to encourage learning. These two approaches produce an apparent contradiction in decision making, which raises a natural question: how should we make decisions, given that they will affect both our short-term outcomes and the information available to us in the future?
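The "optimism" side of this tension is easy to see in code. This is a standard UCB1 simulation on a two-armed Bernoulli bandit (arm probabilities and horizon are illustrative, not from the talk): the bonus term inflates the index of rarely-pulled arms so they keep being explored.

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.3, 0.7])          # true success probabilities; arm 1 is best
T = 2000

counts = np.zeros(2)              # pulls per arm
sums = np.zeros(2)                # accumulated rewards per arm
for t in range(T):
    if t < 2:
        arm = t                   # pull each arm once to initialise
    else:
        # optimistic index: empirical mean plus an exploration bonus
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    counts[arm] += 1
    sums[arm] += rng.binomial(1, p[arm])

print(counts)   # the better arm ends up dominating the pull counts
```

A worst-case (pessimistic) decision maker would instead rank arms by a lower confidence bound, which is exactly the tension the talk addresses.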

In this talk, I will discuss this phenomenon through the classical multi-armed bandit problem which is known to be solved via Gittins' index theory under the setting of risk (i.e. when the probability measure is fixed). By extending this result to an uncertainty setting, we can show that it is possible to take into account both uncertainty and learning for a future benefit at the same time. This can be done by extending a consistent nonlinear expectation (i.e. nonlinear expectation with tower property) through multiple filtrations.

At the end of the talk, I will present numerical results which illustrate how we can control our level of exploration and exploitation in our decision based on some parameters.

Multiple scales analysis is a powerful asymptotic technique for problems where the solution depends on two scales of widely different sizes. Standard multiple scales involves the introduction of a macroscale and a microscale which are assumed to be independent. A common (and usually acceptable) assumption is that, when considering behaviour on the microscale, the macroscale variable can be taken as constant; however, there are instances where this assumption is not valid. In this talk, I will explain one such situation, namely conductive-radiative thermal transfer within a solid matrix with spherical perforations, and discuss the appropriate measures when converting the radiative boundary condition into multiple-scales form.
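As a reminder of the standard setup (notation illustrative, with $\varepsilon$ the small ratio of microscale to macroscale), one introduces the fast variable $y = x/\varepsilon$, treats $x$ and $y$ as independent, and expands:

```latex
u(x) \sim u_0(x,y) + \varepsilon\, u_1(x,y) + \varepsilon^2 u_2(x,y) + \cdots,
\qquad y = \frac{x}{\varepsilon},
\qquad \frac{\mathrm{d}}{\mathrm{d}x} \;\mapsto\; \frac{\partial}{\partial x} + \frac{1}{\varepsilon}\frac{\partial}{\partial y}.
```

Taking $x$ constant on the microscale is the step that can fail, which is the situation the talk examines.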

The time bottleneck in the manufacturing process of Besi (a company involved in ESGI 149 Innsbruck) is the extraction of undamaged dies from a component wafer. The easiest way for them to speed up this process is to reduce the number of 'selections' made by the robotic arm. Each 'selection' made by this robotic arm can be thought of as choosing a 2x2 submatrix of a large binary matrix, and editing the 1's in this submatrix to be 0's. The question is: what is the smallest number of 2x2 submatrices required to cover the full matrix, and how can we find it? This problem can be solved exactly using integer programming methods, although this approach proves to be prohibitively expensive for realistic sizes. In this talk I will describe the approach taken by my team at ESGI 149, as well as directions for further improvement.
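For tiny instances the covering problem can be solved exactly by exhaustive search, which makes the combinatorial structure concrete. This is an illustrative sketch, not the team's method, and its brute-force cost is precisely why exact approaches blow up at realistic wafer sizes.

```python
from itertools import combinations, product

def min_selections(M):
    """Minimum number of 2x2 submatrices needed to cover all 1s in a
    small binary matrix M, by brute force over subsets of positions."""
    rows, cols = len(M), len(M[0])
    ones = {(i, j) for i in range(rows) for j in range(cols) if M[i][j]}
    # the cells cleared by a selection whose top-left corner is (i, j)
    covers = [{(i + di, j + dj) for di in (0, 1) for dj in (0, 1)}
              for i, j in product(range(rows - 1), range(cols - 1))]
    for k in range(len(covers) + 1):          # try 0, 1, 2, ... selections
        for subset in combinations(covers, k):
            if ones <= set().union(*subset):  # every 1 is cleared
                return k

# Four isolated corners force four separate selections:
print(min_selections([[1, 0, 1],
                      [0, 0, 0],
                      [1, 0, 1]]))
```

An integer-programming formulation replaces the subset loop with binary variables per candidate position and a covering constraint per 1-entry, but the exponential worst case remains.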

I plan to present a brief introduction to optimal control theory (no background knowledge assumed), and discuss a fascinating and oft-forgotten family of problems where the optimal control behaves very strangely; it changes state infinitely often in finite time. This causes havoc in practice, and even more so in the literature.

The development of an effective method of targeting delivery of stem cells to the site of an injury is a key challenge in regenerative medicine. However, production of stem cells is costly and current delivery methods rely on large doses in order to be effective. Improved targeting through use of an external magnetic field to direct delivery of magnetically-tagged stem cells to the injury site would allow for smaller doses to be used.

We present a model for delivery of stem cells implanted with a fixed number of magnetic nanoparticles under the action of an external magnetic field. We examine the effect of magnet geometry and strength on therapy efficacy. The accuracy of the mathematical model is then verified against experimental data provided by our collaborators at the University of Birmingham.

With a growing human population clustered in large cities and connected by fast transport routes, ever more suitable environments for epidemics are being created. Combined with the rapid mutation rates of viral and bacterial strains, this keeps epidemiological studies a relevant topic at all times. Since the beginning of 2019, the World Health Organization has published at least five disease outbreak reports, including Ebola virus disease, dengue fever and drug-resistant gonococcal infection, the latter registered in the United Kingdom.

To control such outbreaks, it is necessary to gain information on the mechanisms of appearance and evolution of pathogens. Nearly all disease-causing viruses and bacteria undergo specialisation towards a human host from the closest livestock or wild fauna of a shared habitat. Every strain (or subtype) of a pathogen has a set of characteristics (e.g. infection rate and burst size) responsible for its success in a new environment (a host cell, in the case of a virus), and, with the right amount of scepticism, that set can be framed as the fitness of the pathogen. In our model, we consider a population of a mutating strain of a virus. A strain specialised towards a new host usually remains in that environment and does not switch until conditions become volatile. Two subtypes of the virus, wild and mutant, share a host. This talk will illustrate findings on an explicitly independent cycling coexistence of the two subtypes of the parasite population, and a rare transcritical bifurcation of limit cycles is discussed. Moreover, we will find conditions under which one strain can outnumber and eventually eliminate the other, focusing on the infection rate as the fitness of the strains.