Past Industrial and Interdisciplinary Workshops

5 June 2020
10:00
Junaid Mubeen

Further Information: 

A discussion session will follow the workshop and those interested are invited to stay in the meeting for the discussions.

Abstract

Maths-Whizz is an online, virtual maths tutor for 5-13 year-olds that is designed to behave like a human tutor. Using adaptive assessment and decision-tree algorithms, the virtual tutor guides each student along a personalised learning journey tailored to their needs. As students interact with the tutor, the system captures a range of learning analytics as an automatic by-product. These analytics, collected on a per-lesson and per-question basis, then inform a range of research projects centred on students' learning patterns. This workshop will introduce the mechanics of the Maths-Whizz tutor, as well as its related learning analytics. We will summarise the research behind four InFoMM mini-projects and present open questions we are currently grappling with. Maths-Whizz has supported over a million children and thousands of schools worldwide, from the UK and US to rural Kenya, the DRC and Mexico. In a world of social distancing and widespread school closures, the need for virtual tutoring has never been greater - and nor has the need for your data-analytics expertise!
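The abstract does not describe the tutor's decision rules in detail; purely as an illustration of the kind of per-question adaptivity mentioned above, here is a toy difficulty-adjustment rule in Python. The function name, thresholds and logic are all hypothetical and are not the Maths-Whizz algorithm.

```python
# Illustrative toy rule only, NOT the Maths-Whizz algorithm: difficulty moves up
# after a run of correct answers and down after a run of errors.

def next_difficulty(difficulty, recent_results, step=1,
                    promote_after=3, demote_after=2):
    """Adjust the lesson difficulty from the latest per-question results.

    difficulty     -- current difficulty level (int)
    recent_results -- list of booleans, most recent answer last
    """
    if len(recent_results) >= promote_after and all(recent_results[-promote_after:]):
        return difficulty + step          # student is coasting: promote
    if len(recent_results) >= demote_after and not any(recent_results[-demote_after:]):
        return max(1, difficulty - step)  # student is struggling: demote
    return difficulty                     # otherwise stay at the same level

# Example: two wrong answers in a row trigger a demotion.
print(next_difficulty(4, [True, False, False]))  # -> 3
```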

  • Industrial and Interdisciplinary Workshops
22 May 2020
10:00
Keith Briggs

Further Information: 

A discussion session will follow the workshop and those interested are invited to stay in the meeting for the discussions.

Abstract

Modern cellular radio systems such as 4G and 5G use antennas with multiple elements, a technique known as MIMO, with the intention of increasing the capacity of the radio channel. 5G allows even more possibilities, such as massive MIMO, where there can be hundreds of elements in the transmit antenna, and beam-forming (or beam-steering), where the phases of the signals fed to the antenna elements are adjusted to focus the signal energy in the direction of the receivers. However, this technology poses some difficult optimization problems, and here mathematicians can contribute. In this talk I will explain the background, and then look at questions such as: what is an appropriate objective function? what constraints are there? are any problems of this type convex (or quasi-convex, or difference-of-convex)? and can big problems of this type be solved in real time?
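As a small worked example of the beam-forming idea described above, the following Python sketch computes the array factor of a uniform linear array when the element phases are chosen to focus energy towards a target angle. The element count, spacing and steering angle are arbitrary illustrative values, not parameters of any BT system.

```python
# Sketch (assumed textbook model): array factor of a uniform linear array, with
# per-element phases chosen to steer the beam towards a target angle.
import numpy as np

def array_factor(n_elements, spacing_wavelengths, steer_deg, angles_deg):
    """|AF| over candidate angles when the phases are set to focus at steer_deg."""
    k_d = 2 * np.pi * spacing_wavelengths            # phase per unit sin(theta)
    n = np.arange(n_elements)
    # Phase taper that aligns all element contributions at the steering angle.
    weights = np.exp(-1j * k_d * n * np.sin(np.radians(steer_deg)))
    theta = np.radians(np.asarray(angles_deg))
    steering = np.exp(1j * k_d * np.outer(np.sin(theta), n))   # (angles, elements)
    return np.abs(steering @ weights) / n_elements

angles = np.linspace(-90, 90, 721)
af = array_factor(n_elements=64, spacing_wavelengths=0.5, steer_deg=20, angles_deg=angles)
print("peak at %.1f deg" % angles[np.argmax(af)])   # ~20 deg: energy focused on the receiver
```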


  • Industrial and Interdisciplinary Workshops
28 February 2020
10:00
Christopher Townsend
Abstract

We present a simple algorithm that successfully reconstructs a sine wave sampled vastly below the Nyquist rate, but with sampling time intervals having small random perturbations. We show how the fact that it works is just common sense, but then go on to discuss how the procedure relates to Compressed Sensing. It is not exactly Compressed Sensing as traditionally stated, because the sampling transformation is not linear. Some published results do exist that cover non-linear sampling transformations, but we would like a better understanding of the extent to which the relevant CS properties (reconstruction guarantees that hold with high probability) are known in certain relatively simple but non-linear cases that could be relevant to industrial applications.
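The abstract does not spell out the algorithm, but one common-sense reading is a brute-force least-squares fit over candidate frequencies, which succeeds because the random perturbations of the sample times break the aliasing. A minimal sketch under that assumption, with all signal parameters invented:

```python
# Sketch of one possible reading of the procedure: recover a sine sampled far
# below its Nyquist rate by exploiting small random jitter in the sample times.
import numpy as np

rng = np.random.default_rng(0)
f_true = 9.7                                    # Hz, far above the ~0.5 Hz limit of 1 Hz sampling
t = np.arange(0.0, 40.0, 1.0) + 0.05 * rng.standard_normal(40)   # ~1 Hz sampling with jitter
y = np.sin(2 * np.pi * f_true * t + 0.4)

def residual(t, y, f):
    """Least-squares amplitude/phase fit at a fixed trial frequency; returns the misfit."""
    A = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(y - A @ coef)

freqs = np.linspace(0.1, 20, 20000)             # brute-force scan over candidate frequencies
best = freqs[np.argmin([residual(t, y, f) for f in freqs])]
print(best)                                     # close to 9.7 despite ~1 Hz sampling
```

With perfectly regular sampling every alias of 9.7 Hz would fit equally well; the jitter makes the aliases fit poorly, which is the common-sense reason the reconstruction works.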

  • Industrial and Interdisciplinary Workshops
14 February 2020
10:00
Juan Reveles
Abstract

RF engineering defines the “perfect” parabolic shape that a foldable reflector antenna (i.e. the membrane) should have. In practice it is virtually impossible to design a deployable backing structure that can meet all RF-imposed requirements. Inevitably, the shape of the membrane will deviate from its ideal parabolic form when material properties and pragmatic mechanical design are considered. There is therefore a challenge to model such membranes in order to find the form they take, and then to use the model as a design tool and perhaps within an optimisation objective function, if tractable.

The variables we deal with are:
  • Elasticity of the membrane (anisotropic or orthotropic type)
  • Boundary forces (by virtue of the interaction between the membrane and its attachment)
  • Elasticity of the backing structure (e.g. the elasticity properties of the attachment)
  • Number, location and elasticity of the membrane fixing points

There are also in-orbit environmental effects on such structures for which modelling could be of value. For example, the structure can undergo thermal shocks, and oscillations can occur that are undamped by the usual atmospheric interactions present at ground level. There are many other such points to be considered and allowed for.
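As a purely illustrative sketch of the form-finding problem (not the presenter's model), the following Python snippet treats the membrane as a uniform-tension net whose rim is held by a slightly imperfect backing structure. The equilibrium interior then satisfies a discrete Laplace equation, so the relaxed surface necessarily departs from the RF-ideal paraboloid; all dimensions and perturbation sizes are invented.

```python
# Illustrative form-finding sketch: a membrane idealised as a uniform-tension net,
# rim pinned by an imperfect backing structure, interior relaxed to equilibrium.
import numpy as np

def relax_membrane(n=41, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    ideal = 0.25 * (X**2 + Y**2)                 # RF-ideal parabolic shape
    z = ideal.copy()
    # Hypothetical attachment imperfection: the fixed rim deviates slightly
    # from the ideal parabola because of the backing structure.
    rim = np.zeros((n, n), dtype=bool)
    rim[0, :] = rim[-1, :] = rim[:, 0] = rim[:, -1] = True
    z[rim] += 0.002 * rng.standard_normal(rim.sum())
    for _ in range(n_iter):                      # Jacobi relaxation of the interior nodes:
        # at equilibrium each free node sits at the mean of its four neighbours.
        z[1:-1, 1:-1] = (z[:-2, 1:-1] + z[2:, 1:-1] + z[1:-1, :-2] + z[1:-1, 2:]) / 4.0
    rms = np.sqrt(np.mean((z - ideal) ** 2))
    return z, rms

_, rms = relax_membrane()
print("RMS deviation from the ideal parabola: %.4f" % rms)
```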

  • Industrial and Interdisciplinary Workshops
31 January 2020
10:00
Michael Ostroumov
Abstract

Background: The traditional business models for B2B freight and distribution are struggling with underutilised transport capacity, resulting in higher costs, excessive environmental damage and unnecessary congestion. The scale of the problem is captured by the European Environment Agency: only 63% of journeys carry a useful load, and average vehicle utilisation is under 60% (by weight or volume). Decarbonisation of vehicles would address only part of the problem. That is why leading sector researchers estimate that freight collaboration (co-shipment) will deliver a step-change improvement in vehicle fill and thus remove unproductive journeys, delivering over 20% cost savings and a more than 25% reduction in environmental footprint. However, these benefits can only be achieved at a scale that involves hundreds of players collaborating at a national or pan-regional level. Such scale and complexity create a massive optimisation challenge that current market solutions are unable to handle (modern route-planning solutions optimise deliveries only within the “4 walls” of a single business).

Maths challenge: The optimisation challenge mentioned above could be expressed as an extended version of the TSP, but with multiple optimisation objectives (other than distance). Moreover, besides the scale and the multi-agent setup (many shippers, carriers and recipients engaged simultaneously), the model would have to handle a number of variables and constraints which, in addition to the obvious ones, also include: time (despatch/delivery dates/slots and journey durations), volume (items to be delivered), transport equipment with the respective rate-cards from different carriers, and so on. With the possible variability of despatch locations (when clients have a multi-warehouse setup), this potentially creates a very large non-convex optimisation problem that would require the development of new, much faster algorithms and approaches. Such an algorithm should be capable of finding “local” optima and subsequently improving them within a very short window, i.e. in minutes, which would be required to drive and manage effective inter-company collaboration across the many parties involved. We tried a few different approaches, e.g. the Gurobi solver, which even with clustering was still too slow and lacked scalability, only to realise that we need to build such an algorithm in-house.

Ask: We started to investigate other approaches such as Simulated Annealing and Gravitational Emulation Local Search, but this work is preliminary, and new and better ideas are of interest. So, in support of our Technical Feasibility study, we are looking for help in identifying the best approach and designing the actual algorithm that we will use in the development of our Proof of Concept.
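To make the local-search idea concrete, here is a minimal simulated-annealing sketch on a plain TSP core, using random toy data of our own rather than the company's system; the real problem would extend the move set and the objective with time windows, capacities and multi-party constraints.

```python
# Minimal simulated annealing on a toy TSP instance (illustration only).
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal(dist, n_iter=50000, t0=1.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = cur = tour_length(tour, dist)
    best_tour = tour[:]
    t = t0
    for _ in range(n_iter):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt style reversal
        c = tour_length(cand, dist)
        if c < cur or rng.random() < math.exp((cur - c) / t):  # accept some worse moves early on
            tour, cur = cand, c
            if c < best:
                best, best_tour = c, cand[:]
        t *= cooling                                           # gradually cool the temperature
    return best_tour, best

# Toy instance: 30 random points with Euclidean distances.
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(30)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print("tour length:", round(anneal(dist)[1], 3))
```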

  • Industrial and Interdisciplinary Workshops
6 December 2019
10:00
Steve Walker
Abstract

This challenge relates to problems (of a mathematical nature) in generating optimal solutions for natural flood management.  Natural flood management involves large numbers of small scale interventions in a much larger context through exploiting natural features in place of, for example, large civil engineering construction works. There is an optimisation problem related to the catchment hydrology and present methods use several unsatisfactory simplifications and assumptions that we would like to improve on.

  • Industrial and Interdisciplinary Workshops
29 November 2019
10:00
Brian Macey
Abstract

Background

The RON test is an engine test that is used to measure the research octane number (RON) of a gasoline. It is a parameter that is set in fuel specifications and is an indicator of a fuel's tendency to partially explode during burning rather than burn smoothly.

The efficiency of a gasoline engine is limited by the RON value of the fuel that it is using. As the world moves towards lower carbon, predicting the RON of a fuel will become more important.

Typical market gasolines are blended from several hundred hydrocarbon components plus alcohols and ethers. Each component has a RON value and therefore, if the composition is known, the RON can be calculated. Unfortunately, components can have antagonistic or complementary effects on each other, and this needs to be taken into account in the calculation.
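As a toy illustration of the blending calculation, and of how pairwise interaction terms could represent the antagonistic or complementary effects mentioned above, here is a short Python sketch; the component RON values and interaction coefficient are made-up illustrative numbers, not BP data.

```python
# Toy blending model: linear-by-fraction RON plus optional pairwise interaction terms.
def blend_ron(fractions, ron, interactions=None):
    """fractions: {component: volume fraction}, ron: {component: neat RON},
    interactions: {(a, b): coefficient} for pairwise deviations from linearity."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    linear = sum(x * ron[c] for c, x in fractions.items())
    deviation = 0.0
    for (a, b), k in (interactions or {}).items():
        deviation += k * fractions.get(a, 0.0) * fractions.get(b, 0.0)
    return linear + deviation

ron = {"isooctane": 100.0, "n-heptane": 0.0, "ethanol": 109.0}     # illustrative values
blend = {"isooctane": 0.55, "n-heptane": 0.25, "ethanol": 0.20}
print(blend_ron(blend, ron))                                        # purely linear estimate
print(blend_ron(blend, ron, {("ethanol", "isooctane"): 8.0}))       # with a synergy term
```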

Several models have been produced over the years (the RON test has been around for over 60 years) but the accuracy of the models is variable. The existing models are empirically based rather than taking into account the causal links between fuel component properties and RON performance.

Opportunity

BP has developed intellectual property regarding the causal links, and we need to know if these can be used to build a functionally based model. There is also an opportunity to build a better empirically based model using data on individual fuel components (previous models have grouped similar components to lessen the computing effort).

  • Industrial and Interdisciplinary Workshops
15 November 2019
10:00
Michael Hirsch
Abstract

Optical super-resolution microscopy enables the observation of individual bio-molecules. The arrangement and dynamic behaviour of such molecules are studied to gain insights into cellular processes, which in turn lead to various applications such as cancer treatments. STFC's Central Laser Facility provides (among other things) public access to super-resolution microscopy techniques via research grants. The access includes sample preparation, imaging facilities and data analysis support. Data analysis includes single-molecule tracking algorithms that produce molecule traces, or tracks, from time series of molecule observations. While current algorithms are gradually getting away from "connecting the dots" and are using probabilistic methods, they often fail to quantify the uncertainties in the results. We have developed a method that samples a probability distribution of tracking solutions using the Metropolis-Hastings algorithm. Such a method can produce likely alternative solutions together with uncertainties in the results. While the method works well for smaller data sets, it is still inefficient for the amount of data that is commonly collected with microscopes. Given the observations of the molecules, tracking solutions are discrete, which gives the proposal distribution of the sampler a peculiar form. In order for the sampler to work efficiently, the proposal density needs to be well designed. We will discuss the properties of tracking solutions and the problems of proposal function design from the point of view of discrete mathematics, specifically in terms of graphs. Can mathematical theory help to design an efficient proposal function?
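To illustrate the general idea only (this is not STFC's implementation), the sketch below runs Metropolis-Hastings over discrete tracking solutions for two frames: a solution is an assignment between detections, the proposal swaps two links (and is symmetric), and the target distribution favours short jumps. The coordinates and noise scale are invented.

```python
# Toy Metropolis-Hastings sampler over discrete tracking solutions (two frames).
import math, random

def mh_tracking(frame_a, frame_b, sigma=0.5, n_iter=20000, seed=0):
    rng = random.Random(seed)
    n = len(frame_a)
    def neg_log_p(assign):          # -log target probability, up to a constant
        return sum(((frame_a[i][0] - frame_b[j][0]) ** 2 +
                    (frame_a[i][1] - frame_b[j][1]) ** 2) / (2 * sigma ** 2)
                   for i, j in enumerate(assign))
    assign = list(range(n))         # current tracking solution (detection i -> detection assign[i])
    e = neg_log_p(assign)
    samples = {}
    for _ in range(n_iter):
        i, j = rng.sample(range(n), 2)
        cand = assign[:]
        cand[i], cand[j] = cand[j], cand[i]        # symmetric swap proposal
        ec = neg_log_p(cand)
        if ec < e or rng.random() < math.exp(e - ec):
            assign, e = cand, ec
        samples[tuple(assign)] = samples.get(tuple(assign), 0) + 1
    return samples                  # empirical distribution over tracking solutions

a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1)]
b = [(0.1, 0.1), (1.1, -0.1), (2.0, 0.0)]
top = max(mh_tracking(a, b).items(), key=lambda kv: kv[1])
print("most visited assignment:", top[0])   # the short-jump linking, with alternatives also sampled
```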

  • Industrial and Interdisciplinary Workshops
8 November 2019
10:00
Abstract

We will present three problems that we are interested in:

1. Forecast of volatility, both at the instrument and portfolio level, by combining a model-based approach with data-driven research.
We will deal with additional complications that arise for instruments that are highly correlated and/or have low volumes and open interest. We will also test whether the volatility forecast improves metrics or can be used to derive alpha in our trading book. (A baseline sketch follows below.)
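As a baseline for the first problem, a standard RiskMetrics-style EWMA volatility estimator is the kind of model-based forecast a data-driven approach would be compared against; the sketch below uses synthetic returns and a conventional decay parameter, neither taken from the firm's data.

```python
# Baseline sketch: EWMA volatility forecast on synthetic returns (illustration only).
import numpy as np

def ewma_vol(returns, lam=0.94):
    """RiskMetrics-style recursion: var_t = lam*var_{t-1} + (1-lam)*r_{t-1}^2."""
    var = np.var(returns[:20])              # seed the recursion from the first observations
    for r in returns[20:]:
        var = lam * var + (1 - lam) * r * r
    return var ** 0.5                       # one-step-ahead volatility forecast

rng = np.random.default_rng(0)
rets = 0.01 * rng.standard_normal(500)      # synthetic daily returns
print("forecast vol: %.4f" % ewma_vol(rets))
```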

2. Price prediction using physical oil grades data
Hypothesis: physical markets are the most reflective of true fundamentals. Derivative markets can deviate from fundamentals (and hence from physical markets) over short time horizons but eventually converge back. These dislocations would represent potential trading opportunities.
The problem: can we use the rich data from physical market prices to predict price changes in the derivative markets? A solution would explore lead/lag relationships amongst a dataset of highly correlated features, as well as feature interdependencies and non-linearities (see the sketch below). The prediction could be in the form of a price target for the derivative (‘fair value’), a simple direction without magnitude, or a probabilistic range of outcomes.
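A minimal sketch of the lead/lag scan mentioned above, using synthetic return series in which the "physical" series leads the "derivative" series by construction; none of this is the firm's data.

```python
# Lead/lag scan on synthetic data: find the lag at which physical-market returns
# correlate most strongly with derivative-market returns.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
physical_ret = rng.standard_normal(n)
# Synthetic derivative returns that follow the physical market with a 2-step lag.
derivative_ret = 0.6 * np.roll(physical_ret, 2) + 0.8 * rng.standard_normal(n)

def lagged_corr(x, y, lag):
    """Correlation between x at time t and y at time t + lag."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

lags = range(-5, 6)
corrs = [lagged_corr(physical_ret, derivative_ret, k) for k in lags]
best = max(zip(lags, corrs), key=lambda kv: kv[1])
print("strongest link: physical leads derivative by %d steps (corr %.2f)" % best)
```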

3. Modelling oil balances using satellite data
The flow of oil around the world, from extraction through refining, transport and consumption, forms a very large dynamic network. At both regular and irregular intervals, we can make noisy measurements of the amount of oil at certain points in the network. In addition, we have general macro-economic information about the supply and demand of oil in certain regions. Based on that information, together with general information about the connections between nodes in the network (i.e. typical rates of transfer), one can build a general model for how oil flows through the network. We would like to build a probabilistic model on the network, representing our belief about the amount of oil stored at each node, which we refer to as its balance. We want to focus on particular parts of the network where our beliefs can be augmented by satellite data, which can be done by focusing on a sub-network containing the nodes to which satellite measurements can be applied (a toy single-node sketch follows).
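As a toy version of the balance-tracking idea (our simplification, not the firm's model), the sketch below maintains a Gaussian belief over the stored amount at a single node, propagating it with assumed inflows and outflows and updating it whenever a noisy satellite measurement arrives; all numbers are invented.

```python
# Toy probabilistic balance model for one storage node: Kalman-style predict/update.
def predict(mean, var, inflow, outflow, flow_var):
    """Propagate the belief over the balance by one time step."""
    return mean + inflow - outflow, var + flow_var

def update(mean, var, measurement, meas_var):
    """Condition the Gaussian belief on a noisy satellite observation."""
    gain = var / (var + meas_var)
    return mean + gain * (measurement - mean), (1 - gain) * var

mean, var = 50.0, 25.0                      # prior belief: 50 +/- 5 units stored
for t, sat in enumerate([None, 58.0, None, None, 61.0]):
    mean, var = predict(mean, var, inflow=4.0, outflow=2.0, flow_var=1.0)
    if sat is not None:                     # satellite data arrives at irregular intervals
        mean, var = update(mean, var, sat, meas_var=9.0)
    print("t=%d  balance belief: %.1f +/- %.1f" % (t, mean, var ** 0.5))
```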
