Forthcoming events in this series


Fri, 08 Nov 2019

10:00 - 11:00
L3

Financial modelling and utilisation of a diverse range of data sets in oil markets

Milos Krkic
(BP IST Data Strategists)
Abstract

We will present three problems that we are interested in:

Forecasting volatility at both the instrument and portfolio level by combining a model-based approach with data-driven research
We will deal with the additional complications that arise in the case of instruments that are highly correlated and/or have low volumes and open interest.
We will also test whether the volatility forecast improves our metrics or can be used to derive alpha in our trading book.
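As a toy illustration of the data-driven ingredient such a forecast might combine (not from the talk itself), a minimal exponentially weighted moving-average (EWMA) volatility estimate over a return series could look like this; the decay parameter and the returns are placeholders:

```python
# EWMA (RiskMetrics-style) one-step-ahead volatility forecast.
# The decay parameter lam and the synthetic returns are illustrative.

def ewma_volatility(returns, lam=0.94):
    """Return the one-step-ahead EWMA volatility forecast."""
    var = returns[0] ** 2          # initialise with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var ** 0.5              # report volatility, not variance

returns = [0.01, -0.02, 0.015, -0.005, 0.02]
print(ewma_volatility(returns))
```

A portfolio-level forecast would additionally need a correlation model across instruments, which is where the highly correlated, low-volume instruments mentioned above become difficult.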

Price prediction using physical oil grades data
Hypothesis:
Physical markets are most reflective of true fundamentals. Derivative markets can deviate from fundamentals (and hence from physical markets) over short time horizons but eventually converge back. These dislocations would represent potential trading opportunities.
The problem:
Can we use the rich data from the physical market prices to predict price changes in the derivative markets?
A solution would explore lead/lag relationships amongst a dataset of highly correlated features, as well as feature interdependencies and non-linearities.
The prediction could be in the form of a price target for the derivative (‘fair value’), a simple direction without magnitude, or a probabilistic range of outcomes.
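A first pass at the lead/lag question might simply scan candidate lags and ask at which lag physical-grade price changes correlate best with subsequent derivative price changes. The sketch below uses synthetic series; real work would use daily changes for a physical grade and a related futures contract:

```python
# Lead/lag scan: correlate physical price changes against derivative
# price changes shifted by each candidate lag. Series are synthetic.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def best_lead(physical, derivative, max_lag=3):
    """Lag (in periods) at which physical changes best predict derivative changes."""
    scores = {}
    for lag in range(1, max_lag + 1):
        scores[lag] = pearson(physical[:-lag], derivative[lag:])
    return max(scores, key=scores.get), scores

physical   = [0.5, -0.2, 0.8, -0.4, 0.3, 0.6, -0.1, 0.2]
derivative = [0.0, 0.4, -0.1, 0.7, -0.3, 0.2, 0.5, 0.0]  # trails physical by one period
lag, scores = best_lead(physical, derivative)
print(lag)
```

Pairwise lagged correlation ignores the feature interdependencies and non-linearities mentioned above; it is only the baseline a richer model would have to beat.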

Modelling oil balances by satellite data
The flow of oil around the world, from extraction through refining and transport to consumption, forms a very large dynamic network. At both regular and irregular intervals, we can make noisy measurements of the amount of oil at certain points in the network.
In addition, we have general macro-economic information about the supply and demand of oil in certain regions.
Combining this with general information about the connections between nodes in the network, i.e. the typical rates of transfer, one can build a general model for how oil flows through the network.
We would like to build a probabilistic model on the network, representing our belief about the amount of oil stored at each of our nodes, which we refer to as balances.
We want to focus on particular parts of the network where our beliefs can be augmented by satellite data, which can be done by restricting attention to a sub-network containing the nodes to which satellite measurements apply.
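At a single node, augmenting a belief with satellite data is, in its simplest Gaussian form, a standard Bayesian (Kalman-style) update. The sketch below is a minimal illustration with invented numbers, not the talk's actual model:

```python
# One Bayesian update of the oil balance at a single storage node:
# a Gaussian prior belief about the stored volume is combined with a
# noisy satellite measurement. All numbers are illustrative.

def kalman_update(prior_mean, prior_var, measurement, meas_var):
    """Posterior belief after observing a noisy measurement."""
    gain = prior_var / (prior_var + meas_var)
    post_mean = prior_mean + gain * (measurement - prior_mean)
    post_var = (1 - gain) * prior_var
    return post_mean, post_var

# Prior from the flow model: 10.0 units stored, variance 4.0.
# Satellite estimate (e.g. floating-roof tank height): 12.0, variance 1.0.
mean, var = kalman_update(10.0, 4.0, 12.0, 1.0)
print(mean, var)   # posterior is pulled towards the more precise source
```

The full problem couples many such nodes through the flow model, so the update becomes a joint one over the whole sub-network rather than node by node.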

Fri, 25 Oct 2019

10:00 - 11:00
L3

Maximum temperature rise of a thermally conductive cuboid subjected to a (potentially time dependent) power deposition profile

Wayne Arter
(CCFE)
Abstract

The challenge is to produce a reduced-order model which predicts the maximum temperature rise of a thermally conducting object subjected to a power deposition profile supplied by an external code. The target conducting object is basically cuboidal, but with one or more shaped faces, and may have complex internal cooling structures; the deposition profile may be time dependent and exhibit hot spots and sharp-edged shadows, among other features. An additional feature is the importance of radiation, which makes the problem nonlinear, and investigation of control strategies is also of interest. Overall there appears to be a sequence of problems, of a degree of difficulty sufficient to tax the most gifted student, starting with a line profile on a cuboid (quasi-2D) with a linearised radiation term and moving towards increasing difficulty.
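A reference solution for the easiest case in that sequence can be written down directly: a 1D conductor with a deposited power profile and a linearised radiation loss, stepped with an explicit finite-difference scheme. All parameters below are illustrative placeholders, not values from the talk:

```python
# Explicit finite-difference sketch of the simplest problem variant:
# dT/dt = alpha * d2T/dx2 + q(x) - h * T, where T is the temperature
# rise and -h*T is the linearised radiation loss. Ends held at T = 0.
# alpha, h, the hot-spot profile q and the grid are all illustrative.

def max_temperature_rise(n=51, steps=2000, alpha=1.0, h=0.5, dx=0.02, dt=1e-4):
    T = [0.0] * n
    q = [1.0 if 0.4 <= i * dx <= 0.6 else 0.0 for i in range(n)]  # hot spot
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
            Tn[i] = T[i] + dt * (alpha * lap + q[i] - h * T[i])
        T = Tn
    return max(T)

print(max_temperature_rise())
```

The step sizes satisfy the explicit-scheme stability bound alpha*dt/dx^2 <= 1/2; a reduced-order model would be judged against this kind of direct solve.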

Fri, 14 Jun 2019

10:00 - 11:00
L2

Robust Identification of Drones and UAVs in the Air Space for Improving Public Safety and Security

Jahangir Mohammed
(Thales (Aveillant))
Abstract

The disruptive drone activity at airports requires an early warning system, and Aveillant make a radar system that can do the job. The main problem is telling the difference between birds and drones, where there may be one or two drones and tens or hundreds of birds. There is plenty of data, including time series of how the targets move, and the aim is to improve the discrimination capability of the tracker using machine learning.

Specifically, the challenge is to understand whether there can be sufficient separability between birds and drones based on different features, such as flight profiles, length of the track, their states, and their dominance/correlation in the overall discrimination. Along with conventional machine learning techniques, the challenge is to consider how different techniques, such as deep neural networks, may perform in the discrimination task.
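To make the feature question concrete, here is a toy sketch of two track-level features of the kind listed above (the tracks and the interpretation are invented; real discrimination would train a classifier, or a deep network, on many such features over real radar tracks):

```python
# Track-level features that might separate drones from birds:
# speed variability and path straightness. Tracks are synthetic.

def track_features(track):
    """track: list of (x, y) positions at equal time steps."""
    speeds = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
              for (x1, y1), (x2, y2) in zip(track, track[1:])]
    mean = sum(speeds) / len(speeds)
    var = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    path = sum(speeds)
    disp = ((track[-1][0] - track[0][0]) ** 2 +
            (track[-1][1] - track[0][1]) ** 2) ** 0.5
    straightness = disp / path if path else 0.0   # 1.0 = perfectly straight
    return {"speed_var": var, "straightness": straightness}

hover_drone  = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (0.0, 0.0)]
soaring_bird = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.5), (3.0, 0.9), (4.0, 1.4)]
print(track_features(hover_drone)["straightness"])
print(track_features(soaring_bird)["straightness"])
```

The separability question is then whether distributions of such features overlap too much between the two classes, and how much extra margin learned representations can buy.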

Fri, 31 May 2019

10:00 - 11:00
L3

An optimal control approach to Formula 1 lap simulation

Mike Beeson, Matt Davidson and James Rogers
(Racing Point F1)
Abstract

In Formula 1 engineers strive to produce the fastest car possible for their drivers. A lap simulation provides an objective evaluation of the performance of the car and the subsequent lap time achieved. Using this information, engineers aim to test new car concepts, determine performance limitations or compromises, and identify the sensitivity of performance to car setup parameters.

The latest state-of-the-art lap simulation techniques use optimal control approaches. Optimisation methods are employed to derive the optimal control inputs of the car that achieve the fastest lap time within the constraints of the system. The resulting state trajectories define the complete behaviour of the car. Such approaches aim to create more robust, realistic and powerful simulation output compared to traditional methods.

In this talk we discuss our latest work in this area. A dynamic vehicle model is used within a free-trajectory solver based on direct optimal control methods. We discuss the reasons behind our design choices, our progress to date, and the issues we have faced during development. Further, we look at the short and long term aims of our project and how we wish to develop our mathematical methods in the future.
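The free-trajectory optimal-control solver itself is far beyond a short sketch, but the classical quasi-steady-state baseline that such methods improve on fits in a few lines: a forward/backward pass building the fastest speed profile along a fixed racing line. The grip and power limits below are illustrative numbers, not Racing Point data:

```python
# Quasi-steady-state speed profile along a fixed line: cornering limit
# first, then a forward (traction) pass and a backward (braking) pass.
# a_lat, a_acc, a_brake and the track curvatures are illustrative.

def speed_profile(curvatures, ds=10.0, a_lat=30.0, a_acc=10.0, a_brake=15.0):
    """Max speed (m/s) at each point of a line sampled every ds metres."""
    # Cornering limit: v^2 * kappa <= a_lat (capped on straights).
    v = [(a_lat / k) ** 0.5 if k > 1e-9 else 120.0 for k in curvatures]
    # Forward pass: limited by traction when accelerating.
    for i in range(1, len(v)):
        v[i] = min(v[i], (v[i - 1] ** 2 + 2 * a_acc * ds) ** 0.5)
    # Backward pass: limited by braking into the next corner.
    for i in range(len(v) - 2, -1, -1):
        v[i] = min(v[i], (v[i + 1] ** 2 + 2 * a_brake * ds) ** 0.5)
    return v

track = [0.0] * 5 + [0.05] * 3 + [0.0] * 5        # straight, corner, straight
v = speed_profile(track)
lap_time = sum(10.0 / s for s in v)
print([round(x, 1) for x in v])
```

The optimal-control formulation discussed in the talk replaces this fixed-line, point-mass picture with a dynamic vehicle model and a trajectory that is itself a decision variable.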

Fri, 10 May 2019

10:00 - 11:00
L3

Developing the Next Generation of Image Reconstruction in Atom Probe Tomography

Charlie Fletcher and Dan Haley
(Department of Materials Science)
Abstract

Atom Probe Tomography is a powerful 3D mass spectrometry technique. By pulsing the sample apex with an electric field, surface atoms are ionised and collected by a detector. A 3D image of the estimated initial ion positions is constructed via an image reconstruction protocol. Current protocols assume that ion trajectories follow a stereographic projection. However, this method assumes a hemispherical sample apex, fails to account for varying material ionisation rates, and introduces severe distortions into the atomic distributions of complex material systems.
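The lateral part of that standard protocol reduces to a point-projection magnification, sketched below; the flight path, apex radius and image-compression factor are illustrative values, not instrument calibrations:

```python
# Sketch of the point-projection step in stereographic-style protocols:
# a detector hit (X, Y) maps back to the sample surface through a
# magnification M = L / (xi * R), with L the flight path, R the assumed
# hemispherical apex radius and xi the image-compression factor.
# All parameter values are illustrative.

def back_project(X, Y, L=0.1, R=50e-9, xi=1.65):
    """Map a detector impact position (m) to a lateral tip position (m)."""
    M = L / (xi * R)               # point-projection magnification
    return X / M, Y / M

x, y = back_project(0.02, -0.01)   # a hit 2 cm off-axis on the detector
print(x, y)                        # back on the tip, tens of nanometres
```

The continuum models described below replace this fixed analytic mapping with a simulated, time-dependent one, which is where the distortions for non-hemispherical apices can be corrected.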

We aim to develop continuum models and use these to derive a time-dependent mapping describing how ion initial positions on the sample surface correspond to final impact positions on the detector. When correctly calibrated against experiment, such a mapping could be used to perform the reconstruction.

Currently we track the sample surface using a level set method, while the electric field is solved via BEM or a FEM-BEM coupling. These field calculations must remain accurate close to the boundary. Calibrating unknown evaporation parameters against experiment requires an ensemble of models per experiment; therefore, we are also looking to maximise model efficiency via BEM compression methods, e.g. the fast multipole BEM. Efficiently constructing and reliably interpolating the non-bijective trajectory mapping, while accounting for ion trajectory overlap and instabilities (at sample surface corners), also presents intriguing problems.

This project is in collaboration with Cameca, the leading manufacturer of commercial atom probe instruments. If successful in minimising distortions such a technique could become valuable within the semiconductor industry.

Fri, 25 Jan 2019

10:00 - 11:00
L5

Coresets for clustering very large datasets

Stephane Chretien
(NPL)
Abstract

Clustering is a very important task in data analytics and is usually addressed using (i) statistical tools based on maximum likelihood estimators for mixture models, (ii) techniques based on network models such as the stochastic block model, or (iii) relaxations of the K-means approach based on semi-definite programming (or even simpler spectral approaches). Statistical approaches of type (i) often suffer from not being solvable with sufficient guarantees, because of the non-convexity of the underlying cost function to optimise. The other two approaches (ii) and (iii) are amenable to convex programming but do not usually scale to large datasets. In the big data setting, one usually needs to resort to data subsampling, a preprocessing stage also known as "coreset selection". We will present this last approach and the problem of selecting a coreset for the special cases of K-means and spectral-type relaxations.
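As a concrete instance of coreset selection for K-means, a minimal "lightweight coreset" sampler (in the spirit of the lightweight-coreset literature, not necessarily the scheme presented in the talk) mixes uniform sampling with a distance-to-mean term and reweights the sample so it approximates the full K-means cost. The data below is synthetic 1-D:

```python
# Minimal lightweight-coreset sampler for K-means: sample point i with
# probability q_i = 0.5/n + 0.5 * d(x_i, mean)^2 / sum_j d(x_j, mean)^2
# and weight it by 1/(m * q_i). Data and sizes are illustrative.

import random

def lightweight_coreset(points, m, seed=0):
    random.seed(seed)
    n = len(points)
    mu = sum(points) / n
    dists = [(p - mu) ** 2 for p in points]
    total = sum(dists)
    q = [0.5 / n + 0.5 * d / total for d in dists]
    idx = random.choices(range(n), weights=q, k=m)
    return [(points[i], 1.0 / (m * q[i])) for i in idx]   # (point, weight)

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9, 10.0, 10.1]
coreset = lightweight_coreset(data, m=4)
print(coreset)
```

Running K-means on the weighted coreset instead of the full dataset is what makes the SDP and spectral relaxations mentioned above tractable at scale.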


Fri, 19 Oct 2018

10:00 - 11:00
L3

The Interdistrict shipping problem

Brent Peterson
(AirProducts)
Abstract

At first glance the Interdistrict shipping problem resembles a transportation problem: N sources and M destinations with k stock-keeping units (SKUs). However, we want to solve for the optimal shipping frequency between each pair of nodes while determining the flow of each SKU across the network. As the replenishment quantity goes up, the shipping frequency goes down and the inventory holding cost goes up (AWI = Replenishment Qty/2 + SS). Safety stock also increases as frequency decreases. The relationship between replenishment quantity and shipping frequency is non-linear (frequency = annual demand/replenishment qty). The trucks which are used to transfer the product have finite capacity, and the cost to drive a truck between two locations is constant regardless of how many containers are actually on the truck, up to the maximum capacity. Each product can have a different footprint of truck capacity. Cross docking is allowed. (I.e. a truck may travel from location A to location B carrying products X and Y; at location B, the truck unloads product X, picks up product Z, and continues to location C. The key here is that product Y does not incur any handling costs at location B, while products X and Z do.)

The objective function seeks to minimize the total costs (distribution + handling + inventory holding costs) for all locations and all SKUs, while determining how much of each product should flow across each arc such that all demand is satisfied.
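The frequency/inventory trade-off for a single lane can be made concrete by ignoring safety stock and truck capacity for a moment: with replenishment quantity Q = D/f, average inventory is Q/2, so annual cost(f) = f * trip_cost + h * D / (2f). A grid search over candidate frequencies (numbers below are invented) finds the cheapest one:

```python
# Single-lane frequency/inventory trade-off, ignoring safety stock and
# truck capacity: cost(f) = f * trip_cost + holding * demand / (2 * f).
# Demand, trip cost and holding rate are illustrative numbers.

def lane_cost(f, demand, trip_cost, holding):
    return f * trip_cost + holding * demand / (2 * f)

def best_frequency(demand, trip_cost, holding, candidates=range(1, 365)):
    return min(candidates, key=lambda f: lane_cost(f, demand, trip_cost, holding))

f = best_frequency(demand=1000.0, trip_cost=200.0, holding=10.0)
print(f, lane_cost(f, 1000.0, 200.0, 10.0))   # 5 trips/year, cost 2000.0
```

The full problem couples all lanes through truck capacity, cross docking and per-SKU footprints, which is what turns this one-dimensional trade-off into a hard network design problem.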

Fri, 09 Mar 2018

10:00 - 11:00
L3

1-3 Composite Modelling

Hannah Rose
(Thales)
Abstract

An important and relevant topic at Thales is 1-3 composite modelling capability, in particular sensitivity enhancement through design.

A simplistic model developed by Smith and Auld [1] grouped the polycrystalline active and filler materials into an effective homogeneous medium by using the rule of weighted averages to generate “effective” elastic, electric and piezoelectric properties. This method has been further improved by Avellaneda & Swart [2]. However, these models fail to provide all of the terms necessary to populate a full elasto-electric matrix, such that the remaining terms need to be estimated by some heuristic approach. The derivation of an approach which allowed all of the terms in the elasto-electric matrix to be calculated would allow much more thorough and powerful predictions – for example, allowing lateral modes etc. to be traced, and allowing a more detailed design of a closely-packed array of 1-3 sensors to be conducted with much higher confidence, accounting for the inter-element coupling which partly governs the key field of view of the overall array. In addition, the ability to populate the matrix for single crystal material – which features more independent terms in the elasto-electric matrix than conventional polycrystalline material – would complement the increasing interest in single crystals for practical SONAR devices.
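In its simplest form, the rule-of-weighted-averages grouping amounts to a parallel (Voigt) average along the poling direction and a series (Reuss) average transverse to it. The sketch below illustrates the two bounds with placeholder ceramic/polymer values, not Thales material data:

```python
# Rule-of-weighted-averages effective-medium bounds for a 1-3 composite:
# parallel (Voigt) and series (Reuss) mixing of a property between the
# active pillars and the passive filler. Values are illustrative.

def voigt(p_active, p_filler, v):
    """Volume-fraction weighted (parallel) average."""
    return v * p_active + (1 - v) * p_filler

def reuss(p_active, p_filler, v):
    """Inverse-weighted (series) average."""
    return 1.0 / (v / p_active + (1 - v) / p_filler)

# Stiffness-like property: active pillar 100 GPa, polymer filler 5 GPa.
v = 0.4                      # ceramic volume fraction
print(voigt(100.0, 5.0, v))  # parallel: dominated by the stiff pillars
print(reuss(100.0, 5.0, v))  # series: dominated by the compliant filler
```

The limitation described above is precisely that averages of this kind yield only some entries of the elasto-electric matrix, leaving the rest to heuristics.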

1. “Modelling 1-3 Composite Piezoelectrics: Hydrostatic Response”, IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control 40(1):41-
2. “Calculating the performance of 1-3 piezoelectric composites for hydrophone applications: An effective medium approach”, The Journal of the Acoustical Society of America 103, 1449, 1998