Hello Oxford Maths students! 

GSK is building a new team in their AI R&D department, led by a fellow Oxford MMath alumnus. Join us in the fast-growing space of pharmaceutical finance and answer important questions like "How can we optimise our capital allocation for the development of essential medicines and vaccines? What level of risk can we afford to take given our budget? What can historical data tell us about the uncertainty in clinical trial outcomes?"

A reminder that Prelims Corner is taking place every Monday at 11am in the South Mezzanine!

Physics-informed recovery of nonlinear residual stress fields in an inverse continuum framework
Sanz-Herrera, J Goriely, A Journal of the Mechanics and Physics of Solids volume 200 (27 Feb 2025)
Hyperbolicity and Model-Complete Fields
Szachniewicz, M Ye, J International Mathematics Research Notices volume 2025 issue 4 rnaf019 (21 Feb 2025)

For those of you (all?) who follow our social media, we have now joined Bluesky as 'oxfordmathematics', though we are not leaving X as we don't want to abandon nearly 70,000 followers.

We haven't posted on Bluesky yet, not least because much of our content is now video and Bluesky has a small video file size limit. But we will.

Higher order Lipschitz Sandwich theorems
Lyons, T McLeod, A Journal of the London Mathematical Society volume 111 issue 3 (07 Mar 2025)
Thu, 08 May 2025
14:00
(This talk is hosted by Rutherford Appleton Laboratory)

Multilevel Monte Carlo Methods with Smoothing

Aretha Teckentrup
(University of Edinburgh)
Abstract

Parameters in mathematical models are often impossible to determine fully or accurately, and are hence subject to uncertainty. By modelling the input parameters as stochastic processes, it is possible to quantify the uncertainty in the model outputs. 

In this talk, we employ the multilevel Monte Carlo (MLMC) method to compute expected values of quantities of interest related to partial differential equations with random coefficients. We make use of the circulant embedding method for sampling from the coefficient, and to further improve the computational complexity of the MLMC estimator, we devise and implement a smoothing technique integrated into the circulant embedding method. This allows us to choose the coarsest mesh on the first level of MLMC independently of the correlation length of the covariance function of the random field, leading to considerable savings in computational cost.
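The talk's setting is PDEs with random coefficients, but the core MLMC idea can be illustrated in a much simpler toy problem. The sketch below (our illustration, not the speaker's code; all parameter values are made up) estimates E[S(T)] for geometric Brownian motion, where level l refines the Euler time step and the fine and coarse paths on each level share the same Brownian increments — the coupling that makes the variance of the level corrections decay and gives MLMC its cost savings.

```python
import numpy as np


def mlmc_level(l, N, M=2, T=1.0, S0=1.0, r=0.05, sigma=0.2, rng=None):
    """Return N samples of P_l (l = 0) or of the correction P_l - P_{l-1} (l > 0),
    where P_l is the Euler approximation of S(T) with M**l time steps."""
    rng = rng if rng is not None else np.random.default_rng()
    nf = M**l            # number of fine steps
    hf = T / nf          # fine step size
    Sf = np.full(N, S0)
    if l == 0:
        dW = rng.normal(0.0, np.sqrt(hf), N)
        return Sf + r * Sf * hf + sigma * Sf * dW
    nc = M**(l - 1)      # coarse steps
    hc = T / nc
    Sc = np.full(N, S0)
    for _ in range(nc):
        dWc = np.zeros(N)
        # M fine steps per coarse step, reusing the same Brownian increments
        for _ in range(M):
            dW = rng.normal(0.0, np.sqrt(hf), N)
            Sf = Sf + r * Sf * hf + sigma * Sf * dW
            dWc += dW
        Sc = Sc + r * Sc * hc + sigma * Sc * dWc
    return Sf - Sc


def mlmc_estimate(L, N, **kw):
    """Telescoping sum: E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(0)
    return sum(np.mean(mlmc_level(l, N, rng=rng, **kw)) for l in range(L + 1))


# With the parameters above, E[S(T)] = S0 * exp(r * T) ~= 1.0513.
estimate = mlmc_estimate(L=5, N=20000)
```

In a full implementation the number of samples per level would be chosen adaptively from the estimated variances, and in the PDE setting of the talk the levels correspond to mesh refinements rather than time steps.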


Please note: this talk is hosted by Rutherford Appleton Laboratory, Harwell Campus, Didcot, OX11 0QX


Wasserstein distributional adversarial training for deep neural networks
Bai, X He, G Jiang, Y Obloj, J (13 Feb 2025)