The triangulation complexity of fibred 3–manifolds
Lackenby, M; Purcell, J. Geometry & Topology, volume 28, issue 4, 1727-1828 (18 Jul 2024)
Approximation of an Inverse of the Incomplete Beta Function
Giles, M; Beentjes, C. Mathematical Software – ICMS 2024, volume 14749 (17 Jul 2024)
The Mathematics of Shock Reflection-Diffraction and von Neumann’s Conjectures
Chen, G; Feldman, M (01 Jan 2018)
Thu, 23 Jan 2025

14:00 - 15:00
Lecture Room 3

Multi-Index Monte Carlo Method for Semilinear Stochastic Partial Differential Equations

Abdul Lateef Haji-Ali
(Heriot Watt)
Abstract

We present an exponential-integrator-based multi-index Monte Carlo (MIMC) method for the weak approximation of mild solutions to semilinear stochastic partial differential equations (SPDEs). Theoretical results on multi-index coupled solutions of the SPDE are provided, establishing their stability and showing that they satisfy multiplicative error estimates. Leveraging this theory, we develop a tractable MIMC algorithm. Numerical experiments illustrate that MIMC outperforms alternative approaches, such as multilevel Monte Carlo, particularly in low-regularity settings.
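The combinatorial core of multi-index Monte Carlo can be seen without any SPDE machinery. The following sketch (my illustration, not the speaker's algorithm) uses a toy two-parameter approximation — a midpoint rule for a 2D integral, with the resolution in each direction as one index — to show the mixed first differences whose sum over an index set telescopes to a fine approximation; MIMC estimates each difference by Monte Carlo and truncates the index set to cut cost.

```python
import numpy as np

def approx(a1, a2):
    """Toy approximation P_(a1, a2): midpoint rule for the integral of
    sin(pi*x)*sin(pi*y) over [0,1]^2 with 2^a1 by 2^a2 cells."""
    if a1 < 0 or a2 < 0:
        return 0.0  # convention: approximations below level 0 vanish
    x = (np.arange(2 ** a1) + 0.5) / 2 ** a1
    y = (np.arange(2 ** a2) + 0.5) / 2 ** a2
    return np.sin(np.pi * x).mean() * np.sin(np.pi * y).mean()

def mixed_difference(a1, a2):
    """Delta P_(a1, a2): first differences applied in both index directions,
    i.e. an alternating-sign sum over the four lower neighbours."""
    return sum((-1) ** (j1 + j2) * approx(a1 - j1, a2 - j2)
               for j1 in (0, 1) for j2 in (0, 1))

# Summing the mixed differences over a full rectangle of indices telescopes
# exactly to the single finest approximation.
total = sum(mixed_difference(a1, a2) for a1 in range(4) for a2 in range(4))
print(abs(total - approx(3, 3)) < 1e-12)  # True: exact telescoping
```

In MIMC the rectangle is replaced by a sparser index set (e.g. total degree), dropping the expensive corner indices whose contributions are smallest when mixed regularity holds.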

A first passage model of intravitreal drug delivery and residence time - influence of ocular geometry, individual variability, and injection location
Lamirande, P; Gaffney, E; Gertz, M; Maini, P; Crawshaw, J; Caruso, A. Investigative Ophthalmology and Visual Science
Thu, 07 Nov 2024

14:00 - 15:00
Lecture Room 3

Multilevel Monte Carlo methods

Mike Giles
(Oxford University)
Abstract

In this seminar I will begin by giving an overview of some problems in stochastic simulation and uncertainty quantification. I will then outline the Multilevel Monte Carlo method for situations in which accurate simulations are very costly, but it is possible to perform much cheaper, less accurate simulations. Inspired by the multigrid method, a combination of these can achieve the desired overall accuracy at a much lower cost.
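The idea described above rests on the telescoping identity E[P_L] = E[P_0] + sum over l of E[P_l - P_(l-1)], with each correction term cheap to estimate because coupled fine and coarse simulations are strongly correlated. A minimal sketch (not the speaker's code), estimating E[S(T)] for geometric Brownian motion with Euler-Maruyama, where the coupling shares Brownian increments between levels:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm(T, S0, mu, sigma, n_steps, dW):
    """Euler-Maruyama endpoint for dS = mu*S dt + sigma*S dW; dW is (N, n_steps)."""
    dt = T / n_steps
    S = np.full(dW.shape[0], S0)
    for k in range(n_steps):
        S = S + mu * S * dt + sigma * S * dW[:, k]
    return S

def mlmc_estimate(L, N, T=1.0, S0=1.0, mu=0.05, sigma=0.2, M=2):
    """Sum the level-0 estimate and the coupled corrections E[P_l - P_(l-1)]."""
    est = 0.0
    for level in range(L + 1):
        nf = M ** level
        dWf = rng.normal(0.0, np.sqrt(T / nf), size=(N, nf))
        Pf = euler_gbm(T, S0, mu, sigma, nf, dWf)
        if level == 0:
            est += Pf.mean()
        else:
            # the coupling: the coarse path uses sums of the fine increments
            dWc = dWf.reshape(N, nf // M, M).sum(axis=2)
            Pc = euler_gbm(T, S0, mu, sigma, nf // M, dWc)
            est += (Pf - Pc).mean()
    return est

print(mlmc_estimate(L=4, N=20000))  # close to the exact mean exp(mu*T) ≈ 1.051
```

In practice the number of samples per level is chosen adaptively from estimated variances, which is where the cost savings over standard Monte Carlo arise; a fixed N per level is used here only for brevity.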

Thu, 14 Nov 2024

14:00 - 15:00
Lecture Room 3

Group discussion on the use of AI tools in research

Mike Giles
(Oxford University)
Abstract

AI tools such as ChatGPT, Microsoft Copilot, GitHub Copilot, and Claude, as well as older AI-enabled tools like Grammarly and MS Word, are becoming an everyday part of our research environment. The last-minute opening of a seminar slot, due to the unfortunate illness of our intended speaker (who will hopefully reschedule for next term), gives us an opportunity to discuss what this means for us as researchers: what are good, helpful uses of AI, and are there uses we might view as inappropriate? Please come ready to participate, with examples of things you have done yourselves with AI tools.

Thu, 05 Dec 2024

14:00 - 15:00
Lecture Room 3

Solving (algebraic problems from) PDEs; a personal perspective

Andy Wathen
(Oxford University)
Abstract

We are now able to solve many partial differential equation problems that were well beyond reach when I started in academia. Some of this success is due to computer hardware, but much is due to algorithmic advances.

I will give a personal perspective of the development of computational methodology in this area over my career thus far. 

Thu, 28 Nov 2024

14:00 - 15:00
Lecture Room 3

Unleashing the Power of Deeper Layers in LLMs

Shiwei Liu
(Oxford University)
Abstract

Large Language Models (LLMs) have demonstrated impressive achievements. However, recent research has shown that their deeper layers often contribute minimally, with effectiveness diminishing as layer depth increases. This pattern presents significant opportunities for model compression. 

In the first part of this seminar, we will explore how this phenomenon can be harnessed to improve the efficiency of LLM compression and parameter-efficient fine-tuning. Despite these opportunities, the underutilization of deeper layers leads to inefficiencies, wasting resources that could be better used to enhance model performance. 

The second part of the talk will address the root cause of this ineffectiveness in deeper layers and propose a solution. We identify the issue as stemming from the prevalent use of Pre-Layer Normalization (Pre-LN) and introduce Mix-Layer Normalization (Mix-LN), which combines Pre-LN and Post-LN, as a new approach to mitigate this training deficiency.
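The structural difference between the normalization schemes mentioned above is small but consequential. The sketch below (my schematic paraphrase, not the authors' code) shows where the normalization sits in each residual block; the reading that Mix-LN applies Post-LN to earlier layers and Pre-LN to deeper ones, and the 0.25 split fraction, are assumptions for illustration only.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each row to zero mean and unit variance (no learned scale/shift)."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def pre_ln_block(x, f):
    # Pre-LN: normalize the sublayer input; the identity path is untouched,
    # which stabilizes training but can let deep layers drift toward identity.
    return x + f(layer_norm(x))

def post_ln_block(x, f):
    # Post-LN: normalize after the residual addition.
    return layer_norm(x + f(x))

def mix_ln_forward(x, sublayers, post_ln_fraction=0.25):
    # Assumed Mix-LN placement: Post-LN for the first fraction of the
    # sublayers, Pre-LN for the remainder.
    n_post = int(len(sublayers) * post_ln_fraction)
    for i, f in enumerate(sublayers):
        x = post_ln_block(x, f) if i < n_post else pre_ln_block(x, f)
    return x

rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(16, 16)) for _ in range(8)]
sublayers = [lambda h, W=W: np.tanh(h @ W) for W in weights]
out = mix_ln_forward(rng.normal(size=(4, 16)), sublayers)
print(out.shape)  # (4, 16)
```

Real transformer blocks use learned gain/bias in the norm and attention/MLP sublayers; the toy `tanh` sublayers here only make the residual-vs-norm ordering concrete.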
