Multilevel Monte Carlo methods
Abstract
In this seminar I will begin by giving an overview of some problems in stochastic simulation and uncertainty quantification. I will then outline the Multilevel Monte Carlo method for situations in which accurate simulations are very costly, but much cheaper, less accurate simulations are possible. Inspired by the multigrid method, the approach combines simulations at several levels of accuracy to achieve the desired overall accuracy at a much lower cost.
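The combination described above rests on the telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], where each correction term is estimated from coarse and fine simulations driven by the same randomness. The following is a minimal sketch (not the speaker's code) for a toy problem, estimating E[S(1)] for geometric Brownian motion via Euler–Maruyama with 2^l time steps on level l; all function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def mlmc_estimate(levels=4, n_samples=20000, s0=1.0, r=0.05, sigma=0.2, seed=0):
    """Toy multilevel Monte Carlo estimate of E[S(1)] for
    dS = r*S dt + sigma*S dW, using Euler-Maruyama with 2**l steps
    on level l. Exact answer: s0 * exp(r).
    """
    rng = np.random.default_rng(seed)

    def euler(dW, dt):
        # Simulate all paths to time 1 with the given Brownian increments.
        s = np.full(dW.shape[0], s0)
        for k in range(dW.shape[1]):
            s = s + r * s * dt + sigma * s * dW[:, k]
        return s

    estimate = 0.0
    for l in range(levels + 1):
        nf = 2 ** l                 # number of fine steps at this level
        dtf = 1.0 / nf
        dW = rng.normal(0.0, np.sqrt(dtf), size=(n_samples, nf))
        fine = euler(dW, dtf)
        if l == 0:
            estimate += fine.mean()          # E[P_0]
        else:
            # Coupled coarse path: reuse the same noise, summed in pairs,
            # so the correction E[P_l - P_{l-1}] has small variance.
            dWc = dW[:, 0::2] + dW[:, 1::2]
            coarse = euler(dWc, 2.0 * dtf)
            estimate += (fine - coarse).mean()
    return estimate
```

Because the coupled corrections have small variance, far fewer samples are needed on the expensive fine levels in a full MLMC implementation; this sketch uses a fixed sample count per level for simplicity.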
Group discussion on the use of AI tools in research
Abstract
AI tools like ChatGPT, Microsoft Copilot, GitHub Copilot, and Claude, as well as older AI-enabled tools like Grammarly and MS Word, are becoming an everyday part of our research environment. The last-minute opening of a seminar slot, due to the unfortunate illness of our intended speaker (who will hopefully re-schedule for next term), gives us an opportunity to discuss what this means for us as researchers: what are good, helpful uses of AI, and are there uses of AI which we might view as inappropriate? Please come ready to participate with examples of things you have done yourselves with AI tools.
Solving (algebraic problems from) PDEs; a personal perspective
Abstract
We are now able to solve many partial differential equation problems that were well beyond reach when I started in academia. Some of this success is due to computer hardware but much is due to algorithmic advances.
I will give a personal perspective of the development of computational methodology in this area over my career thus far.
Unleashing the Power of Deeper Layers in LLMs
Abstract
Large Language Models (LLMs) have demonstrated impressive achievements. However, recent research has shown that their deeper layers often contribute minimally, with effectiveness diminishing as layer depth increases. This pattern presents significant opportunities for model compression.
In the first part of this seminar, we will explore how this phenomenon can be harnessed to improve the efficiency of LLM compression and parameter-efficient fine-tuning. Despite these opportunities, the underutilization of deeper layers leads to inefficiencies, wasting resources that could be better used to enhance model performance.
The second part of the talk will address the root cause of this ineffectiveness in deeper layers and propose a solution. We identify the issue as stemming from the prevalent use of Pre-Layer Normalization (Pre-LN) and introduce Mix-Layer Normalization (Mix-LN) with combined Pre-LN and Post-LN as a new approach to mitigate this training deficiency.
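The distinction between the normalization placements mentioned above can be made concrete in a few lines. The sketch below (illustrative only, not the speaker's implementation) shows a residual sublayer under Pre-LN and Post-LN, and a Mix-LN stack that, as the abstract describes, combines the two; the choice of applying Post-LN in earlier layers and Pre-LN in deeper ones, and the split point `n_post`, are assumptions for illustration.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each feature vector to zero mean and unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def residual_block(x, f, style):
    """One residual sublayer; f stands in for attention or an MLP."""
    if style == "pre":    # Pre-LN: normalize inside the residual branch
        return x + f(layer_norm(x))
    if style == "post":   # Post-LN: normalize after the residual addition
        return layer_norm(x + f(x))
    raise ValueError(f"unknown style: {style}")

def mix_ln_stack(x, sublayers, n_post):
    """Mix-LN sketch: Post-LN for the first n_post sublayers,
    Pre-LN for the remaining (deeper) ones."""
    for i, f in enumerate(sublayers):
        x = residual_block(x, f, "post" if i < n_post else "pre")
    return x
```

Under Pre-LN the residual stream grows unchecked with depth, which is one account of why gradients to deeper layers shrink in relative terms; placing Post-LN in the early layers is intended to counteract this while keeping Pre-LN's training stability in the deep layers.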
Tackling complexity in multiscale kinetic and mean-field equations
Abstract
Kinetic and mean-field equations are central to understanding complex systems across fields such as classical physics, engineering, and the socio-economic sciences. Efficiently solving these equations remains a significant challenge due to their high dimensionality and the need to preserve key structural properties of the models.
In this talk, we will focus on recent advancements in deterministic numerical methods, which provide an alternative to particle-based approaches (such as Monte Carlo or particle-in-cell methods) by avoiding stochastic fluctuations and offering higher accuracy. We will discuss strategies for designing these methods to reduce computational complexity while preserving fundamental physical properties and maintaining efficiency in stiff regimes.
Special attention will be given to the role of these methods in addressing multi-scale problems in rarefied gas dynamics and plasma physics. Time permitting, we will also touch on emerging techniques for uncertainty quantification in these systems.
16:00
Norm properties of tracially complete C*-algebras
Abstract
We discuss the trace problem and stable rank for tracially complete C*-algebras.