Thu, 20 Mar 2025
14:00
(This talk is hosted by Rutherford Appleton Laboratory)

Firedrake: a differentiable programming framework for finite element simulation

David Ham
(Imperial College London)
Abstract

Differentiable programming is the underpinning technology for the AI revolution. It allows neural networks to be programmed in very high-level user code while still achieving very high performance for both the evaluation of the network and, crucially, its derivatives. The Firedrake project applies exactly the same concepts to the simulation of physical phenomena modelled with partial differential equations (PDEs). By exploiting the high-level mathematical abstraction offered by the finite element method, users are able to write mathematical operators for the problem they wish to solve in Python. The high-performance parallel implementations of these operators are then automatically generated, and composed with the PETSc solver framework to solve the resulting PDE. However, because the symbolic differential operators are available as code, it is possible to reason symbolically about them before the numerical evaluation. In particular, the operators can be differentiated with respect to their inputs, and the resulting derivative operators composed in forward or reverse order. This creates a differentiable programming paradigm congruent with (and compatible with) machine learning frameworks such as PyTorch and JAX.
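The composition of derivative operators in reverse order mentioned above is the core idea of reverse-mode automatic differentiation. The following is a minimal pure-Python sketch of that idea only; the class and function names (`Var`, `backward`) are illustrative and are not the Firedrake or dolfin-adjoint API, which operates on symbolic finite element operators rather than scalars.

```python
# Toy reverse-mode automatic differentiation: each operation records its
# inputs and local derivatives; backward() then composes the derivative
# operators in reverse order through the recorded computation graph.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (input Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """Accumulate d(out)/d(input) into each input's .grad field."""
    out.grad = 1.0
    order, seen = [], set()

    def visit(v):  # topological order of the computation graph
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)

    visit(out)
    for v in reversed(order):          # reverse order: adjoint sweep
        for parent, local_grad in v.parents:
            parent.grad += local_grad * v.grad

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # z = x*y + x
backward(z)
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

In Firedrake the same chain-rule bookkeeping is applied at the level of whole PDE operators rather than scalar operations, which is what makes the framework differentiable end to end.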


In this presentation, David Ham will present Firedrake in the context of differentiable programming, and show how this enables productivity, capability and performance to be combined in a unique way. He will also touch on the mechanism that enables Firedrake to be coupled with PyTorch and JAX.


Please note this talk will take place at Rutherford Appleton Laboratory, Harwell Campus, Didcot. 

Faster Lead-Acid Battery Simulations from Porous-Electrode Theory: I. Physical Model
Sulzer, V; Chapman, S; Please, C; Howey, D; Monroe, C (05 Feb 2019)
Faster Lead-Acid Battery Simulations from Porous-Electrode Theory: II. Asymptotic Analysis
Sulzer, V; Chapman, S; Please, C; Howey, D; Monroe, C (05 Feb 2019)
Geometric martingale Benamou–Brenier transport and geometric Bass martingales
Backhoff, J; Loeper, G; Obloj, J. Proceedings of the American Mathematical Society
Bounds on Heavy Axions with an X-Ray Free Electron Laser
Halliday, J; Marocco, G; Beyer, K; Heaton, C; Nakatsutsumi, M; Preston, T; Arrowsmith, C; Baehtz, C; Goede, S; Humphries, O; Garcia, A; Plackett, R; Svensson, P; Vacalis, G; Wark, J; Wood, D; Zastrau, U; Bingham, R; Shipsey, I; Sarkar, S; Gregori, G. Physical Review Letters, volume 134, issue 5, 055001 (06 Feb 2025)
Mon, 17 Feb 2025
16:00
C6

Hoheisel's theorem on primes in short intervals via combinatorics

Jori Merikoski
(Oxford)
Abstract

Hoheisel's theorem states that there is some $\delta> 0$ and some $x_0>0$ such that for all $x > x_0$ the interval $[x,x+x^{1-\delta}]$ contains prime numbers. Classically this is proved using the Riemann zeta function and results about its zeros such as the zero-free region and zero density estimates. In this talk I will describe a new elementary proof of Hoheisel's theorem. This is joint work with Kaisa Matomäki (Turku) and Joni Teräväinen (Cambridge). Instead of the zeta function, our approach is based on sieve methods and ideas coming from additive combinatorics, in particular, the transference principle. The method also gives an L-function free proof of Linnik's theorem on the least prime in arithmetic progressions.
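For orientation (this is standard background, not part of the abstract): the quantitative form of Hoheisel's theorem asserts an asymptotic for the number of primes in the short interval, and Linnik's theorem bounds the least prime in a progression. In the usual notation, with $\pi(x)$ the prime-counting function,

```latex
\pi\!\left(x + x^{1-\delta}\right) - \pi(x) \;\sim\; \frac{x^{1-\delta}}{\log x}
\qquad (x \to \infty),
```

while Linnik's theorem states that for $(a,q)=1$ the least prime $p \equiv a \pmod{q}$ satisfies $p \ll q^{L}$ for some absolute constant $L$.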

Reply: Yes, the human brain has around 86 billion neurons.
Goriely, A. Brain: a journal of neurology, awaf049 (06 Feb 2025)