14:00
Differentiable programming is the technology underpinning the AI revolution. It allows neural networks to be programmed in high-level user code while still achieving high performance for both the evaluation of the network and, crucially, its derivatives. The Firedrake project applies exactly the same concepts to the simulation of physical phenomena modelled with partial differential equations (PDEs). By exploiting the high-level mathematical abstraction offered by the finite element method, users write the mathematical operators for the problem they wish to solve directly in Python. High-performance parallel implementations of these operators are then generated automatically and composed with the PETSc solver framework to solve the resulting PDE. Furthermore, because the symbolic differential operators are available as code, it is possible to reason symbolically about them before numerical evaluation. In particular, the operators can be differentiated with respect to their inputs, and the resulting derivative operators composed in forward or reverse order. This creates a differentiable programming paradigm congruent with, and compatible with, machine learning frameworks such as PyTorch and JAX.
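To give a flavour of what this looks like in practice, here is a minimal sketch (not drawn from the talk, and assuming Firedrake's documented interfaces): a Poisson problem is stated symbolically in Python, solved, and a functional of the solution is differentiated with respect to the source term in reverse (adjoint) mode.

```python
from firedrake import *
from firedrake.adjoint import *

continue_annotation()  # record subsequent operations on the adjoint tape

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

f = Function(V)  # source term: the input we differentiate with respect to
f.interpolate(Constant(1.0))
u = Function(V)
v = TestFunction(V)

# Weak form of the Poisson equation -div(grad(u)) = f, as symbolic UFL code.
F = inner(grad(u), grad(v)) * dx - f * v * dx
bc = DirichletBC(V, 0.0, "on_boundary")
solve(F == 0, u, bcs=bc)

# A scalar functional of the solution: J(u) = ∫ u² dx.
J = assemble(u * u * dx)

# Reverse-mode (adjoint) derivative dJ/df, obtained by replaying the
# tape backwards through the PDE solve.
dJdf = compute_gradient(J, Control(f))
```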
In this presentation, David Ham will introduce Firedrake in the context of differentiable programming, and show how this enables productivity, capability and performance to be combined in a unique way. He will also touch on the mechanism that enables Firedrake to be coupled with PyTorch and JAX.
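The rough shape of that coupling, sketched here under the assumption of the interface described in Firedrake's machine-learning documentation (names may differ between versions), is to wrap a Firedrake ReducedFunctional as an operator that PyTorch's autograd can differentiate through:

```python
import torch
from firedrake import *
from firedrake.adjoint import *
from firedrake.ml.pytorch import fem_operator  # assumed import path

continue_annotation()

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)

f = Function(V)  # the input that PyTorch will control
u = Function(V)
v = TestFunction(V)
F = inner(grad(u), grad(v)) * dx - f * v * dx
solve(F == 0, u, bcs=DirichletBC(V, 0.0, "on_boundary"))

J = assemble(u * u * dx)
Jhat = ReducedFunctional(J, Control(f))

# Wrap the taped PDE solve as a PyTorch-differentiable operator.
G = fem_operator(Jhat)

# From here it behaves like any other torch function: calling backward()
# invokes Firedrake's adjoint to propagate gradients through the solve.
x = torch.ones(V.dim(), dtype=torch.double, requires_grad=True)
y = G(x)
y.backward()
print(x.grad)
```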
Please note this talk will take place at Rutherford Appleton Laboratory, Harwell Campus, Didcot.