17:00
Sharply k-homogeneous actions on Fraïssé structures
Abstract
On Global Rates for Regularization Methods Based on Secant Derivative Approximations
Abstract
An inexact framework for high-order adaptive regularization methods is presented, in which the pth-order tensor may be approximated using lower-order derivatives. Between recalculations of the pth-order derivative approximation, the tensor can either be updated via a high-order secant equation, as proposed in (Welzel 2022), or kept constant in a lazy manner. When refreshing the pth-order tensor approximation after m steps, either an exact evaluation of the tensor or a finite-difference approximation with an explicit discretization stepsize can be used. For all the newly introduced adaptive regularization variants, we retrieve the standard complexity bound for reaching a second-order stationary point. We also discuss the number of oracle calls required by each variant. When p = 2, we obtain a second-order method that uses quasi-Newton approximations and achieves the optimal iteration-complexity bound.
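When p = 2, the secant equation reduces to the familiar quasi-Newton condition B_{k+1} s_k = y_k. As a minimal illustration (not the specific update used in the talk), the symmetric rank-one (SR1) update satisfies this condition while keeping the approximation symmetric:

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one (SR1) secant update: returns B' with B' s = y."""
    r = y - B @ s
    denom = r @ s
    # Standard safeguard: skip the update when the denominator is tiny.
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

# Toy check on a quadratic f(x) = 0.5 x^T A x, where y = A s exactly.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
s = np.array([1.0, 0.0])
y = A @ s
B = sr1_update(B, s, y)
```

After the update, B satisfies the secant equation B s = y exactly, which is the p = 2 instance of the high-order secant condition mentioned above.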
Implicit-in-time, finite-element implementation of the bilinear Fokker-Planck collision operator for application to magnetised plasmas
Contributors: M.R. Hardman, M. Abazorius, Omotani, M. Barnes, S.L. Newton, J.W.S. Cook, P.E. Farrell, F.I. Parra
Abstract
In continuum kinetic models of quasineutral plasmas, binary collisions between particles are represented by the bilinear Fokker-Planck collision operator. In full-F kinetic models, which solve for the entire particle probability distribution function, it is important to correctly capture this operator, which pushes the system towards thermodynamic equilibrium. We present a multi-species, conservative, finite-element implementation of this operator, using a continuous Galerkin representation, in the Julia programming language. A Jacobian-free Newton-Krylov solver is used to implement a backward-Euler time advance. We present several example problems that demonstrate the performance of the implementation, and we speculate on future applications.
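The backward-Euler/Jacobian-free Newton-Krylov combination can be sketched on a toy problem. This is not the talk's Julia implementation: the sketch below uses a hypothetical nonlinear relaxation operator (conserving the mean and driving the state to equilibrium, loosely mimicking a collision operator) and SciPy's matrix-free `newton_krylov` solver, in which Jacobian-vector products are approximated by finite differences inside the Krylov iteration:

```python
import numpy as np
from scipy.optimize import newton_krylov

def C(F):
    """Toy nonlinear relaxation operator standing in for a collision
    operator: conserves the mean of F and relaxes F toward equilibrium."""
    Feq = np.full_like(F, F.mean())   # "equilibrium" state (mean-preserving)
    nu = 1.0 + F.mean()               # state-dependent collision frequency (toy)
    return nu * (Feq - F)

def backward_euler_step(F_old, dt):
    # Residual of the implicit step: R(F_new) = F_new - F_old - dt * C(F_new).
    residual = lambda F: F - F_old - dt * C(F)
    # Jacobian-free Newton-Krylov: no Jacobian is assembled; J*v products
    # are approximated by finite differences inside the Krylov solver.
    return newton_krylov(residual, F_old, f_tol=1e-10)

F = np.array([0.2, 1.0, 1.8])
for _ in range(200):
    F = backward_euler_step(F, dt=0.1)
# F has relaxed to the (mean-preserving) equilibrium state.
```

The backward-Euler step is unconditionally stable for this dissipative operator, which is why implicit-in-time treatment is attractive for stiff collision terms.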
Lanczos with compression for symmetric eigenvalue problems
Abstract
On the symmetry constraint and angular momentum conservation in mixed stress formulations
Abstract
In the numerical simulation of incompressible flows and elastic materials, it is often desirable to design discretisation schemes that preserve key structural properties of the underlying physical model. In particular, the conservation of angular momentum plays a critical role in accurately capturing rotational effects, and is closely tied to the symmetry of the stress tensor. Classical formulations such as the Stokes equations or linear elasticity can exhibit significant discrepancies when this symmetry is weakly enforced or violated at the discrete level.
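The tie between stress symmetry and angular momentum can be sketched with a standard continuum-mechanics identity (index notation; body torques and couple stresses omitted):

```latex
\frac{d}{dt}\int_\Omega \varepsilon_{ijk}\, x_j\, \rho u_k \,dx
  = \int_\Omega \varepsilon_{ijk}\, x_j\, \partial_l \sigma_{kl}\,dx
  = \int_{\partial\Omega} \varepsilon_{ijk}\, x_j\, \sigma_{kl} n_l \,ds
  \;-\; \int_\Omega \varepsilon_{ijk}\, \sigma_{kj}\,dx .
```

The final volume term vanishes exactly when $\sigma = \sigma^{\mathsf{T}}$, so angular momentum balance reduces to a boundary flux precisely for symmetric stress; a discretisation that only enforces symmetry weakly leaves a spurious residual in this term.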
This work focuses on mixed finite element methods that impose the symmetry of the stress tensor strongly, thereby ensuring exact conservation of angular momentum in the absence of body torques and couple stresses. We systematically study the effect of this constraint in both incompressible Stokes flow and linear elasticity, including anisotropic settings inspired by liquid crystal polymer networks. Through a series of benchmark problems—ranging from rigid body motions to transversely isotropic materials—we demonstrate the advantages of angular-momentum-preserving discretisations, and contrast their performance with classical elements.
Our findings reveal that strong symmetry enforcement not only leads to more robust a priori error estimates and pressure-independent velocity approximations, but also yields more reliable physical predictions in scenarios where angular momentum conservation is critical.
These insights advocate for the broader adoption of structure-preserving methods in computational continuum mechanics, especially in applications sensitive to rotational invariants.
From reinforcement learning to transfer learning and diffusion models, a (rough) differential equation perspective
Abstract
Transfer learning is a machine learning technique that leverages knowledge acquired in one domain to improve learning in another, related task. It is a foundational method underlying the success of large language models (LLMs) such as GPT and BERT, which were initially trained for specific tasks. In this talk, I will demonstrate how reinforcement learning (RL), particularly continuous time RL, can benefit from incorporating transfer learning techniques, especially with respect to convergence analysis. I will also show how this analysis naturally yields a simple corollary concerning the stability of score-based generative diffusion models.
Based on joint work with Zijiu Lyu of UC Berkeley.