17:00
Sharply k-homogeneous actions on Fraïssé structures
On the symmetry constraint and angular momentum conservation in mixed stress formulations
Abstract
In the numerical simulation of incompressible flows and elastic materials, it is often desirable to design discretisation schemes that preserve key structural properties of the underlying physical model. In particular, the conservation of angular momentum plays a critical role in accurately capturing rotational effects, and is closely tied to the symmetry of the stress tensor. Classical formulations, such as the Stokes equations or the equations of linear elasticity, can exhibit significant discrepancies when this symmetry is weakly enforced or violated at the discrete level.
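To make the link explicit (a textbook identity, not a result specific to this work): given the balance of linear momentum \( \rho\,\dot{v} = \nabla\cdot\sigma + f \), the integral balance of angular momentum holds on every subdomain \( \omega \) precisely when the Cauchy stress is symmetric,
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\int_{\omega} \rho\, x \times v \,\mathrm{d}x
  = \int_{\omega} x \times f \,\mathrm{d}x
  + \int_{\partial\omega} x \times (\sigma n) \,\mathrm{d}s
  \quad\Longleftrightarrow\quad
  \sigma = \sigma^{\mathsf{T}},
\]
which is why a discretisation that enforces \( \sigma_h = \sigma_h^{\mathsf{T}} \) exactly can conserve angular momentum exactly, absent body torques and couple stresses.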
This work focuses on mixed finite element methods that impose the symmetry of the stress tensor strongly, thereby ensuring exact conservation of angular momentum in the absence of body torques and couple stresses. We systematically study the effect of this constraint in both incompressible Stokes flow and linear elasticity, including anisotropic settings inspired by liquid crystal polymer networks. Through a series of benchmark problems, ranging from rigid body motions to transversely isotropic materials, we demonstrate the advantages of angular-momentum-preserving discretisations and contrast their performance with that of classical elements.
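For orientation, one standard way to build the symmetry into the approximation space (a generic sketch; the formulation studied in the talk may differ) is the Hellinger–Reissner mixed form of linear elasticity: find \( (\sigma, u) \in \Sigma \times V \), with \( \Sigma \subset H(\mathrm{div}, \Omega; \mathbb{S}) \) a space of symmetric-tensor-valued fields, such that
\[
\begin{aligned}
  (A\sigma, \tau) + (u, \nabla\cdot\tau) &= 0 && \forall\, \tau \in \Sigma,\\
  (\nabla\cdot\sigma, v) &= -(f, v) && \forall\, v \in V.
\end{aligned}
\]
Choosing a discrete subspace \( \Sigma_h \subset \Sigma \) of symmetric-valued elements (for instance of Arnold–Winther type) imposes \( \sigma_h = \sigma_h^{\mathsf{T}} \) strongly, in contrast with weakly symmetric methods that enforce the constraint only through a Lagrange multiplier.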
Our findings reveal that strong symmetry enforcement leads not only to more robust a priori error estimates and pressure-independent velocity approximations, but also to more reliable physical predictions in scenarios where angular momentum conservation is critical.
These insights advocate for the broader adoption of structure-preserving methods in computational continuum mechanics, especially in applications sensitive to rotational invariants.
Reinforcement learning, transfer learning, and diffusion models
Abstract
Transfer learning is a machine learning technique that leverages knowledge acquired in one domain to improve learning on a related task. It is a foundational method underlying the success of large language models (LLMs) such as GPT and BERT, which are pretrained on specific tasks and then adapted to new ones. In this talk, I will demonstrate how reinforcement learning (RL), particularly continuous-time RL, can benefit from incorporating transfer learning techniques, especially with respect to convergence analysis. I will also show how this analysis naturally yields a simple corollary concerning the stability of score-based generative diffusion models.
Based on joint work with Zijiu Lyu of UC Berkeley.
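As background for the final point (the standard SDE formulation of score-based generative models, in the sense of Song et al., rather than necessarily the exact model analysed in the talk): such models pair a forward noising SDE with a time reversal driven by the score \( \nabla_x \log p_t \),
\[
  \mathrm{d}X_t = f(X_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t,
  \qquad
  \mathrm{d}\bar{X}_t = \bigl[ f(\bar{X}_t, t) - g(t)^2\, \nabla_x \log p_t(\bar{X}_t) \bigr]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{W}_t,
\]
where \( p_t \) is the law of \( X_t \) and the second equation runs backwards in time; stability statements for such models quantify how errors in the learned score propagate to the generated distribution.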