Finite element schemes and mesh smoothing for geometric evolution problems
Abstract
Geometric evolutions can arise as simple models or fundamental building blocks in various applications with moving boundaries and time-dependent domains, such as grain boundaries in materials or deforming cell boundaries. Mesh-based methods require adaptation and smoothing, particularly in the case of strong deformations. We consider finite element schemes based on classical approaches for geometric evolution equations but augmented with the gradient of the Dirichlet energy or a variant of it, which is known to produce a tangential mesh movement that is beneficial for the mesh quality. We focus on the one-dimensional case, where convergence of semi-discrete schemes can be proved, and discuss two settings. For networks forming triple junctions, it is desirable to keep the impact of any additional mesh-smoothing terms on the geometric evolution as small as possible, which can be achieved with a perturbation approach. Regarding the elastic flow of curves, the Dirichlet energy can serve as a replacement for the usual penalty in terms of the length functional in that, modulo rescaling, it yields the same minimisers in the long run.
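For orientation, here is a minimal sketch of the smoothing mechanism for a single closed curve (the notation below is illustrative and not taken from the talk). A parametrisation x(ρ,t) evolving by curve shortening flow, x_t = ϰν, has no tangential velocity, so mesh points may cluster or spread; the negative gradient of the Dirichlet energy of the parametrisation supplies exactly such a tangential component:

    % Dirichlet energy of the parametrisation x(.,t) : I -> R^2
    % and its L^2-gradient:
    \[
      E_D[x] = \tfrac{1}{2}\int_I |x_\rho|^2 \,\mathrm{d}\rho ,
      \qquad -\nabla E_D[x] = x_{\rho\rho} .
    \]
    % A classical reparametrised (DeTurck-type) curve shortening flow:
    % normal motion by curvature plus a tangential term that tends to
    % equidistribute the mesh points (\tau unit tangent, \nu unit normal):
    \[
      x_t = \frac{x_{\rho\rho}}{|x_\rho|^{2}}
          = \varkappa\,\nu
            + \frac{x_{\rho\rho}\cdot\tau}{|x_\rho|^{2}}\,\tau .
    \]

For the elastic flow, the corresponding idea is to penalise E_D[x] in place of the length functional in the usual energy (1/2)∫ϰ² ds + λ L[x]; as the abstract notes, in the long run this changes the minimisers only up to rescaling.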
Group-invariant tensor train networks for supervised learning
Abstract
Invariance under selected transformations has recently proven to be a powerful inductive bias in several machine learning models. One class of such models is that of tensor train networks. In this talk, we impose invariance relations on tensor train networks. We introduce a new numerical algorithm to construct a basis of tensors that are invariant under the action of normal matrix representations of an arbitrary discrete group. This method can be up to several orders of magnitude faster than previous approaches. The group-invariant tensors are then combined into a group-invariant tensor train network, which can be used as a supervised machine learning model. We apply this model to a protein binding classification problem, taking into account problem-specific invariances, and obtain prediction accuracy in line with state-of-the-art invariant deep learning approaches. This is joint work with Brent Sprangers.
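To make the construction concrete, here is a minimal baseline sketch (assuming a unitary representation; the function name invariant_basis is ours, and this is the straightforward group-averaging approach, not the faster algorithm presented in the talk). The Reynolds projector P = (1/|G|) Σ_g ρ(g) maps onto the invariant subspace, and an SVD extracts an orthonormal basis of it:

    import numpy as np

    def invariant_basis(reps, tol=1e-8):
        """Orthonormal basis of the subspace fixed by every matrix in `reps`.

        `reps` lists the matrices rho(g) of a (unitary) representation of a
        finite group G. Averaging them gives the Reynolds projector
        P = (1/|G|) sum_g rho(g); its column space is the invariant subspace.
        """
        P = sum(reps) / len(reps)      # Reynolds (group-averaging) projector
        U, s, _ = np.linalg.svd(P)     # singular values of this projector are 0 or 1
        return U[:, s > 1.0 - tol]     # keep directions with singular value ~1

    # Toy example: vectors in R^2 invariant under the coordinate swap group {I, S}.
    S = np.array([[0.0, 1.0], [1.0, 0.0]])
    basis = invariant_basis([np.eye(2), S])
    print(basis)                       # one column proportional to (1, 1)/sqrt(2)

For tensor train cores one would apply the same projector to the Kronecker-product representation ρ(g) ⊗ ··· ⊗ ρ(g) acting on vectorised tensors, which is precisely where this naive approach becomes expensive and a faster algorithm pays off.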
Nonlinear Fokker-Planck equations modelling large networks of neurons
Sessions led by Dr Pierre Roux will take place on:
30 May 2023 10:00 - 12:00 C2
6 June 2023 15:00 - 17:00 C2
8 June 2023 10:00 - 12:00 C2
13 June 2023 15:00 - 17:00 C2
Participants should have a good knowledge of Functional Analysis, basic knowledge of PDEs and distributions, and some notions of probability. Should you be interested in taking part in the course, please send an email to @email.
Abstract
We will start from the description of a particle system modelling a finite-size network of interacting neurons described by their voltage. After a quick overview of the non-rigorous and rigorous mean-field limit results, we will carry out a detailed analytical study of the associated Fokker-Planck equation. This will be the occasion to introduce, in context, powerful general methods: the reduction to a free-boundary Stefan-like problem, relative entropy methods, the study of finite-time blow-up, and the numerical and theoretical exploration of periodic solutions for the delayed version of the model. I will then present some variants and related models, such as nonlinear kinetic Fokker-Planck equations and continuous systems of Fokker-Planck equations coupled through convolution.
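As a concrete anchor, one widely studied instance of such a model is the Fokker-Planck equation of the nonlinear noisy leaky integrate-and-fire network; the sketch below uses conventional notation from that literature and is not necessarily the exact formulation used in the course:

    % Density p(v,t) of neurons at membrane potential v, with firing
    % threshold V_F, reset potential V_R < V_F and connectivity b:
    \[
      \partial_t p + \partial_v\bigl[(-v + b\,N(t))\,p\bigr]
        - a\,\partial_{vv} p = N(t)\,\delta_{v=V_R},
      \qquad v \le V_F,
    \]
    % The firing rate N(t) is the flux of neurons through the threshold;
    % it feeds back into the drift, which makes the equation nonlinear:
    \[
      N(t) = -a\,\partial_v p(V_F,t), \qquad p(V_F,t) = 0, \qquad
      \lim_{v\to-\infty} p(v,t) = 0.
    \]

In this notation, excitatory coupling (b > 0) is the regime in which N(t) can blow up in finite time, and replacing N(t) in the drift by a delayed value N(t - d) gives the delayed version whose periodic solutions are mentioned above.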