On the symmetry constraint and angular momentum conservation in mixed stress formulations
Abstract
In the numerical simulation of incompressible flows and elastic materials, it is often desirable to design discretisation schemes that preserve key structural properties of the underlying physical model. In particular, the conservation of angular momentum plays a critical role in accurately capturing rotational effects, and is closely tied to the symmetry of the stress tensor. Classical formulations such as the Stokes equations or linear elasticity can exhibit significant discrepancies when this symmetry is weakly enforced or violated at the discrete level.
This work focuses on mixed finite element methods that impose the symmetry of the stress tensor strongly, thereby ensuring exact conservation of angular momentum in the absence of body torques and couple stresses. We systematically study the effect of this constraint in both incompressible Stokes flow and linear elasticity, including anisotropic settings inspired by liquid crystal polymer networks. Through a series of benchmark problems—ranging from rigid body motions to transversely isotropic materials—we demonstrate the advantages of angular-momentum-preserving discretisations, and contrast their performance with classical elements.
Our findings reveal that strong symmetry enforcement not only leads to more robust a priori error estimates and pressure-independent velocity approximations, but also to more reliable physical predictions in scenarios where angular momentum conservation is critical.
These insights advocate for the broader adoption of structure-preserving methods in computational continuum mechanics, especially in applications sensitive to rotational invariants.
Markov α-potential games
Abstract
We propose a new framework of Markov α-potential games to study Markov games.
We show that any Markov game with finite state and action spaces is a Markov α-potential game, and establish the existence of an associated α-potential function. Any optimizer of an α-potential function is shown to be an α-stationary Nash equilibrium. We study two practically significant classes of Markov games, Markov congestion games and perturbed Markov team games, via the framework of Markov α-potential games, with an explicit characterization of an upper bound for α and its relation to game parameters.
Additionally, we provide a semi-infinite linear programming based formulation to obtain an upper bound for α for any Markov game.
Furthermore, we study two equilibrium approximation algorithms, namely the projected gradient-ascent algorithm and the sequential maximum improvement algorithm, along with their Nash regret analysis.
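As a hedged illustration (not the authors' algorithm or setting), the projected gradient-ascent idea can be sketched on a two-player identical-interest matrix game, which is an exact potential game (α = 0): each player ascends the gradient of the shared payoff and projects back onto the probability simplex. The payoff matrix, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

# Identical-interest 2x2 game: both players receive x @ R @ y, so
# Phi(x, y) = x @ R @ y is an exact potential function (alpha = 0).
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

x = np.array([0.5, 0.5])   # player 1's mixed strategy
y = np.array([0.5, 0.5])   # player 2's mixed strategy
eta = 0.1                  # step size (illustrative)

for _ in range(500):
    x = project_simplex(x + eta * (R @ y))    # gradient of Phi in x is R y
    y = project_simplex(y + eta * (R.T @ x))  # gradient of Phi in y is R^T x

print(x, y)  # both players concentrate on the higher-payoff action
```

Since the potential here is exact, the iterates converge to a Nash equilibrium; in the α-potential setting the same scheme is instead analysed through its Nash regret.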
This talk is part of the Erlangen AI Hub.
Stabilisation of the Navier–Stokes equations on under-resolved meshes via enstrophy preservation
Abstract
The typical energy estimate for the Navier–Stokes equations provides a bound for the gradient of the velocity; energy-stable numerical methods that preserve this estimate preserve this bound. However, the bound scales with the Reynolds number (Re), causing solutions to be numerically unstable (i.e. to exhibit spurious oscillations) on under-resolved meshes. The dissipation of enstrophy, on the other hand, provides, in the transient 2D case, a bound for the gradient that is independent of Re.
We propose a finite-element integrator for the Navier–Stokes equations that preserves the evolution of both the energy and the enstrophy, implying gradient bounds that are, in the 2D case, independent of Re. Our scheme is a mixed velocity–vorticity discretisation, making use of a discrete Stokes complex. While we introduce an auxiliary vorticity in the discretisation, the energy- and enstrophy-stability results both hold on the primal variable, the velocity; our scheme thus exhibits greater numerical stability at large Re than traditional methods.
We conclude with a demonstration of numerical results, and a discussion of the existence and uniqueness of solutions.
Quick on the draw: high-frequency trading in the Wild West of cryptocurrency limit order-book markets
Abstract
Cryptocurrencies such as Bitcoin have only recently become a significant part of the financial landscape. Many billions of dollars are now traded daily on limit order-book markets such as Binance, and these are probably among the most open, liquid and transparent markets there are. They therefore make an interesting platform from which to investigate myriad questions to do with market microstructure. I shall talk about a few of these, including live-trading experiments to investigate the difference between on-paper strategy analysis (typical in the academic literature) and actual trading outcomes. I shall also mention very recent work on the new Hyperliquid exchange which runs on a blockchain basis, showing how to use this architecture to obtain datasets of an unprecedented level of granularity. This is joint work with Jakob Albers, Mihai Cucuringu and Alex Shestopaloff.
Cubic-quartic regularization models for solving polynomial subproblems in third-order tensor methods
Abstract
High-order tensor methods for solving both convex and nonconvex optimization problems have recently generated significant research interest, due in part to the natural way in which higher derivatives can be incorporated into adaptive regularization frameworks, leading to algorithms with optimal global rates of convergence and local rates that are faster than Newton's method. On each iteration, to find the next solution approximation, these methods require the unconstrained local minimization of a (potentially nonconvex) multivariate polynomial of degree higher than two, constructed using third-order (or higher) derivative information, and regularized by an appropriate power of the change in the iterates. Developing efficient techniques for the solution of such subproblems is an ongoing topic of research, and this talk addresses this question for the case of the third-order tensor subproblem. In particular, we propose the CQR algorithmic framework, for minimizing a nonconvex Cubic multivariate polynomial with Quartic Regularisation, by sequentially minimizing a sequence of local quadratic models that also incorporate both simple cubic and quartic terms.
The role of the cubic term is to crudely approximate local tensor information, while the quartic one provides model regularization and controls progress. We provide necessary and sufficient optimality conditions that fully characterise the global minimizers of these cubic-quartic models. We then turn these conditions into secular equations that can be solved using nonlinear eigenvalue techniques. We show, using our optimality characterisations, that a CQR algorithmic variant has the optimal-order evaluation complexity of $O(\epsilon^{-3/2})$ when applied to minimizing our quartically-regularised cubic subproblem, which can be further improved in special cases. We propose practical CQR variants that judiciously use local tensor information to construct the local cubic-quartic models. We test these variants numerically and observe them to be competitive with ARC and other subproblem solvers on typical instances and even superior on ill-conditioned subproblems with special structure.
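To make the subproblem concrete, here is a minimal one-dimensional sketch (a deliberate simplification with made-up coefficients, not the CQR method itself, which is multivariate): minimising a scalar cubic model with quartic regularisation reduces to finding the real roots of the derivative, the scalar analogue of the secular equations mentioned above.

```python
import numpy as np

# Scalar model m(s) = g*s + (h/2)*s^2 + (c/6)*s^3 + (sigma/4)*s^4 with
# illustrative (made-up) coefficients; h < 0 makes the quadratic part
# nonconvex, yet the quartic term guarantees a global minimiser exists.
g, h, c, sigma = 1.0, -2.0, 3.0, 1.5

def m(s):
    return g*s + 0.5*h*s**2 + (c/6.0)*s**3 + 0.25*sigma*s**4

# Stationary points solve m'(s) = sigma*s^3 + (c/2)*s^2 + h*s + g = 0;
# in 1D this is plain polynomial root finding, standing in for the
# secular equations solved by nonlinear eigenvalue techniques.
roots = np.roots([sigma, 0.5*c, h, g])
real = roots[np.abs(roots.imag) < 1e-10].real
s_star = real[np.argmin(m(real))]   # global minimiser among critical points
print(s_star, m(s_star))
```

In the multivariate case the global minimiser is characterised by the optimality conditions described above rather than by exhaustive root enumeration, but the structure, a polynomial stationarity system plus a selection rule, is the same.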
Reducing acquisition time and radiation damage: data-driven subsampling for spectromicroscopy
Abstract
Spectromicroscopy is an experimental technique with great potential for scientific challenges such as the observation of changes over time in energy materials or environmental samples and investigations of the chemical state in biological samples. However, its application is often limited by factors such as long acquisition times and radiation damage. We present two measurement strategies that significantly reduce experiment times and applied radiation doses. These strategies involve acquiring only a small subset of all possible measurements and then completing the full data matrix from the sampled measurements. The methods are data-driven, utilizing spectral and spatial importance subsampling distributions to select the most informative measurements. Specifically, we use data-driven leverage scores and adaptive randomized pivoting techniques. We explore raster importance sampling combined with the LoopASD completion algorithm, as well as CUR-based sampling where the CUR approximation also serves as the completion method. Additionally, we propose ideas to make the CUR-based approach adaptive. As a result, capturing as little as 4–6% of the measurements is sufficient to recover the same information as a conventional full scan.
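A minimal sketch of the leverage-score idea on synthetic low-rank data (all sizes, ranks, and the least-squares completion step are illustrative assumptions, not the experimental pipeline or the LoopASD algorithm): columns are sampled with probability proportional to their leverage scores, and the full matrix is then recovered from the sampled subset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank stand-in for a spectromicroscopy data matrix
# (pixels x energy channels); sizes and rank are illustrative assumptions.
A = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 50))

# Column leverage scores from the top-k right singular subspace.
k = 4
_, _, Vt = np.linalg.svd(A, full_matrices=False)
scores = np.sum(Vt[:k, :] ** 2, axis=0)
probs = scores / scores.sum()

# Sample a small subset of columns (measurements) by importance.
idx = rng.choice(A.shape[1], size=8, replace=False, p=probs)
C = A[:, idx]

# Simple least-squares completion from the sampled columns.
A_hat = C @ np.linalg.lstsq(C, A, rcond=None)[0]
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(rel_err)
```

Because the synthetic matrix is exactly rank 4, eight well-chosen columns already span its column space and the completion is essentially exact; on real data the sampled fraction trades off against reconstruction error.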
Low-rank approximation of parameter-dependent matrices via CUR decomposition
Abstract
Low-rank approximation of parameter-dependent matrices A(t) is an important task in the computational sciences, with applications in areas such as dynamical systems and the compression of series of images. In this talk, we introduce AdaCUR, an efficient randomised algorithm for computing low-rank approximations of parameter-dependent matrices using the CUR decomposition. The key idea of our approach is the ability to reuse column and row indices for nearby parameter values, improving efficiency. The resulting algorithm is rank-adaptive, provides error control, and has complexity that compares favourably with existing methods. This is joint work with Yuji Nakatsukasa.
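The index-reuse idea can be sketched as follows (a toy simplification under strong assumptions — exact low rank and a linear dependence on t — not the AdaCUR algorithm itself, and with a greedy pivoting rule standing in for the algorithm's index selection): indices chosen at one parameter value are reused to build a CUR approximation at a nearby value.

```python
import numpy as np

rng = np.random.default_rng(1)

# A(t) = A0 + t*A1: a toy parameter-dependent family (sizes, ranks, and
# the linear dependence in t are illustrative assumptions).
A0 = rng.standard_normal((100, 6)) @ rng.standard_normal((6, 80))
A1 = rng.standard_normal((100, 6)) @ rng.standard_normal((6, 80))

def pivot_columns(A, r):
    """Greedy column selection: repeatedly take the column with the largest
    residual norm, then deflate (a simple stand-in for pivoted QR)."""
    B = A.astype(float).copy()
    idx = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(B, axis=0)))
        idx.append(j)
        q = B[:, j] / np.linalg.norm(B[:, j])
        B -= np.outer(q, q @ B)
        B[:, j] = 0.0   # never pick the same column twice
    return np.array(idx)

def cur_approx(A, I, J):
    """CUR approximation with the optimal middle factor U = C^+ A R^+."""
    C, Rm = A[:, J], A[I, :]
    return C @ (np.linalg.pinv(C) @ A @ np.linalg.pinv(Rm)) @ Rm

# Choose indices once at t = 0, then reuse them at a nearby parameter value;
# rank(A0 + t*A1) <= 12, so 12 columns/rows suffice for this toy family.
J = pivot_columns(A0, 12)
I = pivot_columns(A0.T, 12)
At = A0 + 0.05 * A1
rel_err = np.linalg.norm(At - cur_approx(At, I, J)) / np.linalg.norm(At)
print(rel_err)
```

In this exactly low-rank toy case the reused indices give an essentially exact reconstruction; the point of a rank-adaptive method with error control is to detect, for general data, when reused indices are no longer adequate and must be refreshed.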
Fast solvers for high-order finite element discretizations of the de Rham complex
Abstract
Many applications in electromagnetism, magnetohydrodynamics, and porous media flow are well-posed in spaces from the 3D de Rham complex involving $H^1$, $H(curl)$, $H(div)$, and $L^2$. Discretizing these spaces with the usual conforming finite element spaces typically leads to discrete problems that are both structure-preserving and uniformly stable with respect to the mesh size and polynomial degree. Robust preconditioners/solvers usually require the inversion of subproblems or auxiliary problems on vertex, edge, or face patches of elements. For high-order discretizations, the cost of inverting these patch problems scales like $\mathcal{O}(p^9)$ and is thus prohibitively expensive. We propose a new set of basis functions for each of the spaces in the discrete de Rham complex that reduces the cost of the patch problems to $\mathcal{O}(p^6)$ complexity. By taking advantage of additional properties of the new basis, we propose further computationally cheaper variants of existing preconditioners. Various numerical examples demonstrate the performance of the solvers.