15:30
14:00
From Chebfun3 to RTSMS: A journey into deterministic and randomized Tucker decompositions
Abstract
In this talk we will first focus on the continuous framework and revisit how the Tucker decomposition forms the foundation of Chebfun3 for numerical computing with 3D functions, and discuss the deterministic algorithm behind it. The key insight is that the separation of variables achieved via a low-rank Tucker decomposition simplifies and speeds up many subsequent computations.
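As a rough illustration of the idea (not the Chebfun3 algorithm itself, which works with function samples adaptively), the following sketch computes a truncated higher-order SVD, a standard way to obtain a Tucker decomposition of a sampled 3D function; all names and the test function are our own:

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD: T ~ core x_1 U1 x_2 U2 x_3 U3 (sketch)."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold along `mode` and keep the leading left singular vectors.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # Form the core by contracting each factor matrix against the tensor.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# A separable 3D function sampled on a grid has tiny multilinear ranks:
# cos(x+y)*exp(z) has mode ranks (2, 2, 1), so ranks (4, 4, 4) are exact.
x = np.linspace(0, 1, 30)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
T = np.cos(X + Y) * np.exp(Z)
core, (U1, U2, U3) = hosvd(T, (4, 4, 4))
approx = np.einsum("abc,ia,jb,kc->ijk", core, U1, U2, U3)
err = np.linalg.norm(approx - T) / np.linalg.norm(T)
```

Once the factors are separated, operations such as integration or differentiation reduce to cheap one-dimensional work on the factor matrices, which is the speedup the abstract alludes to.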
We will then switch to the discrete framework and discuss a new algorithm called RTSMS (randomized Tucker with single-mode sketching). The single-mode sketching aspect of RTSMS allows it to use simple sketch matrices that are substantially smaller than those of alternative methods, leading to considerable performance gains. Within its least-squares strategy, RTSMS incorporates leverage scores for efficiency, together with Tikhonov regularization and iterative refinement for stability. RTSMS is demonstrated to be competitive with existing methods, sometimes outperforming them by a large margin.
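The payoff of sketching only one mode at a time can be seen in a generic randomized range-finder toy (this is illustrative of why small single-mode sketches suffice for low multilinear rank, not the RTSMS algorithm; the sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5
# Mode-1 unfolding of a tensor with exact multilinear rank 5 in mode 1:
# an n x n^2 matrix of rank r.
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n * n))

# Single-mode sketch: compress only the long (n^2) side with a small
# Gaussian sketch matrix instead of sketching all modes at once.
s = 2 * r
sketch = rng.standard_normal((n * n, s)) / np.sqrt(s)
Y = A @ sketch                  # n x s, far smaller than n x n^2
Q, _ = np.linalg.qr(Y)          # orthonormal basis for range(A), w.h.p.

# The basis captures the unfolding: the projection residual is ~0.
residual = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
```

Repeating such a step mode by mode keeps every sketch matrix small, which is where the performance gains over methods that sketch all modes jointly come from.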
Stress and flux-based finite element methods
Abstract
This talk explores recent advancements in stress and flux-based finite element methods. It focuses on overcoming the limitations of traditional finite elements in describing complex material behavior and in engineering new metamaterials.
Stress and flux-based finite element methods are particularly useful in error estimation, laying the groundwork for adaptive refinement strategies. This concept builds upon the hypercircle theorem [1], which states that in a specific energy space, both the exact solution and any admissible stress field lie on a hypercircle. However, the construction of finite element spaces that satisfy admissible states for complex material behavior is not straightforward. It often requires a relaxation of specific properties, especially when dealing with non-symmetric stress tensors [2] or hyperelastic materials.
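For the Poisson model problem the hypercircle idea can be stated concretely; the following is the standard Prager–Synge identity in our own notation (the talk's elasticity setting is analogous):

```latex
% Prager–Synge identity for $-\Delta u = f$ on $\Omega$:
% for any $v \in H^1_0(\Omega)$ and any admissible flux
% $\tau \in H(\mathrm{div},\Omega)$ with $\operatorname{div}\tau + f = 0$,
\|\nabla u - \nabla v\|^2 + \|\nabla u - \tau\|^2 = \|\nabla v - \tau\|^2 .
```

The exact gradient thus lies on a hypercircle determined by any conforming approximation $v$ and any admissible flux $\tau$, so $\|\nabla v - \tau\|$ is a fully computable error bound, which is what makes such constructions attractive for a posteriori estimation.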
Alternatively, methods that directly approximate stresses can be employed, offering high accuracy of the stress fields and adherence to physical conservation laws. However, when approximating eigenvalues, this significant benefit for the solution's accuracy implies that the solution operator cannot be compact. To address this, the solution operator must be confined to a subset of the solution that excludes the stresses. Yet, due to compatibility conditions, the trial space for the other solution components typically does not yield the desired accuracy. The second part of this talk will therefore explore the Least-Squares method as a remedy to these challenges [3].
To conclude this talk, we will emphasize the integration of those methods within global solution strategies, with a particular focus on the challenges regarding model order reduction methods [4].
[1] W. Prager, J. Synge. Approximations in elasticity based on the concept of function space, Quarterly of Applied Mathematics 5(3), 1947.
[2] FB, K. Bernhard, M. Moldenhauer, G. Starke. Weakly symmetric stress equilibration and a posteriori error estimation for linear elasticity, Numerical Methods for Partial Differential Equations 37(4), 2021.
[3] FB, D. Boffi. First order least-squares formulations for eigenvalue problems, IMA Journal of Numerical Analysis 42(2), 2023.
[4] FB, D. Boffi, A. Halim. A reduced order model for the finite element approximation of eigenvalue problems, Computer Methods in Applied Mechanics and Engineering 404, 2023.
On the use of "conventional" unconstrained minimization solvers for training regression problems in scientific machine learning
Abstract
In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for the analysis, by means of deep-learning techniques, of data produced by computational science and engineering applications. At the core of these methods is the supervised training algorithm to learn the neural network realization, a highly non-convex optimization problem that is usually solved using stochastic gradient methods.
However, in contrast to deep-learning practice, scientific machine-learning training problems feature a much larger volume of smooth data and better-characterized empirical risk functions, which makes them well suited to conventional solvers for unconstrained optimization.
In this talk, we empirically demonstrate the superior efficacy of a trust region method based on the Gauss-Newton approximation of the Hessian in improving the generalization errors arising from regression tasks when learning surrogate models for a wide range of scientific machine-learning techniques and test cases. All the conventional solvers tested, including L-BFGS and inexact Newton with line-search, compare favorably, either in terms of cost or accuracy, with the adaptive first-order methods used to validate the surrogate models.
Euclidean Ramsey Theory
Abstract
Euclidean Ramsey Theory is a natural multidimensional version of Ramsey Theory. A subset of Euclidean space is called Ramsey if, for any $k$, whenever we partition Euclidean space of sufficiently high dimension into $k$ classes, one class must contain a congruent copy of our subset. It is still unknown which sets are Ramsey. We will discuss background on this and then proceed to some recent results.
12:00
Thermodynamics of Near Extremal Black Holes in AdS(5)
Abstract
17:30
Twistor Particle Programme Rebooted: A "zig-z̄ag" Theory of Massive Spinning Particles
Note: we recommend joining the meeting using the Zoom client for the best user experience.
Abstract
Recently, the Newman-Janis shift has been revisited from the angle of scattering amplitudes in terms of the so-called "massive spinor-helicity variables," tracing back to Penrose and Perjés in the 70s. However, well-established results are limited to the same-helicity (self-dual) sector, while a puzzle of spurious poles arises in mixed-helicity sectors. This talk will outline how massive twistor theory can reproduce the same-helicity results while offering a possible solution to the spurious pole puzzle. Firstly, the Newman-Janis shift in the same-helicity sector is derived from a complexified version of the equivalence principle. Secondly, the massive twistor particle is coupled to background fields from bottom-up and top-down perspectives. The former is based on perturbations of symplectic structures in massive twistor space. The latter provides a generalization of the Newman-Janis shift to generic backgrounds, which also leads to "curved massive twistor space" and its deformed massive incidence relation. Lastly, the Feynman rules of the first-quantized massive twistor particle and their physical interpretation are briefly discussed. Overall, a significant emphasis is put on the Kähler geometry ("zig-z̄ag structure") of massive twistor space, which eventually connects to a worldsheet structure of the Kerr solution.