In September 2024 we reported that a team of mathematicians from Oxford Mathematics and the Budapest University of Technology and Economics had uncovered a new class of shapes that tile space without using sharp corners. Remarkably, these ‘ideal soft shapes’ are found abundantly in nature – from sea shells to muscle cells.
Resonances as a computational tool
Abstract
Speaker Katharina Schratz will talk about 'Resonances as a computational tool'.
A large toolbox of numerical schemes for dispersive equations has been established, based on different discretization techniques such as discretizing the variation-of-constants formula (e.g., exponential integrators) or splitting the full equation into a series of simpler subproblems (e.g., splitting methods). In many situations these classical schemes allow for a precise and efficient approximation. This, however, changes drastically whenever non-smooth phenomena enter the scene, such as in problems at low regularity and with high oscillations. Classical schemes fail to capture the oscillatory nature of the solution, which can lead to severe instabilities and loss of convergence. In this talk I present a new class of resonance-based schemes. The key idea in their construction is to tackle and deeply embed the underlying nonlinear structure of resonances into the numerical discretization. As in the continuous case, these resonance terms are central to structure preservation and endow the new schemes with strong geometric properties at low regularity.
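For readers unfamiliar with the classical methods mentioned in the abstract, the sketch below shows one Strang splitting step for the one-dimensional defocusing cubic nonlinear Schrödinger equation on a periodic grid: the linear part is solved exactly as a Fourier multiplier and the nonlinear part exactly as a pointwise phase rotation. This is a minimal illustration of the classical splitting approach only, not of the resonance-based schemes presented in the talk; the specific equation, grid size, and step size are illustrative choices of ours.

    import numpy as np

    def strang_step_nls(u, dt, k):
        # One Strang-splitting step for the defocusing cubic NLS
        #   i u_t = -u_xx + |u|^2 u
        # on a periodic grid: exact nonlinear flow (pointwise phase rotation)
        # composed with the exact linear flow (Fourier multiplier).
        u = u * np.exp(-0.5j * dt * np.abs(u) ** 2)                   # half nonlinear step
        u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))    # full linear step
        u = u * np.exp(-0.5j * dt * np.abs(u) ** 2)                   # half nonlinear step
        return u

    # Periodic grid on [0, 2*pi) with a smooth initial datum.
    N = 256
    x = 2 * np.pi * np.arange(N) / N
    k = np.fft.fftfreq(N, d=1.0 / N)            # integer wave numbers
    u = np.exp(1j * x) + 0.1 * np.exp(-2j * x)
    dt = 1e-3
    for _ in range(1000):
        u = strang_step_nls(u, dt, k)

Roughly speaking, such classical splittings need a smooth solution to converge at full order; the rough, highly oscillatory regime in which they degrade is exactly the setting the resonance-based schemes of the talk are designed to handle.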
Where on earth is the best laboratory to demonstrate the beauty of fluid dynamics?
Actually, it’s not on Earth. Here is the story of the soft cell.
And a longer read about the soft cell, discovered by Gabor Domokos and our own Alain Goriely.
Deep Learning is Not So Mysterious or Different
Abstract
Deep neural networks are often seen as different from other model classes by defying conventional notions of generalization. Popular examples of anomalous generalization behaviour include benign overfitting, double descent, and the success of overparametrization. We argue that these phenomena are not unique to neural networks, nor particularly mysterious. Moreover, this generalization behaviour can be intuitively understood, and rigorously characterized, using long-standing generalization frameworks such as PAC-Bayes and countable hypothesis bounds. We present soft inductive biases as a key unifying principle in explaining these phenomena: rather than restricting the hypothesis space to avoid overfitting, embrace a flexible hypothesis space with a soft preference for simpler solutions that are consistent with the data. This principle can be encoded in many model classes, and thus deep learning is not as mysterious or different from other model classes as it might seem. However, we also highlight how deep learning is relatively distinct in other ways, such as its ability for representation learning, phenomena such as mode connectivity, and its relative universality.
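As a toy illustration of a soft inductive bias (our own example, not one from the talk): rather than restricting a polynomial regression to low degree (a hard constraint on the hypothesis space), one can keep a flexible high-degree basis and add a penalty that softly prefers simpler fits that remain consistent with the data. The degree, penalty strength, and data below are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 20)
    y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.size)   # noisy observations

    # Flexible hypothesis space: degree-15 polynomial features.
    X = np.vander(x, 16, increasing=True)

    # A hard bias would cap the degree; the soft bias keeps all 16 coefficients
    # but penalizes their squared norm (ridge regression), preferring simpler
    # solutions that still fit the data.
    lam = 1e-3
    w_soft = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    w_free = np.linalg.lstsq(X, y, rcond=None)[0]   # no preference: free to chase the noise

The same preference for simplicity can be expressed in many other ways (priors, norms, architectural choices), which is the sense in which the abstract argues the principle is not specific to neural networks.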
Bio: Andrew Gordon Wilson is a Professor at the Courant Institute of Mathematical Sciences and Center for Data Science at New York University. He is interested in developing a prescriptive foundation for building intelligent systems. His work includes loss landscapes, optimization, Bayesian model selection, equivariances, generalization theory, and scientific applications.
His website is https://cims.nyu.edu/~andrewgw.