Around Siu inequality
Abstract
I will talk about the connections between the Siu inequality and the existence of the model companion for GVFs. The talk will be partially based on joint work with Antoine Sedillot.
The hyperbolic plane and its higher-dimensional analogues are well-known
objects. They belong to a larger class of spaces, called rank-one
symmetric spaces, which include not only the hyperbolic spaces but also
their complex and quaternionic counterparts, and the octonionic
hyperbolic plane. By a result of Pansu, two of these families exhibit
strong rigidity properties with respect to their self-quasiisometries:
any self-quasiisometry of a quaternionic hyperbolic space or the
octonionic hyperbolic plane is at uniformly bounded distance from an
isometry. The goal of this talk is to give an overview of the rank-one
symmetric spaces and the tools used to prove Pansu's rigidity theorem,
such as the sub-Riemannian structure of their visual boundaries and the
analysis of quasiconformal maps.
This is a notion we defined with Johan de Jong. If a finitely presented group is the topological fundamental group of a smooth quasi-projective complex variety, then we prove that it is weakly integral. To this end we use the Langlands program (both arithmetic, to produce companions, and geometric, to use de Jong's conjecture). On the other hand, there are finitely presented groups which are not weakly integral (Breuillard). So this notion is an obstruction.
Self-similar groups are groups of automorphisms of infinite rooted trees obeying a simple but powerful rule. Under this rule, groups with exotic properties can be generated from very basic starting data, most famously the Grigorchuk group which was the first example of a group with intermediate growth.
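To make the self-similarity rule concrete, here is a small illustrative sketch (my own, not from the talk): the standard generators a, b, c, d of the Grigorchuk group, realized as recursively defined automorphisms of finite binary strings, i.e. of the levels of the rooted binary tree. The wreath recursions used are the standard ones: b = (a, c), c = (a, d), d = (id, b), with a the root swap.

```python
# Illustrative sketch: Grigorchuk generators acting on finite binary strings.
def a(w):
    if not w:
        return w
    return ("1" if w[0] == "0" else "0") + w[1:]  # swap the two subtrees at the root

def b(w):  # b = (a, c): act by a on the left subtree, by c on the right
    if not w:
        return w
    return "0" + a(w[1:]) if w[0] == "0" else "1" + c(w[1:])

def c(w):  # c = (a, d)
    if not w:
        return w
    return "0" + a(w[1:]) if w[0] == "0" else "1" + d(w[1:])

def d(w):  # d = (id, b)
    if not w:
        return w
    return "0" + w[1:] if w[0] == "0" else "1" + b(w[1:])

print(a("0110"), b("1101"))
```

The defining relations a² = b² = c² = d² = 1 and bc = d can be checked directly on strings of any fixed length.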
Nekrashevych introduced a groupoid and a C*-algebra for a self-similar group action on a tree as models for some underlying noncommutative space for the system. Our goal is to compute the K-theory of the C*-algebra and the homology of the groupoid. Our main theorem provides long exact sequences which reduce the problems to group theory. I will demonstrate how to apply this theorem to fully compute homology and K-theory through the example of the Grigorchuk group.
This is joint work with Benjamin Steinberg.
Let G be a free group of rank N, let f be an automorphism of G and let Fix(f) be the corresponding subgroup of fixed points. Bestvina and Handel showed that the rank of Fix(f) is at most N, for which they developed the theory of train track maps on free groups. Different arguments were provided later on by Sela, Paulin and Gaboriau-Levitt-Lustig. In this talk, we present a new proof which involves the Linnell division ring of G. We also discuss how our approach relates to previous ones and how it gives new insight into variations of the problem.
While the brain has long been conceptualized as a network of neurons connected by synapses, attempts to describe the connectome using established models in network science have yielded conflicting outcomes, leaving the architecture of neural networks unresolved. Here, we analyze eight experimentally mapped connectomes, finding that the degree and strength distributions of the underlying networks can be described by neither random nor scale-free models. Rather, the node degrees and strengths are well approximated by lognormal distributions, whose emergence lacks a mechanistic model in the context of networks. Acknowledging the fact that the brain is a physical network, whose architecture is driven by the spatially extended nature of its neurons, we analytically derive the multiplicative process responsible for the lognormal neuron length distribution, arriving at a series of empirically falsifiable predictions and testable relationships that govern the degree and the strength of individual neurons. The lognormal network characterizing the connectome represents a novel architecture for network science that bridges critical gaps between neural structure and function, with unique implications for brain dynamics, robustness, and synchronization.
Upper bounds on the number of incidences between points and lines, tubes, and other geometric objects have many applications in combinatorics and analysis. On the other hand, much less is known about lower bounds. We prove a general lower bound for the number of incidences between points and tubes in the plane under a natural spacing condition. In particular, if you take $n$ points in the unit square and draw a line through each point, then there is a non-trivial point-line pair with distance at most $n^{-2/3+o(1)}$. This quickly implies that any $n$ points in the unit square define a triangle of area at most $n^{-7/6+o(1)}$, giving a new upper bound for Heilbronn's triangle problem.
Joint work with Alex Cohen and Cosmin Pohoata.
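As a concrete illustration (my own sketch, not from the paper): the quantity in Heilbronn's triangle problem, the smallest triangle area determined by a point set, is easy to compute by brute force for moderate n.

```python
import itertools
import random

def min_triangle_area(pts):
    """Smallest area of a triangle spanned by any three of the points."""
    best = float("inf")
    for (x1, y1), (x2, y2), (x3, y3) in itertools.combinations(pts, 3):
        # twice the signed area via the cross product
        area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2
        best = min(best, area)
    return best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(50)]
area = min_triangle_area(pts)
print(area)
```

By pigeonhole, 50 points in the unit square always contain three points in a 0.25 × 0.25 cell, hence a triangle of area at most 1/32; random points do far better, and the abstract's $n^{-7/6+o(1)}$ bound improves on the classical guarantees.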
Let X be an n-by-n unitary matrix, drawn at random according to the Haar measure on U_n, and let m be a natural number. What can be said about the distribution of X^m and its eigenvalues?
The density of the distribution \tau_m of X^m can be written as a linear combination of irreducible characters of U_n, where the coefficients are the Fourier coefficients of \tau_m. In their seminal work, Diaconis and Shahshahani have shown that for any fixed m, the sequence (tr(X),tr(X^2),...,tr(X^m)) converges, as n goes to infinity, to m independent complex normal random variables (suitably normalized). This can be seen as a statement about the low-dimensional Fourier coefficients of \tau_m.
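The Diaconis–Shahshahani statement can be checked empirically (my own illustration; `scipy.stats.unitary_group` samples from the Haar measure on U_n):

```python
import numpy as np
from scipy.stats import unitary_group

n = 100
X = unitary_group.rvs(n, random_state=42)  # Haar-random element of U_n

# For fixed m and large n, tr(X^m) behaves like a complex normal
# random variable with E|tr(X^m)|^2 = m.
Xp = np.eye(n, dtype=complex)
for m in range(1, 6):
    Xp = Xp @ X  # Xp = X^m
    print(m, np.trace(Xp))
```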
In this talk, I will focus on high-dimensional spectral information about \tau_m. For example:
(a) Can one give sharp estimates on the rate of decay of its Fourier coefficients?
(b) For which values of p is the density of \tau_m L^p-integrable?
Using works of Rains about the distribution of X^m, we will see how Item (a) is equivalent to a branching problem in the representation theory of certain compact homogeneous spaces, and how (b) is equivalent to a geometric problem about the singularities of certain varieties called (Weyl) hyperplane arrangements.
Based on joint works with Julia Gordon and Yotam Hendel and with Nir Avni and Michael Larsen.
A question we get asked all the time! We'll also be discussing the numerous ways our identities as Mathematicians are shaped by being a minority. Free lunch provided.
Gauging is a systematic way to construct a model with non-invertible symmetry from a model with ordinary group-like symmetry. In 2+1 dimensions or higher, one can generalize the standard gauging procedure by stacking a symmetry-enriched topological order before gauging the symmetry. This generalized gauging procedure allows us to realize a large class of non-invertible symmetries. In this talk, I will describe the generalized gauging of finite group symmetries in 2+1d lattice models. This talk will be based on my ongoing work with L. Bhardwaj, S.-J. Huang, S. Schäfer-Nameki, and A. Tiwari.
The Camassa–Holm equation, a nonlinear one-dimensional PDE which is completely integrable and has applications in several areas, has received considerable attention. We will discuss recent work regarding the Camassa–Holm equation with transport noise, more precisely, the system $u_t+uu_x+P_x+\sigma u_x \circ dW=0$ and $P-P_{xx}=u^2+u_x^2/2$. In particular, we will show existence of a weak, global, dissipative solution of the Cauchy initial-value problem on the torus. This is joint work with L. Galimberti (King’s College), K.H. Karlsen (Oslo), and P.H.C. Pang (NTNU/Oslo).
The class of henselian valued fields with non-discrete value group is not well-understood. In 2018, Koenigsmann conjectured that a list of seven natural axioms describes a complete axiomatisation of $\mathbb{Q}_p^{ab}$, the maximal extension of the $p$-adic numbers $\mathbb{Q}_p$ with abelian Galois group, which is an example of such a valued field. Informed by the recent work of Jahnke-Kartas on the model theory of perfectoid fields, we formulate an eighth axiom (the discriminant property) that is not a consequence of the other seven. Revisiting work by Koenigsmann (the Galois characterisation of $\mathbb{Q}_p$) and Jahnke-Kartas, we give a uniform treatment of their underlying method. In particular, we highlight how this method yields short, non-standard model-theoretic proofs of known results (e.g. finite extensions of perfectoid fields are perfectoid).
I’ll tell you about some of my favorite algebraic varieties, which are beautiful in their own right, and also have some dramatic applications to algebraic combinatorics. These include the top-heavy conjecture (one of the results for which June Huh was awarded the Fields Medal), as well as non-negativity of Kazhdan–Lusztig polynomials of matroids.
Cost-sensitive loss functions are crucial in many real-world prediction problems, where different types of errors are penalized differently; for example, in medical diagnosis, a false negative prediction can lead to worse consequences than a false positive prediction. However, traditional learning theory has mostly focused on the symmetric zero-one loss, leaving cost-sensitive losses largely unaddressed. In this work, we extend the celebrated theory of boosting to incorporate both cost-sensitive and multi-objective losses. Cost-sensitive losses assign costs to the entries of a confusion matrix, and are used to control the sum of prediction errors accounting for the cost of each error type. Multi-objective losses, on the other hand, simultaneously track multiple cost-sensitive losses, and are useful when the goal is to satisfy several criteria at once (e.g., minimizing false positives while keeping false negatives below a critical threshold). We develop a comprehensive theory of cost-sensitive and multi-objective boosting, providing a taxonomy of weak learning guarantees that distinguishes which guarantees are trivial (i.e., can always be achieved), which ones are boostable (i.e., imply strong learning), and which ones are intermediate, implying non-trivial yet not arbitrarily accurate learning. For binary classification, we establish a dichotomy: a weak learning guarantee is either trivial or boostable. In the multiclass setting, we describe a more intricate landscape of intermediate weak learning guarantees. Our characterization relies on a geometric interpretation of boosting, revealing a surprising equivalence between cost-sensitive and multi-objective losses.
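To fix ideas (a minimal sketch of my own, not the paper's formalism), a cost-sensitive loss simply prices each entry of the confusion matrix:

```python
import numpy as np

def cost_sensitive_loss(y_true, y_pred, cost):
    """Average cost, where cost[i, j] is the price of predicting j when the truth is i."""
    return float(np.mean([cost[t, p] for t, p in zip(y_true, y_pred)]))

# hypothetical medical setting: a false negative (truth 1, prediction 0)
# costs five times more than a false positive
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])
y_true = [1, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 1]
print(cost_sensitive_loss(y_true, y_pred, cost))  # (5 + 1) / 5 = 1.2
```

The symmetric zero-one loss is the special case where the cost matrix has 0 on the diagonal and 1 everywhere else.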
It was recently argued that topological operators (at least those associated with continuous symmetries) need regularization. However, such regularization seems to be ill-defined when the underlying QFT is coupled to gravity. If both of these claims are correct, it means that charges cannot be meaningfully measured in the presence of gravity. I will review the evidence supporting these claims as discussed in [arXiv:2411.08858]. Given the audience's high level of expertise, I hope this will spark discussion about whether this is a promising approach to understanding the fate of global symmetries in quantum gravity.
Note: we recommend joining the meeting using the Teams client for the best user experience.
Dey and Xin (J. Appl. Comput. Topol. 2022) describe an algorithm to decompose finitely presented multiparameter persistence modules using matrix reduction. Their algorithm only works for modules whose generators and relations are distinctly graded. We extend their approach to work on all finitely presented modules and introduce several improvements that lead to significant speed-ups in practice.
Our algorithm is FPT with respect to the maximal number of relations with the same degree, and with further optimisation we obtain an O(n^3) algorithm for interval-decomposable modules. As a by-product of the proofs of correctness, we develop a theory of parameter restriction for persistence modules. Our algorithm is implemented in a software library, aida, which is the first to enable the decomposition of large inputs.
This is joint work with Tamal Dey and Michael Kerber.
Bernstein–Gelfand–Gelfand (BGG) resolutions and the Grothendieck–Cousin complex both play central roles in modern algebraic geometry and representation theory. The BGG approach provides elegant, combinatorial resolutions for important classes of modules especially those arising in Lie theory; while Grothendieck–Cousin complexes furnish a powerful framework for computing local cohomology via filtrations by support. In this talk, we will give an overview of these two constructions and illustrate how they arise from the same categorical consideration.
Understanding how living systems dynamically self-organise across spatial and temporal scales is a fundamental problem in biology, from the study of embryo development to the regulation of cellular physiology. In this talk, I will discuss how we can use mathematical modelling to uncover the role of microscale physical interactions in cellular self-organisation. I will illustrate this by presenting two seemingly unrelated problems: environment-driven compartmentalisation of the intracellular space; and self-organisation during collective migration of multicellular communities. Our results reveal hidden connections between these two processes, hinting at the general role that chemical regulation of physical interactions plays in controlling self-organisation across scales in living matter.
For some values of degrees d=(d_1,...,d_c), we construct a compactification of a Hilbert scheme of complete intersections of type d. We present both a quotient and a direct construction. Then we work towards the construction of a quasiprojective coarse moduli space of smooth complete intersections via Geometric Invariant Theory.
I will discuss several interesting examples of classes of structures for which there is a sensible first-order theory of "almost all" structures in the class, for certain notions of "almost all". These examples include the classical theory of almost all finite graphs due to Glebskij-Kogan-Liogon'kij-Talanov and Fagin (and many more examples from finite model theory), as well as more recent examples from the model theory of infinite fields: the theory of almost all algebraic extensions and the universal/existential theory of almost all completions of a global field (both joint work with Arno Fehm). Interestingly, such asymptotic theories are sometimes quite well-behaved even when the base theories are not.
This paper shows that a simple sale contract with a collection of options implements the full-information first-best allocation in a variety of continuous-time dynamic adverse selection settings with news. Our model includes as special cases most models in the literature. The implementation result holds regardless of whether news is public (i.e., contractible) or privately observed by the buyer, and it does not require deep pockets on either side of the market. It is an implication of our implementation result that, irrespective of the assumptions on the game played, no agent waits for news to trade in such models. The options here do not play a hedging role and are, thus, not priced using a no-arbitrage argument. Rather, they are priced using a game-theoretic approach.
I will explain the content of Geometric Langlands (which is a theorem over ground fields of characteristic 0 but still a conjecture in positive characteristic) and show how it implies a description of the space of automorphic functions in terms of Galois data. The talk will mostly follow a joint paper with Arinkin, Kazhdan, Raskin, Rozenblyum and Varshavsky from 2022.
Deflation is a technique to remove a solution to a problem so that other solutions to this problem can subsequently be found. The most prominent instance is the deflation used in eigenvalue solvers, but recent interest has been in deflation of rootfinding problems arising from nonlinear PDEs with many isolated solutions (spearheaded by Farrell and collaborators).
In this talk I’ll show you recent results on deflation techniques for optimisation algorithms with many local minima, focusing on the Gauss–Newton algorithm for nonlinear least squares problems. I will demonstrate advantages of these techniques over the more obvious approach of applying deflated Newton’s method to the first-order optimality conditions, and present some proofs that these algorithms will avoid the deflated solutions. Along the way we will see an interesting generalisation of Woodbury’s formula to least squares problems, something that deserves to be better known in Numerical Linear Algebra (joint work with Güttel, Nakatsukasa and Bloor Riley).
Main preprint: https://arxiv.org/abs/2409.14438.
WoodburyLS preprint: https://arxiv.org/abs/2406.15120
A number of algorithms are now available for computing a low-rank approximation of matrices, including Halko-Martinsson-Tropp, interpolative decomposition, CUR, generalized Nyström, and QR with column pivoting. Some methods come with extremely strong guarantees, while others may fail with nonnegligible probability. We present methods for efficiently estimating the error of the approximation for a specific instantiation of the methods. Such a certificate allows us to execute "responsibly reckless" algorithms, wherein one tries a fast, but potentially unstable, algorithm to obtain a potential solution; the quality of the solution is then assessed in a reliable fashion, and remedied if necessary. This is joint work with Gunnar Martinsson.
Time permitting, I will ramble about other topics in Randomised NLA.
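The certificate idea can be sketched in a few lines (my own toy illustration, not the authors' method): run a randomized range finder, then probe the residual with fresh Gaussian vectors to estimate the approximation error a posteriori.

```python
import numpy as np

rng = np.random.default_rng(1)

# a 200 x 120 test matrix of exact rank 5
m, n, r = 200, 120, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# randomized range finder (Halko-Martinsson-Tropp style):
# sketch the range of A and orthonormalize
k = 10
Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))

# a posteriori certificate: for standard Gaussian w,
# E||(A - Q Q^T A) w||^2 = ||A - Q Q^T A||_F^2,
# so a handful of probes gives a cheap, reliable error estimate
W = rng.standard_normal((n, 10))
R = (A - Q @ (Q.T @ A)) @ W
est = np.sqrt(np.mean(np.sum(R ** 2, axis=0)))
print(est)  # essentially zero here, since rank(A) = 5 <= k
```

The point of the certificate is that it costs only a few extra matrix-vector products, regardless of which (possibly unstable) algorithm produced Q.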
Christiana is an Assistant Professor at the Courant Institute of Mathematical Sciences (New York University) working in the Applied Math Lab, primarily with Leif Ristroph and Jun Zhang. Her interests are in using modeling, numerical simulations, and experiments to study fluid dynamical problems, with an emphasis on fluid-structure interactions.
Currently Christiana is working on understanding the role of flow interactions in flying bird formations and the hydrodynamics of swimming fish.
We consider two problems in fluid dynamics: the collective locomotion of flying animals and the interaction of vortex rings with fluid interfaces. First, we present a model of formation flight, viewing the group as a material whose properties arise from the flow-mediated interactions among its members. This aerodynamic model explains how flapping flyers produce vortex wakes and how they are influenced by the wakes of others. Long in-line arrays show that the group behaves as a soft, excitable "crystal" with regularly ordered member "atoms" whose positioning is susceptible to deformations and dynamical instabilities. Second, we delve into the phenomenon of vortex ring reflections at water-air interfaces. Experimental observations reveal reflections analogous to total internal reflection of a light beam. We present a vortex-pair–vortex-sheet model to simulate this phenomenon, offering insights into the fundamental interactions of vortex rings with free surfaces.
Originally posed in the 1950s, the Hadwiger-Nelson problem interrogates the ‘chromatic number of the plane’ via an infinite unit-distance graph. This question remains open today, known only to be 5, 6, or 7. We may ask the same question of the hyperbolic plane; there the lack of homogeneous dilations leads to unique behaviour for each length scale d. This variance leads to other questions: is the d-chromatic number finite for all d>0? How does the d-chromatic number behave as d increases/decreases? In this talk, I will provide a summary of existing methods and results, before discussing improved bounds through the consideration of semi-regular tilings of the hyperbolic plane.
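For context (a standard fact, with a brute-force check of my own): the classical lower bound of 4 for the plane comes from a finite unit-distance graph, the Moser spindle, which is not 3-colourable.

```python
from itertools import product

# Moser spindle: 7 vertices, 11 edges, realizable with all edges of unit length
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),   # first rhombus, hinged at vertex 0
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),   # second rhombus, hinged at vertex 0
         (3, 6)]                                   # the two far tips, also unit apart

def colourable(num_colours):
    """Brute-force search for a proper colouring of the spindle."""
    return any(all(col[u] != col[v] for u, v in edges)
               for col in product(range(num_colours), repeat=7))

print(colourable(3), colourable(4))  # False True
```

By the de Bruijn–Erdős theorem, the chromatic number of the plane is attained on some finite subgraph, which is why such finite gadgets matter; the current lower bound of 5 (de Grey, 2018) also comes from a finite unit-distance graph.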
[This is the second in a series of two talks; the first talk will be in the Algebra Seminar of Tuesday Feb 4th https://www.maths.ox.ac.uk/node/70022]
In 2005, Faltings initiated a p-adic analogue of the complex Simpson correspondence, a theory that has since been explored by various authors through different approaches. In this two-lecture series (part I in the Algebra Seminar and part II in the Arithmetic Geometry Seminar), I will present a joint work in progress with Michel Gros and Takeshi Tsuji, motivated by the goal of comparing the parallel approaches we have developed and establishing a robust framework to achieve broader functoriality results for the p-adic Simpson correspondence.
The approach I developed with M. Gros relies on the choice of a first-order deformation and involves a torsor of deformations along with its associated Higgs-Tate algebra, ultimately leading to Higgs bundles. In contrast, T. Tsuji's approach is intrinsic, relying on Higgs envelopes and producing Higgs crystals. The evaluations of a Higgs crystal on different deformations differ by a twist involving a line bundle on the spectral variety. A similar and essentially equivalent twisting phenomenon occurs in the first approach when considering the functoriality of the p-adic Simpson correspondence by pullback by a morphism that may not lift to the chosen deformations.
We introduce a novel approach to twisting Higgs modules using Higgs-Tate algebras, similar to the first approach of the p-adic Simpson correspondence. In fact, the latter can itself be reformulated as a twist. Our theory provides new twisted higher direct images of Higgs modules, that we apply to study the functoriality of the p-adic Simpson correspondence by higher direct images with respect to a proper morphism that may not lift to the chosen deformations. Along the way, we clarify the relation between our twisting and another twisting construction using line bundles on the spectral variety that appeared recently in other works.
Given two von Neumann algebras A,B with an action by a locally compact (quantum) group G, one can consider its associated equivariant correspondences, which are usual A-B-correspondences (in the sense of Connes) with a compatible unitary G-representation. We show how the category of such equivariant A-B-correspondences carries an analogue of the Fell topology, which is preserved under natural operations (such as crossed products or equivariant Morita equivalence). If time permits, we will discuss one particular interesting example of such a category of equivariant correspondences, which quantizes the representation category of SL(2,R). This is based on joint works with Joeri De Ro and Joel Dzokou Talla.
A well-known problem in algebraic geometry is to construct smooth projective Calabi-Yau varieties $Y$. In the smoothing approach, we construct first a degenerate (reducible) Calabi-Yau scheme $V$ by gluing pieces. Then we aim to find a family $f\colon X \to C$ with special fiber $X_0 = f^{-1}(0) \cong V$ and smooth general fiber $X_t = f^{-1}(t)$. In this talk, we see how infinitesimal logarithmic deformation theory solves the second step of this approach: the construction of a family out of a degenerate fiber $V$. This is achieved via the logarithmic Bogomolov-Tian-Todorov theorem as well as its variant for pairs of a log Calabi-Yau space $f_0\colon X_0 \to S_0$ and a line bundle $\mathcal{L}_0$ on $X_0$.
How can one uniformly, or at least almost uniformly, choose an element from a finite group G? When G is too large to enumerate all its elements, direct (pseudo)random selection is impossible. However, if we have an explicit set of generators of G (e.g., as in the Rubik's cube group), several methods are available. This talk will focus on one such method based on the well-known product replacement algorithm. I will discuss how recent results on property (T) by Kaluba, Kielak, Nowak and Ozawa partially explain the surprisingly good performance of this algorithm.
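A minimal sketch of the product replacement walk (my own illustration; real implementations, e.g. in GAP, differ in many details):

```python
import random

def product_replacement(gens, compose, invert, steps=200, seed=0):
    """Random walk on generating tuples: replace one entry by its product
    with another entry or that entry's inverse, then return a random entry."""
    rng = random.Random(seed)
    gens = list(gens)
    for _ in range(steps):
        i, j = rng.sample(range(len(gens)), 2)
        g = gens[j] if rng.random() < 0.5 else invert(gens[j])
        gens[i] = compose(gens[i], g)
    return rng.choice(gens)  # empirically close to uniform on the group

# example: S_8 generated by a transposition and an 8-cycle (permutations as tuples)
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def invert(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

n = 8
transposition = (1, 0) + tuple(range(2, n))
cycle = tuple(range(1, n)) + (0,)
x = product_replacement([transposition, cycle, cycle], compose, invert)
print(x)
```

The walk lives on the graph of generating tuples, and it is the spectral gap of this walk that the property (T) results mentioned in the abstract help to control.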
Real-world networks have a complex topology with many interconnected elements often organized into communities. Identifying these communities helps reveal the system’s organizational and functional structure. However, network data can be noisy, with incomplete link observations, making it difficult to detect significant community structures as missing data weakens the evidence for specific solutions. Recent research shows that flow-based community detection methods can highlight spurious communities in sparse networks with incomplete link observations. To address this issue, these methods require regularization. In this talk, I will show how a Bayesian approach can be used to regularize flows in networks, reducing overfitting in the flow-based community detection method known as the map equation.
The normal covering number $\gamma(G)$ of a finite group $G$ is the minimal size of a collection of proper subgroups whose conjugates cover the group. This definition is motivated by number theory and related to the concept of intersective polynomials. For the symmetric and alternating groups we will see how these numbers are closely connected to some elementary (as in "relating to basic concepts", not "easy") problems in additive combinatorics, and we will use this connection to better understand the asymptotics of $\gamma(S_n)$ and $\gamma(A_n)$ as $n$ tends to infinity.
When tensor products of N minimal models accumulate at central charge N, they also admit relevant operators arbitrarily close to marginality. This raises the tantalizing possibility that they can be used to reach purely Virasoro symmetric CFTs where the breaking of extended chiral symmetry can be seen in a controlled way. This talk will give an overview of the theories where this appears to be the case, according to a brute-force check at low-lying spins. We will also encounter an interesting non-example where the same type of analysis can be used to give a simpler proof of integrability.
The Riemann problem is an IVP having simple piecewise constant initial data that is invariant under scaling. In 1D, the problem was originally considered by Riemann during the 19th century in the context of gas dynamics, and the general theory was more or less completed by Lax and Glimm in the mid-20th century. In 2D and MD, the situation is much more complicated, and very few analytic results are available. We discuss a shock reflection problem for the Euler equations for potential flow, with initial data that generates four interacting shockwaves. After reformulating the problem as a free boundary problem for a nonlinear PDE of mixed hyperbolic-elliptic type, the problem is solved via a sophisticated iteration procedure. The talk is based on joint work with G-Q Chen (Oxford) et al., arXiv:2305.15224, to appear in JEMS (2025).
The Keating-Snaith conjecture for quadratic twists of elliptic curves predicts the central values should have a log-normal distribution. I present recent progress towards establishing this in the range of large deviations of order of the variance. This extends Selberg’s Central Limit Theorem from ranges of order of the standard deviation to ranges of order of the variance in a variety of contexts, inspired by random walk theory. It is inspired by recent work on large deviations of the zeta function and central values of L-functions.
Kasparov's bivariant K-theory (or KK-theory) is an extremely powerful invariant for both C*-algebras and C*-dynamical systems, which was originally motivated as a tool to solve classical problems coming from topology and geometry. Its paramount importance for classification theory was discovered soon after, impressively demonstrated within the Kirchberg-Phillips theorem to classify simple nuclear and purely infinite C*-algebras. Since then, it can be said that every methodological novelty about extracting information from KK-theory brought along some new breakthrough in classification theory. Perhaps the most important example of this is the Lin-Dadarlat-Eilers stable uniqueness theorem, which forms the technical basis behind many of the most important articles written over the past decade. In the recent landmark paper of Carrion et al., it was demonstrated how the stable uniqueness theorem can be upgraded to a uniqueness theorem of sorts under extra assumptions. It was then posed as an open problem whether the statement of a desired "KK-uniqueness theorem" always holds.
In this talk I want to present the affirmative answer to this question: If A and B are separable C*-algebras and (f,g) is a Cuntz pair of absorbing representations whose induced class in KK(A,B) vanishes, then f and g are strongly asymptotically unitarily equivalent. The talk shall focus on the main conceptual ideas towards this theorem, and I plan to discuss variants of the theorem if time permits. It turns out that the analogous KK-uniqueness theorem is true in a much more general context, which covers equivariant and/or ideal-related and/or nuclear KK-theory.
Gauge theory excels at solving minimal genus problems for 3- and 4-manifolds. A notable triumph is its resolution of the Thom conjecture, asserting that the genus of a smooth complex curve in the complex projective plane is no larger than that of any smooth submanifold homologous to it. Gauge theoretic techniques have also been used to verify analogous conjectures for Kähler surfaces or, more generally, symplectic 4-manifolds. One can formulate versions of these conjectures for surfaces with boundary lying in a 3-manifold, and I'll discuss work in progress with Katherine Raoux which attempts to extend these "relative" Thom conjectures outside the complex (or even symplectic) realm using tools from Floer homology.
Score-based generative models (SGMs), which include diffusion models and flow matching, have had a transformative impact on the field of generative modeling. In a nutshell, the key idea is that by taking the time-reversal of a forward ergodic diffusion process initiated at the data distribution, one can "generate data from noise." In practice, SGMs learn an approximation of the score function of the forward process and employ it to construct an Euler scheme for its time reversal.
In this talk, I will present the main ideas of a general strategy that combines insights from stochastic control and entropic optimal transport to bound the error in SGMs, that is, to bound the distance between the algorithm's output and the target distribution. A nice feature of this approach is its robustness: indeed, it can be used to analyse SGMs built upon noising dynamics that are different from the Ornstein-Uhlenbeck process. As an example, I will illustrate how to obtain error bounds for SGMs on the hypercube.
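The pipeline can be made concrete on a toy example (my own sketch, with the score known exactly, so no learning is involved): take one-dimensional Gaussian data, run the Ornstein-Uhlenbeck forward process, and discretize its time reversal with an Euler scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# data ~ N(mu, s^2); the forward OU process dX = -X dt + sqrt(2) dW has marginals
# X_t ~ N(mu e^{-t}, s^2 e^{-2t} + 1 - e^{-2t}), so the score is available in closed form
mu, s = 2.0, 0.5

def score(x, t):
    mean = mu * np.exp(-t)
    var = s ** 2 * np.exp(-2 * t) + 1 - np.exp(-2 * t)
    return -(x - mean) / var

# Euler scheme for the time reversal, run from t = T down to t = 0
T, dt, N = 3.0, 0.01, 4000
x = rng.standard_normal(N)  # X_T is close to the N(0, 1) equilibrium
for k in range(round(T / dt)):
    t = T - k * dt
    # reverse drift: -f + g^2 * score with f(x) = -x, g = sqrt(2)
    x = x + (x + 2 * score(x, t)) * dt + np.sqrt(2 * dt) * rng.standard_normal(N)

print(x.mean(), x.std())  # close to mu = 2 and s = 0.5
```

Replacing the exact score by a learned approximation is precisely where the error bounds discussed in the talk enter.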
ALF gravitational instantons, of which the Taub-NUT and Atiyah-Hitchin metrics are prototypes, are the complete non-compact hyperkähler 4-manifolds with cubic volume growth. Examples have been known since the 1970s, but a complete classification was only given around 10 years ago. In this talk, I will present joint work with Haskins and Nordström where we extend some of these results to complete non-compact 7-manifolds with holonomy G2 and an asymptotic geometry, called ALC (asymptotically locally conical), that generalises to higher dimensions the asymptotic geometry of ALF spaces.
Inverse problems involve reconstructing unknown physical quantities from indirect measurements. They appear in various fields, including medical imaging (e.g., MRI, ultrasound, CT), materials science and molecular biology (e.g., electron microscopy), and remote sensing, to name a few examples. While deep neural networks are currently able to achieve state-of-the-art performance in many imaging tasks, in this talk we argue that many inverse imaging problems cannot be solved convincingly using a black-box solution. Instead, they require a well-crafted combination of computational tools that takes the underlying signal, the physical constraints and the acquisition characteristics into account.
In the first part of the talk, we introduce INDigo+, a novel INN-guided probabilistic diffusion algorithm for arbitrary image restoration tasks. INDigo+ combines the perfect reconstruction property of invertible neural networks (INNs) with the strong generative capabilities of pre-trained diffusion models. Specifically, we leverage the invertibility of the network to condition the diffusion process and in this way generate high-quality restored images consistent with the measurements.
In the second part of the talk, we discuss unfolding, an approach that allows embedding priors and models in the neural network architecture. In this context we discuss the problem of monitoring the dynamics of large populations of neurons over a large area of the brain. Light-field microscopy (LFM), a type of scanless microscopy, is a particularly attractive candidate for the high-speed three-dimensional (3D) imaging needed for monitoring neural activity. We review fundamental aspects of LFM and then present computational methods based on deep learning for neuron localization and activity estimation from light-field data.
Finally, we look at the multi-modal case and present an application in art investigation. X-ray images of Old Master paintings often contain information about both the visible painting and a concealed sub-surface design. We therefore introduce a model-based neural network capable of separating, from the "mixed X-ray", the X-ray image of the visible painting and the X-ray image of the concealed design.
This is joint work with A. Foust, P. Song, C. Howe, H. Verinaz, J. Huang, Di You and Y. Su from Imperial College London, M. Rodrigues and W. Pu from University College London, I. Daubechies from Duke University, Barak Sober from the Hebrew University of Jerusalem and C. Higgitt and N. Daly from The National Gallery in London.
Black holes play a central role in our understanding of quantum gravity, but identifying their precise counterparts in a dual QFT remains a tricky business. These states are heavy and chaotic, encode various universal aspects, and yet are notoriously hard to characterise. In this talk, we'll explore how supersymmetric field theories provide a controlled setting to study black hole states. In particular, we'll introduce the idea of fortuitous states as a useful criterion for identifying BPS black hole states. We'll then illustrate this concept with concrete examples, including the (supersymmetric) SYK model and the D1-D5 CFT.
The discussion will be based on the following recent papers:
arXiv:2402.10129, arXiv:2412.06902, and arXiv:2501.05448.
