Mathematrix: Short Talks by PhD Students
Abstract
Several PhD students from the department will give short 5-minute talks on their research. This is also targeted at undergraduates interested in doing PhDs.
Quantum computers are becoming a reality and current generations of machines are already well beyond the 50-qubit frontier. However, hardware imperfections still overwhelm these devices and it is generally believed that fault-tolerant, error-corrected systems will not be within reach in the near term: a single logical qubit needs to be encoded into potentially thousands of physical qubits, which is prohibitive.
Due to limited resources, hybrid quantum-classical protocols are the most promising candidates for achieving early quantum advantage in the near term, but they need to resort to quantum error mitigation techniques. I will explain the basic concepts and introduce these hybrid protocols, which have the potential to solve real-world problems---including optimisation or ground-state search---but suffer from the large number of circuit repetitions required to extract information from the quantum state. I will detail a range of application areas of randomised quantum circuits, such as quantum algorithms, classical shadows, and quantum error mitigation, introducing recent results that help lower the barrier for practical quantum advantage.
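As a rough illustration of the error-mitigation idea mentioned above, here is a minimal sketch of zero-noise extrapolation, one common mitigation technique: run the circuit at artificially amplified noise levels and extrapolate the measured expectation value back to zero noise. The noise model, decay rate, and "measurements" below are synthetic stand-ins, not data from a real device.

```python
# Minimal sketch of zero-noise extrapolation (ZNE). Assumes the same circuit can be
# run at artificially amplified noise levels, yielding one noisy expectation value
# per level. The measurements below are synthetic, for illustration only.
import numpy as np

noise_scales = np.array([1.0, 2.0, 3.0])   # noise amplification factors
exact_value = 0.85                          # "ideal" expectation value (unknown in practice)
decay_rate = 0.12                           # assumed exponential damping per unit noise
rng = np.random.default_rng(0)

# Synthetic noisy estimates: exponential decay towards zero plus shot noise.
measured = exact_value * np.exp(-decay_rate * noise_scales) \
           + rng.normal(0, 0.005, size=noise_scales.size)

# Richardson-style extrapolation: fit a low-order polynomial in the noise scale
# and evaluate it at zero noise.
coeffs = np.polyfit(noise_scales, measured, deg=2)
mitigated = np.polyval(coeffs, 0.0)

print(f"raw (scale 1): {measured[0]:.4f}, mitigated: {mitigated:.4f}, exact: {exact_value}")
```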
With motivation from string compactifications, I will present work on the use of machine learning methods for the computation of geometric and topological properties of Calabi-Yau manifolds.
I will describe the open problems of singularity formation in incompressible fluids. I will discuss a list of related models, some results, and some more open problems.
Peter Constantin is the John von Neumann Professor of Mathematics and Applied and Computational Mathematics at Princeton University. He received his B.A. and M.A. summa cum laude from the University of Bucharest, Faculty of Mathematics and Mechanics, and obtained his Ph.D. from The Hebrew University of Jerusalem under the direction of Shmuel Agmon.
Constantin’s work is focused on the analysis of PDE and nonlocal models arising in statistical and nonlinear physics. Constantin worked on scattering for Schrödinger operators, on finite dimensional aspects of the dynamics of Navier-Stokes equations, on blow-up for models of Euler equations. He introduced active scalars, and, with Jean-Claude Saut, local smoothing for general dispersive PDE. Constantin worked on singularity formation in fluid interfaces, on turbulence shell models, on upper bounds for turbulent transport, on the inviscid limit, on stochastic representation of Navier-Stokes equations, on the Onsager conjecture. He worked on critical nonlocal dissipative equations, on complex fluids, and on ionic diffusion in fluids.
Constantin has advised thirteen graduate students in mathematics and served on the committees of seven graduate students in physics. He has mentored twenty-five postdoctoral associates.
Constantin served as Chair of the Mathematics Department of the University of Chicago and as the Director of the Program in Applied and Computational Mathematics at Princeton University.
Constantin is a Fellow of the Institute of Physics, a SIAM Fellow, an Inaugural Fellow of the American Mathematical Society, a Fellow of the American Academy of Arts and Sciences, and a member of the National Academy of Sciences.
In general, the classification of finitely generated subgroups of a given group is intractable. Restricting to two-generator subgroups in a geometric setting is an exception. For example, a two-generator subgroup of a right-angled Artin group is either free or free abelian. Jaco and Shalen proved that a two-generator subgroup of the fundamental group of an orientable atoroidal irreducible 3-manifold is either free, free abelian, or of finite index. In this talk I will present recent work proving a similar classification theorem for two-generator mapping-torus groups of free group endomorphisms: every two-generator subgroup is either free or conjugate to a sub-mapping-torus group. As an application we obtain an analogue of the Jaco-Shalen result for free-by-cyclic groups with fully irreducible atoroidal monodromy. While the statement is algebraic, the proof technique uses the topology of finite graphs, à la Stallings. This is joint work with Naomi Andrew, Ilya Kapovich, and Stefano Vidussi.
I will present a notion of spin structure on a perfect complex in characteristic zero, generalizing the classical notion for an (algebraic) vector bundle. For a complex $E$ on $X$ with an oriented quadratic structure one obtains an associated ${\mathbb Z}/2{\mathbb Z}$-gerbe over X which obstructs the existence of a spin structure on $E$. This situation arises naturally on moduli spaces of coherent sheaves on Calabi-Yau fourfolds. Using spin structures as orientation data, we construct a categorical refinement of a K-theory class constructed by Oh-Thomas on such moduli spaces.
Self-predictive learning (aka non-contrastive learning) has become an increasingly important paradigm for representation learning. Self-predictive learning is simple yet effective: it learns without contrastive examples yet extracts useful representations through a self-predictive objective. A common myth about self-predictive learning is that it should not work, because the optimization objective admits trivial representations as globally optimal solutions; yet practical implementations produce meaningful solutions.
We reconcile this theory-practice gap by studying the learning dynamics of self-predictive learning. Our analysis is based on a non-linear ODE system that sheds light on why, despite a seemingly problematic optimization objective, self-predictive learning does not collapse, which echoes important implementation "tricks" used in practice. Our results also show that, in a linear setup, self-predictive learning can be understood as gradient-based PCA or SVD on the data matrix, hinting that meaningful representations are captured through the learning process.
This talk is based on our ICML 2023 paper "Understanding self-predictive learning for reinforcement learning".
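As a hedged illustration of the linear analysis sketched above (not the paper's actual experiments), the toy dynamics below trains a linear representation of a symmetric Markov chain with a semi-gradient update, a stop-gradient on the target, and an optimally solved (fast) predictor; the learned subspace should stay full rank and align with the top eigenvectors of the transition matrix, mirroring the PCA/SVD interpretation. All sizes and step sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 20, 3                      # number of states, representation dimension

# Symmetric, doubly stochastic transition matrix: lazy random walk on a cycle.
T = 0.5 * np.eye(d) + 0.25 * np.roll(np.eye(d), 1, axis=0) + 0.25 * np.roll(np.eye(d), -1, axis=0)

Phi = rng.normal(size=(d, k))     # state representations (rows = states)

def optimal_predictor(Phi):
    # Fast inner-loop predictor: least-squares fit of the next-state features.
    return np.linalg.lstsq(Phi, T @ Phi, rcond=None)[0]

lr = 0.1
for _ in range(2000):
    P = optimal_predictor(Phi)                 # "predictor learns faster" trick
    grad = (Phi @ P - T @ Phi) @ P.T           # semi-gradient: stop-grad on target T @ Phi
    Phi -= lr * grad

# Compare span(Phi) with the top-k eigenspace of T via principal angles.
eigval, eigvec = np.linalg.eigh(T)
top = eigvec[:, np.argsort(eigval)[-k:]]
Q, _ = np.linalg.qr(Phi)
overlap = np.linalg.svd(top.T @ Q, compute_uv=False)   # cosines of principal angles
print("rank of Phi:", np.linalg.matrix_rank(Phi), " subspace cosines:", np.round(overlap, 3))
```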
Celestial holography posits that the 4D S-matrix may be calculated holographically by a 2D conformal field theory. However, bulk translation invariance forces low-point massless celestial amplitudes to be distributional, which is an unusual property for a 2D CFT. In this talk, I will show that translation-invariant MHV gluon amplitudes can be extracted from smooth 'leaf' amplitudes, in which a bulk interaction vertex is integrated only over a hyperbolic slice of spacetime. After describing the soft and collinear limits of gluon leaf amplitudes, I will show that MHV leaf amplitudes can be generated by a simple 2D system of free fermions and the semiclassical limit of Liouville theory, demonstrating that translation-invariant, distributional amplitudes can be obtained from smooth correlation functions. An important step is showing that, in the semiclassical limit of Liouville theory, correlation functions of light operators are given by contact AdS Witten diagrams. This talk is based on a series of papers with Atul Sharma, Andrew Strominger, and Tianli Wang [2312.07820, 2402.04150, 2403.18896].
North meets South is a tradition founded by and for early-career researchers. One speaker from the North of the Andrew Wiles Building and one speaker from the South each present an idea from their work in an accessible yet intriguing way.
Speaker: Paul-Hermann Balduf
Title: Statistics of Feynman integrals
Abstract: In quantum field theory, one way to compute predictions for physical observables is perturbation theory, which means that the sought-after quantity is expressed as a formal power series in some coupling parameter. The coefficients of the power series are Feynman integrals, which are, in general, very complicated functions of the masses and momenta involved in the physical process. However, there is also a complementary difficulty: at higher orders, millions of distinct Feynman integrals contribute to the same series coefficient.
My talk concerns the statistical properties of Feynman integrals, specifically for phi^4 theory in 4 dimensions. I will demonstrate that the Feynman integrals under consideration follow a fairly regular distribution which is almost unchanged for higher orders in perturbation theory. The value of a given Feynman integral is correlated with many properties of the underlying Feynman graph, which can be used for efficient importance sampling of Feynman integrals. Based on 2305.13506 and 2403.16217.
Speaker: Marc Suñé
Title: Extreme mechanics of thin elastic objects
Abstract: Exceptionally hard --- or soft --- materials, materials that are active and respond to different stimuli, elastic objects that undergo large deformations; the advances in recent decades in robotics, 3D printing and, more broadly, in materials engineering have created a new world of opportunities to test the (extreme) mechanics of solids.
In this colloquium I will focus on the elastic instabilities of slender objects. In particular, I will discuss the transverse actuation of a stretched elastic sheet. This problem is a peculiar example of buckling under tension and it has a vast potential scope of applications, from understanding the mechanics of graphene and cell tissues, to the engineering of meta-materials.

Note: we recommend joining the meeting using the Teams client for the best user experience.
In this talk, I will give an overview of recent joint work on Topological Data Analysis (TDA). The first one is an application of TDA to quantify porosity in pathological bone tissue. The second is an extension of persistent homology to directed simplicial complexes. Lastly, we present an evaluation of the persistent Laplacian in machine learning tasks. This is joint work with Ysanne Pritchard, Aikta Sharma, Claire Clarkin, Helen Ogden, and Sumeet Mahajan; David Mendez; and Tom Davies and Zhengchao Wang, respectively.
We will explore the connection between Celestial and Euclidean Anti-de Sitter (EAdS) holography in the massive scalar case. Specifically, exploiting the so-called hyperbolic foliation of Minkowski space-time, we will show that each contribution to massive Celestial correlators can be reformulated as a linear combination of contributions to corresponding massive Witten correlators in EAdS. This result will be demonstrated explicitly both for contact diagrams and for the four-point particle exchange diagram, and it extends to all orders in perturbation theory by leveraging the bootstrapping properties of the Celestial CFT (CCFT). Within this framework, the Kontorovich-Lebedev transform plays a central role. This transform will allow us to make broader considerations regarding non-perturbative properties of a CCFT.
This week's Fridays@2 will be a panel discussion focusing on what it is like to pursue a research degree. The panel will share their thoughts and experiences in a question-and-answer session, discussing some of the practicalities of being a postgraduate student, and where a research degree might lead afterwards.
Introduction to flat space holography in three dimensions and Carrollian CFT2, with selected results on correlation functions, thermal entropy, entanglement entropy and an outlook to Bondi news in 3d.
Ramification theory serves the dual purpose of a diagnostic tool and treatment by helping us locate, measure, and treat the anomalous behavior of mathematical objects. In the classical setup, the degree of a finite Galois extension of "nice" fields splits up neatly into the product of two well-understood numbers (ramification index and inertia degree) that encode how the base field changes. In the general case, however, a third factor called the defect (or ramification deficiency) can pop up. The defect is a mysterious phenomenon and the main obstruction to several long-standing open problems, such as obtaining resolution of singularities. The primary reason is, roughly speaking, that the classical strategy of "objects become nicer after finitely many adjustments" fails when the defect is non-trivial. I will discuss my previous and ongoing work in ramification theory that allows us to understand and treat the defect.
Please join us for refreshments outside the lecture room from 15:30.
We frame dynamic persuasion in a partial observation stochastic control game with an ergodic criterion. The receiver controls the dynamics of a multidimensional unobserved state process. Information is provided to the receiver through a device designed by the sender that generates the observation process.
The commitment of the sender is enforced and an exogenous information process outside the control of the sender is allowed. We develop this approach in the case where all dynamics are linear and the preferences of the receiver are linear-quadratic.
We prove a verification theorem for the existence and uniqueness of the solution of the HJB equation satisfied by the receiver’s value function. An extension to the case of persuasion of a mean field of interacting receivers is also provided. We illustrate this approach in two applications: the provision of information to electricity consumers with a smart meter designed by an electricity producer; the information provided by carbon footprint accounting rules to companies engaged in a best-in-class emissions reduction effort. In the first application, we link the benefits of information provision to the mispricing of electricity production. In the latter, we show that when firms declare a high level of best-in-class target, the information provided by stringent accounting rules offsets the Nash equilibrium effect that leads firms to increase pollution to make their target easier to achieve.
This is a joint work with Prof. René Aïd, Prof. Giorgia Callegaro and Prof. Luciano Campi.
One of the main entries in the AdS/CFT dictionary is a relation between the bulk on-shell partition function with specified boundary conditions and the generating function of correlation functions of primary operators in the boundary CFT. In this talk, I will show how to construct a similar relation for gravity in 4d asymptotically flat spacetimes. For simplicity, we will restrict to the leading infrared sector, where a careful treatment of soft modes and their canonical partners leads to a non-vanishing on-shell action. I will show that this action localizes to a codimension-2 surface and coincides with the generating function of 2d CFT correlators involving insertions of Kac-Moody currents. The latter were previously shown, using effective field theory methods, to reproduce the leading soft graviton theorems in 4d. I will conclude with comments on the implications of these results for the computation of soft charge fluctuations in the vacuum.
In this seminar I will begin by giving an overview of some problems in stochastic simulation and uncertainty quantification. I will then outline the Multilevel Monte Carlo method for situations in which accurate simulations are very costly, but it is possible to perform much cheaper, less accurate simulations. Inspired by the multigrid method, it is possible to use a combination of these to achieve the desired overall accuracy at a much lower cost.
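As a toy illustration of the multilevel idea (a sketch under simplified assumptions, not the full estimator from the talk), the snippet below combines a cheap coarse-time-step simulation with a small number of coupled fine/coarse corrections for a geometric Brownian motion, where the exact answer is known.

```python
# Two-level Monte Carlo estimator for E[X_T] of the SDE dX = a*X dt + b*X dW
# (geometric Brownian motion), using Euler-Maruyama. Level 0 uses a coarse time
# step; level 1 corrects it with coupled fine/coarse paths driven by the same
# Brownian increments. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
a, b, T, X0 = 0.05, 0.2, 1.0, 1.0

def euler_paths(n_paths, n_steps, dW=None):
    dt = T / n_steps
    if dW is None:
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X = np.full(n_paths, X0)
    for i in range(n_steps):
        X = X + a * X * dt + b * X * dW[:, i]
    return X, dW

# Level 0: cheap, coarse simulation (many samples).
coarse, _ = euler_paths(n_paths=100_000, n_steps=4)
level0 = coarse.mean()

# Level 1 correction: fine minus coarse on the SAME Brownian path. Few samples
# are needed because the difference has small variance.
Xf, dW_fine = euler_paths(n_paths=5_000, n_steps=8)
dW_coarse = dW_fine[:, 0::2] + dW_fine[:, 1::2]      # aggregate increments pairwise
Xc, _ = euler_paths(n_paths=5_000, n_steps=4, dW=dW_coarse)
level1 = (Xf - Xc).mean()

print("MLMC estimate:", level0 + level1, " exact:", X0 * np.exp(a * T))
```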
We investigate an interacting particle model to simulate a foraging colony of ants, where each ant is represented as a so-called active Brownian particle. Interactions among ants are mediated through chemotaxis, aligning their orientations with the upward gradient of the pheromone field. We show how the empirical measure of the interacting particle system converges to a solution of a mean-field limit (MFL) PDE for some subset of the model parameters. We situate the MFL PDE as a non-gradient flow continuity equation with some other recent examples. We then demonstrate that the MFL PDE for the ant model has two distinctive behaviors: the well-known Keller--Segel aggregation into spots and the formation of lanes along which the ants travel. Using linear and nonlinear analysis and numerical methods we provide the foundations for understanding these particle behaviors at the mean-field level. We conclude with long-time estimates that imply that there is no infinite time blow-up for the MFL PDE.
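A minimal sketch of this kind of particle system is given below, assuming a periodic box, a grid-based pheromone field with deposition and decay, and orientation dynamics that relax towards the pheromone gradient; all parameter values are illustrative rather than those of the study.

```python
# Active Brownian particles whose orientations relax towards the upward gradient of
# a pheromone field that the particles themselves deposit (a rough stand-in for the
# ant model described above). Grid resolution, rates and couplings are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
N, L, n_grid = 200, 10.0, 64          # particles, periodic box size, pheromone grid
v0, D_rot, gamma = 0.5, 0.2, 2.0      # speed, rotational diffusion, alignment strength
deposit, decay = 1.0, 0.5             # pheromone deposition and decay rates
dt, n_steps = 0.01, 500
h = L / n_grid

x = rng.uniform(0, L, size=(N, 2))          # positions
theta = rng.uniform(0, 2 * np.pi, size=N)   # orientations
c = np.zeros((n_grid, n_grid))              # pheromone concentration

for _ in range(n_steps):
    # Deposit pheromone at each particle's grid cell, then let it decay.
    idx = (x / h).astype(int) % n_grid
    np.add.at(c, (idx[:, 0], idx[:, 1]), deposit * dt)
    c *= np.exp(-decay * dt)

    # Pheromone gradient (periodic central differences), sampled at particle cells.
    gx = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / (2 * h)
    gy = (np.roll(c, -1, axis=1) - np.roll(c, 1, axis=1)) / (2 * h)
    g = np.stack([gx[idx[:, 0], idx[:, 1]], gy[idx[:, 0], idx[:, 1]]], axis=1)

    # Orientation relaxes towards the gradient direction (chemotaxis) plus noise.
    target = np.arctan2(g[:, 1], g[:, 0])
    torque = np.where(np.linalg.norm(g, axis=1) > 1e-12, np.sin(target - theta), 0.0)
    theta += gamma * torque * dt + np.sqrt(2 * D_rot * dt) * rng.normal(size=N)

    # Active Brownian motion: move at constant speed along the orientation.
    x += v0 * dt * np.column_stack([np.cos(theta), np.sin(theta)])
    x %= L

print("final pheromone mass:", c.sum(), " max concentration:", c.max())
```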
In Bayesian inverse problems, it is common to consider several hyperparameters that define the prior and the noise model that must be estimated from the data. In particular, we are interested in linear inverse problems with additive Gaussian noise and Gaussian priors defined using Matern covariance models. In this case, we estimate the hyperparameters using the maximum a posteriori (MAP) estimate of the marginalized posterior distribution.
However, this is a computationally intensive task since it involves computing log determinants. To address this challenge, we consider a stochastic average approximation (SAA) of the objective function and use the preconditioned Lanczos method to compute efficient function evaluation approximations.
We can therefore compute the MAP estimate of the hyperparameters efficiently by building a preconditioner which can be updated cheaply for new values of the hyperparameters; and by leveraging numerical linear algebra tools to reuse information efficiently for computing approximations of the gradient evaluations. We demonstrate the performance of our approach on inverse problems from tomography.
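To illustrate the kind of stochastic approximation involved, here is a minimal sketch of estimating the log-determinant term via Hutchinson probes combined with Lanczos quadrature; the SPD matrix is a small synthetic stand-in rather than an actual inverse-problem operator, and no preconditioning or hyperparameter updates are shown.

```python
# Sample-average approximation of log det(A) for an SPD matrix A:
# tr(log A) is estimated by averaging z^T log(A) z over random sign vectors z,
# with each quadratic form computed by Lanczos (Gauss) quadrature.
import numpy as np

rng = np.random.default_rng(3)
n, n_probes, m = 200, 30, 25          # size, Hutchinson probes, Lanczos steps

B = rng.normal(size=(n, n))
A = B @ B.T / n + np.eye(n)           # synthetic SPD matrix (eigenvalues >= 1)

def lanczos_quadrature_logdet(A, z, m):
    # m steps of Lanczos on A started from z, then quadrature for z^T log(A) z.
    q = z / np.linalg.norm(z)
    Q = np.zeros((len(z), m))
    alphas = np.zeros(m)
    betas = np.zeros(m - 1)
    q_prev = np.zeros_like(q)
    beta = 0.0
    for j in range(m):
        Q[:, j] = q
        w = A @ q - beta * q_prev
        alphas[j] = q @ w
        w -= alphas[j] * q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta = np.linalg.norm(w)
            betas[j] = beta
            q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    evals, evecs = np.linalg.eigh(T)
    return np.linalg.norm(z) ** 2 * np.sum(evecs[0, :] ** 2 * np.log(evals))

probes = rng.choice([-1.0, 1.0], size=(n_probes, n))
estimate = np.mean([lanczos_quadrature_logdet(A, z, m) for z in probes])
exact = np.linalg.slogdet(A)[1]
print(f"SAA estimate: {estimate:.2f}, exact log det: {exact:.2f}")
```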
Samir Ghadiali is Professor and Chair/Head of the Department of Biomedical Engineering at the Ohio State University (OSU) and a Professor of Pulmonary and Critical Care Medicine at the OSU Wexner Medical Center. Dr. Ghadiali is a Fellow of the American Institute of Medical and Biological Engineering and of the Biomedical Engineering Society, and a Parker B. Francis Fellow in Pulmonary Research. He is a member of the Davis Heart & Lung Research Institute and the Biophysics Graduate Program at OSU, and his internationally recognized research program uses biomedical engineering tools to develop novel diagnostic platforms and drug/gene therapies for cardiovascular and respiratory disorders. His research has been funded by the National Science Foundation, National Institutes of Health, the American Heart Association, and the United States Department of Defense, and he has mentored over 35 pre-doctoral and post-doctoral trainees who have gone on to successful academic, industrial and research careers.
The global COVID-19 pandemic has highlighted the lethality and morbidity associated with infectious respiratory diseases. These diseases can lead to a devastating syndrome known as the acute respiratory distress syndrome (ARDS), where bacterial/viral infections cause excessive lung inflammation, pulmonary edema, and severe hypoxemia (low blood oxygen). Although ARDS patients require artificial mechanical ventilation, the complex biofluid and biomechanical forces generated by the ventilator exacerbate lung injury, leading to high mortality. My group has used mathematical and computational modeling both to characterize the complex mechanics of lung injury during ventilation and to identify novel ways to prevent injury at the cellular level. We have used in-vitro and in-vivo studies to validate our mathematical predictions and have used engineering tools to understand the biological consequences of the mechanical forces generated during ventilation. In this talk I will specifically describe how our mathematical/computational approach has led to novel cytoskeletal-based therapies and how coupling mathematics and molecular biology has led to the discovery of gene regulatory mechanisms that can minimize ventilation-induced lung injury. I will also describe how we are currently using nanotechnology and gene/drug delivery systems to enhance the lung’s native regulatory responses and thereby prevent lung injury during ARDS.
A topological quantum field theory (TQFT) is a functor from a category of bordisms to a category of vector spaces. Classifying low-dimensional TQFTs often involves considering presentations of bordism categories in terms of generators and relations. In this talk, we will introduce these concepts and outline a program for obtaining such presentations using Morse–Cerf theory.
The Schwarzian theory is a quantum field theory which has attracted a lot of attention in the physics literature in the context of two-dimensional quantum gravity, black holes and the AdS/CFT correspondence. It is predicted to be universal and to arise in many systems with emergent conformal symmetry, most notably in the Sachdev--Ye--Kitaev model and Jackiw--Teitelboim gravity.
In this talk we will discuss our recent progress on developing rigorous mathematical foundations of the Schwarzian Field Theory, including rigorous construction of the corresponding measure, calculation of both the partition function and a natural class of correlation functions, and a large deviation principle.
The stable uniqueness theorem for KK-theory asserts that a Cuntz-pair of *-homomorphisms between separable C*-algebras gives the zero element in KK if and only if the *-homomorphisms are stably homotopic through a unitary path, in a specific sense. This result, along with its group equivariant analogue, has been crucial in the classification theory of C*-algebras and C*-dynamics. In this talk, I will present a unitary tensor category analogue of the stable uniqueness theorem and explore its application to a duality in tensor category equivariant KK-theory. To make the talk approachable even for those unfamiliar with actions of unitary tensor categories or KK-theory, I will introduce the relevant definitions and concepts, drawing comparisons with the case of group actions. This is joint work with Kan Kitamura and Robert Neagu.
Simplicial volume is a homotopy invariant of manifolds introduced by Gromov to study their metric and rigidity properties. One of the strongest vanishing results for the simplicial volume of closed manifolds is in the presence of amenable covers with controlled multiplicity. I will discuss some conditions under which this result can be extended to manifolds with boundary. To this end, I will follow Gromov's original approach via the theory of multicomplexes, whose foundations have been recently laid down by Frigerio and Moraschini.
All atmospheric phenomena, from daily weather patterns to the global climate system, are invariably influenced by atmospheric flow. Despite its importance, its complex behaviour makes extracting informative features from its dynamics challenging. In this talk, I will present a network-based approach to explore relationships between different flow structures. Using three phenomenon- and model-independent methods, we will investigate coherence patterns, vortical interactions, and Lagrangian coherent structures in an idealised model of the Northern Hemisphere stratospheric polar vortex. I will argue that networks built from fluid data retain essential information about the system's dynamics, allowing us to reveal the underlying interaction patterns straightforwardly and offering a fresh perspective on atmospheric behaviour.
In a graph $H$ whose edges are coloured (not necessarily properly) a rainbow copy of a graph $G$ is a (not necessarily induced) subgraph of $H$ that is isomorphic to $G$ and whose edges are all coloured differently. In this talk I will explain why the problem of finding such rainbow copies is interesting, survey what we know, concentrating mainly on the case where $G$ is a Hamilton cycle, and then tell you a bit about a new result about finding rainbow Hamilton cycles resiliently in random graphs (which is joint work with Peter Allen and Liana Yepremyan).
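To make the definition concrete, here is a tiny brute-force checker for rainbow Hamilton cycles in a small edge-coloured complete graph; the colouring is an arbitrary example, and the exhaustive search is only feasible for toy instances.

```python
# Brute-force search for a rainbow Hamilton cycle: a Hamilton cycle all of whose
# edges receive distinct colours. The edge colouring of K5 below is arbitrary.
from itertools import permutations

colour = {frozenset(e): c for e, c in [
    ((0, 1), 'r'), ((1, 2), 'g'), ((2, 3), 'b'), ((3, 4), 'y'), ((4, 0), 'p'),
    ((0, 2), 'r'), ((0, 3), 'g'), ((1, 3), 'b'), ((1, 4), 'y'), ((2, 4), 'p')]}

def rainbow_hamilton_cycle(colour, n):
    for perm in permutations(range(1, n)):        # fix vertex 0 to avoid rotations
        cycle = (0,) + perm
        edges = [frozenset((cycle[i], cycle[(i + 1) % n])) for i in range(n)]
        if all(e in colour for e in edges):
            cols = [colour[e] for e in edges]
            if len(set(cols)) == len(cols):       # all colours distinct: rainbow
                return cycle, cols
    return None

print(rainbow_hamilton_cycle(colour, 5))
```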
Smooth generic representations of $GL_n$ over a $p$-adic field $F$, i.e. representations admitting a nondegenerate Whittaker model, are an important class of representations, for example in the setting of Rankin-Selberg integrals. However, in recent years there has been an increased interest in non-generic representations and their degenerate Whittaker models. By the theory of Bernstein-Zelevinsky derivatives we can associate to each smooth irreducible representation of $GL_n(F)$ an integer partition of $n$, which encodes the "degeneracy" of the representation. By using these "highest derivative partitions" we can define a stratification of the category of smooth complex representations and prove the surprising fact that all of the strata categories are equivalent to module categories over commutative rings. This is joint work with David Helm.
In the talk, I will start by recalling some basics of optimal transport and how it can be used to define Ricci curvature lower bounds for singular spaces, in a synthetic sense. Then, I will present some joint work with De Luca-De Ponti and Tomasiello, where we show that some singular spaces, naturally showing up in gravity compactifications (namely, Dp-branes), enter the aforementioned setting of non-smooth spaces satisfying Ricci curvature lower bounds in a synthetic sense. Time permitting, I will discuss some applications to the Kaluza-Klein spectrum.
On arbitrary Carnot groups, the only hypoelliptic Hodge-Laplacians on forms that have been introduced are 0-order pseudodifferential operators constructed using the Rumin complex. However, to address questions where one needs sharp estimates, this 0-order operator is not suitable. Indeed, this is a rather difficult problem to tackle in full generality, the main issue being that the Rumin exterior differential is not homogeneous on arbitrary Carnot groups. In this talk, I will focus on the specific example of the free Carnot group of step 3 with 2 generators, where it is possible to introduce different hypoelliptic Hodge-Laplacians on forms. Such Laplacians can be used to obtain sharp div-curl type inequalities akin to those considered by Bourgain & Brezis and Lanzani & Stein for the de Rham complex, or their subelliptic counterparts obtained by Baldi, Franchi & Pansu for the Rumin complex on Heisenberg groups.
A successful strategy to handle problems involving primes is to approximate them by a 'simpler' function. Two aspects need to be balanced. On the one hand, the approximant should be simple enough that the considered problem can be solved for it. On the other hand, it needs to be close enough to the primes in order to make it an admissible replacement. In this talk I will present how one can construct general approximants in the context of the Circle Method and will use this to give a different perspective on Goldbach-type applications.
We show that linear reflection groups in the sense of Vinberg are often Zariski dense in PGL(n). Among the applications are examples of low-dimensional closed hyperbolic manifolds whose fundamental groups virtually embed as Zariski-dense subgroups of SL(n,Z), as well as some one-ended Zariski-dense subgroups of SL(n,Z) that are finitely generated but infinitely presented, for all sufficiently large n. This is joint work with Jacques Audibert, Gye-Seon Lee, and Ludovic Marquis.
We consider the problem of parametric and non-parametric statistical inference for systems of weakly interacting diffusions and of their mean field limit. We present several parametric inference methodologies, based on stochastic gradient descent in continuous time, spectral methods and the method of moments. We also show how one can perform fully nonparametric Bayesian inference for the mean field McKean-Vlasov PDE. The effect of non-uniqueness of stationary states of the mean field dynamics on the inference problem is elucidated.
Dr. Aditya Kolachana is an Assistant Professor in the Department of Humanities and Social Sciences at the Indian Institute of Technology Madras, Chennai. He heads the Centre for Indian Knowledge Systems at IIT Madras where his research delves into India's scientific and cultural heritage. He is a recipient of the Young Historian of Science Award instituted by the Indian National Science Academy and the Best Teacher Award at IIT Madras.
During the 14th to the 16th centuries CE, a succession of Indian scholars, collectively referred to as the Kerala school, made remarkable contributions in the fields of mathematics and astronomy. Mādhava of Saṅgamagrāma, a gifted mathematician and astronomer, is considered the founder of this school, and is perhaps best known for discovering an infinite series for pi, among other achievements. Subsequently, Mādhava's lineage of disciples, consisting of illustrious names such as Parameśvara, Dāmodara, Nīlakaṇṭha, Jyeṣṭhadeva, Śaṅkara Vāriyar, Citrabhānu, Acyuta Piṣaraṭi etc., made numerous important contributions of their own in the fields of mathematics and astronomy. Later scholars of the Kerala school flourished up to the 19th century. This talk will provide a historical overview of the Kerala school and highlight its important contributions.
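For illustration, the snippet below evaluates the series for pi attributed to Mādhava (pi/4 = 1 - 1/3 + 1/5 - ...), together with a simple end-correction of the kind discussed in accounts of the Kerala school, to show how such corrections accelerate the very slow convergence of the raw series; the specific correction term used here is chosen for simplicity rather than taken from the original texts.

```python
# The Madhava-Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... with an optional
# end-correction of 1/(4n) after n terms, illustrating how corrections of this
# kind speed up the slowly converging raw series.
import math

def madhava_pi(n_terms, correct=False):
    s = sum((-1) ** k / (2 * k + 1) for k in range(n_terms))
    if correct:
        s += (-1) ** n_terms / (4 * n_terms)   # simple end-correction after n terms
    return 4 * s

for n in (10, 100, 1000):
    print(n, madhava_pi(n), madhava_pi(n, correct=True), math.pi)
```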
Electron cryomicroscopy (cryo-EM) is an imaging technique widely used in structural biology to determine the three-dimensional structure of biological molecules from noisy two-dimensional projections with unknown orientations. As the typical pipeline involves processing large amounts of data, efficient algorithms are crucial for fast and reliable results. The stochastic gradient descent (SGD) algorithm has been used to improve the speed of ab initio reconstruction, which results in a first, low-resolution estimation of the volume representing the molecule of interest, but has yet to be applied successfully in the high-resolution regime, where expectation-maximization algorithms achieve state-of-the-art results, at a high computational cost.
In this work, we investigate the conditioning of the optimisation problem and show that the large condition number prevents the successful application of gradient descent-based methods at high resolution.
Our results include a theoretical analysis of the condition number of the optimisation problem in a simplified setting where the individual projection directions are known, an algorithm based on computing a diagonal preconditioner using Hutchinson's diagonal estimator, and numerical experiments showing the improvement in the convergence speed when using the estimated preconditioner with SGD. The preconditioned SGD approach can potentially enable a simple and unified approach to ab initio reconstruction and high-resolution refinement with faster convergence speed and higher flexibility, and our results are a promising step in this direction.
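As a small, self-contained illustration of the preconditioning ingredient named above, the sketch below applies Hutchinson's diagonal estimator to a synthetic symmetric operator; the matrix is a stand-in, not the cryo-EM operator, and the coupling to SGD is only hinted at in a comment.

```python
# Hutchinson's diagonal estimator: diag(A) is approximated by averaging z * (A z)
# over random sign vectors z, requiring only matrix-vector products with A.
import numpy as np

rng = np.random.default_rng(4)
n, n_probes = 500, 200

B = rng.normal(size=(n, n))
A = B @ B.T / n                                   # synthetic symmetric PSD operator

est = np.zeros(n)
for _ in range(n_probes):
    z = rng.choice([-1.0, 1.0], size=n)
    est += z * (A @ z)                            # E[z * (A z)] = diag(A)
est /= n_probes

rel_err = np.linalg.norm(est - np.diag(A)) / np.linalg.norm(np.diag(A))
print(f"relative error of estimated diagonal: {rel_err:.3f}")
# The estimated diagonal could then be inverted (with damping) to precondition SGD.
```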
Abstract: I will introduce and explain a new symmetry structure for type IIA string theory, called string^h. Using string^h I will explain how some objects of stable homotopy theory relating to elliptic cohomology enter into type IIA string theory.
Persistent homology is infeasible to compute when a dataset is very large. Inspired by the bootstrapping method, Chazal et al. (2014) proposed a multiple subsampling approach to approximate the persistence landscape of a massive dataset. In this talk, I will present an extension of the multiple subsampling method to a broader class of vectorizations of persistence diagrams and to persistence diagrams directly. First, I will review the statistical foundation of the multiple subsampling approach as applied to persistence landscapes in Chazal et al. (2014). Next, I will talk about how this analysis extends to a class of vectorized persistence diagrams called Hölder continuous vectorizations. Finally, I will address the challenges in applying this method to raw persistence diagrams for two measures of centrality: the mean persistence measure and the Fréchet mean of persistence diagrams. I will demonstrate these methods through simulation results and applications in estimating data shapes.
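A schematic of the multiple-subsampling recipe is sketched below; the `persistence` and `vectorize` arguments are placeholders for a user-supplied persistence backend and a Hölder continuous vectorization (for example a persistence landscape), so no particular TDA library API is assumed.

```python
# Multiple-subsampling sketch in the spirit of Chazal et al. (2014): instead of one
# persistence computation on a massive point cloud, draw many small subsamples,
# vectorize each diagram, and average. `persistence` and `vectorize` are placeholders.
import numpy as np

def subsample_average(points, n_subsamples, subsample_size, persistence, vectorize, rng):
    """Average a vectorized persistence summary over random subsamples."""
    vectors = []
    for _ in range(n_subsamples):
        idx = rng.choice(len(points), size=subsample_size, replace=False)
        diagram = persistence(points[idx])        # user-supplied persistence backend
        vectors.append(vectorize(diagram))        # user-supplied vectorization
    vectors = np.stack(vectors)
    # Return the averaged summary and a rough standard error per coordinate.
    return vectors.mean(axis=0), vectors.std(axis=0) / np.sqrt(n_subsamples)
```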
This week's Fridays@2 will feature a panel discussion on how to manage your time during your degree. The panel will share their thoughts and experiences in a Q&A session, discussing some of the practicalities of juggling lectures, the many ways to study independently and non-maths activities.
The Bruhat-Tits building is a crucial combinatorial tool in the study of reductive p-adic groups and their representation theory. Given a p-adic group, its Bruhat-Tits building is a simplicial complex upon which it acts with remarkable properties. In this talk I will give an introduction to the Bruhat-Tits building by sketching its definition and going over some of its basic properties. I will then illustrate the usefulness of the Bruhat-Tits building by using it to determine the maximal compact subgroups of a p-adic group up to conjugacy.
At the heart of both cross-section calculations at the Large Hadron Collider and gravitational wave physics lies the evaluation of Feynman integrals. These integrals are meromorphic functions (or distributions) of the parameters on which they depend, and understanding their analytic structure has been an ongoing quest for over 60 years. In this talk, I will demonstrate how these integrals fit within the framework of generalized hypergeometry due to Gelfand, Kapranov, and Zelevinsky (GKZ). In this framework the singularities are simply calculated by the principal A-determinant, and I will show that some Feynman integrals can be used to generate Cohen-Macaulay rings, which greatly simplifies their analysis. However, not every integral fits within the GKZ framework, and I will show how the singularities of every Feynman integral can be calculated using Whitney stratifications.
Cells must reliably coordinate responses to noisy external stimuli for proper functionality, whether deciding where to move or initiating a response to threats. In this talk I will present a perspective on such cellular decision-making problems with extreme statistics. The central premise is that when a single stochastic process exhibits large variability (unreliable), the extremum over multiple processes has a remarkably tight distribution (reliable). In this talk I will present some background on extreme statistics followed by two applications. The first regards antigen discrimination: the recognition of foreign antigen by the T cell receptor. The second concerns directional sensing: the process by which cells acquire a direction to move towards a target. In both cases, we find that extreme statistics explains how cells can make accurate and rapid decisions, and importantly, before any steady state is reached.
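The premise can be illustrated numerically: below, first-passage times of one-dimensional Brownian motion to a target at distance a are sampled exactly via T = a^2/Z^2 with Z standard normal, and the arrival time of the fastest of many independent searchers is far less variable than that of a single one. This is only a toy stand-in for the cellular settings discussed in the talk.

```python
# A single diffusive search time is extremely variable, but the FIRST of many
# independent searchers to arrive (an extreme statistic) is comparatively tight.
import numpy as np

rng = np.random.default_rng(5)
a, n_trials, n_searchers = 1.0, 20_000, 1000

Z = rng.normal(size=(n_trials, n_searchers))
T = a**2 / Z**2                                  # exact 1D first-passage times (heavy-tailed)

single = T[:, 0]                                 # one searcher per trial
fastest = T.min(axis=1)                          # first arrival among 1000 searchers

for name, t in [("single searcher", single), ("fastest of 1000", fastest)]:
    print(f"{name:>18}: median = {np.median(t):.4f}, "
          f"coefficient of variation = {t.std() / t.mean():.2f}")
```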
This study introduces a novel suite of historical large language models (LLMs) pre-trained specifically for accounting and finance, utilising a diverse set of major textual resources. The models are unique in that they are year-specific, spanning from 2007 to 2023, effectively eliminating look-ahead bias, a limitation present in other LLMs. Empirical analysis reveals that, in trading, these specialised models outperform much larger models, including the state-of-the-art LLaMA 1, 2, and 3, which are approximately 50 times their size. The findings are further validated through a range of robustness checks, confirming the superior performance of these LLMs.