A Short Introduction to the Fractional Laplacian
Structure: 1 x 2hr Lecture
Twisted K-theory is an enrichment of topological K-theory that allows local coefficient systems called twists. For spaces and twists equipped with an action by a group, equivariant twisted K-theory provides an even finer invariant. Equivariant twists over Lie groups gained increasing importance in the subject due to a result by Freed, Hopkins and Teleman that relates the corresponding K-groups to the Verlinde ring of the associated loop group. From the point of view of homotopy theory only a small subgroup of all possible twists is considered in classical treatments. In this talk I will discuss a construction that is joint work with David Evans and produces interesting examples of non-classical twists over the Lie groups SU(n) and over tori constructed from exponential functors. They arise naturally as Fell bundles and are equivariant with respect to the conjugation action of the group on itself. For the determinant functor our construction reproduces the basic gerbe over SU(n) used by Freed, Hopkins and Teleman.
Hermitian matrix models with non-trivial covariance will be introduced. The Kontsevich model is the prime example; it was used to prove Witten's conjecture about the generating function of intersection numbers of the moduli space $\overline{\mathcal{M}}_{g,n}$. However, we will take these models in a different direction, namely as a quantum field theory. As a formal matrix model, the correlation functions of these models have a unique combinatorial/perturbative interpretation in the sense of Feynman diagrams. In particular, the additional structure (in comparison to ordinary quantum field theories) makes it possible to compute exact expressions, which are resummations of infinitely many Feynman diagrams. For the simplest topologies, these exact expressions (given by implicitly defined functions) will be presented and discussed. If time permits, higher topologies will be discussed via a connection to Topological Recursion.
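As a point of reference (not necessarily in the conventions of the talk), the Kontsevich model is the integral over Hermitian $N \times N$ matrices
\[
\mathcal{Z}(\Lambda) \;=\; \frac{\int \mathrm{d}M \, \exp\!\left( \tfrac{\mathrm{i}}{6}\operatorname{Tr} M^3 - \tfrac{1}{2}\operatorname{Tr}(\Lambda M^2) \right)}{\int \mathrm{d}M \, \exp\!\left( -\tfrac{1}{2}\operatorname{Tr}(\Lambda M^2) \right)},
\]
where the external matrix $\Lambda = \operatorname{diag}(\lambda_1,\dots,\lambda_N)$ encodes the non-trivial covariance: the Gaussian part yields a propagator proportional to $1/(\lambda_i+\lambda_j)$ rather than the constant propagator of a trivial covariance.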
In the event of food-borne disease outbreaks, conventional epidemiological approaches to identify the causative food product are time-intensive and often inconclusive. Data-driven tools could help to reduce the number of products under suspicion by efficiently generating food-source hypotheses. We frame the problem of generating hypotheses about the food-source as one of identifying the source network from a set of food supply networks (e.g. vegetables, eggs) that most likely gave rise to the illness outbreak distribution over consumers at the terminal stage of the supply network. We introduce an information-theoretic measure that quantifies the degree to which an outbreak distribution can be explained by a supply network’s structure and allows comparison across networks. The method leverages a previously-developed food-borne contamination diffusion model and probability distribution for the source location in the supply chain, quantifying the amount of information in the probability distribution produced by a particular network-outbreak combination. We illustrate the method using supply network models from Germany and demonstrate its application potential for outbreak investigations through simulated outbreak scenarios and a retrospective analysis of a real-world outbreak.
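To make the idea concrete, here is a minimal sketch (with hypothetical names and a deliberately simplified likelihood, not the authors' implementation): given, for each candidate supply network, a matrix of source-to-consumer contamination probabilities produced by the diffusion model, one can score networks by the information gain of the source posterior induced by the observed case counts.

    import numpy as np

    def source_information(L, cases):
        """Information gain (in bits) of the source posterior for one network.

        L     : (S, C) array; L[s, c] = probability that contamination at
                source s produces a case at consumer node c, as given by a
                food-borne contamination diffusion model (assumed input).
        cases : length-C array of observed case counts per consumer node.
        """
        S = L.shape[0]
        log_post = cases @ np.log(L.T + 1e-300)   # log-likelihood of each source
        log_post -= log_post.max()                # stabilise before exponentiating
        post = np.exp(log_post)
        post /= post.sum()                        # posterior over source locations
        H_prior = np.log2(S)                      # entropy of the uniform prior
        H_post = -np.sum(post * np.log2(post + 1e-300))
        return H_prior - H_post

    # The candidate network with the largest score best explains the outbreak:
    # best = max(networks, key=lambda net: source_information(net.L, cases))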
In this leisurely talk I will show how a sum of squares decomposition problem can be transformed into a problem of semi-definite optimization. Then the practicality of such reformulations will be discussed, illustrated by an explicit example of Artin's solution to Hilbert's 17th problem. Finally I will show how a numerical solution can be turned into a mathematically certified one, using the order structure on the cone of sums of squares.
The talk requires no prerequisite knowledge of either optimization or programming, only undergraduate mathematics.
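As a toy illustration of the reformulation (my example, not necessarily the one from the talk): to certify that $p(x) = x^4 + 2x^3 + 3x^2 + 2x + 1$ is a sum of squares, write $p(x) = z^\top Q z$ with $z = (1, x, x^2)^\top$ and $Q$ symmetric. Matching coefficients forces
\[
Q = \begin{pmatrix} 1 & 1 & c \\ 1 & 3 - 2c & 1 \\ c & 1 & 1 \end{pmatrix},
\]
and the semi-definite problem is to find $c$ with $Q \succeq 0$. The choice $c = 1$ gives $Q = vv^\top$ with $v = (1,1,1)^\top$, certifying $p(x) = (x^2 + x + 1)^2$.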
Please note unusual time.
In his very first note on noncommutative differential geometry, Connes
showed that the position and momentum operators on the line could be used to
construct constant curvature connections over an irrational noncommutative
2-torus $\mathcal{A}_\theta$. When $\theta$ is a quadratic irrationality,
this yields, in particular, constant curvature connections on non-trivial
noncommutative line bundles---is there an underlying monopole on some
non-trivial noncommutative principal $U(1)$-bundle? We use this case study
to illustrate how approaches to quantum principal bundles introduced by
Brzeziński–Majid and Đurđević, respectively, can be fruitfully synthesized
to reframe classical gauge theory on quantum principal bundles in terms of
synthesis of total spaces (as noncommutative manifolds) from vertical and
horizontal geometric data.
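In one standard normalization (conventions vary), Connes' construction runs as follows: the Schwartz space $\mathcal{S}(\mathbb{R})$ is a right $\mathcal{A}_\theta$-module via $(\xi \cdot U)(t) = \xi(t + \theta)$ and $(\xi \cdot V)(t) = e^{2\pi i t}\xi(t)$, and the position and momentum operators define the connection
\[
(\nabla_1 \xi)(t) = \frac{d\xi}{dt}(t), \qquad (\nabla_2 \xi)(t) = \frac{2\pi i t}{\theta}\,\xi(t),
\]
whose curvature $[\nabla_1, \nabla_2] = \frac{2\pi i}{\theta}\,\mathrm{id}$ is constant.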
The Jacquet-Langlands correspondence gives a relationship between automorphic representations on $GL_2$ and its twisted forms, which are the unit groups of quaternion algebras. Writing this out in more classical language gives a combinatorial way of producing the eigenvalues of Hecke operators acting on modular forms. In this talk, we will first go over notions of modular forms and quaternion algebras, and then dive into an explicit example by computing some eigenvalues of the lowest level quaternionic modular form of weight $2$ over $\mathbb{Q}$.
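Presumably the example in question is level $N = 11$, the smallest level carrying a weight-$2$ newform, namely $f = q\prod_{n\ge 1}(1-q^n)^2(1-q^{11n})^2$; on the quaternionic side one works with the quaternion algebra over $\mathbb{Q}$ ramified at $\{11, \infty\}$, and the eigenvalues of the Brandt matrices recover the Hecke eigenvalues $a_2 = -2$, $a_3 = -1$, $a_5 = 1$, $a_7 = -2$, and so on.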
Classical approaches to optimal portfolio selection problems are based
on probabilistic models for the asset returns or prices. However, by
now it is well observed that the performance of optimal portfolios is
highly sensitive to model misspecification. To account for various
types of model risk, robust and model-free approaches have gained more
and more importance in portfolio theory. Based on a rough path
foundation, we develop a model-free approach to stochastic portfolio
theory and Cover's universal portfolio. The use of rough path theory
allows treating significantly more general portfolios in a model-free
setting, compared to previous model-free approaches. Without the
assumption of any underlying probabilistic model, we present pathwise
Master formulae analogous to the classical ones in stochastic
portfolio theory, describing the growth of wealth processes generated
by pathwise portfolios relative to the wealth process of the market
portfolio, and we show that the appropriately scaled asymptotic growth
rate of Cover's universal portfolio is equal to the one of the best
retrospectively chosen portfolio. The talk is based on joint work with
Andrew Allan, Christa Cuchiero and Chong Liu.
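For orientation, the classical master formula of stochastic portfolio theory states that a portfolio $\pi$ generated by a function $G$ of the market weights $\mu$ satisfies
\[
\log\frac{V^\pi(T)}{V^\mu(T)} \;=\; \log\frac{G(\mu(T))}{G(\mu(0))} \;+\; \int_0^T \mathfrak{g}(t)\,\mathrm{d}t,
\]
where $V^\pi$, $V^\mu$ are the corresponding wealth processes and $\mathfrak{g}$ is a drift process determined by $G$; the pathwise versions in the talk replace the probabilistic ingredients by rough-path constructions.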
Consider the simple rank-one Lie groups $SO(n,1)$, $SU(n,1)$ and $Sp(n,1)$ ($n>1$). They are the isometry groups of real, complex and quaternionic hyperbolic spaces, respectively.
By a result of Kostant, the trivial representation of $Sp(n,1)$ is isolated in the space of irreducible unitary representations on Hilbert spaces. That is, $Sp(n,1)$ has Kazhdan's property (T), which is equivalent to the vanishing of the first cohomology of the group in all unitary representations. This is in contrast to the case of $SO(n,1)$ and $SU(n,1)$, which have the Haagerup approximation property, a strong negation of property (T).
This dichotomy between $SO(n,1)$, $SU(n,1)$ and $Sp(n,1)$ disappears when we consider so-called uniformly bounded representations on Hilbert spaces. By a result of Cowling from the 1980s, the trivial representation of $Sp(n,1)$ is no longer isolated in the space of uniformly bounded representations. Moreover, there is a uniformly bounded representation of $Sp(n,1)$ with non-zero first cohomology group.
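Recall that a representation $\pi$ of a group $G$ on a Hilbert space $H$ is uniformly bounded if $\sup_{g \in G} \|\pi(g)\|_{B(H)} < \infty$; unitary representations are exactly the case where this bound equals $1$, so uniformly bounded representations form a strictly larger class in which the rigidity above can fail.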
The goal of this talk is to describe these facts.
It is expected that complete noncompact Calabi-Yau manifolds are in some sense governed by their asymptotics at infinity. In the maximal volume growth case, the asymptotics at infinity are given by Calabi-Yau cones. We are interested in deformations of such metrics that fix the asymptotic cones at infinity. In the asymptotically conical case, Conlon-Hein proved uniqueness under such deformations. Their method is based on the corresponding linearized problem, namely the study of subquadratic harmonic functions. We generalize their work to the maximal volume growth case, allowing the tangent cones at infinity to have non-isolated singularities. Part of the talk is based on work in progress joint with Gabor Szekelyhidi.
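Here maximal volume growth means that $\operatorname{Vol}(B_r(p)) \geq c\, r^{2n}$ for some $c > 0$ and all $r > 0$, where $2n$ is the real dimension; by Bishop-Gromov volume comparison this is the fastest growth possible for a Ricci-flat metric.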
The success of operator splitting techniques for convex optimization has led to an explosion of methods for solving large-scale and nonconvex optimization problems via convex relaxation.
This success has come at the cost of overlooking direct approaches to operator splitting that embrace some of the more inconvenient aspects of many model problems, namely nonconvexity, nonsmoothness and infeasibility. I will introduce some of the tools we have developed for handling these issues, and present sketches of the basic results we can obtain.
The formalism is in general metric spaces, but most applications have their basis in Euclidean spaces. Along the way I will try to point out connections to other areas of intense interest, such as optimal mass transport.
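As a minimal illustration of the direct viewpoint (a generic sketch, not the methods developed in the talk), the method of alternating projections makes sense verbatim for nonconvex sets, where only local convergence can be expected and infeasibility drives the iterates toward nearest pairs of points:

    import numpy as np

    def alternating_projections(x0, proj_A, proj_B, tol=1e-10, max_iter=1000):
        # Alternately project onto sets A and B; for nonconvex sets this
        # converges only locally, and if A and B do not intersect the
        # iterates approach a pair of mutually nearest points instead.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            x_new = proj_B(proj_A(x))
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x

    # Example: the unit circle (nonconvex) and the line x2 = 0.5 in R^2.
    proj_circle = lambda x: x / np.linalg.norm(x)    # nearest point on the circle
    proj_line = lambda x: np.array([x[0], 0.5])      # nearest point on the line
    print(alternating_projections([2.0, 2.0], proj_circle, proj_line))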
NOTE UNUSUAL TIME: 1pm
In this talk I will discuss an algorithm to piecewise dualise linear quivers into their mirror duals. This applies to the 3d N=4 version of mirror symmetry as well as its recently introduced 4d counterpart, which I will review. The algorithm uses two basic duality moves, which mimic the local S-duality of the 5-branes in the brane set-up of the 3d theories, and the properties of the S-wall. The S-wall is known to correspond to the N=4 T[SU(N)] theory in 3d and I will argue that its 4d avatar corresponds to an N=1 theory called E[USp(2N)], which flows to T[SU(N)] in a suitable 3d limit. All the basic duality moves and S-wall properties needed in the algorithm are derived in terms of some more fundamental Seiberg-like duality, which is the Intriligator-Pouliot duality in 4d and the Aharony duality in 3d.
This seminar will only be in person.
Superconformal field theories (SCFTs) of Argyres-Douglas type are inherently strongly coupled and provide a window onto remarkable non-perturbative phenomena (such as mutually non-local massless dyons and relevant Coulomb branch operators of fractional dimension). I am going to discuss the first explicit proposal for the holographic duals of a class of SCFTs of Argyres-Douglas type. The theories under examination are realised by a stack of M5-branes wrapped on a sphere with one irregular puncture and one regular puncture. In the dual 11d supergravity solutions, the irregular puncture is realised as an internal M5-brane source.
A collection of bite-size 10-15 minute talks from current DPhil students in the Algebra group. The talks will be accessible to masters students and above.
There will be plenty of opportunity to chat to current students about what doing a PhD in algebra and representation theory is like!
What do we use metrics on persistence modules for? Is it only to ensure stability of some constructions?
In my talk I will describe why I care about such metrics, show how to construct a rich space of them and illustrate how to use
them for analysis.
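One standard example of such a metric is the interleaving distance: for persistence modules $M, N \colon (\mathbb{R}, \leq) \to \mathrm{Vect}$,
\[
d_I(M, N) = \inf\{\, \varepsilon \geq 0 \;:\; M \text{ and } N \text{ are } \varepsilon\text{-interleaved} \,\},
\]
where an $\varepsilon$-interleaving is a pair of natural transformations $M \to N(\varepsilon)$ and $N \to M(\varepsilon)$ whose composites are the $2\varepsilon$-shift structure maps; this is the metric underlying the standard stability theorems alluded to above.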
The injection of CO2 into porous subsurface reservoirs is a technological means for removing anthropogenic emissions, which relies on a series of complex porous flow properties. During injection of CO2, small-scale heterogeneities, often in the form of sedimentary layering, can play a significant role in focusing the flow of less viscous CO2 into high-permeability pathways, with large-scale implications for the overall motion of the CO2 plume. In these settings, capillary forces between the CO2 and water preferentially rearrange CO2 into the most permeable layers (with larger pore space), and may accelerate plume migration by as much as 200%. Numerous factors affect overall plume acceleration, including the structure of the layering, the permeability contrast between layers, and the interplay between the capillary, gravitational and viscous forces that act upon the flow. However, despite the sensitivity of the flow to these heterogeneities, it is difficult to acquire detailed field measurements of the heterogeneities owing to the vast range of scales involved, presenting an outstanding challenge. As a first step towards tackling this uncertainty, we use a simple modelling approach, based on an upscaled thin-film equation, to create ensemble forecasts for many different types and arrangements of sedimentary layers. In this way, a suite of predictions can be made to elucidate the most likely scenarios for injection and the uncertainty associated with such predictions.
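Schematically (in a simplified, homogeneous setting, before any upscaling of the layering), such models track the local thickness $h(x,t)$ of the buoyant CO2 current through a thin-film equation of gravity-current type,
\[
\phi\,\frac{\partial h}{\partial t} \;=\; \nabla \cdot \left( \frac{\Delta\rho\, g\, k}{\mu}\, h\, \nabla h \right),
\]
with porosity $\phi$, permeability $k$, viscosity $\mu$ and buoyancy $\Delta\rho\, g$; an upscaled version can then replace the mobility $k h$ by effective flux functions encoding the capillary rearrangement of CO2 between layers.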
This presentation will focus on the role of mathematical modelling and predictive toxicology in the safety assessment of chemicals and consumer products. The starting point will be regulatory assessment of chemicals based on their potential for harming human health or the environment. This will set the scene for describing current practices in the development and application of mathematical and computational models. A wide variety of methodological approaches are employed, ranging from relatively simple statistical models to more advanced machine learning approaches. The modelling context also ranges from discovering the underlying mechanisms of chemical toxicity to the safe and sustainable design of chemical products. The main modelling approaches will be reviewed, along with the challenges and opportunities associated with their use. The presentation will conclude by identifying current research needs, including progress towards a Unified Theory of Chemical Toxicology.
Following on from Christoph's talk last week, I will present a version of the supercooled Stefan problem with noise. I will start by discussing the physical intuition and then give a probabilistic representation of solutions. From there, I will identify a simple relationship between the initial heat profile and a single parameter for how the liquid solidifies, which, if violated, forces the temperature to develop a discontinuity in finite time with positive probability. On the other hand, when the relationship is satisfied, the temperature remains globally continuous with probability one. The work is part of a new preprint that should soon be available on arXiv.
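In the notation common to this line of work, the probabilistic representation couples a Brownian particle to the freezing front: one seeks a pair $(X, \Lambda)$ with
\[
X_t = X_0 + B_t - \alpha \Lambda_t, \qquad \tau = \inf\{t \geq 0 : X_t \leq 0\}, \qquad \Lambda_t = \mathbb{P}(\tau \leq t),
\]
where $\alpha > 0$ is the solidification parameter mentioned above; jumps of $t \mapsto \Lambda_t$ correspond to the temperature developing a discontinuity.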
Junior Strings is a seminar series where DPhil students present topics of common interest that do not necessarily overlap with their own research areas. It is primarily aimed at PhD students and post-docs, but everyone is welcome.
The past few years have been an exciting time for my work related to rational approximation. This talk will present four developments:
1. AAA approximation (2016, with Nakatsukasa & Sète; a minimal sketch follows the list below)
2. Root-exponential convergence and tapered exponential clustering (2020, with Nakatsukasa & Weideman)
3. Lightning (2017-2020, with Gopal & Brubeck)
4. Log-lightning (2020-21, with Nakatsukasa & Baddoo)
Two other topics will not be discussed:
X. AAA-Lawson approximation (2018, with Nakatsukasa)
Y. AAA-LS approximation (2021, with Costa)
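Here is a minimal sketch of the core AAA loop (greedy selection of support points plus an SVD for the barycentric weights), in the spirit of the 2018 SIAM Review paper but stripped of all refinements such as cleanup of spurious poles:

    import numpy as np

    def aaa(Z, F, tol=1e-13, mmax=100):
        # Barycentric rational approximation of the data F on the sample set Z.
        Z = np.asarray(Z, dtype=complex)
        F = np.asarray(F, dtype=complex)
        mask = np.ones(len(Z), dtype=bool)    # True = not yet a support point
        zs, fs = [], []
        R = np.full(len(Z), F.mean())         # current approximant on Z
        for _ in range(mmax):
            j = int(np.argmax(np.abs(F - R)))           # greedy: worst residual
            zs.append(Z[j]); fs.append(F[j]); mask[j] = False
            zj, fj = np.array(zs), np.array(fs)
            C = 1.0 / (Z[mask, None] - zj[None, :])     # Cauchy matrix
            A = (F[mask, None] - fj[None, :]) * C       # Loewner matrix
            w = np.linalg.svd(A)[2][-1].conj()          # minimal singular vector
            R = F.copy()
            R[mask] = (C @ (w * fj)) / (C @ w)          # barycentric evaluation
            if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
                break
        return zj, fj, w   # r(z) = sum(w*fj/(z - zj)) / sum(w/(z - zj))

    # Example: approximate exp on [-1, 1] from 1000 samples.
    Z = np.linspace(-1, 1, 1000)
    zj, fj, w = aaa(Z, np.exp(Z))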
The organized movement of intracellular material is part of the functioning of cells and the development of organisms. These flows can arise from the action of molecular machines on the flexible, and often transitory, scaffoldings of the cell. Understanding phenomena in this realm has necessitated the development of new simulation tools, and of new coarse-grained mathematical models to analyze and simulate. In that context, I'll discuss how a symmetry-breaking "swirling" instability of a motor-laden cytoskeleton may be an important part of the development of an oocyte, modeling active material in the spindle, and what models of active, immersed polymers tell us about chromatin dynamics in the nucleus.
(This is Part II of a two-part talk.)
Forcing axioms spell out the dictum that if a statement can be forced, then it is already true. The $P_{\max}$ axiom (*) goes beyond that by claiming that if a statement is consistent, then it is already true. Here, the statement in question needs to come from a restricted class of statements, and "consistent" needs to mean "consistent in a strong sense". It turns out that (*) is actually equivalent to a forcing axiom, and the proof is by showing that the (strong) consistency of certain theories gives rise to a corresponding notion of forcing producing a model of that theory. Our result builds upon earlier work of R. Jensen and (ultimately) Keisler's "consistency properties".
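For reference, Woodin's axiom (*) states: AD holds in $L(\mathbb{R})$, and $L(\mathcal{P}(\omega_1))$ is a $P_{\max}$-generic extension of $L(\mathbb{R})$.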