Unital associahedra and homotopy unital homotopy associative algebras
Abstract
The classical associahedra are cell complexes, in fact polytopes,
introduced by Stasheff to parametrize the multivariate operations
naturally occurring on loop spaces of connected spaces.
They form a topological operad $ Ass_\infty $ (which provides a resolution
of the operad $ Ass $ governing spaces-with-associative-multiplication)
and the complexes of cellular chains on the associahedra form a dg
operad governing $A_\infty$-algebras (that is, a resolution of the
operad governing associative algebras).
In classical applications it was not necessary to consider units for
multiplication, or it was assumed units were strict. The introduction
of non-strict units into the picture was considerably harder:
Fukaya-Oh-Ohta-Ono introduced homotopy units for $A_\infty$-algebras in
their work on Lagrangian intersection Floer theory, and equivalent
descriptions of the dg operad for homotopy unital $A_\infty$-algebras
have since been given, for example, by Lyubashenko and by Hirsh-Millès.
In this talk we present the "missing link": a cellular topological
operad $uAss_\infty$ of "unital associahedra", providing a resolution
for the operad governing topological monoids, such that the cellular
chains on $uAss_\infty$ form precisely the dg operad of
Fukaya-Oh-Ohta-Ono.
(Joint work with Fernando Muro, arXiv:1110.1959, to appear in Forum Math.)
How does a uniformly sampled Markov chain behave?
Abstract
This is joint work with P. Caputo and D. Chafai. In this talk, we
will consider various probability distributions on the set of stochastic
matrices with n states and on the set of Laplacian/Kirchhoff
matrices on n states. They will arise naturally from the conductance model on
n states with i.i.d. conductances. With the help of random matrix
theory, we will study the spectrum of these processes.
The projections of fractal percolation (joint work with Michal Rams, IMPAN Warsaw)
Abstract
To study turbulence, B. Mandelbrot introduced a random fractal which is now
called Mandelbrot percolation or fractal percolation. The construction is as follows:
given an integer $M \ge 2$ and a probability $0 < p < 1$
Three-sphere partition function, counterterms and supergravity
Abstract
The partition function of 3d N=2 superconformal theories on the
3-sphere can be computed exactly by localization methods. I will explain
some subtleties associated with that important result. As a by-product, this
analysis establishes the so-called F-maximization principle for N=2 SCFTs in
3d: the exact superconformal R-charge maximizes the 3-sphere free energy
F=-log Z.
Cactus products and Outer space with generalised boundaries
Abstract
A cactus product is much like a wedge product of pointed spaces, but instead of being uniquely defined there is a moduli space of possible cactus products. I will discuss how this space can be interpreted geometrically and how its combinatorics calculates the homology of the automorphism group of a free product with no free group factors. Then I will reinterpret the moduli space with Outer space in mind: the lobes of the cacti now behave like boundaries and our free products can now include free group factors.
16:30
Mathematics of Phase Transitions: From PDEs to many-particle systems and back?
Abstract
What is a phase transition?
The first thing that comes to mind is the boiling and freezing of water. The material clearly changes its behaviour without any chemical reaction. One way to arrive at a mathematical model is to associate different material behaviour, i.e., constitutive laws, to different phases. This is a continuum physics viewpoint, and when a law for the switching between phases is specified, we arrive at PDE problems. The oldest paper on such a problem, by Clapeyron and Lamé, is nearly 200 years old; it is basically on what was later called the Stefan problem for the heat equation.
The law for switching is given e.g. by the melting temperature. This can be taken to be a phenomenological law or thermodynamically justified as an equilibrium condition.
The theory does not explain delayed switching (undercooling) and it does not give insight in structural differences between the phases.
To some extent the first can be explained with the help of a free energy associated with the interface between different phases. This was proposed by Gibbs, is relevant on small space scales, and leads to mean curvature equations for the interface, the so-called Gibbs-Thomson condition.
The equations do not by themselves lead to a unique evolution. Indeed, to close the resulting PDEs with a reasonable switching or nucleation law is an open problem.
Based on atomistic concepts, making use of surface energy in a purely phenomenological way, Becker and Döring developed a model for nucleation as a kinetic theory for size distributions of nuclei. The internal structure of each phase is still not considered in this ansatz.
An easier problem concerns solid-solid phase transitions. The theory is furthest developed in the context of equilibrium statistical mechanics on lattices, starting with the Ising model for ferromagnets. In this context phases correspond to (extremal) equilibrium Gibbs measures in infinite volume. Interfacial free energy appears as a finite volume correction to free energy.
The drawback is that the theory is still basically equilibrium and isothermal. There is no satisfactory theory of metastable states and of local kinetic energy in this framework.
14:15
Best Gain Loss Ratio in Continuous Time
Abstract
The use of gain-loss ratio as a measure of attractiveness was
introduced by Bernardo and Ledoit. In their well-known paper, they
show that gain-loss ratio restrictions have a dual representation in
terms of restricted pricing kernels.
In spite of its clear financial significance, gain-loss ratio has
been largely ignored in the mathematical finance literature, with few
exceptions (Cherny and Madan, Pinar). The main reason is its intrinsic
lack of good mathematical properties. This paper aims to be a
rigorous study of gain-loss ratio and its dual representations
in a continuous-time market setting, placing it in the context of
risk measures and acceptability indexes. We also point out (and
correctly reformulate) an erroneous statement made by Bernardo and
Ledoit in their main result. This is joint work with M. Pinar.
Dynamic regulatory networks govern T-cell proliferation and differentiation
Abstract
*Please note that this is a joint seminar with the William Dunn School of Pathology and will take place in EPA Seminar Room which is located inside the Sir William Dunn School of Pathology and must be entered from the main entrance on South Parks Road. Link http://g.co/maps/8cbbx
"Pattern of Life" and traffic
Abstract
'Pattern-of-life' is a current buzz-word in sensor systems. One aspect to this is the automatic estimation of traffic flow patterns, perhaps where existing road maps are not available. For example, a sensor might measure the position of a number of vehicles in 2D, with a finite time interval between each observation of the scene. It is desired to estimate the time-average spatial density, current density, sources and sinks etc. Are there practical methods to do this without tracking individual vehicles, given that there may also be false 'clutter' detections, the density of vehicles may be high, and each vehicle may not be detected in every timestep? And what if the traffic flow has periodicity, e.g. variations on the timescale of a day?
Imaginaries in valued fields with analytic structure
Abstract
I will give an overview of the description of imaginaries in algebraically closed (and some other) valued fields, and then discuss the related issue for valued fields with analytic structure (in the sense of Lipshitz-Robinson, and Denef – van Den Dries). In particular, I will describe joint work with Haskell and Hrushovski showing that in characteristic 0, elimination of imaginaries in the `geometric sorts’ of ACVF no longer holds if restricted exponentiation is definable.
Explicit rational points on elliptic curves
Abstract
I will discuss an efficient algorithm for computing certain special values of p-adic L-functions, giving an application to the explicit construction of
rational points on elliptic curves.
Breakup of Spiralling Liquid Jets
Abstract
The industrial prilling process is among the most favoured techniques for generating monodisperse droplets. In this process, long curved jets are generated from a rotating drum; these in turn break up and form droplets. In this talk we describe the experimental set-up and the theory used to model this process. We will consider the effects of changing the rheology of the fluid as well as the addition of surface agents to modify breakup characteristics. Both temporal and spatial instability will be considered, as well as nonlinear numerical simulations with comparisons to experiments.
Two-Grid hp-Adaptive Discontinuous Galerkin Finite Element Methods for Second-Order Quasilinear Elliptic PDEs
Abstract
In this talk we present an overview of some recent developments concerning the a posteriori error analysis and adaptive mesh design of $h$- and $hp$-version discontinuous Galerkin finite element methods for the numerical approximation of second-order quasilinear elliptic boundary value problems. In particular, we consider the derivation of computable bounds on the error measured in terms of an appropriate (mesh-dependent) energy norm in the case when a two-grid approximation is employed. In this setting, the fully nonlinear problem is first computed on a coarse finite element space $V_{H,P}$. The resulting 'coarse' numerical solution is then exploited to provide the necessary data needed to linearise the underlying discretization on the finer space $V_{h,p}$; thereby, only a linear system of equations is solved on the richer space $V_{h,p}$. Here, an adaptive $hp$-refinement algorithm is proposed which automatically selects the local mesh size and local polynomial degrees on both the coarse and fine spaces $V_{H,P}$ and $V_{h,p}$, respectively. Numerical experiments confirming the reliability and efficiency of the proposed mesh refinement algorithm are presented.
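The coarse-solve-then-linearise structure described above can be sketched in a toy setting. The following is a hedged illustration only: it uses a 1D finite difference discretisation of the quasilinear model problem $-u'' + u^3 = f$ in place of the talk's $hp$-version discontinuous Galerkin spaces, but shows the same two-grid idea: the full nonlinear problem is solved only on the coarse grid, and a single linear system is solved on the fine grid.

```python
import numpy as np

def laplacian(n):
    """FD matrix for -u'' on n interior points of (0,1), h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return A, np.linspace(h, 1 - h, n)

def solve_nonlinear(n, f, iters=20):
    """Newton's method for the discrete quasilinear problem A u + u^3 = f."""
    A, x = laplacian(n)
    u = np.zeros(n)
    for _ in range(iters):
        F = A @ u + u**3 - f(x)
        J = A + np.diag(3 * u**2)   # Jacobian of the nonlinear residual
        u -= np.linalg.solve(J, F)
    return u, x

# Manufactured solution u = sin(pi x), so f = pi^2 sin(pi x) + sin^3(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**3
exact = lambda x: np.sin(np.pi * x)

# Step 1: full nonlinear solve on the coarse grid only.
uH, xH = solve_nonlinear(15, f)

# Step 2: one *linear* solve on the fine grid, linearised about the
# interpolated coarse solution: u^3 ~ uH^3 + 3 uH^2 (u - uH) gives
# (A + 3 diag(uH^2)) u = f + 2 uH^3.
Ah, xh = laplacian(127)
uH_interp = np.interp(xh, xH, uH)
uh = np.linalg.solve(Ah + np.diag(3 * uH_interp**2),
                     f(xh) + 2 * uH_interp**3)

err_coarse = np.max(np.abs(uH_interp - exact(xh)))
err_twogrid = np.max(np.abs(uh - exact(xh)))
print(err_coarse, err_twogrid)  # the two-grid error is markedly smaller
```

The point of the design, as in the talk, is that the expensive nonlinear iteration lives entirely on the cheap coarse space, while the fine space sees only one linear solve.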
Applications of non-linear analysis to geometry
Abstract
I will claim (and maybe show) that a lot of problems in differential geometry can be reformulated in terms of non-linear elliptic differential operators. After reviewing the theory of linear elliptic operators, I will show what can be said about the non-linear setting.
13:00
Finite element approximation of second order linear elliptic equations in divergence form with right-hand side in L<sup>1</sup>
Abstract
In this lecture I will report on joint work with J. Casado-Díaz, T. Chacón Rebollo, V. Girault and M. Gómez Marmol which was published in Numerische Mathematik, vol. 105, (2007), pp. 337-510.
We consider, in dimension $d\ge 2$, the standard $P^1$ finite element approximation of the second order linear elliptic equation in divergence form with coefficients in $L^\infty(\Omega)$ which generalizes Laplace's equation. We assume that the family of triangulations is regular and that it satisfies a hypothesis close to the classical hypothesis which implies the discrete maximum principle. When the right-hand side belongs to $L^1(\Omega)$, we prove that the unique solution of the discrete problem converges in $W^{1,q}_0(\Omega)$ (for every $q$ with $1 \leq q < \frac{d}{d-1}$) to the unique renormalized solution of the problem. We obtain a weaker result when the right-hand side is a bounded Radon measure. In the case where the dimension is $d=2$ or $d=3$ and where the coefficients are smooth, we give an error estimate in $W^{1,q}_0(\Omega)$ when the right-hand side belongs to $L^r(\Omega)$ for some $r > 1$.
Solution of Hyperbolic Systems of Equations on Sixty-Five Thousand Processors... In Python!
Abstract
As Herb Sutter predicted in 2005, "The Free Lunch is Over": software programmers can no longer rely on exponential performance improvements from Moore's Law. Computationally intensive software now relies on concurrency for improved performance: at the high end, supercomputers are being built with millions of processing cores, while at the low end, GPU-accelerated workstations feature hundreds of simultaneous execution cores. It is clear that the numerical software of the future will be highly parallel, but what language will it be written in?
Over the past few decades, high-level scientific programming languages have become an important platform for numerical codes. Languages such as MATLAB, IDL, and R, offer powerful advantages: they allow code to be written in a language more familiar to scientists and they permit development to occur in an evolutionary fashion, bypassing the relatively slow edit/compile/run/plot cycle of Fortran or C. Because a scientist’s programming time is typically much more valuable than the computing cycles their code will use, these are substantial benefits. However, programs written in such languages are not portable to high performance computing platforms and may be too slow to be useful for realistic problems on desktop machines. Additionally, the development of such interpreted language codes is partially wasteful in the sense that it typically involves reimplementation (with associated debugging) of some algorithms that already exist in well-tested Fortran and C codes. Python stands out as the only high-level language with both the capability to run on parallel supercomputers and the flexibility to interface with existing libraries in C and Fortran.
Our code, PyClaw, began as a Python interface, written by University of Washington graduate student Kyle Mandli, to the Fortran library Clawpack, written by University of Washington Professor Randy LeVeque. PyClaw was designed to build on the strengths of Clawpack by providing greater accessibility. In this talk I will describe the design and implementation of PyClaw, which incorporates the advantages of a high-level language, yet achieves serial performance similar to a hand-coded Fortran implementation and runs on the world's fastest supercomputers. It brings new numerical functionality to Clawpack, while making maximal reuse of code from that package. The goal of this talk is to introduce the design principles we considered in implementing PyClaw, demonstrate our testing infrastructure for developing within PyClaw, and illustrate how we elegantly and efficiently distributed problems over tens of thousands of cores using the PETSc library for portable parallel performance. I will also briefly highlight a new mathematical result recently obtained from PyClaw, an investigation of solitary wave formation in periodic media in 2 dimensions.
17:00
"Tits alternatives for graph products of groups".
Abstract
Graph products of groups naturally generalize direct and free products and have a rich subgroup structure. Basic examples of graph products are right angled Coxeter and Artin groups. I will discuss various forms of Tits Alternative for subgroups and
their stability under graph products. The talk will be based on a joint work with Yago Antolin Pichel.
Generalized Buckley-Leverett System
Abstract
We show the solvability of a proposed generalized Buckley-Leverett system, which is related to the multidimensional Muskat problem. Moreover, we discuss some important questions concerning singular limits of the proposed model.
Local symplectic field theory and stable hypersurfaces in symplectic blow-ups
Abstract
Symplectic field theory (SFT) can be viewed as a TQFT approach to Gromov-Witten theory. As in Gromov-Witten theory, transversality for the Cauchy-Riemann operator is not satisfied in general, due to the presence of multiply-covered curves. When the underlying simple curve is sufficiently nice, I will outline how the transversality problem for its multiple covers can be elegantly solved using finite-dimensional obstruction bundles of constant rank. By fixing the underlying holomorphic curve, we furthermore define a local version of SFT by counting only multiple covers of this chosen curve. After introducing gravitational descendants, we use this new version of SFT to prove that a stable hypersurface intersecting an exceptional sphere (in a homologically nontrivial way) in a closed four-dimensional symplectic manifold must carry an elliptic orbit. Here we use that the local Gromov-Witten potential of the exceptional sphere factors through the local SFT invariants of the breaking orbits appearing after neck-stretching along the hypersurface.
On packing and covering in hypergraphs
Abstract
We discuss some recent developments on the following long-standing problem known as Ryser's
conjecture. Let $H$ be an $r$-partite $r$-uniform hypergraph. A matching in $H$ is a set of disjoint
edges, and we denote by $\nu(H)$ the maximum size of a matching in $H$. A cover of $H$ is a set of
vertices that intersects every edge of $H$. It is clear that there exists a cover of $H$ of size at
most $r\nu(H)$, but it is conjectured that there is always a cover of size at most $(r-1)\nu(H)$.
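The quantities in the conjecture can be checked by brute force on small instances. Below is a toy illustration (the hypergraph and all names are chosen for this example, not taken from the talk): a 3-partite 3-uniform hypergraph on which the matching number $\nu$, the cover number $\tau$, and the conjectured bound $\tau \le (r-1)\nu$ are computed directly.

```python
from itertools import chain, combinations

# A small 3-partite 3-uniform hypergraph: each edge takes one vertex
# from each of the parts A, B, C.
edges = [
    (("A", 0), ("B", 0), ("C", 0)),
    (("A", 0), ("B", 1), ("C", 1)),
    (("A", 1), ("B", 0), ("C", 1)),
    (("A", 1), ("B", 1), ("C", 0)),
]
vertices = sorted(set(chain.from_iterable(edges)))

def nu(edges):
    """nu(H): maximum number of pairwise disjoint edges (a matching)."""
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            used = list(chain.from_iterable(sub))
            if len(used) == len(set(used)):   # no vertex repeated => disjoint
                return k
    return 0

def tau(edges, vertices):
    """tau(H): minimum size of a vertex set meeting every edge (a cover)."""
    for k in range(len(vertices) + 1):
        for sub in combinations(vertices, k):
            if all(set(sub) & set(e) for e in edges):
                return k

r = 3
print(nu(edges), tau(edges, vertices))               # 1 2
assert tau(edges, vertices) <= (r - 1) * nu(edges)   # Ryser bound holds here
```

Here every pair of edges shares a vertex, so $\nu = 1$, while the two vertices of part A cover all four edges, so $\tau = 2 = (r-1)\nu$, meeting the conjectured bound with equality.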
12:00
Peeling of the Weyl tensor and gravitational radiation in higher dimensions.
Abstract
In this talk, I will discuss the peeling behaviour of the Weyl tensor near null infinity for asymptotically flat higher dimensional spacetimes. The result is qualitatively different from the peeling property in 4d. Also, I will discuss the rewriting of the Bondi energy flux in terms of "Newman-Penrose" Weyl components.
Mean Curvature Flow from Cones
Abstract
This talk will consist of a pure PDE part, and an applied part. The unifying topic is mean curvature flow (MCF), and particularly mean curvature flow starting at cones. This latter subject originates from the abstract consideration of uniqueness questions for flows in the presence of singularities. Recently, this theory has found applications in several quite different areas, and I will explain the connections with Harnack estimates (which I will explain from scratch) and also with the study of the dynamics of charged fluid droplets.
There are essentially no prerequisites. It would help to be familiar with basic submanifold geometry (e.g. second fundamental form) and intuition concerning the heat equation, but I will try to explain everything and give the talk at colloquium level.
Joint work with Sebastian Helmensdorfer.
Counting Rational Points on Algebraic Varieties, the Determinant Method and Applications
Infinity categories and infinity operads
Abstract
I will discuss some aspects of the simplicial theory of
infinity-categories which originates with Boardman and Vogt, and has
recently been developed by Joyal, Lurie and others. The main purpose of
the talk will be to present an extension of this theory which covers
infinity-operads. It is based on a modification of the notion of
simplicial set, called 'dendroidal set'. One of the main results is that
the category of dendroidal sets carries a monoidal Quillen model
structure, in which the fibrant objects are precisely the infinity
operads, and which contains the Joyal model structure for
infinity-categories as a full subcategory.
(The lecture will be mainly based on joint work with Denis-Charles
Cisinski.)
Optimal transport, concentration of measure and functional inequalities.
Abstract
This talk is devoted to Talagrand's transport-entropy inequality and its deep connections to the concentration of measure phenomenon, large deviation theory and logarithmic Sobolev inequalities. After an introductory part on the field, I will present recent results obtained with P-M Samson and C. Roberto establishing the equivalence of Talagrand's inequality to a restricted version of the Log-Sobolev inequality. If time permits, I will also present some works in progress about transport inequalities in a discrete setting.
Long-time behaviour of stochastic delay equations
Abstract
First we provide a survey on the long-time behaviour of stochastic delay equations with bounded memory, addressing existence and uniqueness of invariant measures, Lyapunov spectra, and exponential growth rates.
Then, we study the very simple one-dimensional equation $dX(t)=X(t-1)dW(t)$ in more detail and establish the existence of a deterministic exponential growth rate of a suitable norm of the solution via a Furstenberg-Hasminskii-type formula.
Parts of the talk are based on joint work with Martin Hairer and Jonathan Mattingly.
14:15
Solutions of the Strominger System via stable bundles on Calabi-Yau threefolds.
Holographic stripes and helical superconductors
Abstract
The AdS/CFT correspondence is a powerful tool to analyse strongly coupled quantum field
theories. Over the past few years there has been a surge of activity aimed at finding
possible applications to condensed matter systems. One focus has been to holographically
realise various kinds of phases via the construction of fascinating new classes of black
hole solutions. In this framework, I will discuss the possibility of describing finite
temperature phase transitions leading to spontaneous breaking of translational invariance of
the dual field theory at strong coupling. Along with the general setup I will also discuss
specific string/M theory embeddings of the corresponding symmetry breaking modes leading to
the description of such phases.
Ocean forcing of ice sheet change in West Antarctica
Abstract
The part of the West Antarctic Ice Sheet that drains into the Amundsen Sea is currently thinning at such a rate that it contributes nearly 10 percent of the observed rise in global mean sea level. Acceleration of the outlet glaciers means that the sea level contribution has grown over the past decades, while the likely future contribution remains a key unknown. The synchronous response of several independent glaciers, coupled with the observation that thinning is most rapid at their downstream ends, where the ice goes afloat, hints at an oceanic driver. The general assumption is that the changes are a response to an increase in submarine melting of the floating ice shelves that has been driven in turn by an increase in the transport of ocean heat towards the ice sheet. Understanding the causes of these changes and their relationship with climate variability is imperative if we are to make quantitative estimates of sea level into the future.
Observations made since the mid‐1990s on the Amundsen Sea continental shelf have revealed that the seabed troughs carved by previous glacial advances guide seawater around 3‐4°C above the freezing point from the deep ocean to the ice sheet margin, fuelling rapid melting of the floating ice. This talk summarises the results of several pieces of work that investigate the chain of processes linking large‐scale atmospheric processes with ocean circulation over the continental shelf and beneath the floating ice shelves and the eventual transfer of heat to the ice. While our understanding of the processes is far from complete, the pieces of the jigsaw that have been put into place give us insight into the potential causes of variability in ice shelf melting, and allow us to at least formulate some key questions that still need to be answered in order to make reliable projections of future ice sheet evolution in West Antarctica.
14:15
Comparison between the Mean Variance Optimal and the Mean Quadratic Variation Optimal Trading Strategies
Abstract
Algorithmic trade execution has become a standard technique
for institutional market players in recent years,
particularly in the equity market where electronic
trading is most prevalent. A trade execution algorithm
typically seeks to execute a trade decision optimally
upon receiving inputs from a human trader.
A common form of optimality criterion seeks to
strike a balance between minimizing price impact and
minimizing timing risk. For example, in the case of
selling a large number of shares, a fast liquidation will
cause the share price to drop, whereas a slow liquidation
will expose the seller to timing risk due to the
stochastic nature of the share price.
We compare optimal liquidation policies in continuous time in
the presence of trading impact using numerical solutions of
Hamilton-Jacobi-Bellman (HJB) partial differential equations
(PDE). In particular, we compare the time-consistent
mean-quadratic-variation strategy (Almgren and Chriss) with the
time-inconsistent (pre-commitment) mean-variance strategy.
The Almgren and Chriss strategy should be viewed as the
industry standard.
We show that the two different risk measures lead to very different
strategies and liquidation profiles.
In terms of the mean variance efficient frontier, the
original Almgren/Chriss strategy is significantly sub-optimal
compared to the (pre-commitment) mean-variance strategy.
This is joint work with Stephen Tse, Heath Windcliff and
Shannon Kennedy.
Homotopy Type Theory
Abstract
In recent years, surprising connections between type theory and homotopy theory have been discovered. In this talk I will recall the notions of intensional type theories and identity types. I will describe "infinity groupoids", formal algebraic models of topological spaces, and explain how identity types carry the structure of an infinity groupoid. I will finish by discussing categorical semantics of intensional type theories.
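The identity types mentioned above can be sketched concretely. The following Lean 4 fragment is an illustration only (the names `Id'`, `J` and `symm` are chosen here, not taken from the talk): the identity type is an inductive family with a single reflexivity constructor, its eliminator is path induction, and the first piece of groupoid structure (inversion of paths) falls out of it.

```lean
-- The identity type as an inductive family with one constructor.
inductive Id' {α : Type} : α → α → Type where
  | refl (a : α) : Id' a a

-- Path induction (the J eliminator): to prove C for every path out of a,
-- it suffices to prove it for the reflexivity path.
def J {α : Type} {a : α} (C : (b : α) → Id' a b → Type)
    (c : C a (.refl a)) : (b : α) → (p : Id' a b) → C b p
  | _, .refl _ => c

-- Symmetry of paths, derived from J: a first hint of the infinity-groupoid
-- structure carried by identity types (inverses of 1-morphisms).
def symm {α : Type} {a b : α} (p : Id' a b) : Id' b a :=
  J (fun b _ => Id' b a) (.refl a) b p
```

Composition of paths and the (higher) coherence laws are derived from `J` in the same style, which is exactly the sense in which identity types model infinity groupoids.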
The talk will take place in Lecture Theatre B, at the Department of Computer Science.
computer imaging (producing accurate measurements of an object in front of a camera)
Abstract
Problem #1: (marker-less scaling) Poikos ltd. has created algorithms for matching photographs of humans to three-dimensional body scans. Due to variability in camera lenses and body sizes, the resulting three-dimensional data is normalised to have unit height and has no absolute scale. The problem is to assign an absolute scale to normalised three-dimensional data.
Prior Knowledge: A database of similar (but different) reference objects with known scales. An imperfect 1:1 mapping from the input coordinates to the coordinates of each object within the reference database. A projection matrix mapping the three-dimensional data to the two-dimensional space of the photograph (involves a non-linear and non-invertible transform; x=(M*v)_x/(M*v)_z, y=(M*v)_y/(M*v)_z).
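The stated projection can be made concrete as follows. This is a minimal sketch under the assumption that M is a 3x4 matrix acting on homogeneous coordinates (the problem statement gives only the component formula x=(M*v)_x/(M*v)_z, y=(M*v)_y/(M*v)_z); the matrix and points below are illustrative, not Poikos data.

```python
import numpy as np

def project(M, points3d):
    """Map Nx3 world points to Nx2 image points: apply M to homogeneous
    coordinates, then divide by the z-component (perspective divide)."""
    homo = np.hstack([points3d, np.ones((len(points3d), 1))])  # Nx4
    mv = homo @ M.T                                            # Nx3
    return mv[:, :2] / mv[:, 2:3]

# Toy camera: identity rotation, no translation.
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
pts = np.array([[2.0, 4.0, 2.0]])
print(project(M, pts))   # [[1. 2.]] -- the depth division halves x and y
```

The division by depth is what makes the transform non-invertible: all points on a ray through the camera centre project to the same image point, which is why the normalised 3D data carries no absolute scale.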
Problem #2: (improved silhouette fitting) Poikos ltd. has created algorithms for converting RGB photographs of humans in (approximate) poses into silhouettes. Currently, a multivariate Gaussian mixture model is used as a first pass. This is imperfect, and would benefit from an improved statistical method. The problem is to determine the probability that a given three-component colour at a given two-component location should be considered as "foreground" or "background".
Prior Knowledge: A sparse set of colours which are very likely to be skin (foreground), and their locations. May include some outliers. A (larger) sparse set of colours which are very likely to be clothing (foreground), and their locations. May include several distributions in the case of multi-coloured clothing, and will probably include vast variations in luminosity. A (larger still) sparse set of colours which are very likely to be background. Will probably overlap with skin and/or clothing colours. A very approximate skeleton for the subject.
Limitations: Sample colours are chosen "safely". That is, they are chosen in areas known to be away from edges. This causes two problems: highlights and shadows are not accounted for, and colours from arms and legs are under-represented in the model. All colours may be "saturated"; that is, information is lost about colours which are "brighter than white". All colours are subject to noise; each colour can be considered as a true colour plus a random variable from a Gaussian distribution. The weight of this Gaussian model is constant across all luminosities; that is, darker colours contain more relative noise than brighter colours.
The Determination of an Obstacle from its Scattering Cross Section
Abstract
The inverse acoustic obstacle scattering problem, in its most general
form, seeks to determine the nature of an unknown scatterer from
knowledge of its far field or radiation pattern. The problem which is the
main concern here is:
If the scattering cross section, i.e. the absolute value of the radiation
pattern, of an unknown scatterer is known, determine its shape.
In this talk we explore the problem from a number of points of view.
These include questions of uniqueness, methods of solution including
iterative methods, the Minkowski problem and level set methods. We
conclude by looking at the problem of acoustically invisible gateways and
its connections with cloaking.
"Tensor products of unipotent characters of general linear groups over finite fields"
High frequency scattering by non-convex polygons
Abstract
Standard numerical schemes for acoustic scattering problems suffer from the restriction that the number of degrees of freedom required to achieve a prescribed level of accuracy must grow at least linearly with respect to frequency in order to maintain accuracy as frequency increases. In this talk, we review recent progress on the development and analysis of hybrid numerical-asymptotic boundary integral equation methods for these problems. The key idea of this approach is to form an ansatz for the solution based on knowledge of the high frequency asymptotics, allowing one to achieve any required accuracy via the approximation of only (in many cases provably) non-oscillatory functions. In particular, we discuss very recent work extending these ideas for the first time to non-convex scatterers.
Pseudo-Holomorphic Curves in Generalized Geometry
Abstract
After giving a brief physical motivation I will define the notion of generalized pseudo-holomorphic curves, as well as tamed and compatible generalized complex structures. The latter can be used to give a generalization of an energy identity. Moreover, I will explain some aspects of the local and global theory of generalized pseudo-holomorphic curves.
13:00
11:30
Quadratic differentials as stability conditions
Abstract
I will explain how moduli spaces of quadratic differentials on Riemann surfaces can be interpreted as spaces of stability conditions for certain 3-Calabi-Yau triangulated categories. These categories are defined via quivers with potentials, but can also be interpreted as Fukaya categories. This work (joint with Ivan Smith) was inspired by the papers of Gaiotto, Moore and Neitzke, but connections with hyperkahler metrics, Fock-Goncharov coordinates etc. will not be covered in this talk.
Lion and Man: Can both win?
Abstract
Rado introduced the following `lion and man' game in the 1930s: two players (the lion and the man) are in the closed unit disc and they can run at the same speed. The lion would like to catch the man and the man would like to avoid being captured.
This game has a chequered history with several false `winning strategies' before Besicovitch finally gave a genuine winning strategy.
We ask the surprising question: can both players win?
13:30
Limit Order Books
Abstract
Determining the price at which to conduct a trade is an age-old problem. The first (albeit primitive) pricing mechanism dates back to the Neolithic era, when people met in physical proximity in order to agree upon mutually beneficial exchanges of goods and services, and over time increasingly complex mechanisms have played a role in determining prices. In the highly competitive and relentlessly fast-paced markets of today’s financial world, it is the limit order book that matches buyers and sellers to trade at an agreed price in more than half of the world’s markets. In this talk I will describe the limit order book trade-matching mechanism, and explain how the extra flexibility it provides has vastly impacted the problem of how a market participant should optimally behave in a given set of circumstances.
12:00
Correlation functions, Wilson loops, and local operators in twistor space
Abstract
Motivated by the correlation functions-Wilson loop correspondence in
maximally supersymmetric Yang-Mills theory, we will investigate a
conjecture of Alday, Buchbinder, and Tseytlin regarding correlators of
null polygonal Wilson loops with local operators in general position.
By translating the problem to twistor space, we can show that such
correlators arise by taking null limits of correlation functions in the
gauge theory, thereby providing a proof for the conjecture.
Additionally, twistor methods allow us to derive a recursive formula for
computing these correlators, akin to the BCFW recursion for scattering
amplitudes.