InFoMM CDT Group Meeting
Challenges in the optimisation of warehouse efficiency
Abstract
In certain business environments, it is essential to the success of the business that workers stick closely to their plans and are not distracted, diverted or stopped. A warehouse is a great example of this for businesses where customers order goods online and the merchants commit to delivery dates. In a warehouse, somewhere, a team of workers is scheduled to pick the items which will make up those orders and get them shipped on time. If the workers do not deliver to plan, then orders will not be shipped on time, reputations will be damaged, customers will be lost and companies will go out of business.
StayLinked builds software which measures what these warehouse workers do and the factors which cause them to be distracted, diverted or stopped. We measure whenever they start or end a task or process (e.g. start an order, pick an item in an order, complete an order). Some of the influencing factors we measure include the way the worker interacts with the device (keyboard, scanner, gesture), the way they navigate through the application (screens 1-3-4-2 instead of 1-2-3-4), the performance of the battery (a dead battery stops work), the performance of the network (connected to an access point or not, high or low latency), the device types being used, device form factor, physical location (warehouse 1, warehouse 2), the profile of the worker, etc.
We are seeking to build a configurable real-time mathematical model which will allow us to take all these factors into account and confidently demonstrate a measure of their impact (positive or negative) on the business process and therefore on the worker’s productivity. We also want to alert operational staff as soon as we can identify that important events have happened. These alerts can then be quickly acted upon and problems resolved at the earliest possible opportunity.
In this project, we would like to collaborate with the maths faculty to understand the appropriate mathematical techniques and tools to use to build this functionality. This product is being used right now by our customers so it would also be a great opportunity for a student to quickly see the results of their work in action in a real-world environment.
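For illustration only, the kind of event measurement and alerting described above could be sketched as follows. This is a toy duration-based alert, not StayLinked's actual model; the field names, timestamps and thresholds are hypothetical.

```python
# Illustrative sketch only: a toy duration-based alert on task events.
# Field names, planned durations and thresholds are hypothetical.
from datetime import datetime, timedelta

events = [
    {"worker": "W1", "task": "pick_item", "start": datetime(2018, 1, 8, 9, 0, 5),
     "end": datetime(2018, 1, 8, 9, 0, 35)},
    {"worker": "W1", "task": "pick_item", "start": datetime(2018, 1, 8, 9, 1, 0),
     "end": datetime(2018, 1, 8, 9, 4, 10)},  # unusually slow pick
]

EXPECTED = {"pick_item": timedelta(seconds=45)}  # hypothetical planned duration

def alerts(events, factor=2.0):
    """Flag any task whose duration exceeds `factor` times its planned duration."""
    for e in events:
        duration = e["end"] - e["start"]
        if duration > factor * EXPECTED[e["task"]]:
            yield f"ALERT: {e['worker']} {e['task']} took {duration}, expected ~{EXPECTED[e['task']]}"

for a in alerts(events):
    print(a)
```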
16:00
Quiver varieties revisited
Abstract
Quiver varieties are an attractive research topic in many branches of contemporary mathematics: (geometric) representation theory, (hyper)Kähler differential geometry, (symplectic) algebraic geometry and quantum algebra.
In the talk, I will define different types of quiver varieties, along with some interesting examples. Afterwards, I will focus on Nakajima quiver varieties (hyperkähler moduli spaces obtained from framed-double-quiver representations), stating the main results on their topology and geometry. If time permits, I will say a bit about their symplectic topology.
Joint Logic / Number Theory Seminar: Virtual rigid motives of semi-algebraic sets in valued fields
Abstract
Let k be a field of characteristic zero and K=k((t)). Semi-algebraic sets over K are Boolean combinations of algebraic sets and sets defined by valuative inequalities. The associated Grothendieck ring has been studied by Hrushovski and Kazhdan, who link it via motivic integration to the Grothendieck ring of varieties over k. I will present a morphism from the former to the Grothendieck ring of motives of rigid analytic varieties over K in the sense of Ayoub. This allows us to refine the comparison by Ayoub, Ivorra and Sebag between the motivic Milnor fibre and the motivic nearby cycle functor.
Brain morphology in foetal life
Abstract
Brain convolutions are a specificity of mammals. Varying in intensity across animal species, gyrification is measured by the gyrification index, the ratio of the effective surface area of the cortex to its apparent surface area. Its value is close to 1 for rodents (smooth brain), 2.6 for new-borns and 5 for dolphins. For humans, any significant deviation is a signature of a pathology occurring in foetal life, which can now be detected by magnetic resonance imaging (MRI). We propose a simple model of growth for a bilayer made of the grey and white matter, the grey matter being in cortical position. We analytically solved the Neo-Hookean approximation in the short- and long-wavelength limits. When the upper layer is softer than the bottom layer (possibly the case for the human brain), the selection mechanism is dominated by the physical properties of the upper layer. When the anisotropy favours tangential growth, as in the human brain, it decreases the threshold value for gyri formation. The gyrification index is predicted by a post-buckling analysis and compared with experimental data. We also discuss some pathologies in the model framework.
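For reference, the gyrification index described above can be written as a surface-area ratio (the precise measurement convention varies between studies):
\[
  \mathrm{GI} \;=\; \frac{S_{\text{effective cortical surface}}}{S_{\text{apparent (exposed) surface}}},
  \qquad \mathrm{GI}\approx 1 \text{ (rodents)},\quad \approx 2.6 \text{ (new-borns)},\quad \approx 5 \text{ (dolphins)}.
\]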
Solving discrete conic optimization problems using disjunctive programming
Abstract
Several optimization problems combine nonlinear constraints with the integrality of a subset of variables. For an important class of problems called Mixed Integer Second-Order Cone Optimization (MISOCO), with applications in facility location, robust optimization, and finance, among others, these nonlinear constraints are second-order (or Lorentz) cones.
For such problems, as for many discrete optimization problems, it is crucial to understand the properties of the union of two disjoint sets of feasible solutions. To this end, we apply the disjunctive programming paradigm to MISOCO and present conditions under which the convex hull of two disjoint sets can be obtained by intersecting the feasible set with a specially constructed second-order cone. Computational results show that such a cone has a positive impact on the solution of MISOCO problems.
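For readers unfamiliar with the problem class, a generic MISOCO problem can be sketched as follows (the notation is illustrative, not the speaker's):
\[
\min_{x}\; c^{\top}x
\quad\text{s.t.}\quad
A x = b,\qquad
x \in \mathcal{L}^{n_1}\times\cdots\times\mathcal{L}^{n_k},\qquad
x_i \in \mathbb{Z}\ \ (i \in I),
\]
where each $\mathcal{L}^{n} = \{\, y \in \mathbb{R}^{n} : \|(y_2,\dots,y_n)\|_2 \le y_1 \,\}$ is a second-order (Lorentz) cone and $I$ indexes the integer-constrained variables.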
12:00
The Cauchy problem for the Landau-Lifshitz-Gilbert equation in BMO and self-similar solutions
Abstract
The Landau-Lifshitz-Gilbert equation (LLG) is a continuum model describing the dynamics of the spin in ferromagnetic materials. In the first part of this talk we describe our work on the properties and dynamical behaviour of the family of self-similar solutions of the one-dimensional LLG equation. Motivated by the properties of this family of self-similar solutions, in the second part of the talk we consider the Cauchy problem for the LLG equation with Gilbert damping and establish a global well-posedness result provided that the BMO norm of the initial data is small. Several consequences of this result will also be given.
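For context, with the effective field reduced to the exchange term, the LLG equation is commonly written in the following nondimensional form (the exact normalisation of the constants, e.g. $\alpha^2+\beta^2=1$, varies across the literature):
\[
\partial_t \mathbf{m} \;=\; \beta\, \mathbf{m}\times\Delta \mathbf{m} \;-\; \alpha\, \mathbf{m}\times\bigl(\mathbf{m}\times\Delta \mathbf{m}\bigr),
\qquad |\mathbf{m}(x,t)| = 1,
\]
where $\alpha\ge 0$ is the Gilbert damping parameter; $\alpha=0$ recovers the Schrödinger map flow, while $\beta=0$ gives the harmonic map heat flow.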
Trees, Lattices and Superrigidity
Abstract
If $G$ is an irreducible lattice in a semisimple Lie group, every action of $G$ on a tree has a global fixed point. I will give an elementary discussion of Y. Shalom's proof of this result, focussing on the case of $SL_2(\mathbb{R}) \times SL_2(\mathbb{R})$. Emphasis will be placed on the geometric aspects of the proof and on the importance of reduced cohomology, while other representation theoretic/functional analytic tools will be relegated to a couple of black boxes.
11:00
Exploring modular forms through modular symbols.
Abstract
Modular forms are holomorphic functions on the upper half of the complex plane, H, invariant under certain matrix transformations of H. They have a very rich structure: they form a graded algebra over C and come equipped with a family of linear operators called Hecke operators. We can also view them as functions on a Riemann surface, which we refer to as a modular curve. It transpires that the integral homology of this curve is equipped with such a rich structure that we can use it to compute modular forms in an algorithmic way. The modular symbols are a finite presentation for this homology, and we will explore them a little, along with their connection to modular forms.
Algebraic Geometry Seminar: An asymptotic Nullstellensatz for curves
Abstract
Hilbert's Nullstellensatz asserts the existence of a complex point lying on a given variety, provided there is no (ideal-theoretic) proof to the contrary.
I will describe an analogue for curves (of unbounded degree), with respect to conditions specifying that they lie on a given smooth variety, and have homology class
near a specified ray. In particular, an analogue of the Lefschetz principle (relating large positive characteristic to characteristic zero) becomes available for such questions.
The proof is very close to a theorem of Boucksom-Demailly-Păun-Peternell on movable curves, but requires a certain sharpening. This is part of a joint project with Itai Ben Yaacov, investigating the logic of the product formula; the algebro-geometric statement is needed for proving the existential closure of $\mathbb{C}(t)^{\mathrm{alg}}$ in this language.
Network Block Decomposition for Revenue Management
Abstract
In this talk we introduce a novel dynamic programming (DP) approximation that exploits the inherent network structure present in revenue management problems. In particular, our approximation provides a new lower bound on the value function for the DP, which enables conservative revenue forecasts to be made. Existing state-of-the-art approximations of the revenue management DP neglect the network structure, apportioning the prices of each product, whereas our proposed method does not: we partition the network of products into clusters by apportioning the capacities of resources. Our proposed approach allows, in principle, for better approximations of the DP than the decomposition methods currently implemented in industry, and we see it as an important stepping stone towards better approximate DP methods in practice.
14:30
Zero forcing in random and pseudorandom graphs
Abstract
A subset S of initially infected vertices of a graph G is called forcing if we can infect the entire graph by iteratively applying the following process: at each step, any infected vertex with a unique uninfected neighbour infects that neighbour. The forcing number of G is the minimum cardinality of a forcing set in G. It was introduced independently as a bound for the minimum rank of a graph, and as a tool in quantum information theory.
The focus of this talk is on the forcing number of the random graph. Furthermore, we will state our bounds on the forcing number of pseudorandom graphs and related problems. The results are joint work with Thomas Kalinowski and Benny Sudakov.
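A minimal sketch of the forcing process described above, on a small example graph (the adjacency lists and initial set are purely illustrative):

```python
# Toy implementation of the zero-forcing process: repeatedly, any infected
# vertex with exactly one uninfected neighbour infects that neighbour.
def forces_whole_graph(adj, initial):
    infected = set(initial)
    changed = True
    while changed:
        changed = False
        for v in list(infected):
            uninfected = [u for u in adj[v] if u not in infected]
            if len(uninfected) == 1:
                infected.add(uninfected[0])
                changed = True
    return len(infected) == len(adj)

# Example: a path on 4 vertices; a single endpoint is a forcing set.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(forces_whole_graph(path, {0}))  # True
```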
Dimers with boundary, associated algebras and module categories
Abstract
Dimer models with boundary were introduced in joint work with King and Marsh as a natural generalisation of dimers. We use these to derive certain infinite-dimensional algebras and consider idempotent subalgebras with respect to the boundary.
The dimer models can be embedded in a surface with boundary. In the disk case, the maximal CM modules over the boundary algebra form a Frobenius category which categorifies the cluster structure of the Grassmannian.
Gaussian Processes for Demand Unconstraining
Abstract
One of the key challenges in revenue management is unconstraining demand data. Existing state-of-the-art single-class unconstraining methods make restrictive assumptions about the form of the underlying demand and can perform poorly when applied to data which breaks these assumptions. In this talk, we propose a novel unconstraining method that uses Gaussian process (GP) regression. We develop a novel GP model by constructing and implementing a new non-stationary covariance function for the GP which enables it to learn and extrapolate the underlying demand trend. We show that this method can cope with important features of realistic demand data, including nonlinear demand trends, variations in total demand, lengthy periods of constraining, non-exponential inter-arrival times, and discontinuities/changepoints in demand data. In all such circumstances, our results indicate that GPs outperform existing single-class unconstraining methods.
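As a rough illustration of the GP regression step (not the authors' non-stationary covariance, which is not specified in the abstract), the following sketch computes a GP posterior mean with a standard squared-exponential kernel; the booking data and hyperparameters are made up.

```python
import numpy as np

def sq_exp_kernel(x1, x2, lengthscale=5.0, variance=1.0):
    """Standard squared-exponential covariance; a stand-in for the paper's
    non-stationary kernel, which is not given in the abstract."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Hypothetical booking data: observed demand on early (unconstrained) days,
# with the goal of extrapolating over a later constrained period.
x_obs = np.array([1., 2., 3., 4., 5., 6., 7.])
y_obs = np.array([3., 5., 6., 9., 11., 14., 15.])
x_new = np.array([8., 9., 10.])          # constrained days to unconstrain
noise = 1e-2

K = sq_exp_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
K_s = sq_exp_kernel(x_obs, x_new)
posterior_mean = K_s.T @ np.linalg.solve(K, y_obs)
print(posterior_mean)   # GP estimate of the latent demand on the constrained days
```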
Applications of R-graphs to DNA modelling
Abstract
Finding implementable descriptions of the possible configurations of a knotted DNA molecule is of remarkable importance from a biological point of view, and it is a hard and well-studied problem in mathematics.
Here we present two newly developed mathematical tools that describe the configuration space of knots and model the action of Type I and II Topoisomerases on a covalently closed circular DNA molecule: the Reidemeister graphs.
We determine some local and global properties of these graphs and prove that in one case the graph-isomorphism type is a complete knot invariant up to mirroring.
Finally, we indicate how the Reidemeister graphs can be used to infer information about the proteins' action.
Convergence and new perspectives in perturbative algebraic quantum field theory
Abstract
In this talk I will present recent results obtained within the framework of perturbative algebraic quantum field theory. This novel approach to the mathematical foundations of quantum field theory makes it possible to combine the axiomatic framework of algebraic QFT of Haag and Kastler with perturbative methods. Recently, non-perturbative results have also been obtained within this approach. I will report on these results and present the new perspectives that they open for a better understanding of the foundations of QFT.
On some problems in random geometry and PDEs
Abstract
We consider a couple of problems belonging to random geometry, and describe some new analytical challenges they pose for planar PDEs via Beltrami equations. The talk is based on joint work with various people, including K. Astala, P. Jones, A. Kupiainen, Steffen Rohde and T. Tao.
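For reference, the Beltrami equation mentioned above is the planar first-order PDE
\[
  \partial_{\bar z} f(z) \;=\; \mu(z)\,\partial_z f(z), \qquad \|\mu\|_{L^\infty} < 1,
\]
whose homeomorphic solutions are the quasiconformal maps of the plane; $\mu$ is the Beltrami coefficient, and the condition $\|\mu\|_\infty<1$ ensures ellipticity.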
15:45
A Reduced Tensor Product of Braided Fusion Categories containing a Symmetric Fusion Category
Abstract
In this talk I will construct a reduced tensor product of braided fusion categories containing a symmetric fusion category $\mathcal{A}$. This tensor product takes into account the relative braiding with respect to objects of $\mathcal{A}$ in these braided fusion categories. The resulting category is again a braided fusion category containing $\mathcal{A}$. This tensor product is inspired by the tensor product of $G$-equivariant once-extended three-dimensional quantum field theories, for a finite group $G$.
To define this reduced tensor product, we equip the Drinfeld centre $\mathcal{Z}(\mathcal{A})$ of the symmetric fusion category $\mathcal{A}$ with an unusual tensor product, making $\mathcal{Z}(\mathcal{A})$ into a 2-fold monoidal category. Using this 2-fold structure, we introduce a new type of category enriched over the Drinfeld centre to capture the braiding behaviour with respect to $\mathcal{A}$ in the braided fusion categories, and use this encoding to define the reduced tensor product.
The signature approach for the supervised learning problem with sequential data input and its application
Abstract
In the talk, we discuss how to combine the recurrent neural network with the signature feature set to tackle the supervised learning problem where the input is a data stream. We will apply this method to different datasets, including synthetic datasets (learning the solution to SDEs) and empirical datasets (action recognition), and demonstrate the effectiveness of this method.
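For completeness, the signature feature set referred to above is the collection of iterated integrals of the path: for a (sufficiently regular) path $X:[0,T]\to\mathbb{R}^d$,
\[
  S(X)_{0,T}^{(i_1,\dots,i_k)} \;=\; \int_{0<t_1<\cdots<t_k<T} \mathrm{d}X^{i_1}_{t_1}\cdots \mathrm{d}X^{i_k}_{t_k},
  \qquad k\ge 1,\quad i_1,\dots,i_k\in\{1,\dots,d\},
\]
and in practice the collection is truncated at some finite level to obtain a feature vector for the data stream.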
On some heavy-tail phenomena occurring in large deviations
Abstract
In this talk, we will revisit the proof of the large deviations principle of Wiener chaoses partially given by Borell, and then by Ledoux in its full form. We show that some heavy-tail phenomena observed in large deviations can be explained by the same mechanism as for the Wiener chaoses, meaning that the deviations are created, in a sense, by translations. More precisely, we prove a general large deviations principle for a certain class of functionals $f_n : \mathbb{R}^n \to \mathcal{X}$, where $\mathcal{X}$ is some metric space, under the probability measure $\nu_{\alpha}^n$, where $\nu_{\alpha} =Z_{\alpha}^{-1}e^{-|x|^{\alpha}}dx$, $\alpha \in (0,2]$, for which the large deviations are due to translations. We retrieve, as an application, the large deviations principles known for the so-called Wigner matrices without Gaussian tails of the empirical spectral measure, the largest eigenvalue, and traces of polynomials. We also apply our large deviations result to the last-passage time which yields a large deviations principle when the weight matrix has law $\mu_{\alpha}^{n^2}$, where $\mu_{\alpha}$ is the probability measure on $\mathbb{R}^+$ with density $2Z_{\alpha}^{-1}e^{-x^{\alpha}}$ when $\alpha \in (0,1)$.
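For readers less familiar with the terminology, recall that a sequence of random variables $(X_n)$ with values in a metric space $\mathcal{X}$ satisfies a large deviations principle with speed $r_n$ and (lower semicontinuous) rate function $I$ if, for every Borel set $A\subset\mathcal{X}$,
\[
  -\inf_{x\in \mathring{A}} I(x)
  \;\le\; \liminf_{n\to\infty} \frac{1}{r_n}\log \mathbb{P}(X_n\in A)
  \;\le\; \limsup_{n\to\infty} \frac{1}{r_n}\log \mathbb{P}(X_n\in A)
  \;\le\; -\inf_{x\in \overline{A}} I(x),
\]
where $\mathring{A}$ and $\overline{A}$ denote the interior and closure of $A$.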
Cubic fourfolds, K3 surfaces, and mirror symmetry
Abstract
While many cubic fourfolds are known to be rational, it is expected that the very general cubic fourfold is irrational (although none have been
proven to be so). There is a conjecture for precisely which cubics are rational, which can be expressed in Hodge-theoretic terms (by work of Hassett)
or in terms of derived categories (by work of Kuznetsov). The conjecture can be phrased as saying that one can associate a `noncommutative K3 surface' to any cubic fourfold, and the rational ones are precisely those for which this noncommutative K3 is `geometric', i.e., equivalent to an honest K3 surface. It turns out that the noncommutative K3 associated to a cubic fourfold has a conjectural symplectic mirror (due to Batyrev-Borisov). In contrast to the algebraic side of the story, the mirror is always `geometric': i.e., it is always just an honest K3 surface equipped with an appropriate Kähler form. After explaining this background, I will state a theorem: homological mirror symmetry holds in this context (joint work with Ivan Smith).
12:45
Supersymmetric Partition Functions and Higher Dimensional A-twist
Abstract
I will talk about three-dimensional N=2 supersymmetric gauge theories on a class of Seifert manifolds. More precisely, I will compute the supersymmetric partition functions and correlation functions of BPS loop operators on M_{g,p}, which is defined as a circle bundle of degree p over a genus g Riemann surface. I will also talk about the four-dimensional uplift of this construction, which computes the generalized index of N=1 gauge theories defined on an elliptic fibration over a genus g Riemann surface. We will find that the partition function, or the index, can be written as a sum over "Bethe vacua" of the two-dimensional A-twisted theory obtained by circle compactification. Within this framework, I will show how the partition functions on manifolds with different topologies are related to each other. We will also find that these observables are very useful for studying the action of Seiberg-like dualities on codimension-two BPS operators.
Robert Calderbank - the Art of Signaling
Abstract
Coding theory revolves around the question of what can be accomplished with only memory and redundancy. When we ask what enables the things that transmit and store information, we discover codes at work, connecting the world of geometry to the world of algorithms.
This talk will focus on those connections that link the real world of Euclidean geometry to the world of binary geometry that we associate with Hamming.
14:30
Peter Sarnak - Integer points on affine cubic surfaces
Abstract
A cubic polynomial equation in four or more variables tends to have many integer solutions, while one in two variables has a limited number of such solutions. There is a body of work establishing results along these lines. On the other hand very little is known in the critical case of three variables. For special such cubics, which we call Markoff surfaces, a theory can be developed. We will review some of the tools used to deal with these and related problems.
Joint work with Bourgain and Gamburd, and with Ghosh.
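For orientation, the prototypical example behind the Markoff surfaces mentioned above is the classical Markoff equation (the talk concerns affine cubic surfaces of this type; the exact family studied is not specified in the abstract):
\[
  x^2 + y^2 + z^2 \;=\; 3xyz ,
\]
a cubic in three variables whose integer solutions (Markoff triples) are permuted by Vieta involutions such as $(x,y,z)\mapsto(x,y,3xy-z)$.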
14:15
Modelling wave–ice floe interactions and the overwash phenomenon
Abstract
Following several decades of development by applied mathematicians, models of ocean wave interactions with sea ice floes are now in high demand due to the rapid recent changes in the world’s sea ice cover. From a mathematical perspective, the models are of interest due to the thinness of the floes, leading to elastic responses of the floes to waves, and the vast number of floes that waves encounter. Existing models are typically based on linear theories, but the thinness of the floes leads to the unique and highly nonlinear phenomenon of overwash, where waves run over the floes, in doing so dissipating wave energy and impacting the floes thermodynamically. I will give an overview of methods developed for the wave-floe problem, and present a new, bespoke overwash model, along with supporting laboratory experiments and numerical CFD simulations.
Revolutionizing medicine through machine learning and artificial intelligence
Abstract
Current medical practice is driven by the experience of clinicians, by the difficulties of integrating enormous amounts of complex and heterogeneous static and dynamic data and by clinical guidelines designed for the “average” patient. In this talk, I will describe some of my research on developing novel, specially-crafted machine learning theories, methods and systems aimed at extracting actionable intelligence from the wide variety of information that is becoming available (in electronic health records and elsewhere) and enabling every aspect of medical care to be personalized to the patient at hand. Because of the unique and complex characteristics of medical data and medical questions, many familiar machine-learning methods are inadequate. My work therefore develops and applies novel machine learning theory and methods to construct risk scores, early warning systems and clinical decision support systems for screening and diagnosis and for prognosis and treatment. This work achieves enormous improvements over current clinical practice and over existing state-of-the-art machine learning methods. By design, these systems are easily interpretable and so allow clinicians to extract from data the necessary knowledge and representations to derive data-driven medical epistemology and to permit easy adoption in hospitals and clinical practice. My team has collaborated with researchers and clinicians in oncology, emergency care, cardiology, transplantation, internal medicine, etc. You can find more information about our past research in this area at: http://medianetlab.ee.ucla.edu/MedAdvance.
Talks by PhD Students
Abstract
Christoph Siebenbrunner:
Clearing Algorithms and Network Centrality
I show that the solution of a standard clearing model commonly used in contagion analyses for financial systems can be expressed as a specific form of a generalized Katz centrality measure under conditions that correspond to a system-wide shock. This result provides a formal explanation for earlier empirical results which showed that Katz-type centrality measures are closely related to contagiousness. It also allows assessing the assumptions that one is making when using such centrality measures as systemic risk indicators. I conclude that these assumptions should be considered too strong and that, from a theoretical perspective, clearing models should be given preference over centrality measures in systemic risk analyses.
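As a point of reference, a generalized Katz-type centrality of the kind alluded to above can be sketched as follows; the exact correspondence with the clearing model is the subject of the talk, and the toy exposure matrix and parameters below are purely illustrative.

```python
import numpy as np

def katz_centrality(A, alpha=0.1, beta=None):
    """Generic Katz-type centrality x = (I - alpha * A^T)^{-1} beta.
    Requires alpha < 1 / spectral_radius(A) for the underlying series to converge."""
    n = A.shape[0]
    if beta is None:
        beta = np.ones(n)          # uniform exogenous term; illustrative choice
    return np.linalg.solve(np.eye(n) - alpha * A.T, beta)

# Toy interbank exposure matrix: entry (i, j) = exposure of i to j (hypothetical).
A = np.array([[0., 2., 0.],
              [1., 0., 3.],
              [0., 1., 0.]])
print(katz_centrality(A, alpha=0.2))
```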
Andreas Sojmark:
An SPDE Model for Systemic Risk with Default Contagion
In this talk, I will present a structural model for systemic risk, phrased as an interacting particle system for $N$ financial institutions, where each institution is removed upon default and this has a contagious effect on the rest of the system. Moreover, the financial institutions display herding behaviour and they are exposed to correlated noise, which turns out to be an important driver of the contagion mechanism. Ultimately, the motivation is to provide a clearer connection between the insights from dynamic mean field models and the detailed study of contagion in the (mostly static) network-based literature. Mathematically, we prove a propagation of chaos type result for the large population limit, where the limiting object is characterized as the unique solution to a nonlinear SPDE on the positive half-line with a Dirichlet boundary condition. This is based on joint work with Ben Hambly, and I will also point out some interesting future directions, which are part of ongoing work with Sean Ledger.
16:00
The Drinfeld Centre of a Symmetric Fusion Category
Abstract
This talk will be a gentle introduction to braided fusion categories, with the eventual aim to explain a result from my thesis about symmetric fusion categories.
Fusion categories are certain kinds of monoidal categories. They can be viewed as a categorification of finite-dimensional algebras, and appear in low-dimensional topological quantum field theories, as well as being studied in their own right. A braided fusion category is additionally commutative up to a natural isomorphism; symmetry is a further condition on this natural isomorphism. Computations in these categories can be done pictorially, using so-called string diagrams (also known as ``those cool pictures'').
In this talk I will introduce fusion categories using these string diagrams. I will then discuss the Drinfeld centre construction that takes a fusion category and returns a braided fusion category. We then show, if the input is a symmetric fusion category, that this Drinfeld centre carries an additional tensor product. All of this also serves as a good excuse to draw lots of pictures.
16:00
Smooth values of polynomials
Abstract
Recall that an integer n is called y-smooth when each of its prime divisors is less than or equal to y. It is conjectured that, for any a>0, any polynomial of positive degree having integral coefficients should possess infinitely many values at integral arguments n that are n^a-smooth. One could consider this problem to be morally “dual” to the cognate problem of establishing that irreducible polynomials assume prime values infinitely often, unless local conditions preclude this possibility. This smooth values conjecture is known to be true in several different ways for linear polynomials, but in general remains unproven for any degree exceeding 1. We will describe some limited progress in the direction of the conjecture, highlighting along the way analogous conclusions for polynomial smoothness. Despite being motivated by a problem in analytic number theory, most of the methods make use of little more than pre-Galois theory. A guest appearance will be made by several hyperelliptic curves. [This talk is based on joint work with Jonathan Bober, Dan Fretwell and Greg Martin.]
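A tiny sketch of the definition used above (trial division is enough for illustration; this is not meant to be efficient):

```python
def is_smooth(n, y):
    """Return True if every prime divisor of n is at most y."""
    n = abs(n)
    if n <= 1:
        return True
    p = 2
    while p <= y and p * p <= n:
        while n % p == 0:
            n //= p
        p += 1
    # At this point n is 1, or all of its remaining prime factors exceed y
    # (or n itself is a prime); the comparison below decides both cases.
    return n <= y

print(is_smooth(2**5 * 3**2, 3))   # True: prime factors are 2 and 3
print(is_smooth(49, 5))            # False: 7 > 5
```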
Into the crease: nucleation of a discontinuous solution in nonlinear elasticity
Abstract
Discontinuous solutions, such as cracks or cavities, can suddenly appear in elastic solids when a limiting condition is reached. Similarly, self-contacting folds can nucleate at a free surface of a soft material subjected to a critical compression. Unlike other elastic instabilities, such as buckling and wrinkling, creasing is still poorly understood. Being invisible to linearization techniques, crease nucleation is a problem of high mathematical complexity.
In this talk, I will discuss some recent theoretical insights that resolve both the nucleation threshold and the emerging crease morphology. The analytic predictions are in agreement with experimental and numerical data. They provide fundamental insight both for understanding the onset of creasing in living matter, e.g. brain convolutions, and for guiding engineering applications, e.g. morphable meta-materials.
Bounds for VIX Futures Given S&P 500 Smiles
Abstract
We derive sharp bounds for the prices of VIX futures using the full information of S&P 500 smiles. To that end, we formulate the model-free sub/superreplication of the VIX by trading in the S&P 500 and its vanilla options as well as the forward-starting log-contracts. A dual problem of minimizing/maximizing certain risk-neutral expectations is introduced and shown to yield the same value. The classical bounds for VIX futures given the smiles only use a calendar spread of log-contracts on the S&P 500. We analyze for which smiles the classical bounds are sharp and how they can be improved when they are not. In particular, we introduce a tractable family of functionally generated portfolios which often improves the classical spread while still being tractable, more precisely, determined by a single concave/convex function on the line. Numerical experiments on market data and SABR smiles show that the classical lower bound can be improved dramatically, whereas the upper bound is often close to optimal.
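For context, the connection to log-contracts rests on the standard idealized (continuous-monitoring, zero rates and dividends) convention defining the squared VIX as a 30-day forward-looking risk-neutral expectation; the precise discrete CBOE definition differs slightly:
\[
  \mathrm{VIX}_T^2 \;=\; -\frac{2}{\tau}\,\mathbb{E}^{\mathbb{Q}}\!\left[\,\log\frac{S_{T+\tau}}{F_{T,T+\tau}}\ \middle|\ \mathcal{F}_T\right],
  \qquad \tau = 30/365,
\]
so a VIX future with maturity $T$ has price $\mathbb{E}^{\mathbb{Q}}[\mathrm{VIX}_T]$, and the forward-starting log-contract appearing in the sub/superreplication is exactly the payoff inside this expectation.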
15:00
Dynamic Gauge Linear Sigma Models from Six Dimensions
Abstract
Compactifications of 6D Superconformal Field Theories (SCFTs) on four-manifolds lead to novel interacting 2D SCFTs. I will describe the various Lagrangian and non-Lagrangian sectors of the resulting 2D theories, as well as their interactions. In general this construction can be embedded in compactifications of the physical superstring, providing a general template for realizing 2D conformal field theories coupled to worldsheet gravity, i.e. a UV completion for non-critical string theories.
Scattering by fractal screens - functional analysis and computation
Abstract
The mathematical analysis and numerical simulation of acoustic and electromagnetic wave scattering by planar screens is a classical topic. The standard technique involves reformulating the problem as a boundary integral equation on the screen, which can be solved numerically using a boundary element method. Theory and computation are both well-developed for the case where the screen is an open subset of the plane with smooth (e.g. Lipschitz or smoother) boundary. In this talk I will explore the case where the screen is an arbitrary subset of the plane; in particular, the screen could have fractal boundary, or itself be a fractal. Such problems are of interest in the study of fractal antennas in electrical engineering, light scattering by snowflakes/ice crystals in atmospheric physics, and in certain diffraction problems in laser optics. The roughness of the screen presents challenging questions concerning how boundary conditions should be enforced, and the appropriate function space setting. But progress is possible and there is interesting behaviour to be discovered: for example, a sound-soft screen with zero area (planar measure zero) can scatter waves provided the fractal dimension of the set is large enough. Accurate computations are also challenging because of the need to adapt the mesh to the fine structure of the fractal. As well as presenting numerical results, I will outline some of the outstanding open questions from the point of view of numerical analysis. This is joint work with Simon Chandler-Wilde (Reading) and Andrea Moiola (Pavia).
Maximal Hypersurfaces with boundary conditions
Abstract
We construct maximal surfaces with Neumann boundary conditions in Minkowski space using mean curvature flow. In particular we find curvature conditions on a boundary manifold so that mean curvature flow may be shown to exist for all time, and give conditions under which the maximal hypersurfaces are stable under the flow.
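For reference, and up to sign conventions, mean curvature flow evolves a family of spacelike embeddings $F(\cdot,t)$ by
\[
  \frac{\partial F}{\partial t} \;=\; H\,\nu,
\]
where $\nu$ is the timelike unit normal and $H$ the mean curvature; stationary points satisfy $H\equiv 0$, i.e. they are exactly the maximal hypersurfaces, and the Neumann condition prescribes how the evolving hypersurface meets the boundary manifold.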
Vicky Neale - Closing the Gap: the quest to understand prime numbers
Abstract
Prime numbers have intrigued, inspired and infuriated mathematicians for millennia and yet mathematicians' difficulty with answering simple questions about them reveals their depth and subtlety.
Join Vicky to learn about recent progress towards proving the famous Twin Primes Conjecture and to hear the very different ways in which these breakthroughs have been made - a solo mathematician working in isolation, a young mathematician displaying creativity at the start of a career, a large collaboration that reveals much about how mathematicians go about their work.
Vicky Neale is Whitehead Lecturer at the Mathematical Institute, University of Oxford and Supernumerary Fellow at Balliol College.
Please email @email to register.
Conformal dimension
Abstract
I will present a gentle introduction to the theory of conformal dimension, focusing on its applications to the boundaries of hyperbolic groups, and the difficulty of classifying groups whose boundaries have conformal dimension 1.
Penrose Tilings: a light introduction
Abstract
This talk will hopefully highlight the general framework in which Penrose tilings are proved to be aperiodic and in fact a tiling.
16:00
Globally Valued Fields, fullness and amalgamation
Abstract
Globally Valued Fields, studied jointly with E. Hrushovski, are a formalism for fields in which the sum formula for valuations holds, such as number fields or function fields of curves. They form an elementary class (in continuous first order logic), and model-theoretic questions regarding this class give rise to difficult yet fascinating geometric questions.
I intend to present the « Lyon school » approach to studying GVFs. This consists of reducing as much as possible to local considerations, among other things via the "fullness" axiom.
From period integrals to toric degenerations of Fano manifolds
Abstract
Given a Fano manifold we will consider two ways of attaching a (usually infinite) collection of polytopes, and a certain combinatorial transformation relating them, to it. The first is via Mirror Symmetry, following a proposal of Coates--Corti--Kasprzyk--Galkin--Golyshev. The second is via symplectic topology, and comes from considering degenerating Lagrangian torus fibrations. We then relate these two collections using the Gross--Siebert program. I will also comment on the situation in higher dimensions, noting particularly that by 'inverting' the second method (degenerating Lagrangian fibrations) we can produce topological constructions of Fano threefolds.
14:30
Intersecting Families of Permutations
Abstract
Enumerating families of combinatorial objects with given properties and describing the typical structure of these objects are fundamental problems in extremal combinatorics. In this talk, we will investigate intersecting families of discrete structures in various settings, determining their typical structure as the size of the underlying ground set tends to infinity. Our new approach outlines a general framework for a number of similar problems; in particular, we prove analogous results for hypergraphs, permutations, and vector spaces using the same technique. This is joint work with József Balogh, Shagnik Das, Hong Liu, and Maryam Sharifzadeh.
White Noise Coupling for Multilevel Monte Carlo
Abstract
In this talk we describe a new approach that enables the use of elliptic PDEs with white noise forcing to sample Matérn fields within the multilevel Monte Carlo (MLMC) framework.
When MLMC is used to quantify the uncertainty in the solution of PDEs with random coefficients, two key ingredients are needed: 1) a sampling technique for the coefficients that satisfies the MLMC telescopic sum and 2) a numerical solver for the forward PDE problem.
When the dimensionality of the uncertainty in the problem is infinite (i.e. the coefficients are random fields), the sampling techniques commonly used in the literature are Karhunen–Loève expansions or circulant embeddings. In the specific case in which the coefficients are Gaussian fields with Matérn covariance structure, another available sampling technique relies on the solution of a linear elliptic PDE with white noise forcing.
When the finite element method (FEM) is used for the forward problem, the latter option can become advantageous as elliptic PDEs can be quickly and efficiently solved with the FEM, the sampling can be performed in parallel and the same FEM software can be used without the need for external packages. However, it is unclear how to enforce a good stochastic coupling of white noise between MLMC levels so as to respect the MLMC telescopic sum. In this talk we show how this coupling can be enforced in theory and in practice.
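For reference, the MLMC telescopic sum referred to above is, in standard notation with $Q_\ell$ the quantity of interest computed on level $\ell$,
\[
  \mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\bigl[Q_\ell - Q_{\ell-1}\bigr],
\]
where each correction term is estimated by Monte Carlo using $Q_\ell$ and $Q_{\ell-1}$ computed from the same underlying random input. The variance reduction, and hence the cost saving, relies on this coupling, which is precisely what the white noise construction above has to preserve across levels.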
From classical tilting to 2-term silting
Abstract
We give a short reminder about central results of classical tilting theory, including the Brenner-Butler tilting theorem, and homological properties of tilted and quasi-tilted algebras. We then discuss 2-term silting complexes and endomorphism algebras of such objects, and in particular show that some of these classical results have very natural generalizations in this setting.
(Joint work with Yu Zhou)
Multilevel weighted least squares polynomial approximation
Abstract
We propose and analyze a multilevel weighted least squares polynomial approximation method. Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method, which employs samples with different accuracies and is able to match the accuracy of single level approximations at reduced computational work. We prove complexity bounds under certain assumptions on polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Numerical experiments underline the practical applicability of our method.
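For concreteness, the single-level building block can be written as follows (the notation is illustrative): given samples $y_1,\dots,y_m$ drawn from a suitable distribution and a polynomial space $P_\Lambda$, the weighted least squares approximation of $u$ is
\[
  p^{\ast} \;=\; \operatorname*{arg\,min}_{p \in P_\Lambda}\; \frac{1}{m}\sum_{i=1}^{m} w(y_i)\,\bigl|u(y_i) - p(y_i)\bigr|^{2},
\]
where the weight $w$ compensates for the sampling density; the multilevel method replaces the exact evaluations $u(y_i)$ by discretized evaluations of varying accuracy and combines the resulting approximations across levels.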