14:15
Giant Gravitons in the ABJM Duality
Abstract
I shall describe the construction of the four-brane giant graviton on $\mathrm{AdS}_4\times \mathbb{CP}^3$ (extended and moving in the complex projective space), which is dual to a subdeterminant operator in the ABJM model. This dynamically stable, BPS configuration factorizes at maximum size into two topologically stable four-branes (each wrapped on a different $\mathbb{CP}^2 \subset \mathbb{CP}^3$ cycle) dual to ABJM dibaryons. Our study of the spectrum of small fluctuations around this four-brane giant provides good evidence for a dependence of the spectrum on the size, $\alpha_0$, a direct result of the changing shape of the giant’s worldvolume as it grows. Finally, I shall comment on the implications for operators in the non-BPS, holomorphic sector of the ABJM model.
14:15
Monte Carlo Portfolio Optimization
Abstract
We develop the idea of using Monte Carlo sampling of random portfolios to solve portfolio investment problems. We explore the need for more general optimization tools, and consider the means by which constrained random portfolios may be generated. Devroye’s approach to sampling the interior of a simplex (a collection of non-negative random variables adding to unity) is already available for interior solutions of simple fully-invested long-only systems, and we extend this to treat lower-bound constraints and bounded short positions, and to sample non-interior points by the method of Face-Edge-Vertex-biased sampling. A practical scheme for long-only and bounded-short problems is developed and tested. Non-convex and disconnected regions can be treated by applying rejection for the remaining constraints. The advantage of Monte Carlo methods is that they may be extended to risk functions that are more complicated functions of the return distribution, without explicit gradients, and that the underlying return distribution may be modeled parametrically or empirically based on general distributions. The optimization of expected utility, Omega and Sortino ratios may be handled in a similar manner to quadratic risk, VaR and CVaR, irrespective of whether a reduction to LP or QP form is available. Robustification is also possible, and a Monte Carlo approach allows the possibility of relaxing the general maxi-min approach to one of varying degrees of conservatism. Grid computing technology is an excellent platform for such computations due to their intrinsically parallel nature. Good comparisons with established results in mean-variance and CVaR optimization are obtained, and we give some applications to Omega and expected utility optimization. Extensions to deploy Sobol and Niederreiter quasi-random methods for random weights are also proposed. The method proposed is a two-stage process.
First, an initial global search produces a good feasible solution for any number of assets, with any risk function and return distribution. This solution is already close to optimal in lower dimensions, based on an investigation of several test problems. Further precision, and solutions in 10-100 dimensions, are obtained by invoking a second stage in which the solution is refined by Monte Carlo simulation over a series of contracting hypercubes.
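The simplex-sampling step admits a compact sketch. Below is a minimal illustration (the function name and NumPy setup are mine, not the authors' implementation) of the Devroye-style construction: normalising i.i.d. Exp(1) draws yields points uniformly distributed on the unit simplex.

```python
import numpy as np

def sample_simplex(n_assets, n_samples, seed=None):
    """Draw points uniformly from {w : w_i >= 0, sum_i w_i = 1}.

    Normalising i.i.d. Exp(1) variates gives a flat Dirichlet draw,
    i.e. a point uniformly distributed on the interior of the simplex.
    """
    rng = np.random.default_rng(seed)
    e = rng.exponential(size=(n_samples, n_assets))
    return e / e.sum(axis=1, keepdims=True)

# A lower bound l_i on each weight can then be imposed by the affine map
# w = l + (1 - l.sum()) * u, with u sampled as above (feasible iff l.sum() < 1).
w = sample_simplex(n_assets=5, n_samples=10_000, seed=42)
```

Each row of `w` is one fully-invested long-only random portfolio; other constraints can be handled by rejection, as described above.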
Systems approaches to biochemical complexity
Abstract
Please note that this is a joint seminar with the Sir William Dunn School of Pathology and will take place in the EPA Seminar Room, which is located inside the Sir William Dunn School of Pathology and must be entered from the main entrance on South Parks Road. Link: http://g.co/maps/8cbbx
Derived Algebraic Geometry: a global picture II
Abstract
This is the second of two talks about Derived Algebraic Geometry. We will go through the various geometries one can develop from the Homotopical Algebraic Geometry setting. We will review stack theory in the sense of Laumon and Moret-Bailly and higher stack theory in the sense of Simpson from a new and more general point of view, and this will culminate in Derived Algebraic Geometry. We will try to point out how some classical objects are secretly already in the realm of Derived Algebraic Geometry, and how, once we acknowledge this new point of view, we are able to reinterpret, reformulate and generalize some classical aspects. Finally, we will describe more exotic geometries. In the last part of the talk, we will focus on two main examples, one addressed more to algebraic geometers and representation theorists and the other to symplectic geometers.
Selling category theory to the masses: a tale of food, spiders and Google
Abstract
We will demonstrate the following. Category theory, usually conceived as some very abstract form of metamathematics, is present everywhere around us. Explicitly, we show how it provides a kindergarten version of quantum theory, and how it will help Google to understand sentences rather than just words.
Some references are:
-[light] BC (2010) "Quantum picturalism". Contemporary Physics 51, 59-83. arXiv:0908.1787
-[a bit heavier] BC and Ross Duncan (2011) "Interacting quantum observables: categorical algebra and diagrammatics". New Journal of Physics 13, 043016. arXiv:0906.4725
-[light] New Scientist (8 December 2010) "Quantum links let computers understand language". www.cs.ox.ac.uk/people/bob.coecke/NewScientist.pdf
-[a bit heavier] BC, Mehrnoosh Sadrzadeh and Stephen Clark (2011) "Mathematical foundations for a compositional distributional model of meaning". Linguistic Analysis - Lambek Festschrift. arXiv:1003.439
Groups definable in ACFA
Abstract
Recall that a difference field is a field with a distinguished automorphism. ACFA is the theory of existentially closed difference fields. I will discuss results on groups definable in models of ACFA, in particular when they are one-based and what the consequences of one-basedness are.
Inverse problems, wavelets, and linear viscoelasticity
Abstract
It is an inherent premise of Boltzmann's formulation of linear viscoelasticity that, for shear deformations at constant pressure and constant temperature, every material has a unique continuous relaxation spectrum. This spectrum defines the memory kernel of the material. Only a few models for representing the continuous spectrum have been proposed, and these are entirely empirical in nature.
Extensive laboratory time is spent worldwide in collecting dynamic data from which the relaxation spectra of different materials may be inferred. In general the process involves the solution of one or more exponentially ill-posed inverse problems.
In this talk I shall present rigorous models for the continuous relaxation spectrum. These arise naturally from the theory of continuous wavelet transforms. In solving the inverse problem I shall discuss the role of sparsity as one means of regularization, but there is also a secondary regularization parameter which is linked, as always, to resolution. The topic of model-induced super-resolution is discussed, and I shall give numerical results for both synthetic and real experimental data.
The talk is based on joint work with Neil Goulding (Cardiff University).
New perspectives on the Breuil-Mézard conjecture
Abstract
I will discuss joint work with Matthew Emerton on geometric
approaches to the Breuil-Mézard conjecture, generalising a geometric
approach of Breuil and Mézard. I will discuss a proof of the geometric
version of the original conjecture, as well as work in progress on a
geometric version of the conjecture which does not make use of a fixed
residual representation.
The geometric Weil representation
Abstract
This is a sequel to Lecture I (given in the algebra seminar on Tuesday) and will be slightly more specialized. The finite Weil representation is the algebra object that governs the symmetries of the Hilbert space H = C(Z/p). The main objective of this talk is to introduce the geometric Weil representation, which is an algebro-geometric (l-adic perverse Weil sheaf) counterpart of the finite Weil representation. Then I will explain how the geometric Weil representation is used to prove the main technical results stated in Lecture I. Along the way, I will explain the Grothendieck geometrization procedure, by which sets are replaced by algebraic varieties and functions by sheaf-theoretic objects. This is joint work with R. Hadani (Austin).
Antibandwidth maximization: a graph colouring problem
13:00
Limit Order Books in Foreign Exchange Markets
Abstract
In recent years, limit order books have been adopted as the pricing mechanism in more than half of the world's financial markets. Thanks to recent technological advances, traders around the globe now have real-time access to limit order book trading platforms and can develop trading strategies that make use of this "ultimate microscopic level of description". In this talk I will briefly describe the limit order book trade-matching mechanism, and explain how the extra flexibility it provides has substantially changed the problem of how a market participant should optimally behave in a given set of circumstances. I will then discuss the findings from my recent statistical analysis of real limit order book data for spot trades of three highly liquid currency pairs (namely, EUR/USD, GBP/USD, and EUR/GBP) on a large electronic trading platform during May and June 2010, and discuss how a number of these findings highlight weaknesses in current models of limit order books.
12:30
Analysis of Global weak solutions for a class of Hydrodynamical Systems describing Quantum Fluids
Abstract
In this seminar I will present some results obtained jointly with P. Marcati concerning the global existence of weak solutions for the Quantum Hydrodynamics system in the space of energy. We do not require any additional regularity or smallness assumptions on the initial data. Our approach replaces the WKB formalism with a polar decomposition theory which is not limited by the presence of vacuum regions. In this way we set up a self-consistent theory, based only on the particle density and current density, which does not need to define velocity fields in the nodal regions. The mathematical techniques we use are based on uniform (with respect to the approximating parameter) Strichartz estimates and the local smoothing property.
I will then discuss some possible future extensions of the theory.
Derived Algebraic Geometry: a global picture I
Abstract
This is the first of two talks about Derived Algebraic Geometry. Owing to the vastness of the theory, the talks are conceived less as a detailed and precise exposition than as an advertisement for the theory and some of its interesting new features, which one should contemplate and try to understand, as they may also reveal new insights into classical objects. We will start with an introduction to the very basic idea of the theory, and we will present some motivations for introducing it. After a brief review of the existing literature and a speculation about homotopy theories and higher categorical structures, we will review the theory of dg-categories, model categories, S-categories and Segal categories. This is the technical part of the seminar, and it will give us the tools to understand the basic setting of Topos theory and Homotopical Algebraic Geometry, whose applications will be exploited in the next talk.
On the Unit Conjecture for Group Rings -- St Hugh's 80WR18
Abstract
I will present a history of the problem, relate it to other conjectures, and, with time permitting, indicate recent developments. The focus will primarily be group-theoretic and intended for the non-specialist.
17:00
Representation Theoretic Patterns in Digital Signal Processing I: Computing the Matched Filter in Linear Time
Abstract
In the digital radar problem we design a function (waveform) S(t) in the Hilbert space H = C(Z/p) of complex-valued functions on Z/p = {0,...,p-1}, the integers modulo a prime number p >> 0. We transmit the function S(t) using the radar toward the object that we want to detect. The wave S(t) hits the object and is reflected back as the echo wave R(t) in H, which has the form
R(t) = exp{2πiωt/p}⋅S(t+τ) + W(t),
where W(t) in H is a white noise, and τ,ω in ℤ/p, encode the distance from, and velocity of, the object.
Problem (digital radar problem) Extract τ,ω from R and S.
I will first introduce the classical matched filter (MF) algorithm, which suggests the 'traditional' way (using the fast Fourier transform) to solve the digital radar problem in order p^2⋅log(p) operations. I will then explain how to use techniques from group representation theory to design (construct) waveforms S(t) which enable us to introduce a fast matched filter (FMF) algorithm, which we call the "flag algorithm", and which solves the digital radar problem much faster, in order p⋅log(p) operations. I will demonstrate additional applications to mobile communication and the global positioning system (GPS).
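The p^2⋅log(p) operation count for the classical MF can be seen directly: for each of the p possible delay shifts τ, the sum over t is a length-p DFT in ω. A minimal noiseless sketch (a toy random-phase waveform; all names are mine, and this is not the representation-theoretic flag construction of the talk):

```python
import numpy as np

def matched_filter(R, S):
    """Ambiguity table M[tau, omega] = (1/p) sum_t R(t) conj(S(t+tau)) e^{-2 pi i omega t / p}.

    One length-p FFT per delay shift tau: order p^2 log(p) operations in total.
    """
    p = len(S)
    M = np.empty((p, p), dtype=complex)
    for tau in range(p):
        # np.roll(S, -tau)[t] == S[(t + tau) mod p]
        M[tau] = np.fft.fft(R * np.conj(np.roll(S, -tau))) / p
    return M

p = 101                                    # a prime modulus
rng = np.random.default_rng(0)
S = np.exp(2j * np.pi * rng.random(p))     # toy unit-modulus waveform
t = np.arange(p)
tau0, om0 = 17, 43                         # true delay and Doppler shift
R = np.exp(2j * np.pi * om0 * t / p) * np.roll(S, -tau0)  # noiseless echo
M = matched_filter(R, S)
tau_hat, om_hat = np.unravel_index(np.argmax(np.abs(M)), M.shape)
```

At the true shift pair the table entry has modulus 1, while off-peak entries are small, so the argmax recovers (τ, ω).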
This is joint work with A. Fish (Math, Madison), R. Hadani (Math, Austin), A. Sayeed (Electrical Engineering, Madison), and O. Schwartz (Electrical Engineering and Computer Science, Berkeley).
(HoRSe seminar) Towards mirror symmetry for varieties of general type II
Abstract
Assuming the natural compactification X of a hypersurface in (C^*)^n is smooth, it can exhibit any Kodaira dimension depending on the size and shape of the Newton polyhedron of X. In joint work with Mark Gross and Ludmil Katzarkov, we give a construction for the expected mirror symmetry partner of a complete intersection X in a toric variety which works for any Kodaira dimension of X. The mirror dual might be reducible and is equipped with a sheaf of vanishing cycles. We give evidence for the duality by proving the symmetry of the Hodge numbers when X is a hypersurface. The leading example will be the mirror of a genus two curve. If time permits, I will explain relations to homological mirror symmetry and the Gross-Siebert construction.
14:15
(HoRSe seminar) Towards mirror symmetry for varieties of general type I
Abstract
Assuming the natural compactification X of a hypersurface in (C^*)^n is smooth, it can exhibit any Kodaira dimension depending on the size and shape of the Newton polyhedron of X. In joint work with Mark Gross and Ludmil Katzarkov, we give a construction for the expected mirror symmetry partner of a complete intersection X in a toric variety which works for any Kodaira dimension of X. The mirror dual might be reducible and is equipped with a sheaf of vanishing cycles. We give evidence for the duality by proving the symmetry of the Hodge numbers when X is a hypersurface. The leading example will be the mirror of a genus two curve. If time permits, I will explain relations to homological mirror symmetry and the Gross-Siebert construction.
12:00
The Wess-Zumino-Witten model
Abstract
The WZW functional for a map from a surface to a Lie group has a role in the theory of harmonic maps, and it also arises as the determinant of a d-bar operator on the surface, as the action functional for a 2-dimensional quantum field theory, as the partition function of 3-dimensional Chern-Simons theory on a manifold with boundary, and as the norm-squared of a state-vector. It is intimately related to the quantization of the symplectic manifold of flat bundles on the surface, a fascinating test-case for different approaches to geometric quantization. It is also interesting as an example of interpolation between commutative and noncommutative geometry. I shall try to give an overview of the area, focussing on the aspects which are still not well understood.
11:00
Rossby wave dynamics of the extra-tropical response to El Nino. Part 2
Ornstein's L^1 non-inequalities and rank-one convexity.
String topology of classifying spaces
Abstract
Chataur and Menichi showed that the homology of the free loop space of the classifying space of a compact Lie group admits a rich algebraic structure: It is part of a homological field theory, and so admits operations parametrised by the homology of mapping class groups. I will present a new construction of this field theory that improves on the original in several ways: It enlarges the family of admissible Lie groups. It extends the field theory to an open-closed one. And most importantly, it allows for the construction of co-units in the theory. This is joint work with Anssi Lahtinen.
14:15
Generalized quark-antiquark potential of N=4 SYM at weak and strong coupling
Abstract
I will present a two-parameter family of Wilson loop operators in N = 4 supersymmetric Yang-Mills theory which interpolates smoothly between the 1/2 BPS line or circle and a pair of antiparallel lines. These observables capture a natural generalization of the quark-antiquark potential. These loops are calculated on the gauge theory side to second order in perturbation theory, and in a semiclassical expansion in string theory to one-loop order. The resulting determinants are given in integral form and can be evaluated numerically for general values of the parameters, or analytically in a systematic expansion around the 1/2 BPS configuration. I will comment on the feasibility of deriving all-loop results for these Wilson loops.
Excursions in Algebraic Topology
Abstract
Three short talks by the authors of essays on topics related to the C3 Algebraic Topology course: Whitehead's theorem, cohomology of fibre bundles, and division algebras.
OCCAM Group Meeting
Abstract
- Cameron Hall - Dislocations and discrete-to-continuum asymptotics: the summary
- Kostas Zygalakis - Multiscale methods: theory, numerics and applications
- Lian Duan - Barcode Detection and Deconvolution in Well Testing
Spectral decompositions and nonnormality of boundary integral operators in acoustic scattering
Abstract
Nonnormality is a well-studied subject in the context of partial differential operators. Yet little is known for boundary integral operators. The only well-studied case is the unit ball, where the standard single layer, double layer and conjugate double layer potential operators in acoustic scattering diagonalise in a unitary basis. In this talk we present recent results on the analysis of spectral decompositions and nonnormality of boundary integral operators on more general domains. One particular application is the analysis of stability constants for boundary element discretisations. We demonstrate how these are affected by nonnormality and give several numerical examples illustrating these issues on various domains.
The relativistic heat equation via optimal transportation methods
Abstract
The aim of this talk is to explain how to construct solutions to a
relativistic transport equation via a time discrete scheme based on an
optimal transportation problem.
First of all, I will present a joint work with J. Bertrand, where we prove the existence of an optimal map
for the Monge-Kantorovich problem associated to relativistic cost functions.
Then, I will explain a joint work with Robert McCann, where
we study the limiting process between the discrete and the continuous
equation.
A formula for the maximum voltage drop in on-chip power distribution networks.
Abstract
We will consider a simplified model for on-chip power distribution networks of array bonded integrated circuits. In this model the voltage is the solution of a Poisson equation in an infinite planar domain whose boundary is an array of circular or square pads of size $\epsilon$. We deal with the singular limit as $\epsilon\to 0$ and we are interested in deriving an explicit formula for the maximum voltage drop in the domain in terms of a power series in $\epsilon$. A procedure based on the method of matched asymptotic expansions will be presented to compute all the successive terms in the approximation, which can be interpreted as using multipole solutions of equations involving spatial derivatives of $\delta$-functions.
(HoRSe seminar) Real variation of stabilities and equivariant quantum cohomology II
Abstract
I will describe a version of the definition of stability conditions on a triangulated category to which we were led by the study of quantization of symplectic resolutions of singularities over fields of positive characteristic. Partly motivated by ideas of Tom Bridgeland, we conjectured a relation of this structure to equivariant quantum cohomology; this conjecture has been verified in some classes of examples. The talk is based on joint projects with Anno, Mirkovic, Okounkov and others.
(HoRSe seminar) Real variation of stabilities and equivariant quantum cohomology I
Abstract
I will describe a version of the definition of stability conditions on a triangulated category to which we were led by the study of quantization of symplectic resolutions of singularities over fields of positive characteristic. Partly motivated by ideas of Tom Bridgeland, we conjectured a relation of this structure to equivariant quantum cohomology; this conjecture has been verified in some classes of examples. The talk is based on joint projects with Anno, Mirkovic, Okounkov and others.
Orthogonality and stability in large matrix iterative algorithms
Abstract
Many iterative algorithms for large sparse matrix problems are based on orthogonality (or $A$-orthogonality, bi-orthogonality, etc.), but these properties can be lost very rapidly using vector orthogonalization (subtracting multiples of earlier supposedly orthogonal vectors from the latest vector to produce the next orthogonal vector). Yet many of these algorithms are some of the best we have for very large sparse problems, such as Conjugate Gradients, Lanczos' method for the eigenproblem, Golub and Kahan bidiagonalization, and MGS-GMRES.
Here we describe an ideal form of orthogonal matrix that arises from any sequence of supposedly orthogonal vectors. We illustrate some of its fascinating properties, including a beautiful measure of orthogonality of the original set of vectors. We will indicate how the ideal orthogonal matrix leads to expressions for new concepts of stability of such iterative algorithms. These are extensions of the concept of backward stability for matrix transformation algorithms that was so effectively developed and applied by J. H. Wilkinson (FRS). The resulting new expressions can be used to understand the subtle and effective performance of some (and hopefully eventually all) of these iterative algorithms.
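The rapid loss of orthogonality mentioned above is easy to reproduce. A standard textbook demonstration (my own toy example, not material from the talk): apply classical Gram-Schmidt to an ill-conditioned matrix and measure ||I - Q^T Q||.

```python
import numpy as np

def classical_gram_schmidt(A):
    """Orthonormalise the columns of A by subtracting from each new column
    its components along all previously computed vectors."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

n = 10
# The notoriously ill-conditioned Hilbert matrix, H[i, j] = 1 / (i + j + 1)
hilbert = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

Q = classical_gram_schmidt(hilbert)
loss = np.linalg.norm(np.eye(n) - Q.T @ Q)    # far above machine precision

Q_hh, _ = np.linalg.qr(hilbert)               # Householder QR, for comparison
loss_hh = np.linalg.norm(np.eye(n) - Q_hh.T @ Q_hh)
```

In double precision the Gram-Schmidt `loss` is many orders of magnitude above roundoff, while the Householder-based factorization stays orthogonal to near machine precision.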
Applying loads in bone tissue engineering problems
Abstract
Please note that this is taking place in the afternoon - partly to avoid a clash with the OCCAM group meeting in the morning.
OCCAM Group Meeting
Abstract
- Ian Griffiths - Control and optimization in filtration and tissue engineering
- Vladimir Zubkov - Comparison of the Navier-Stokes and the lubrication models for the tear film dynamics
- Victor Burlakov - Applying the ideas of first-order phase transformations to various nano-systems
Some linear algebra problems arising in the analysis of complex networks
The role of carbon in past and future climate
Abstract
There is much current concern over the future evolution of climate under conditions of increased atmospheric carbon. Much of the focus is on a bottom-up approach in which weather/climate models of severe complexity are solved and extrapolated beyond their presently validated parameter ranges. An alternative view takes a top-down approach, in which the past Earth itself is used as a laboratory; in this view, ice-core records show a strong association of carbon with atmospheric temperature throughout the Pleistocene ice ages. This suggests that carbon variations drove the ice ages. In this talk I build the simplest model which can accommodate this observation, and I show that it is reasonably able to explain the observations. The model can then be extrapolated to offer commentary on the cooling of the planet since the Eocene, and the likely evolution of climate under the current industrial production of atmospheric carbon.
Multilevel dual approach for pricing American style derivatives
Abstract
In this article we propose a novel approach to reduce the computational
complexity of the dual method for pricing American options.
We consider a sequence of martingales that converges to a given
target martingale and decompose the original dual representation into a sum of
representations that correspond to different levels of approximation to the
target martingale. By then replacing the true conditional expectations in each
representation with their Monte Carlo estimates, we arrive at what one may call a multilevel dual Monte
Carlo algorithm. The analysis of this algorithm reveals that the computational
complexity of getting the corresponding target upper bound, due to the target martingale,
can be significantly reduced. In particular, it turns out that using our new
approach, we may construct a multilevel version of the well-known nested Monte
Carlo algorithm of Andersen and Broadie (2004) that is, regarding complexity, virtually
equivalent to a non-nested algorithm. The performance of this multilevel
algorithm is illustrated by a numerical example. (joint work with Denis Belomestny)
Arguing about risks: a request for assistance
Abstract
The standard mathematical treatment of risk combines numerical measures of uncertainty (usually probabilistic) and loss (money and other natural estimators of utility). There are significant practical and theoretical problems with this interpretation. A particular concern is that the estimation of quantitative parameters is frequently problematic, particularly when dealing with one-off events such as political, economic or environmental disasters. Practical decision-making under risk, therefore, frequently requires extensions to the standard treatment.
An intuitive approach to reasoning under uncertainty has recently become established in computer science and cognitive science in which general theories (formalised in a non-classical first-order logic) are applied to descriptions of specific situations in order to construct arguments for and/or against claims about possible events. Collections of arguments can be aggregated to characterize the type or degree of risk, using the logical grounds of the arguments to explain, and assess the credibility of, the supporting evidence for competing claims. Discussions about whether a complex piece of equipment or software could fail, the possible consequences of such failure and their mitigation, for example, can be based on the balance and relative credibility of all the arguments. This approach has been shown to offer versatile risk management tools in a number of domains, including clinical medicine and toxicology (e.g. www.infermed.com; www.lhasa.com). Argumentation frameworks are also being used to support open discussion and debates about important issues (e.g. see debate on environmental risks at www.debategraph.org).
Despite the practical success of argument-based methods for risk assessment and other kinds of decision making, they typically either ignore measurement of uncertainty even when some quantitative data are available, or combine logical inference with quantitative uncertainty calculations in ad hoc ways. After a brief introduction to the argumentation approach I will demonstrate medical risk management applications of both kinds and invite suggestions for solutions which are mathematically more satisfactory.
Definitions (Hubbard: http://en.wikipedia.org/wiki/Risk)
Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years."
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs".
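In the standard treatment, these quantified possibilities collapse to a single expected-loss figure; a toy calculation with the quoted numbers (the aggregation into an expectation is the standard step, not part of Hubbard's definitions above):

```python
# 40% chance the proposed well is dry with a $12M loss; otherwise no loss.
outcomes = [(0.40, 12_000_000.0), (0.60, 0.0)]

# Expected loss = sum of probability-weighted losses: about $4.8M here.
expected_loss = sum(p * loss for p, loss in outcomes)
```

It is precisely this collapse into one number that the argumentation approach seeks to supplement when the probabilities themselves are hard to estimate.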
The conceptual background to the argumentation approach to reasoning under uncertainty is reviewed in the attached paper “Arguing about the Evidence: a logical approach”.
Tsunami asymptotics
Abstract
For most of their propagation, tsunamis are linear dispersive waves whose speed is limited by the depth of the ocean and which can be regarded as diffraction-decorated caustics in spacetime. For constant depth, uniform asymptotics gives a very accurate compact description of the tsunami profile generated by an arbitrary initial disturbance. Variations in depth can focus tsunamis onto cusped caustics, and this 'singularity on a singularity' constitutes an unusual diffraction problem, whose solution indicates that focusing can amplify the tsunami energy by an order of magnitude.
Sharpening `Manin-Mumford' for certain algebraic groups of dimension 2
Abstract
(Joint work with P. Corvaja and D. Masser.)
The topic of the talk arises from the
Manin-Mumford conjecture and its extensions, where we shall
focus on the case of (complex connected) commutative
algebraic groups $G$ of dimension $2$. The `Manin-Mumford'
context in these cases predicts finiteness for the set of
torsion points in an algebraic curve inside $G$, unless the
curve is of `special' type, i.e. a translate of an algebraic
subgroup of $G$.
In the talk we shall consider not merely the set of torsion
points, but its topological closure in $G$ (which turns out
to be also the maximal compact subgroup). In the case of
abelian varieties this closure is the whole space, but this is
not so for other $G$; actually, we shall prove that in certain
cases (where a natural dimensional condition is fulfilled) the
intersection of this larger set with a non-special curve
must still be a finite set.
We shall conclude by stating in brief some extensions of
this problem to higher dimensions.
Uniformizing Bun(G) by the affine Grassmannian
Abstract
I'll present the work of Gaitsgory, arXiv:1108.1741, in which he uses Beilinson-Drinfeld factorization techniques to uniformize the moduli stack of G-bundles on a curve. The main difference with the gauge-theoretic technique is that the affine Grassmannian is far from being contractible, but the fibers of the map to Bun(G) are contractible.
Lectures on: Bifurcation Theory and Applications to Elliptic Boundary-Value Problems
Abstract
• Sufficient conditions for bifurcation from points that are not isolated eigenvalues of the linearisation.
• Odd potential operators.
• Defining min-max critical values using sets of finite genus.
• Formulating some necessary conditions for bifurcation.
Climate, Assimilation of Data and Models - When Data Fail Us
Abstract
The fundamental task in climate variability research is to eke
out structure from climate signals. Ideally we want a causal
connection between a physical process and the structure of the
signal. Sometimes we have to settle for a correlation between
these. The challenge is that the data is often poorly
constrained and/or sparse. Even though many data gathering
campaigns are taking place or are being planned, the very high
dimensional state space of the system makes the prospects of
climate variability analysis from data alone impractical.
Progress in the analysis is possible by the use of models and
data. Data assimilation is one such strategy. In this talk we
will describe the methodology, illustrate some of its
challenges, and highlight some of the ways our group has
proposed to improve the methodology.