Effective behaviour of random media: From an error analysis to elliptic regularity theory
Abstract
There has been a great deal of attention paid to "Big Data" over the last few years. However, as often as not, the problem with the analysis of data is not so much the size as the complexity of the data. Even very small data sets can exhibit substantial complexity. There is therefore a need for methods of representing complex data sets beyond the usual linear or even polynomial models. The mathematical notion of shape, encoded in a metric, provides a very useful way to represent complex data sets. Topology, on the other hand, is the mathematical subdiscipline which concerns itself with studying shape, in all dimensions. In recent years, methods from topology have been adapted to the study of data sets, i.e. finite metric spaces. In this talk, we will discuss what has been done in this direction and what the future might hold, with numerous examples.
I will review Bott's classical periodicity result about topological K-theory (with period 2 in the case of complex K-theory, and period 8 in the case of real K-theory), and provide an easy sketch of proof, based on the algebraic periodicity of Clifford algebras. I will then introduce the `higher real K-theory' of Hopkins and Miller, also known as TMF. I'll discuss its periodicity (with period 576), and present a conjecture about a corresponding algebraic periodicity of `higher Clifford algebras'. Finally, applications to physics will be discussed.
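For orientation, the algebraic periodicity of Clifford algebras underlying Bott's result can be stated as follows (these are standard isomorphisms, not the conjectural `higher Clifford algebra' periodicity of the talk):

$$\mathrm{Cl}_{n+2}(\mathbb{C}) \;\cong\; \mathrm{Cl}_{n}(\mathbb{C}) \otimes M_2(\mathbb{C}), \qquad \mathrm{Cl}_{n+8}(\mathbb{R}) \;\cong\; \mathrm{Cl}_{n}(\mathbb{R}) \otimes M_{16}(\mathbb{R}),$$

mirroring the periods 2 and 8 of complex and real K-theory.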
Some physical and mathematical theories have the unfortunate feature that if one takes them at face value, many quantities of interest appear to be infinite! Various techniques, usually going under the common name of “renormalisation”, have been developed over the years to address this, allowing mathematicians and physicists to tame these infinities. We will dip our toes into some of the mathematical aspects of these techniques and see how they have recently been used to make precise analytical statements about the solutions of some equations whose meaning was not even clear until recently.
Optimization methods for large-scale machine learning must confront a number of challenges that are unique to this discipline. In addition to being scalable, parallelizable and capable of handling nonlinearity (even non-convexity), they must also be good learning algorithms. These challenges have spurred a great amount of research that I will review, paying particular attention to variance reduction methods. I will propose a new algorithm of this kind and illustrate its performance on text and image classification problems.
Based upon our joint work with M. Marcolli, I will introduce some algebraic geometric models in cosmology related to the "boundaries" of space-time: Big Bang, Mixmaster Universe, and Roger Penrose's crossovers between aeons. We suggest modelling the kinematics of the Big Bang using the algebraic geometric (or analytic) blow-up of a point $x$. This creates a boundary which consists of the projective space of tangent directions to $x$ and possibly of the light cone of $x$. We argue that time on the boundary undergoes a Wick rotation and becomes purely imaginary. The Mixmaster (Bianchi IX) model of the early history of the universe is neatly explained in this picture by postulating that the reverse Wick rotation follows a hyperbolic geodesic connecting the imaginary time axis to the real one. Roger Penrose's idea to see the Big Bang as a sign of crossover from "the end of the previous aeon" of the expanding and cooling Universe to the "beginning of the next aeon" is interpreted as an identification of a natural boundary of Minkowski space at infinity with the Big Bang boundary.
Quantum Mechanics presents a radically different perspective on physical reality compared with the world of classical physics. In particular, results such as the Bell and Kochen-Specker theorems highlight the essentially non-local and contextual nature of quantum mechanics. The rapidly developing field of quantum information seeks to exploit these non-classical features of quantum physics to transcend classical bounds on information processing tasks.
In this talk, we shall explore the rich mathematical structures underlying these results. The study of non-locality and contextuality can be expressed in a unified and generalised form in the language of sheaves or bundles, in terms of obstructions to global sections. These obstructions can, in many cases, be witnessed by cohomology invariants. There are also strong connections with logic. For example, Bell inequalities, one of the major tools of quantum information and foundations, arise systematically from logical consistency conditions.
These general mathematical characterisations of non-locality and contextuality also allow precise connections to be made with a number of seemingly unrelated topics, in classical computation, logic, and natural language semantics. By varying the semiring in which distributions are valued, the same structures and results can be recognised in databases and constraint satisfaction as in probability models arising from quantum mechanics. A rich field of contextual semantics, applicable to many of the situations where the pervasive phenomenon of contextuality arises, promises to emerge.
Universal fluctuations are shown to exist when well-known and widely used numerical algorithms are applied with random data. Similar universal behavior is shown in stochastic algorithms and algorithms that model neural computation. The question of whether universality is present in all, or nearly all, computation is raised. (Joint work with G. Menon, S. Olver and T. Trogdon.)
Evolution by natural selection has resulted in a remarkable diversity of organism morphologies. But is it possible for developmental processes to create “any possible shape”? Or are there intrinsic constraints? I will discuss our recent exploration into the shapes of bird beaks. Initially, inspired by the discovery of genes controlling the shapes of the beaks of Darwin's finches, we showed that the morphological diversity of the beaks of Darwin's finches is quantitatively accounted for by the mathematical group of affine transformations. We have extended this to show that the space of shapes of bird beaks is not large, and that a large phylogeny (including finches, cardinals, sparrows, etc.) is accurately spanned by only three independent parameters -- the shapes of these bird beaks are all pieces of conic sections. After summarizing the evidence for these conclusions, I will delve into our efforts to create mathematical models that connect these patterns to the developmental mechanisms leading to a beak. It turns out that there are simple (but precise) constraints on any mathematical model that leads to the observed phenomenology, leading to explicit predictions for the time dynamics of beak development in songbirds. Experiments testing these predictions for the development of zebra finch beaks will be presented.
Based on the following papers:
http://www.pnas.org/content/107/8/3356.short
http://www.nature.com/ncomms/2014/140416/ncomms4700/full/ncomms4700.html
Plateau's problem, named after the Belgian physicist J. Plateau, is a classic in the calculus of variations and concerns minimizing the area among all surfaces spanning a given contour. Although Plateau's original concern was $2$-dimensional surfaces in $3$-dimensional space, generations of mathematicians have considered the problem in full generality. A successful existence theory, that of integral currents, was developed by De Giorgi in the case of hypersurfaces in the fifties and by Federer and Fleming in the general case in the sixties. When dealing with hypersurfaces, the minimizers found in this way are rather regular: the corresponding regularity theory was the achievement of several mathematicians in the sixties, seventies and eighties (De Giorgi, Fleming, Almgren, Simons, Bombieri, Giusti and Simon, among others).
In codimension higher than one, a phenomenon which is absent for hypersurfaces, namely branching, causes very serious problems: a famous theorem of Wirtinger and Federer shows that any holomorphic subvariety in $\mathbb C^n$ is indeed an area-minimizing current. A celebrated monograph of Almgren solved the issue at the beginning of the eighties, proving that the singular set of a general area-minimizing (integral) current has (real) codimension at least 2. However, his original (typewritten) manuscript was more than 1700 pages long. In a recent series of works with Emanuele Spadaro we have given a substantially shorter and simpler version of Almgren's theory, building upon large portions of his program but also bringing in some new ideas from partial differential equations, metric analysis and metric geometry. In this talk I will try to give a feeling for the difficulties in the proof and how they can be overcome.
The surface subgroup problem asks whether a given group contains a subgroup that is isomorphic to the fundamental group of a closed surface. In this talk I will survey the role that the surface subgroup problem plays in some important solved and unsolved problems in the theory of 3-manifolds, in geometric group theory, and in the theory of arithmetic manifolds.
The height of a rational number a/b (a, b coprime integers) is defined as max(|a|, |b|). A rational number with small (resp. big) height is a simple (resp. complicated) number. Though the notion of height is naive, it has played a fundamental role in number theory. There are important variants of this notion. In 1983, when Faltings proved the Mordell conjecture (a conjecture formulated in 1921), he first proved the Tate conjecture for abelian varieties (also a great conjecture) by defining heights of abelian varieties, and then deduced the Mordell conjecture from this. The height of an abelian variety tells us how complicated the numbers needed to define the abelian variety are. In this talk, after these initial explanations, I will explain how this height can be generalized to heights of motives. (A motive is a kind of generalisation of an abelian variety.) This generalisation of height is related to open problems in number theory. If we can prove finiteness of the number of motives of bounded height, we can prove important conjectures in number theory, such as the general Tate conjecture and Mordell-Weil type conjectures, in many cases.
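The basic definition is easy to make concrete (a minimal sketch, not part of the talk; the function name is mine):

```python
from math import gcd

def height(a, b):
    """Height of the rational a/b: put a/b in lowest terms, then take max(|a|, |b|)."""
    if b == 0:
        raise ValueError("denominator must be nonzero")
    g = gcd(a, b)
    return max(abs(a // g), abs(b // g))

# Small heights correspond to "simple" rationals, large heights to "complicated"
# ones; note height(2, 4) == height(1, 2), since 2/4 = 1/2 in lowest terms.
```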
"We introduce some type of generalized Poisson formula which is equivalent
to Langlands' automorphic transfer from an arbitrary reductive group over a
global field to a general linear group."
The quantification and management of risk in financial markets
is at the center of modern financial mathematics. But until recently, risk
assessment models did not consider the effects of inter-connectedness of
financial agents and the way risk diversification impacts the stability of
markets. I will give an introduction to these problems and discuss the
implications of some mathematical models for dealing with them.
There are many recent points of contact of model theory and other
parts of mathematics: o-minimality and Diophantine geometry, geometric group
theory, additive combinatorics, rigid geometry,... I will probably
emphasize long-standing themes around stability, Diophantine geometry, and
analogies between ODE's and bimeromorphic geometry.
Many geophysical flows over topography can be modeled by two-dimensional
depth-averaged fluid dynamics equations. The shallow water equations
are the simplest example of this type, and are often sufficiently
accurate for simulating tsunamis and other large-scale flows such
as storm surge. These hyperbolic partial differential equations
can be modeled using high-resolution finite volume methods. However,
several features of these flows lead to new algorithmic challenges,
e.g. the need for well-balanced methods to capture small perturbations
to the ocean at rest, the desire to model inundation and flooding,
and the vastly differing spatial scales that must often be modeled,
making adaptive mesh refinement essential. I will discuss some of
the algorithms implemented in the open source software GeoClaw that
is aimed at solving real-world geophysical flow problems over
topography. I'll also show results of some recent studies of the
11 March 2011 Tohoku Tsunami and discuss the use of tsunami modeling
in probabilistic hazard assessment.
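For reference, the two-dimensional shallow water equations over bathymetry $b(x,y)$ take the standard form, with depth $h$, depth-averaged velocity $(u,v)$ and gravitational constant $g$:

$$\begin{aligned} h_t + (hu)_x + (hv)_y &= 0,\\ (hu)_t + \bigl(hu^2 + \tfrac12 g h^2\bigr)_x + (huv)_y &= -g h\, b_x,\\ (hv)_t + (huv)_x + \bigl(hv^2 + \tfrac12 g h^2\bigr)_y &= -g h\, b_y. \end{aligned}$$

The source terms on the right are what make well-balanced methods necessary: in the ocean-at-rest steady state the pressure gradient must cancel the bathymetry term exactly, and a scheme that does not respect this cancellation discretely will generate spurious waves larger than the tsunami being modeled.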
A map between metric spaces is a bilipschitz homeomorphism if it
is Lipschitz and has a Lipschitz inverse; a map is a bilipschitz embedding
if it is a bilipschitz homeomorphism onto its image. Given metric spaces
X and Y, one may ask if there is a bilipschitz embedding X--->Y, and if
so, one may try to find an embedding with minimal distortion, or at least
estimate the best bilipschitz constant. Such bilipschitz embedding
problems arise in various areas of mathematics, including geometric group
theory, Banach space geometry, and geometric analysis; in the last 10
years they have also attracted a lot of attention in theoretical computer
science.
The lecture will be a survey of bilipschitz embedding in Banach spaces from
the viewpoint of geometric analysis.
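For finite metric spaces the notion of distortion is completely concrete (an illustrative sketch of mine, not from the lecture): the bilipschitz constant of an embedding is the product of its maximal expansion and maximal contraction over all pairs of points.

```python
from itertools import combinations
from math import dist

def distortion(d_X, points_Y):
    """Distortion of the map sending point i of a finite metric space
    (given by its distance matrix d_X) to the Euclidean point points_Y[i]:
    (max stretch) * (max compression) over all pairs."""
    expansion = contraction = 0.0
    for i, j in combinations(range(len(points_Y)), 2):
        dx, dy = d_X[i][j], dist(points_Y[i], points_Y[j])
        expansion = max(expansion, dy / dx)
        contraction = max(contraction, dx / dy)
    return expansion * contraction

# The 4-cycle graph metric embedded as the unit square: adjacent vertices keep
# distance 1, but opposite vertices (graph distance 2) land at distance sqrt(2),
# so this embedding into the plane has distortion sqrt(2).
C4 = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Minimizing this quantity over all embeddings of X into Y is exactly the optimization problem described above.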
Consider a fully-connected social network of people, companies,
or countries, modeled as an undirected complete graph with real numbers on
its edges. Positive edges link friends; negative edges link enemies.
I'll discuss two simple models of how the edge weights of such networks
might evolve over time, as they seek a balanced state in which "the enemy of
my enemy is my friend." The mathematical techniques involve elementary
ideas from linear algebra, random graphs, statistical physics, and
differential equations. Some motivating examples from international
relations and social psychology will also be discussed. This is joint work
with Seth Marvel, Jon Kleinberg, and Bobby Kleinberg.
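One simple deterministic model of this kind (I believe the one studied in this line of work; the sketch below assumes the form dX/dt = X², in which the tie between i and k is reinforced through every mutual acquaintance j) can be simulated in a few lines:

```python
def matmul(A, B):
    """Product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def evolve(X, h=0.01, steps=20):
    """Forward-Euler integration of dX/dt = X*X (matrix square) for a
    symmetric matrix X of friendship (+) / enmity (-) weights."""
    n = len(X)
    X = [row[:] for row in X]
    for _ in range(steps):
        X2 = matmul(X, X)
        X = [[X[i][j] + h * X2[i][j] for j in range(n)] for i in range(n)]
    return X

def sign(x):
    return (x > 0) - (x < 0)

def is_balanced(X):
    """Balance: every triangle has a positive product of edge signs, so
    'the enemy of my enemy is my friend' holds throughout the network."""
    n = len(X)
    return all(sign(X[i][j]) * sign(X[j][k]) * sign(X[i][k]) > 0
               for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n))

# Two hostile factions {0, 1} vs {2, 3}: a balanced sign pattern, which this
# dynamics preserves (each weight only grows in magnitude).
v = [1.0, 1.0, -1.0, -1.0]
X0 = [[vi * vj for vj in v] for vi in v]
```

For generic initial conditions the weights blow up in finite time, and the sign pattern at blow-up is the balanced state the network settles into.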
What is a phase transition?
The first thing that comes to mind is the boiling and freezing of water. The material clearly changes its behaviour without any chemical reaction. One way to arrive at a mathematical model is to associate different material behavior, i.e., constitutive laws, to different phases. This is a continuum physics viewpoint, and when a law for the switching between phases is specified, we arrive at PDE problems. The oldest paper on such a problem, by Clapeyron and Lamé, is nearly 200 years old; it is basically on what was later called the Stefan problem for the heat equation.
The law for switching is given e.g. by the melting temperature. This can be taken to be a phenomenological law or thermodynamically justified as an equilibrium condition.
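In its simplest one-phase form the Stefan problem couples the heat equation to a law of motion for the melting front $s(t)$ (a standard textbook formulation, stated here for orientation):

$$u_t = u_{xx} \quad (0 < x < s(t)), \qquad u(s(t), t) = 0, \qquad L\,\dot{s}(t) = -u_x(s(t), t),$$

where $u$ is the temperature, the interface sits at the melting temperature $u = 0$, and the latent heat $L$ times the front velocity balances the heat flux arriving at the interface.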
The theory does not explain delayed switching (undercooling) and it does not give insight in structural differences between the phases.
To some extent the first can be explained with the help of a free energy associated with the interface between different phases. This was proposed by Gibbs, is relevant on small space scales, and leads to mean curvature equations for the interface – the so-called Gibbs-Thomson condition.
The equations do not by themselves lead to a unique evolution. Indeed, closing the resulting PDEs with a reasonable switching or nucleation law is an open problem.
Based on atomistic concepts, making use of surface energy in a purely phenomenological way, Becker and Döring developed a model for nucleation as a kinetic theory for size distributions of nuclei. The internal structure of each phase is still not considered in this ansatz.
An easier problem concerns solid-solid phase transitions. The theory is furthest developed in the context of equilibrium statistical mechanics on lattices, starting with the Ising model for ferromagnets. In this context phases correspond to (extremal) equilibrium Gibbs measures in infinite volume. Interfacial free energy appears as a finite volume correction to the free energy.
The drawback is that the theory is still basically equilibrium and isothermal. There is no satisfactory theory of metastable states and of local kinetic energy in this framework.
Free groups, free abelian groups and fundamental groups of
closed orientable surfaces are the most basic and well-understood examples
of infinite discrete groups. The automorphism groups of these groups, in
contrast, are some of the most complex and intriguing groups in all of
mathematics. I will give some general comments about geometric group
theory and then describe the basic geometric object, called Outer space,
associated to automorphism groups of free groups.
This Colloquium talk is the first of a series of three lectures given by
Professor Vogtmann, who is the European Mathematical Society Lecturer. In
this series of three lectures, she will discuss groups of automorphisms
of free groups, while drawing analogies with the general linear group over
the integers and surface mapping class groups. She will explain modern
techniques for studying automorphism groups of free groups, which include
a mixture of topological, algebraic and geometric methods.
Yves Couder and co-workers have recently reported the results of a startling series of experiments in which droplets bouncing on a fluid surface exhibit several dynamical features previously thought to be peculiar to the microscopic realm. In an attempt to
develop a connection between the fluid and quantum systems, we explore the Madelung transformation, whereby Schrödinger's equation is recast in hydrodynamic form. New experiments are presented which indicate the potential value of this hydrodynamic approach to both visualizing and understanding quantum mechanics.
Voiculescu showed how the large N limit of the expected value of the trace of a word in n independent Hermitian N x N matrices gives a well-known von Neumann algebra. In joint work with Guionnet and Shlyakhtenko it was shown that this idea makes sense in the context of very general planar algebras, where one works directly in the large N limit. This allowed us to define matrix models with a non-integral number of random matrices. I will present this work and some of the subsequent work, together with future hopes for the theory.
Graeme Segal shall describe some of Dan Quillen’s work, focusing on his amazingly productive period around 1970, when he not only invented algebraic K-theory in the form we know it today, but also opened up several other lines of research which are still in the front line of mathematical activity. The aim of the talk will be to give an idea of some of the mathematical influences which shaped him, of his mathematical perspective, and also of his style and his way of approaching mathematical problems.
"Scattering amplitudes in gauge theories and gravity have extraordinary properties that are completely invisible in the textbook formulation of quantum field theory using Feynman diagrams. In this usual approach, space-time locality and quantum-mechanical unitarity are made manifest at the cost of introducing huge gauge redundancies in our description of physics. As a consequence, apart from the very simplest processes, Feynman diagram calculations are enormously complicated, while the final results turn out to be amazingly simple, exhibiting hidden infinite-dimensional symmetries. This strongly suggests the existence of a new formulation of quantum field theory where locality and unitarity are derived concepts, while other physical principles are made more manifest. The past few years have seen rapid advances towards uncovering this new picture, especially for the maximally supersymmetric gauge theory in four dimensions.
These developments have interwoven and exposed connections between a remarkable collection of ideas from string theory, twistor theory and integrable systems, as well as a number of new mathematical structures in algebraic geometry. In this talk I will review the current state of this subject and describe a number of ongoing directions of research.
There are nontrivial solutions of the incompressible Euler equations which are compactly supported in space and time. If they were to model the motion of a real fluid, we would see it suddenly start moving after staying at rest for a while, without any action by an external force. There are $C^1$ isometric embeddings of a fixed flat rectangle in arbitrarily small balls of three-dimensional space. You should therefore be able to put a fairly large piece of paper in a pocket of your jacket without folding or crumpling it. I will discuss the corresponding mathematical theorems, point out some surprising relations and give evidence that, maybe, they are not merely a mathematical game.
Anomalous (non-local) diffusion processes appear in many subjects: phase transitions, fracture dynamics, and game theory. I will describe some of the issues involved, and in particular existence and regularity for some non-local versions of the p-Laplacian, of non-variational nature, that appear in non-local tug-of-war games.
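A standard example of a non-local diffusion operator of this kind (the normalising constant, and the exact variant studied in the talk, may differ) is the fractional Laplacian

$$(-\Delta)^s u(x) = c_{d,s}\,\mathrm{P.V.}\int_{\mathbb{R}^d} \frac{u(x) - u(y)}{|x - y|^{d + 2s}}\,dy,$$

with a non-local p-Laplacian analogue obtained by replacing the difference $u(x)-u(y)$ by $|u(x)-u(y)|^{p-2}(u(x)-u(y))$ and the kernel exponent $d+2s$ by $d+sp$.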
The isoperimetric inequality is a fundamental tool in many geometric and analytical issues, beside being the starting point for a great variety of other important inequalities.
We shall present some recent results dealing with the quantitative version of this inequality, an old question raised by Bonnesen at the beginning of last century. Applications of the sharp quantitative isoperimetric inequality to other classic inequalities and to eigenvalue problems will be also discussed.
Let L be a positive definite lattice. There are only finitely many positive definite lattices
L' which are isomorphic to L modulo N for every N > 0: in fact, there is a formula for the number of such lattices, called the Siegel mass formula. In this talk, I'll review the Siegel mass formula and how it can be deduced from a conjecture of Weil on volumes of adelic points of algebraic groups. This conjecture was proven for number fields by Kottwitz, building on earlier work of Langlands and Lai. I will conclude by sketching joint work (in progress) with Dennis Gaitsgory, which uses topological ideas to attack Weil's conjecture in the case of function fields.
Since the work of Feigenbaum and Coullet-Tresser on universality in the period doubling bifurcation, it has been understood that crucial features of unimodal (one-dimensional) dynamics depend on the behavior of a renormalization (an infinite-dimensional) dynamical system. While the initial analysis of renormalization was mostly focused on the proof of existence of hyperbolic fixed points, Sullivan was the first to address more global aspects, starting a program to prove that the renormalization operator has a uniformly hyperbolic (hence chaotic) attractor. Key to this program is the proof of exponential convergence of renormalization along suitable ``deformation classes'' of the complexified dynamical system. Subsequent works of McMullen and Lyubich have addressed many important cases, mostly by showing that some fine geometric characteristics of the complex dynamics imply exponential convergence.
We will describe recent work (joint with Lyubich) which moves the focus to the abstract analysis of holomorphic iteration in deformation spaces. It shows that exponential convergence does follow from rougher aspects of the complex dynamics (corresponding to precompactness features of the renormalization dynamics), which enables us to conclude exponential convergence in all cases.
We shall report on the use of algebraic geometry for the calculation of Feynman amplitudes (work of Bloch, Brown, Esnault and Kreimer). Or how to combine Grothendieck's motives with high energy physics in an unexpected way, radically distinct from string theory.
Many problems from combinatorics, number theory, quantum field theory and topology lead to power series of a special kind called q-hypergeometric series. Sometimes, like in the famous Rogers-Ramanujan identities, these q-series turn out to be modular functions or modular forms. A beautiful conjecture of W. Nahm, inspired by quantum theory, relates this phenomenon to algebraic K-theory.
In a different direction, quantum invariants of knots and 3-manifolds also sometimes seem to have modular or near-modular properties, leading to new objects called "quantum modular forms".
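The modularity phenomenon for q-series can be checked numerically. A small sketch of mine (function names are not from any source) verifying the first Rogers-Ramanujan identity, $\sum_{n\ge 0} q^{n^2}/\prod_{k=1}^{n}(1-q^k) = \prod_{n\ge 0} (1-q^{5n+1})^{-1}(1-q^{5n+4})^{-1}$, coefficient by coefficient:

```python
def mul_geom(coeffs, k, T):
    """Multiply the truncated power series `coeffs` by 1/(1 - q^k) in place,
    keeping terms up to q^T (forward recurrence c_n = a_n + c_{n-k})."""
    for n in range(k, T + 1):
        coeffs[n] += coeffs[n - k]
    return coeffs

def rr_sum_side(T):
    """sum_{n>=0} q^{n^2} / ((1-q)(1-q^2)...(1-q^n)), truncated at q^T."""
    total = [0] * (T + 1)
    n = 0
    while n * n <= T:
        term = [0] * (T + 1)
        term[n * n] = 1
        for k in range(1, n + 1):
            mul_geom(term, k, T)
        total = [a + b for a, b in zip(total, term)]
        n += 1
    return total

def rr_product_side(T):
    """prod over parts congruent to 1 or 4 mod 5 of 1/(1-q^k), truncated at q^T."""
    coeffs = [0] * (T + 1)
    coeffs[0] = 1
    for k in range(1, T + 1):
        if k % 5 in (1, 4):
            mul_geom(coeffs, k, T)
    return coeffs
```

Both sides begin 1 + q + q^2 + q^3 + 2q^4 + 2q^5 + 3q^6 + ..., the coefficients counting partitions into parts congruent to 1 or 4 mod 5.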
A key birational invariant of a compact complex manifold is its "canonical ring."
The ring of modular forms in one or more variables is an example of a canonical ring. Recent developments in higher dimensional algebraic geometry imply that the canonical ring is always finitely generated: this is a long-awaited major foundational result in algebraic geometry.
In this talk I define all the terms and discuss the result, some applications, and a recent remarkable direct proof by Lazic.
An overview of the early history of the soliton (1960-1970) and of equipartition in nonlinear 1D lattices: from Fermi-Pasta-Ulam to Korteweg-de Vries to nonlinear Schrödinger, and recent developments.
I shall give a gentle introduction to the cohomology of finite groups from the point of view of algebra, topology, group actions and number theory.
A common question in evolutionary biology is whether evolutionary processes leave some sort of signature in the shape of the phylogenetic tree of a collection of present day species.
Similarly, computer scientists wonder if the current structure of a network that has grown over time reveals something about the dynamics of that growth.
Motivated by such questions, it is natural to seek to construct ``statistics'' that somehow summarise the shape of trees and more general graphs, and to determine the behaviour of these quantities when the graphs are generated by specific mechanisms.
The eigenvalues of the adjacency and Laplacian matrices of a graph are obvious candidates for such descriptors.
I will discuss how relatively simple techniques from linear algebra and probability may be used to understand the eigenvalues of a very broad class of large random trees. These methods differ from those that have been used thus far to study other classes of large random matrices such as those appearing in compact Lie groups, operator algebras, physics, number theory, and communications engineering.
This is joint work with Shankar Bhamidi (U. of British Columbia) and Arnab Sen (U.C. Berkeley).
A random environment (in Z^d) is a collection of (random) transition probabilities, indexed by sites. Perform now a random walk using these transitions. This model is easy to describe, yet presents significant challenges to analysis. In particular, even elementary questions concerning long term behavior, such as the existence of a law of large numbers, are open. I will review in this talk the model, its history, and recent advances, focusing on examples of unexpected behavior.
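The model really is easy to set down, as a one-dimensional sketch shows (the site probabilities and their range here are illustrative choices of mine):

```python
import random

def random_environment(L, rng):
    """An i.i.d. environment on {-L, ..., L}: each site x gets its own
    probability p_x of stepping right (uniform in (0.1, 0.9) here)."""
    return {x: rng.uniform(0.1, 0.9) for x in range(-L, L + 1)}

def walk(env, steps, rng):
    """Random walk started at 0: at site x, step +1 with probability env[x],
    else -1.  The environment is drawn once and then frozen ("quenched")."""
    x, path = 0, [0]
    for _ in range(steps):
        x += 1 if rng.random() < env[x] else -1
        path.append(x)
    return path

# A walk of n steps stays in [-n, n], so an environment of radius n suffices.
rng = random.Random(0)
env = random_environment(100, rng)
path = walk(env, 100, rng)
```

The difficulty is entirely in the analysis: the environment breaks the translation invariance that classical random walk arguments rely on.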
New techniques in cell and molecular biology have produced huge advances in our understanding of signal transduction and cellular response in many systems, and this has led to better cell-level models for problems ranging from biofilm formation to embryonic development. However, many problems involve very large numbers of cells, and detailed cell-based descriptions are computationally prohibitive at present. Thus rational techniques for incorporating cell-level knowledge into macroscopic equations are needed for these problems. In this talk we discuss several examples that arise in the context of cell motility and pattern formation. We will discuss systems in which the micro-to-macro transition can be made more or less completely, and also describe other systems that will require new insights and techniques.
A lattice in the plane is a discrete subgroup in R^2 isomorphic to Z^2; it is unimodular if the area of the quotient is 1. The space of unimodular lattices is a venerable object in mathematics related to topology, dynamics and number theory. In this talk, I'd like to present a guided tour of this space, focusing on its topological aspect. I will describe in particular the periodic orbits of the modular flow, giving rise to beautiful "modular knots". I will show some animations.
We shall begin with simple Weyl type asymptotic formulae for the spectrum of Dirichlet Laplacians and eventually prove a new result which I have recently obtained, jointly with J. Dolbeault and M. Loss. Following Eden and Foias, we derive a matrix version of a generalised Sobolev inequality in one dimension. This allows us to improve on the known estimates of best constants in Lieb-Thirring inequalities for the sum of the negative eigenvalues for multi-dimensional Schrödinger operators.
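For orientation, the two classical statements referred to above take the following standard forms (these are not the new result of the talk). For the Dirichlet Laplacian on a bounded domain $\Omega \subset \mathbb{R}^d$, the Weyl law gives the counting function of eigenvalues below $\lambda$:

$$N(\lambda) \sim \frac{\omega_d\,|\Omega|}{(2\pi)^d}\,\lambda^{d/2} \qquad (\lambda \to \infty),$$

with $\omega_d$ the volume of the unit ball, while for a Schrödinger operator $-\Delta + V$ the Lieb-Thirring inequalities bound moments of the negative eigenvalues $\lambda_j$ by an integral of the negative part of the potential:

$$\sum_j |\lambda_j|^{\gamma} \;\le\; L_{\gamma, d} \int_{\mathbb{R}^d} V_-(x)^{\gamma + d/2}\,dx.$$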
Bio: Ari Laptev received his PhD in Mathematics from Leningrad University (LU) in 1978, under the supervision of Michael Solomyak. He is well known for his contributions to the spectral theory of differential operators. Between 1972 and 1982 he was employed first as a junior researcher and then as Assistant Professor at the Mathematics & Mechanics Department of LU. In 1981-82 he held a post-doc position at the University of Stockholm, and in 1982 he lost his position at LU due to his marriage to a British subject. Up until his emigration to England in 1987 he worked as a builder, constructing houses in small villages in the Novgorod district of Russia. In 1987 he was employed in Sweden, first as a lecturer at Linköping University and then, from 1992, at the Royal Institute of Technology (KTH). In 1999 he became a professor at KTH and also Vice Chairman of its Mathematics Department. In 1992 he was granted Swedish citizenship. Ari Laptev was the President of the Swedish Mathematical Society from 2001 to 2003 and the President of the Organizing Committee of the Fourth European Congress of Mathematics in Stockholm in 2004. Since January 2007 he has been employed by Imperial College London. Ari Laptev has supervised twelve PhD students. From January 2007 until the end of 2010 he is serving as President of the European Mathematical Society.
Random planar curves arise in a natural way in statistical mechanics, for example as the boundaries of clusters in critical percolation or the Ising model. There has been a great deal of mathematical activity in recent years in understanding the measure on these curves in the scaling limit, under the name of Schramm-Loewner Evolution (SLE) and its extensions. On the other hand, the scaling limit of these lattice models is also believed to be described, in a certain sense, by conformal field theory (CFT). In this talk, after an introduction to these two sets of ideas, I will give a theoretical physicist's viewpoint on possible direct connections between them.
John Cardy studied Mathematics at Cambridge. After some time at CERN, Geneva he joined the physics faculty at Santa Barbara. He moved to Oxford in 1993 where he is a Senior Research Fellow at All Souls College and a Professor of Physics. From 2002-2003 and 2004-2005 he was a member of the IAS, Princeton. Among other work on the applications of quantum field theory, in the 1980s he helped develop the methods of conformal field theory. Professor Cardy is a Fellow of the Royal Society, a recipient of the 2000 Paul Dirac Medal and Prize of the Institute of Physics, and of the 2004 Lars Onsager Prize of the American Physical Society "for his profound and original applications of conformal invariance to the bulk and boundary properties of two-dimensional statistical systems."
I shall report on a programme of research which is joint with Terence Tao. Our
goal is to count the number of solutions to a system of linear equations, in
which all variables are prime, in as much generality as possible. One success of
the programme so far has been an asymptotic for the number of four-term
arithmetic progressions p_1 < p_2 < p_3 < p_4 <= N of primes, defined by the
pair of linear equations p_1 + p_3 = 2p_2, p_2 + p_4 = 2p_3. The talk will be
accessible to a general audience.
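The progressions in question are easy to enumerate by brute force for small N (a sketch of mine, of course entirely unrelated to the Green-Tao methods, which yield asymptotics far beyond any computer search):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(n + 1) if sieve[p]]

def count_4term_prime_aps(N):
    """Count p1 < p2 < p3 < p4 <= N with p1 + p3 = 2*p2 and p2 + p4 = 2*p3,
    i.e. four-term arithmetic progressions consisting entirely of primes."""
    ps = set(primes_up_to(N))
    count = 0
    for p1 in sorted(ps):
        # d <= (N - p1) // 3 guarantees p1 + 3d <= N.
        for d in range(1, (N - p1) // 3 + 1):
            if p1 + d in ps and p1 + 2 * d in ps and p1 + 3 * d in ps:
                count += 1
    return count
```

Up to N = 30 there are exactly two such progressions: 5, 11, 17, 23 and 11, 17, 23, 29.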
Aggregation refers to the thermodynamically favoured coalescence of individual molecular units (monomers) into dense clusters. The formation of liquid drops in oversaturated vapour, or the precipitation of solids from liquid solutions, are 'everyday' examples. A more exotic example, the crystallization of hydrophobic proteins in lipid bilayers, comes from current biophysics.
This talk begins with the basic physics of the simplest classical model, in which clusters grow by absorbing or expelling monomers, and the free monomers are transported by diffusion. Next comes the description of three successive 'eras' of the aggregation process: NUCLEATION is the initial creation of clusters whose sizes are sufficiently large that they most likely continue to grow, instead of dissolving back into monomers.
The essential physical idea is growth by unlikely fluctuations past a high free energy barrier. The GROWTH of the clusters after nucleation depletes the initial oversaturation of monomer. The free energy barrier against nucleation increases, effectively shutting off any further nucleation. Finally, the oversaturation is so depleted, that the largest clusters grow only by dissolution of the smallest. This final era is called COARSENING.
The initial rate of nucleation and the evolution of the cluster size distribution during coarsening are the subjects of classical, well known models. The 'new meat' of this talk is a 'global' model of aggregation that quantifies the nucleation era, and provides an effective initial condition for the evolution of the cluster size distribution during growth and coarsening. One by-product is the determination of explicit scales of time and cluster size for all three eras. In particular, if G_* is the initial free energy barrier against nucleation, then the characteristic time of the nucleation era is proportional to exp(2G_*/5k_BT), and the characteristic number of monomers in a cluster during the nucleation era is exp(3G_*/5k_BT). Finally, the 'global' model of aggregation informs the selection of the self similar cluster size distribution that characterizes 'mature' coarsening.
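In the classical picture, the free energy of a cluster of $n$ monomers combines a negative bulk term and a positive surface term (a standard formulation, with the constants of the talk's model abstracted into $\delta\mu$ and $\sigma$):

$$G(n) = -n\,\delta\mu + \sigma\,n^{2/3}, \qquad n_* = \left(\frac{2\sigma}{3\,\delta\mu}\right)^{3}, \qquad G_* = G(n_*) = \frac{4\,\sigma^3}{27\,\delta\mu^2},$$

so the barrier $G_*$ at the critical size $n_*$ is what clusters must cross by unlikely fluctuations during the nucleation era.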