Explicit rational points on elliptic curves
Abstract
I will discuss an efficient algorithm for computing certain special values of p-adic L-functions, giving an application to the explicit construction of
rational points on elliptic curves.
The industrial prilling process is among the most widely used techniques for generating monodisperse droplets. In this process, long curved jets are generated from a rotating drum, which in turn break up and form droplets. In this talk we describe the experimental set-up and the theory used to model this process. We will consider the effects of changing the rheology of the fluid, as well as the addition of surface-active agents to modify breakup characteristics. Both temporal and spatial instability will be considered, as well as nonlinear numerical simulations with comparisons to experiments.
In this talk we present an overview of some recent developments concerning the a posteriori error analysis and adaptive mesh design of $h$- and $hp$-version discontinuous Galerkin finite element methods for the numerical approximation of second-order quasilinear elliptic boundary value problems. In particular, we consider the derivation of computable bounds on the error measured in terms of an appropriate (mesh-dependent) energy norm in the case when a two-grid approximation is employed. In this setting, the fully nonlinear problem is first computed on a coarse finite element space $V_{H,P}$. The resulting 'coarse' numerical solution is then exploited to provide the data needed to linearise the underlying discretisation on the finer space $V_{h,p}$; thereby, only a linear system of equations is solved on the richer space $V_{h,p}$. Here, an adaptive $hp$-refinement algorithm is proposed which automatically selects the local mesh size and local polynomial degrees on both the coarse and fine spaces $V_{H,P}$ and $V_{h,p}$, respectively. Numerical experiments confirming the reliability and efficiency of the proposed mesh refinement algorithm are presented.
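To illustrate the two-grid idea in the simplest possible setting, here is a sketch of my own (a 1D model problem $-(a(u)u')'=f$ discretised by finite differences with Picard linearisation, not the $hp$-DG method of the talk): the nonlinear problem is solved only on the coarse grid, and the fine grid sees a single linear solve with the coefficient frozen at the interpolated coarse solution.

```python
import numpy as np

def assemble(u, x, f):
    """Finite-difference system for -(a(u) u')' = f with the coefficient a
    frozen at the given approximation u (homogeneous Dirichlet data)."""
    n = len(x) - 2                      # number of interior nodes
    h = x[1] - x[0]
    a = 1.0 + u**2                      # model nonlinearity a(u) = 1 + u^2
    am = 0.5 * (a[:-1] + a[1:])         # coefficient at cell midpoints
    A = np.zeros((n, n))
    b = f(x[1:-1]) * h**2
    for i in range(n):
        A[i, i] = am[i] + am[i + 1]
        if i > 0:
            A[i, i - 1] = -am[i]
        if i < n - 1:
            A[i, i + 1] = -am[i + 1]
    return A, b

def solve_nonlinear(x, f, iters=30):
    """Fully nonlinear solve by Picard (frozen-coefficient) iteration."""
    u = np.zeros_like(x)
    for _ in range(iters):
        A, b = assemble(u, x, f)
        u[1:-1] = np.linalg.solve(A, b)
    return u

f = lambda s: np.ones_like(s)
xH = np.linspace(0.0, 1.0, 9)           # coarse grid, stands in for V_{H,P}
xh = np.linspace(0.0, 1.0, 65)          # fine grid, stands in for V_{h,p}
uH = solve_nonlinear(xH, f)             # nonlinear problem solved on the coarse grid only
uH_on_h = np.interp(xh, xH, uH)         # transfer the coarse solution to the fine grid
A, b = assemble(uH_on_h, xh, f)         # linearise about the coarse solution ...
uh = np.zeros_like(xh)
uh[1:-1] = np.linalg.solve(A, b)        # ... and perform a single linear solve
print("two-grid solution, max value ≈", uh.max())
```

The adaptive selection of local mesh size and polynomial degree discussed in the talk sits on top of this basic coarse-solve/fine-linear-solve loop.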
I will claim (and maybe show) that a lot of problems in differential geometry can be reformulated in terms of non-linear elliptic differential operators. After reviewing the theory of linear elliptic operators, I will show what can be said about the non-linear setting.
In this lecture I will report on joint work with J. Casado-Díaz, T. Chacón Rebollo, V. Girault and M. Gómez Mármol which was published in Numerische Mathematik, vol. 105 (2007), pp. 337-510.
We consider, in dimension $d\ge 2$, the standard $P^1$ finite element approximation of the second-order linear elliptic equation in divergence form with coefficients in $L^\infty(\Omega)$, which generalizes Laplace's equation. We assume that the family of triangulations is regular and that it satisfies a hypothesis close to the classical hypothesis which implies the discrete maximum principle. When the right-hand side belongs to $L^1(\Omega)$, we prove that the unique solution of the discrete problem converges in $W^{1,q}_0(\Omega)$ (for every $q$ with $1 \le q < \frac{d}{d-1}$) to the unique renormalized solution of the problem. We obtain a weaker result when the right-hand side is a bounded Radon measure. In the case where the dimension is $d=2$ or $d=3$ and where the coefficients are smooth, we give an error estimate in $W^{1,q}_0(\Omega)$ when the right-hand side belongs to $L^r(\Omega)$ for some $r > 1$.
As Herb Sutter predicted in 2005, "The Free Lunch is Over": software programmers can no longer rely on exponential performance improvements from Moore's Law. Computationally intensive software now relies on concurrency for improved performance, as at the high end supercomputers are being built with millions of processing cores, and at the low end GPU-accelerated workstations feature hundreds of simultaneous execution cores. It is clear that the numerical software of the future will be highly parallel, but what language will it be written in?
Over the past few decades, high-level scientific programming languages have become an important platform for numerical codes. Languages such as MATLAB, IDL, and R offer powerful advantages: they allow code to be written in a language more familiar to scientists and they permit development to occur in an evolutionary fashion, bypassing the relatively slow edit/compile/run/plot cycle of Fortran or C. Because a scientist’s programming time is typically much more valuable than the computing cycles their code will use, these are substantial benefits. However, programs written in such languages are not portable to high performance computing platforms and may be too slow to be useful for realistic problems on desktop machines. Additionally, the development of such interpreted language codes is partially wasteful in the sense that it typically involves reimplementation (with associated debugging) of some algorithms that already exist in well-tested Fortran and C codes. Python stands out as the only high-level language with both the capability to run on parallel supercomputers and the flexibility to interface with existing libraries in C and Fortran.
Our code, PyClaw, began as a Python interface, written by University of Washington graduate student Kyle Mandli, to the Fortran library Clawpack, written by University of Washington Professor Randy LeVeque. PyClaw was designed to build on the strengths of Clawpack by providing greater accessibility. In this talk I will describe the design and implementation of PyClaw, which incorporates the advantages of a high-level language, yet achieves serial performance similar to a hand-coded Fortran implementation and runs on the world's fastest supercomputers. It brings new numerical functionality to Clawpack, while making maximal reuse of code from that package. The goal of this talk is to introduce the design principles we considered in implementing PyClaw, demonstrate our testing infrastructure for developing within PyClaw, and illustrate how we elegantly and efficiently distributed problems over tens of thousands of cores using the PETSc library for portable parallel performance. I will also briefly highlight a new mathematical result recently obtained from PyClaw, an investigation of solitary wave formation in periodic media in 2 dimensions.
Graph products of groups naturally generalize direct and free products and have a rich subgroup structure. Basic examples of graph products are right-angled Coxeter and Artin groups. I will discuss various forms of the Tits Alternative for subgroups and
their stability under graph products. The talk will be based on joint work with Yago Antolin Pichel.
We show the solvability of a proposed Generalized Buckley-Leverett System, which is related to the multidimensional Muskat Problem. Moreover, we discuss some important questions concerning singular limits of the proposed model.
Symplectic field theory (SFT) can be viewed as a TQFT approach to Gromov-Witten theory. As in Gromov-Witten theory, transversality for the Cauchy-Riemann operator is not satisfied in general, due to the presence of multiply-covered curves. When the underlying simple curve is sufficiently nice, I will outline how the transversality problem for their multiple covers can be elegantly solved using finite-dimensional obstruction bundles of constant rank. By fixing the underlying holomorphic curve, we furthermore define a local version of SFT by counting only multiple covers of this chosen curve. After introducing gravitational descendants, we use this new version of SFT to prove that a stable hypersurface intersecting an exceptional sphere (in a homologically nontrivial way) in a closed four-dimensional symplectic manifold must carry an elliptic orbit. Here we use that the local Gromov-Witten potential of the exceptional sphere factors through the local SFT invariants of the breaking orbits appearing after neck-stretching along the hypersurface.
We discuss some recent developments on the following long-standing problem known as Ryser's
conjecture. Let $H$ be an $r$-partite $r$-uniform hypergraph. A matching in $H$ is a set of disjoint
edges, and we denote by $\nu(H)$ the maximum size of a matching in $H$. A cover of $H$ is a set of
vertices that intersects every edge of $H$. It is clear that there exists a cover of $H$ of size at
most $r\nu(H)$, but it is conjectured that there is always a cover of size at most $(r-1)\nu(H)$.
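Writing $\tau(H)$ for the minimum size of a cover, the trivial bound and the conjectured strengthening read
\[
\tau(H) \;\le\; r\,\nu(H)
\qquad\text{versus}\qquad
\tau(H) \;\le\; (r-1)\,\nu(H) \quad\text{(Ryser's conjecture)},
\]
the trivial bound coming from taking all vertices of the edges of a maximum matching.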
Abstract: In this talk, I will discuss the peeling behaviour of the Weyl tensor near null infinity for asymptotically flat higher dimensional spacetimes. The result is qualitatively different from the peeling property in 4d. Also, I will discuss the rewriting of the Bondi energy flux in terms of "Newman-Penrose" Weyl components.
This talk will consist of a pure PDE part, and an applied part. The unifying topic is mean curvature flow (MCF), and particularly mean curvature flow starting at cones. This latter subject originates from the abstract consideration of uniqueness questions for flows in the presence of singularities. Recently, this theory has found applications in several quite different areas, and I will explain the connections with Harnack estimates (which I will explain from scratch) and also with the study of the dynamics of charged fluid droplets.
There are essentially no prerequisites. It would help to be familiar with basic submanifold geometry (e.g. second fundamental form) and intuition concerning the heat equation, but I will try to explain everything and give the talk at colloquium level.
Joint work with Sebastian Helmensdorfer.
I will discuss some aspects of the simplicial theory of
infinity-categories which originates with Boardman and Vogt, and has
recently been developed by Joyal, Lurie and others. The main purpose of
the talk will be to present an extension of this theory which covers
infinity-operads. It is based on a modification of the notion of
simplicial set, called 'dendroidal set'. One of the main results is that
the category of dendroidal sets carries a monoidal Quillen model
structure, in which the fibrant objects are precisely the infinity-operads,
and which contains the Joyal model structure for infinity-categories as a
full subcategory.
(The lecture will be mainly based on joint work with Denis-Charles
Cisinski.)
This talk is devoted to Talagrand's transport-entropy inequality and its deep connections to the concentration of measure phenomenon, large deviation theory and logarithmic Sobolev inequalities. After an introductory part on the field, I will present recent results obtained with P.-M. Samson and C. Roberto establishing the equivalence of Talagrand's inequality to a restricted version of the logarithmic Sobolev inequality. If time permits, I will also present some work in progress about transport inequalities in a discrete setting.
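For reference, a probability measure $\mu$ is said to satisfy the transport-entropy inequality $\mathbf{T}_2(C)$ if
\[
W_2(\nu,\mu)^2 \;\le\; C\, H(\nu\,|\,\mu) \qquad\text{for all probability measures } \nu,
\]
where $W_2$ is the quadratic Wasserstein distance and $H(\cdot\,|\,\mu)$ the relative entropy; Talagrand established this for the standard Gaussian measure with $C=2$.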
Abstract: First we provide a survey on the long-time behaviour of stochastic delay equations with bounded memory, addressing existence and uniqueness of invariant measures, Lyapunov spectra, and exponential growth rates.
Then, we study the very simple one-dimensional equation $dX(t)=X(t-1)dW(t)$ in more detail and establish the existence of a deterministic exponential growth rate of a suitable norm of the solution via a Furstenberg-Hasminskii-type formula.
Parts of the talk are based on joint work with Martin Hairer and Jonathan Mattingly.
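As a purely illustrative complement (not part of the talk), the growth rate of such a norm can be estimated by brute-force simulation; the step size, horizon, constant initial segment and the choice of the sup-norm of the solution segment below are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, delay, T, n_paths = 1e-2, 1.0, 50.0, 500
lag, n = int(delay / dt), int(T / dt)

# Euler-Maruyama for dX(t) = X(t-1) dW(t), with the (arbitrary) constant
# initial segment X ≡ 1 on [-1, 0]; column lag + k holds X(k*dt).
X = np.ones((n_paths, lag + n + 1))
for k in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X[:, lag + k + 1] = X[:, lag + k] + X[:, k] * dW

# take the sup-norm of the final solution segment on [T-1, T] as the
# "suitable norm" and estimate the exponential growth rate (1/T) log ||X_T||
seg_norm = np.abs(X[:, -(lag + 1):]).max(axis=1)
print("estimated exponential growth rate ≈", np.log(seg_norm).mean() / T)
```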
The AdS/CFT correspondence is a powerful tool to analyse strongly coupled quantum field
theories. Over the past few years there has been a surge of activity aimed at finding
possible applications to condensed matter systems. One focus has been to holographically
realise various kinds of phases via the construction of fascinating new classes of black
hole solutions. In this framework, I will discuss the possibility of describing finite
temperature phase transitions leading to spontaneous breaking of translational invariance of
the dual field theory at strong coupling. Along with the general setup I will also discuss
specific string/M theory embeddings of the corresponding symmetry breaking modes leading to
the description of such phases.
The part of the West Antarctic Ice Sheet that drains into the Amundsen Sea is currently thinning at such a rate that it contributes nearly 10 percent of the observed rise in global mean sea level. Acceleration of the outlet glaciers means that the sea level contribution has grown over the past decades, while the likely future contribution remains a key unknown. The synchronous response of several independent glaciers, coupled with the observation that thinning is most rapid at their downstream ends, where the ice goes afloat, hints at an oceanic driver. The general assumption is that the changes are a response to an increase in submarine melting of the floating ice shelves that has been driven in turn by an increase in the transport of ocean heat towards the ice sheet. Understanding the causes of these changes and their relationship with climate variability is imperative if we are to make quantitative estimates of sea level into the future.
Observations made since the mid‐1990s on the Amundsen Sea continental shelf have revealed that the seabed troughs carved by previous glacial advances guide seawater around 3‐4°C above the freezing point from the deep ocean to the ice sheet margin, fuelling rapid melting of the floating ice. This talk summarises the results of several pieces of work that investigate the chain of processes linking large‐scale atmospheric processes with ocean circulation over the continental shelf and beneath the floating ice shelves and the eventual transfer of heat to the ice. While our understanding of the processes is far from complete, the pieces of the jigsaw that have been put into place give us insight into the potential causes of variability in ice shelf melting, and allow us to at least formulate some key questions that still need to be answered in order to make reliable projections of future ice sheet evolution in West Antarctica.
Algorithmic trade execution has become a standard technique
for institutional market players in recent years,
particularly in the equity market where electronic
trading is most prevalent. A trade execution algorithm
typically seeks to execute a trade decision optimally
upon receiving inputs from a human trader.
A common form of optimality criterion seeks to
strike a balance between minimizing price impact and
minimizing timing risk. For example, in the case of
selling a large number of shares, a fast liquidation will
cause the share price to drop, whereas a slow liquidation
will expose the seller to timing risk due to the
stochastic nature of the share price.
We compare optimal liquidation policies in continuous time in
the presence of trading impact using numerical solutions of
Hamilton-Jacobi-Bellman (HJB) partial differential equations
(PDE). In particular, we compare the time-consistent
mean-quadratic-variation strategy (Almgren and Chriss) with the
time-inconsistent (pre-commitment) mean-variance strategy.
The Almgren and Chriss strategy should be viewed as the
industry standard.
We show that the two different risk measures lead to very different
strategies and liquidation profiles.
In terms of the mean-variance efficient frontier, the
original Almgren/Chriss strategy is significantly sub-optimal
compared to the (pre-commitment) mean-variance strategy.
This is joint work with Stephen Tse, Heath Windcliff and
Shannon Kennedy.
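As a point of reference (textbook material, not a result of the talk): in the continuous-time Almgren-Chriss model with temporary impact coefficient $\eta$, volatility $\sigma$ and risk aversion $\lambda$, the optimal holdings follow $x(t) = X\,\sinh(\kappa(T-t))/\sinh(\kappa T)$ with $\kappa=\sqrt{\lambda\sigma^2/\eta}$. The sketch below simply evaluates this trajectory for hypothetical parameter values.

```python
import numpy as np

# Hypothetical parameters: X shares to sell over horizon T (days), daily
# volatility sigma (price units), temporary impact eta, risk aversion lam.
X, T = 1e6, 5.0
sigma, eta, lam = 0.5, 1e-6, 1e-6

kappa = np.sqrt(lam * sigma**2 / eta)   # "urgency": more risk aversion -> faster selling
t = np.linspace(0.0, T, 11)
holdings = X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

for ti, xi in zip(t, holdings):
    print(f"t = {ti:4.1f}   shares remaining ≈ {xi:12.0f}")
```

Larger $\lambda$ makes the schedule more front-loaded, while $\lambda \to 0$ recovers the linear (risk-neutral) liquidation profile.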
In recent years, surprising connections between type theory and homotopy theory have been discovered. In this talk I will recall the notions of intensional type theories and identity types. I will describe "infinity groupoids", formal algebraic models of topological spaces, and explain how identity types carry the structure of an infinity groupoid. I will finish by discussing categorical semantics of intensional type theories.
The talk will take place in Lecture Theatre B, at the Department of Computer Science.
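As background for the identity-type discussion, here is a minimal sketch in Lean (my own illustration, with hypothetical names): identity types as an inductive family, together with symmetry and transitivity, the first pieces of the infinity-groupoid structure, obtained by path induction (pattern matching).

```lean
-- Sketch: identity types ("paths") as an inductive family; symm and trans,
-- obtained by path induction, are the first pieces of the infinity-groupoid
-- structure carried by identity types.
inductive Path {A : Type} : A → A → Type where
  | refl (a : A) : Path a a

def Path.symm {A : Type} {a b : A} : Path a b → Path b a
  | .refl x => .refl x

def Path.trans {A : Type} {a b c : A} : Path a b → Path b c → Path a c
  | .refl _, q => q
```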
Problem #1: (marker-less scaling) Poikos ltd. has created algorithms for matching photographs of humans to three-dimensional body scans. Due to variability in camera lenses and body sizes, the resulting three-dimensional data is normalised to have unit height and has no absolute scale. The problem is to assign an absolute scale to normalised three-dimensional data.
Prior Knowledge: A database of similar (but different) reference objects with known scales. An imperfect 1:1 mapping from the input coordinates to the coordinates of each object within the reference database. A projection matrix mapping the three-dimensional data to the two-dimensional space of the photograph (involves a non-linear and non-invertible transform; x=(M*v)_x/(M*v)_z, y=(M*v)_y/(M*v)_z).
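For concreteness, the stated projection can be coded directly; the sketch below assumes a 3x4 matrix M acting on homogeneous coordinates, and the matrix and point shown are placeholders.

```python
import numpy as np

def project(M, v):
    """Perspective projection of a 3D point v by a 3x4 matrix M,
    following x = (M v)_x / (M v)_z, y = (M v)_y / (M v)_z."""
    vh = np.append(v, 1.0)              # homogeneous coordinates
    p = M @ vh
    return p[0] / p[2], p[1] / p[2]     # depth is divided out, hence no absolute scale

# hypothetical camera looking down the z-axis
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
print(project(M, np.array([0.2, -0.1, 2.0])))   # -> (0.1, -0.05)
```

The division by $(Mv)_z$ is exactly why the transform is non-invertible and why scale must be recovered from the reference database rather than from the photograph alone.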
Problem #2: (improved silhouette fitting) Poikos ltd. has created algorithms for converting RGB photographs of humans in (approximate) poses into silhouettes. Currently, a multivariate Gaussian mixture model is used as a first pass. This is imperfect, and would benefit from an improved statistical method. The problem is to determine the probability that a given three-component colour at a given two-component location should be considered as "foreground" or "background".
Prior Knowledge: A sparse set of colours which are very likely to be skin (foreground), and their locations. May include some outliers. A (larger) sparse set of colours which are very likely to be clothing (foreground), and their locations. May include several distributions in the case of multi-coloured clothing, and will probably include vast variations in luminosity. A (larger still) sparse set of colours which are very likely to be background. Will probably overlap with skin and/or clothing colours. A very approximate skeleton for the subject.
Limitations: Sample colours are chosen "safely". That is, they are chosen in areas known to be away from edges. This causes two problems: highlights and shadows are not accounted for, and colours from arms and legs are under-represented in the model. All colours may be "saturated"; that is, information is lost about colours which are "brighter than white". All colours are subject to noise; each colour can be considered as a true colour plus a random variable from a Gaussian distribution. The weight of this Gaussian model is constant across all luminosities, that is, darker colours contain more relative noise than brighter colours.
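A simple baseline in the spirit of the existing Gaussian-mixture first pass (a sketch on synthetic placeholder samples, not Poikos's algorithm): fit one Gaussian per labelled sample set in the joint colour-location space and classify each pixel via Bayes' rule.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian(samples):
    """Mean and (regularised) covariance of (R, G, B, x, y) samples."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-3 * np.eye(samples.shape[1])
    return multivariate_normal(mu, cov)

# hypothetical sparse training samples, each row = (R, G, B, x, y)
rng = np.random.default_rng(0)
skin   = rng.normal([200, 160, 140, 0.5, 0.3], [12, 12, 12, 0.05, 0.05], size=(50, 5))
cloth  = rng.normal([ 40,  60, 120, 0.5, 0.6], [15, 15, 15, 0.08, 0.10], size=(80, 5))
backgr = rng.normal([120, 120, 120, 0.5, 0.5], [40, 40, 40, 0.30, 0.30], size=(120, 5))

fg_models = [fit_gaussian(skin), fit_gaussian(cloth)]
bg_model = fit_gaussian(backgr)

def prob_foreground(pixel, prior_fg=0.5):
    """P(foreground | colour, location) by Bayes' rule over the fitted Gaussians.
    Uses a simple max over foreground components; a weighted sum would give the
    full mixture."""
    p_fg = max(m.pdf(pixel) for m in fg_models) * prior_fg
    p_bg = bg_model.pdf(pixel) * (1.0 - prior_fg)
    return p_fg / (p_fg + p_bg)

print(prob_foreground(np.array([195, 155, 138, 0.5, 0.32])))  # skin-like input -> near 1
```

A more faithful treatment of the stated limitations would make the noise covariance luminosity-dependent and truncate the likelihood for saturated colours.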