News

Tuesday, 23 April 2019

How do additive and multiplicative structures interact?

Oxford Mathematician Joni Teräväinen talks about his work concerning prime factorisations of consecutive integers and its applications in number theory.

"Analytic number theorists study the properties of the positive integers $1,2,3,\ldots$, and in particular the multiplicative structure of this set. From an additive point of view, the positive integers are rather simple: every integer is generated by $1$ (meaning that any number can be written as a sum of some number of ones). But what are the multiplicative generators of the integers? These are the prime numbers $2,3,5,7,11,\ldots$ (meaning that every integer is the product of primes in a unique way), and their distribution is a subtle and interesting question that has been at the centre of analytic number theory research for the past 150 years. The celebrated prime number theorem from 1897 states that the number of primes up to a large number $x$, denoted by $\pi(x)$, is approximately $x/\log x$ (more precisely, the ratio of $\pi(x)$ and $x/\log x$ approaches $1$ as $x$ tends to infinity). But how good exactly is this approximation? It turns out that the optimal answer to that question is equivalent to solving the Riemann hypothesis, one of the great unsolved problems in modern mathematics.

The distribution of prime numbers can be viewed as a purely multiplicative question. However, many of the most notorious conjectures in number theory arise when one combines additive structure with multiplicative structure. As an example, we have the unsolved twin prime conjecture: Are there infinitely many primes $p$ such that $p+2$ is also a prime? The list of such $p$ begins $3,5,11,17,29,41,\ldots$, and the largest known twin prime to date has $388 342$ decimal digits. The twin prime problem combines addition (by looking at the shift $p+2$) with an inherently multiplicative structure, the primes. The resolution of the twin prime conjecture seems to be out of reach (despite spectacular progress by Zhang, Maynard, Tao and others on gaps between primes, demonstrating that $p$ and $p+M$ are infinitely often simultaneously prime for some fixed $M\geq 2$), so one is led to look for related problems that are somewhat more manageable.

The Liouville function $\lambda(n)$ is an important number-theoretic function given by the formula $\lambda(n)=(-1)^{\Omega(n)}$, where $\Omega(n)$ is the number of prime divisors of $n$ (counting multiplicities, so for example $12=2\cdot 2\cdot 3$ has $\Omega(12)=3$). The Liouville function is intimately related to the primes and to multiplicative properties of the integers, since one easily sees that $\lambda(p)=-1$ for all primes $p$ and $\lambda(mn)=\lambda(m)\lambda(n)$ for all $m,n\geq 1.$ For example, one can show that the prime number theorem stated above is equivalent to finding cancellation in the partial sums of $\lambda(n)$ (or more precisely, it is equivalent to $\lim_{x\to \infty}\frac{1}{x}\sum_{n\leq x}\lambda(n)=0$).
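A minimal Python sketch of these formulas (the cut-off $10^6$ is an arbitrary choice): a sieve over prime powers gives $\Omega(n)$, hence $\lambda(n)=(-1)^{\Omega(n)}$, and the average of $\lambda(n)$ up to $x$ is indeed small.

```python
# Sketch of lambda(n) = (-1)**Omega(n) via a sieve, plus the PNT-equivalent
# statement that (1/x) * sum_{n<=x} lambda(n) tends to 0. Cut-off is illustrative.

def liouville_up_to(x):
    """Return a list lam with lam[n] = (-1)**Omega(n) for 1 <= n <= x."""
    omega = [0] * (x + 1)
    for p in range(2, x + 1):
        if omega[p] == 0:              # p is prime: no smaller prime has touched it
            pk = p
            while pk <= x:             # count each prime power once -> multiplicity
                for m in range(pk, x + 1, pk):
                    omega[m] += 1
                pk *= p
    return [0] + [(-1) ** omega[n] for n in range(1, x + 1)]

x = 10**6
lam = liouville_up_to(x)
print(lam[1:11])            # +1, -1, -1, +1, -1, +1, -1, -1, +1, +1
print(sum(lam[1:]) / x)     # close to 0: the partial sums cancel
```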

If one looks at the values of the Liouville function up to a large threshold (the sequence starts $+1,-1,-1,+1,-1,+1,-1,-1,+1,+1,\ldots$), it appears to behave just like a random sequence of signs, with no apparent structure in it. This is called the Möbius randomness principle in number theory (because it's often formulated for the Möbius function, a function closely related to the Liouville function $\lambda(n)$). One instance of this principle is that the partial sums of $\lambda(n)$ should behave like the sums of random variables taking values $\pm 1$ with equal probability (so there would be square-root cancellation in the partial sums $\sum_{n\leq x}\lambda(n)$). It has been shown that a rigorous proof of this is equivalent to the Riemann hypothesis, and therefore appears to be out of reach of current methods. A different but equally valid way to specify how the Liouville sequence appears random is to assert that the sequence $\lambda(n)$ forgets its history. This means that the value of $\lambda(n+1)$ is asymptotically independent of the value of $\lambda(n)$ (much in the same way as the $n+1$st result of a series of coin tosses is independent of the $n$th result). The same should hold for longer patterns of shifts, so if someone picks a large integer $n$ and lists you the values of $\lambda(n),\lambda(n+1),\ldots, \lambda(n+9)$, that should be of no use if you're trying to guess the value of $\lambda(n+10)$. A precise version of this statement was conjectured by Chowla in the 1960s, and it says the following.

Conjecture (Chowla's conjecture): Let $k\geq 1$. Then for any signs $s_1,\ldots, s_k\in \{-1,+1\}$, the asymptotic proportion of those integers $n\geq 1$ for which $\lambda(n+1)=s_1$, $\lambda(n+2)=s_2,\ldots, \lambda(n+k)=s_k$ is equal to $2^{-k}$.
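Reusing `liouville_up_to` from the sketch above, the $k=2$ case is easy to check empirically (note this counts with natural density up to an arbitrary cut-off, whereas the results discussed below use logarithmic averaging): each of the four sign patterns occurs for roughly a quarter of the integers.

```python
# Empirical check of Chowla's conjecture for k = 2, using liouville_up_to above.
from collections import Counter

x = 10**6
lam = liouville_up_to(x)
pairs = Counter(zip(lam[1:], lam[2:]))            # patterns (lambda(n), lambda(n+1))
total = sum(pairs.values())
for signs in sorted(pairs):
    print(signs, round(pairs[signs] / total, 4))  # each close to 0.25
```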

Chowla's conjecture is closely related to the twin prime conjecture mentioned earlier, but at the same time Chowla's conjecture seems more tractable, due to the multiplicative property of $\lambda(n)$. Note that the twin prime conjecture would certainly imply as a special case that $\lambda(n)=\lambda(n+2)=-1$ infinitely often. Conversely, it is known that a sufficiently strong version of Chowla's conjecture could be used to deduce the twin prime conjecture (but no one has proved Chowla's conjecture, let alone this strengthening of it).

In 2015, Tao was the first one to make significant progress on Chowla's conjecture, building on the work of Matomäki and Radziwill on multiplicative functions. He showed that the case $k=2$ of Chowla's conjecture holds (with the technical modification that he considered the logarithmic probability, rather than asymptotic probability). Thus for example $\lambda(n)=\lambda(n+1)=-1$ holds for exactly $25\%$ of all the integers (again with logarithmic probability). In 2017, in joint work with Tao, we showed that Chowla's conjecture holds for $k=3$ as well (and we showed that a different formulation of it, which states that the correlation sums $\sum_{n\leq x}\lambda(n)\lambda(n+1)\cdots \lambda(n+k-1)$ exhibit cancellation, holds for any odd value of $k$). Again, we had to use the logarithmic probability instead of the asymptotic one. In a later work, we partially removed the restriction to logarithmic probability. Now we can for example say that $\lambda(n)=\lambda(n+1)=\lambda(n+2)=-1$ holds for exactly $12.5\%$ of all the integers.

Chowla's conjecture turns out to be far from being an isolated problem, and in the last few years methods that were originally developed to tackle partial cases of Chowla's conjecture, such as the ones above, have proved to be fruitful for several other problems as well, including some problems outside of number theory. For example, Tao used his work on the $k=2$ case of Chowla's conjecture (or rather a generalisation of it) to settle the Erdős discrepancy problem, a famous open problem in combinatorics. This notorious problem is the superficially simple statement that if you take any sequence $s(n)$ of signs $\pm 1$, then the partial sums $\sum_{n=1}^{j}s(dn)$ can be made arbitrarily large in absolute value by choosing suitable values of $j$ and $d$. In a different interesting development, Frantzikinakis and Host used the previously mentioned work on Chowla's conjecture as an ingredient to make progress on Sarnak's conjecture, an important conjecture in ergodic theory that is also related to the randomness of the Liouville sequence. Clearly, much remains to be studied regarding Chowla's conjecture and its applications and generalisations."

For more on Joni's work with Terry Tao on the structure of logarithmically averaged correlations of multiplicative functions please click here; and for more on their research into the structure of correlations of multiplicative functions at almost all scales click here.

Monday, 15 April 2019

Oxford Mathematics Public Lectures: Julia Wolf - The Power of Randomness. 30 April 2019

Far from taking us down the road of unpredictability and chaos, randomness has the power to help us solve a fascinating range of problems. Join Julia Wolf on a mathematical journey from penalty shoot-outs to internet security and patterns in the primes. 

Julia Wolf is University Lecturer in the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge.

5-6pm 
Mathematical Institute
Oxford

Please email external-relations@maths.ox.ac.uk to register.

Watch live:
https://facebook.com/OxfordMathematics
https://livestream.com/oxuni/wolf

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Wednesday, 3 April 2019

Max-min-plus mathematics. Or how to make the trains run on time.

Oxford Mathematician Ebrahim Patel talks about his work using max-plus algebra to improve airport scheduling and to define optimal railway networks.

"Typically arising from scheduling problems, max-plus algebra (MPA) offers an elegant way to understand the behaviour of a complex system depicted as a network of nodes. For example, an arrow points from node a to node b if b is dependent on a. Thus, I used MPA to better schedule Dubai's airport activities as part of the Oxford-Emirates Data Science Lab. The task was to model the dependence on activities such as cleaning and passenger boarding times to ultimately reduce delays and their impact.

Fig 1 shows a network of some such activities that need to take place before the departure of a plane. For example, activity 28 (departure) is dependent on the completion of activities 18 and 27, taking 20 and 10 minutes respectively. Additionally, at least 30 minutes must elapse before this network of activities is repeated, which represents the preparation of a subsequent departure at the same gate. Scheduling is thus made periodic by connecting the last activity, 28, to activity 1, indicated by connecting node A. 

Figure 1: Network of activities prior to a plane departure

 

Suppose that we know $x_{28}(k)$, the $k^\text{th}$ departure time at this gate. Then the earliest time (in minutes) that this set of activities can start again for the next departure is $$ x_1(k+1) = x_{28}(k) + 30 \ .$$ The start times of all other activities are obtained similarly. In particular, for activities like 28, that depend on more than one activity, we have $$ x_{28}(k) = \max\{ x_{18}(k) + 20, x_{27}(k)+10 \} \ .$$ Overall, we obtain a system of equations, which has a natural interpretation in MPA. The eigenvalue of the max-plus system corresponds to the turnaround time of the network; here, we obtain a turnaround time of $141$ minutes, which can also be found by employing critical path analysis. However, MPA gives a deterministic, linear algebraic handle on this process, and it can also deal with a more complicated network of activities, where the period of the activity times is larger than one. How is it linear? Via the simple operator transformations: $\max \longmapsto \oplus $ and $+ \longmapsto \otimes $. The operators $\oplus$ and $\otimes$ play the same role in MPA as $+$ and $\times$ do in conventional mathematics (this is easy to check!).
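To make the $\oplus$/$\otimes$ arithmetic concrete, here is a minimal max-plus iteration in Python; the $2\times 2$ matrix is an invented two-station toy example, not the 28-activity airport network. Entry $A_{ij}$ is the delay between the $k$-th event at node $j$ and the $(k+1)$-st event at node $i$, and the long-run growth rate of $x(k)$ recovers the max-plus eigenvalue, i.e. the steady period of the schedule.

```python
# Toy max-plus recurrence x(k+1) = A (x) x(k), where "oplus" is max and
# "otimes" is +. The matrix entries below are invented illustrative delays.
import numpy as np

NEG_INF = -np.inf                     # the max-plus "zero": no dependence at all

def maxplus_matvec(A, x):
    """Max-plus matrix-vector product: (A otimes x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x[np.newaxis, :], axis=1)

A = np.array([[2.0, 5.0],             # delays (in minutes, say) between events
              [3.0, 3.0]])

x = np.zeros(2)                        # both nodes start at time 0
K = 200
for _ in range(K):
    x = maxplus_matvec(A, x)

print(x / K)                           # ~[4, 4]: the max-plus eigenvalue, i.e.
                                       # the system settles into a 4-minute cycle
```

The same recurrence is exactly the railway rule discussed below: a departure waits for the latest relevant arrival, so each departure time is a maximum of arrival times plus travel times.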

The work with Emirates ended before being implemented. But it was another transport application that inspired this model. In the railway network, we can think of nodes as stations and edges as tracks. By imposing the rule that trains at each station must only depart once trains have arrived from other stations, a max-plus equation is formed since the earliest departure time of a train is the maximum of the times of arriving trains. Whilst this might seem constraining, it is in fact a standard starting point for efficient mathematical modelling. The beauty of MPA is the intimate link between the mathematics and network theory: one only needs a standard understanding of the mathematics, following which problems can often be solved by straightforward optimisation of network structures.

Indeed, through supervising Masters-level projects in this area, most recently for the Oxford MMSC course, a big question that has been addressed is: 'what is the optimal railway network?' Starting with an abstraction of the railway network, then strategically adding nodes and edges, it turns out that a one-way ring layout is best! Whilst this allows stations to wait only for one train before departures, it obviously looks impractical; for instance, to travel from London Euston to Oxford, it seems one would have to go through Edinburgh! But actually the additional nodes in this layout represent platforms within stations, and the dashed edges represent walks between these platforms. This is what produces the impracticality illusion; we have in fact split London Euston into multiple substations, i.e., platforms. So MPA allows a lot of room for manoeuvre to produce optimal railway networks. This work has also thrown up a multitude of interesting theoretical questions, such as the concepts of eigenvalues and eigenvectors in generalised models; so this is a classic example of a two-way benefit: pure mathematics feeds applied mathematics, which feeds back to pure.

 

Figure 2: The optimal railway network is a one-way ring?

 

In my latest work, I am developing an extension of MPA called Maxmin-$\omega$, which has provided novel areas of application. Here, we obtain a solely max-plus system when $\omega=1$ and a solely min-plus system when $\omega=0$ (where nodal activities start upon receipt of the first input, rather than the last input as in MPA). When ${\omega\in(0,1)}$, the system is governed by a combination of max- and min-plus algebra. The parameter $\omega$ therefore behaves as a threshold, and this yields a deterministic way of thinking about threshold models of information proliferation. I particularly want to apply this to periodic information diffusion data, such as retweet activity on Twitter and neural networks, which are constantly bouncing signals amongst neurons. The work will be presented this summer at Sunbelt, the biggest social networks conference, to be held in Montreal. In keeping with the nature of this modern data-centric age, the possibilities for MPA and its extensions are exciting."
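The update rule for intermediate values of $\omega$ is not spelled out above, so the sketch below makes one natural assumption: each node fires once a fraction $\omega$ of its inputs has arrived, i.e. it waits for the $\lceil\omega d\rceil$-th arrival among its $d$ inputs, so that $\omega$ close to $0$ recovers the min-plus rule and $\omega=1$ the max-plus rule.

```python
# Hedged sketch of a maxmin-omega style update: node i fires at the
# ceil(omega * d)-th smallest arrival time among its d inputs. The network and
# delays are invented, and the interpolation rule is an assumption made for
# illustration rather than a quotation of the definition used in the research.
import math

# inputs[i] = list of (j, t_ij): node i receives a signal from j after delay t_ij
inputs = {
    0: [(2, 3.0)],
    1: [(0, 1.0), (2, 2.0)],
    2: [(0, 4.0), (1, 1.0)],
}

def step(x, omega):
    new_x = {}
    for i, preds in inputs.items():
        arrivals = sorted(x[j] + t for j, t in preds)
        r = max(1, math.ceil(omega * len(arrivals)))   # which arrival to wait for
        new_x[i] = arrivals[r - 1]
    return new_x

x = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(5):
    x = step(x, omega=0.5)
print(x)   # omega = 1 gives the max-plus system, omega -> 0 the min-plus one
```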

For more on Ebrahim's research click here.

Tuesday, 2 April 2019

The indicative votes on BREXIT as seen through the eyes of applied mathematics and statistics

The recent votes in the House of Commons on Brexit are a type of high-dimensional data which is hard to understand, as each MP votes on several motions. Oxford Statistician and Mathematician Florian Klimm has illustrated such data as 'bipartite networks’, in which nodes represent either MPs or motions, with an MP connected to a motion if they voted in favour of it. In this layout, MPs that voted similarly are close together. We can also explore how single MPs voted and how parties are divided or unified. We also see that motions that have the support of a similar set of MPs are close by each other.

For many research projects in applied mathematics and statistics, such visualisations are a first step to understanding complex data and forming working hypotheses. On a larger temporal scale, for example, we can investigate how political voting networks change over time, as this study on the U.S. Senate demonstrates.
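Such a bipartite vote network takes only a few lines of Python with networkx; in this minimal sketch the MPs, motions and votes are invented placeholders rather than the real division data, and projecting onto the MP side links MPs who supported the same motions.

```python
# Minimal sketch of an MP-motion bipartite network; the vote data are made up.
import networkx as nx
from networkx.algorithms import bipartite

votes = {                                  # hypothetical "aye" votes
    "MP A": ["Motion 1", "Motion 2"],
    "MP B": ["Motion 2", "Motion 3"],
    "MP C": ["Motion 1", "Motion 3"],
}

B = nx.Graph()
B.add_nodes_from(votes, bipartite=0)                                       # MPs
B.add_nodes_from({m for ms in votes.values() for m in ms}, bipartite=1)    # motions
B.add_edges_from((mp, m) for mp, ms in votes.items() for m in ms)

# Weighted projection: edge weight = number of motions two MPs both supported.
mp_similarity = bipartite.weighted_projected_graph(B, list(votes))
print(list(mp_similarity.edges(data=True)))
```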

Please click on the images to enlarge and click here for the interactive version.

                         

 

Saturday, 30 March 2019

Jon Chapman and Mason Porter made SIAM Fellows

Oxford Mathematician Jon Chapman and Visiting Fellow Mason Porter have been made Fellows of the Society for Industrial and Applied Mathematics (SIAM).

Jon is Professor of Mathematics and its Applications here in Oxford and a Fellow of Mansfield College. His research interests cover a vast range of the Applied Mathematics landscape including industrial mathematics, mathematical modelling, matched asymptotic expansions, partial differential equations, mathematical physiology, tumour growth and nonlinear models of biological tissue.

In the words of his citation Jon is being recognized "for his outstanding contributions to physical and biological modeling as well as for his asymptotic methods development in applied mathematics."

Mason is a former member of the Oxford Mathematics Faculty and remains a Visiting Fellow as well as holding a full-time position as a Professor of Mathematics at UCLA in the United States. Mason's work spans a wide range of interests including nonlinear science, nonlinear dynamics and chaos, nonlinear waves, quantum chaos, network science, social network analysis and mathematical biology. Mason was cited for his "contributions to diverse problems and applications in networks, complex systems, and nonlinear systems."

Wednesday, 27 March 2019

Using higher-order networks to analyse complex data

In this collaboration with researchers from Umeå University and the University of Zurich, Renaud Lambiotte from Oxford Mathematics explores the use of higher-order networks to analyse complex data.

"Network science provides powerful analytical and computational methods to describe the behaviour of complex systems. From a networks viewpoint, the system is seen as a collection of elements interacting through pairwise connections. Canonical examples include social networks, neuronal networks or the Web. Importantly, elements often interact directly with a relatively small number of other elements, while they may influence large parts of the system indirectly via chains of direct interactions. In other words, networks allow for a sparse architecture together with global connectivity. Compared with mean-field approaches, network models often have greater explanatory power because they account for the non-random topologies of real-life systems. However, new forms of high-dimensional and time-resolved data have now also shed light on the limitations of these models.

Rich data indicate who interacts with whom, but also what different types of interactions exist, when and in which order they occur, and whether interactions involve pairs or larger sets of nodes. In other words, they provide us with information on higher-order dependencies, which lie beyond the reach of models based on pairwise links. In this perspective published in Nature Physics, the authors review recent advances in the development of higher-order network models, which account for different types of higher-order dependencies in complex data. They focus in detail on models where chains of interactions are more than a combination of links, and on when higher-order Markov models are required to reproduce realistic chains. As an illustration, the Figure (click to enlarge) shows chains of citations between journals as produced by standard network models (left) versus empirical data and higher-order models (right). As a next step, the authors illustrate how to generalise fundamental network science methods, namely community detection, node ranking and the modelling of dynamical processes. They conclude by discussing challenges in developing optimal higher-order models that take advantage of rich data on higher-order dependencies while avoiding the risk of overfitting."
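A small sketch with invented journal-to-journal citation paths shows the difference in practice: a first-order Markov model only knows the current node, while a second-order model remembers where each chain came from, which is exactly the extra information needed to reproduce realistic flows.

```python
# First-order vs second-order (higher-order) Markov models of paths; the toy
# "citation chains" below are invented for illustration.
from collections import Counter, defaultdict

paths = [
    ["Cell", "Nature", "Cell"], ["Cell", "Nature", "Cell"],
    ["PRL", "Nature", "PRL"], ["PRL", "Nature", "PRL"],
]

first = defaultdict(Counter)     # counts for P(next | current)
second = defaultdict(Counter)    # counts for P(next | previous, current)
for path in paths:
    for a, b in zip(path, path[1:]):
        first[a][b] += 1
    for a, b, c in zip(path, path[1:], path[2:]):
        second[(a, b)][c] += 1

print(dict(first["Nature"]))             # memoryless: Cell and PRL equally likely
print(dict(second[("Cell", "Nature")]))  # with memory: the flow returns to Cell
```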

 


 

Friday, 22 March 2019

Finding flagella: automating image analysis

Oxford Mathematician Benjamin Walker talks about his work on the automatic identification of flagella from images, opening up a world of data-driven analysis.

"Across the University of Oxford alone there are millions of images of microorganisms propelling themselves along with flagella, slender tail-like structures like those found on the sperm cells of most mammals. Studying the detailed motion of flagella is of intense scientific interest, not least amongst researchers in the fields of fertility medicine and human pathogens. Previously, flagella in each of these countless images would need to be traced (often by hand on a graphics tablet) by an unfortunate researcher before they can analyse the detailed motion of the fagellum. With even the more-sophisticated 'point-and-click' computerised methods, in general this still requires vast amounts of time and laborious effort. With artificial intelligence capable of accurately detecting cats in videos on YouTube, and a lot more besides, it seems that automation of this somewhat-simple task might be easily accomplished using the powerful computing resources available today.

In fact, we needn't turn to complex algorithms or simulated intelligence at all to automate our identification and remove the need for a patient researcher. Instead we look at something fundamental in the biology of the flagellum, persistent across a wide range of organisms. Eukaryotic flagella, like those of mammalian sperm and the human parasites Trypanosomatidae, are all made up of the same structural backbone, referred to as the 9+2 axoneme, which gives them a well-defined and consistent width when viewed through typical microscopes. If we compare this to the rest of the cell, for example for the Trypanosome shown below in (a), the flagellum stands out as by far the largest portion of the microorganism that is a consistent thickness.

The width of the organism is something that can be easily quantified via a medial axis transform. The transform takes a binary representation of the whole cell (b), obtainable automatically, and reduces it down to a skeleton along the approximate midline of the cell (c). We then colour each point on this skeleton by the distance to the edge of the cell, with the wider parts corresponding to darker sections of the skeleton. Tracing out the skeleton from left to right, we can record the width of the cell at each point, as shown on the graph to the left below.

The flagellum can be easily identified by eye as the large region of approximately constant width. The histogram to the right in fact shows something more, that the flagellar width (approximately 6 pixels here) corresponds to the mode of all the cell widths. This observation leads to a remarkably simple scheme for automatically identifying flagella: take a medial axis transform, and keep only the regions corresponding to the modal width. We have tested this simple method on a sample of 150,000 images of a Trypanosome, orders of magnitude more data than previously analysed for this organism, and accurately captured the flagella of over 120 different cells, enabling population-level studies of flagellar movement.
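With scikit-image the whole scheme fits in a handful of lines; in this minimal sketch the file name and the one-pixel tolerance around the modal width are illustrative assumptions rather than the exact parameters used in the study.

```python
# Medial axis transform of a binary cell mask, then keep the skeleton pixels
# whose local width matches the modal (flagellar) width. File name and the
# tolerance are illustrative assumptions.
import numpy as np
from skimage.io import imread
from skimage.morphology import medial_axis

mask = imread("cell_mask.png") > 0                     # binary cell silhouette
skeleton, distance = medial_axis(mask, return_distance=True)

widths = distance[skeleton]                            # half-width along the skeleton
modal_width = np.bincount(np.round(widths).astype(int)).argmax()

flagellum = skeleton & (np.abs(distance - modal_width) <= 1)
print(flagellum.sum(), "candidate flagellum pixels")
```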

 

 

 

Thus, with only a simple image transform and exploiting a conserved structural feature of flagella, this automated methodology could enable rapid quantification of large image datasets, freeing up researchers to analyse data rather than trace it out."

Thursday, 21 March 2019

Would you like a taster of Postgraduate study at Oxford? Six weeks this Summer perhaps?

If you are interested in postgraduate study and curious about what it would be like to do research at Oxford, we are delighted to announce that Oxford Mathematics is taking part in the UNIQ+ pilot programme, a six-week summer school encouraging access to postgraduate study from under-represented groups in UK universities.

UNIQ+ is free to take part in and includes a £2,500 stipend, plus free accommodation in an Oxford college. It will give you the opportunity to experience postgraduate research at Oxford by carrying out a research project, and a chance to meet our staff and student community.

UNIQ+ will run from 1 July to 9 August 2019.

To see if you qualify for UNIQ+, and to apply, click here.

Wednesday, 20 March 2019

Are You Training A Horse?

Oxford Mathematician Vinayak Abrol talks about his and his colleagues' work on using mathematical tools to provide insights into deep learning.

Why it Matters!

From the 4.5 billion photos uploaded to Facebook/Instagram per day and the 400 hours of video uploaded to YouTube per minute, to the 320GB the Large Hadron Collider records per second, we know it is the age of big data. Indeed, last year the amount of data generated worldwide is estimated to have reached 2.5 quintillion bytes per day, and while around 30% of this data is described as useful if analysed, only about 1% actually is. However, the boom in big data has brought success in many fields such as weather forecasting, image recognition and language translation.

So how do we deal with this big data challenge? To fundamentally understand real-world data, we need to look into the intersection of mathematics, signal processing and machine learning, and combine tools from these areas. One such emerging field of study is 'deep learning', under the broader category of 'artificial intelligence'. There have been remarkable advances in deep learning over the past decade, and it has found uses in many day-to-day applications, such as on smartphones, where we have automatic face detection, recognition and tagging in photos, or speech recognition for voice search and setting up reminders; and in homes, predictive analysis and resource planning using the Internet of Things or smart home sensors. This shift in paradigm is due to the fact that in many tasks machines are efficient, can work continuously and perform better than humans. However, deep neural networks have recently been found to be vulnerable. For instance, they can be fooled by well-designed examples, called adversarials, which is one of the major risks for applying deep neural networks in, say, safety-critical scenarios. In addition, most such systems are like a black box without any intuitive explanation, and with little or no innate knowledge about human psychology, meaning that the huge buzz about automation due to artificial intelligence is yet to be justified. Hence, understanding how these networks are vulnerable to attacks is attracting great attention, for example in biometric access to banking systems.
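To make the idea of an adversarial example concrete, here is the textbook fast gradient sign method (FGSM) in PyTorch, a standard construction rather than the specific attacks referred to above; the model, input image, label and $\epsilon$ are placeholders.

```python
# Standard FGSM adversarial perturbation: nudge the input a tiny amount in the
# direction that increases the classification loss the most. All names here
# (model, image, label, epsilon) are placeholders for illustration.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)        # model outputs class logits
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # small worst-case nudge
    return adversarial.clamp(0.0, 1.0).detach()
```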

We try to present a different way of approaching this problem. We know deep networks work well in many applications, but they can't abstractly reason and generalise about the world or automate ordinary human activities. For instance, a robot which can pick up a bottle can pick up a cup only if it is retrained (not the case with humans). Hence, we need to understand the limits of such learning systems. Any other questions are meaningless if our resulting system appears to be solving the problem but is actually not. This is dubbed the 'Clever Hans effect'. So, instead, we are interested in: 1) Does the system actually address the problem? 2) What is the system actually learning? 3) How can we make it address the actual problem?

General Methodology

In Oxford Mathematics, this work is funded under the Oxford-Emirates Data Science Initiative, headed by Prof. Peter Grindrod. I am being supervised by Prof. Jared Tanner, and within the Data Science Research Group we are working to advance our understanding of deep learning approaches. Although many recent studies have been proposed to explain and interpret deep networks, there is currently no unified coherent framework for understanding their insights. Our work here aims to study mathematically why deep architectures work well, what they are learning, and whether they are really addressing the problem they are built for. We aim to develop a theoretical understanding of deep architectures, which in general process the data with a cascade of linear filters, non-linearities, and contraction mappings, via existing, well understood mathematical tools. In particular, we focus on the underlying invariants learned by the network, such as multiscale contractions and hierarchical structures.

Our earliest research explored the key concept of sparsity, i.e., the low complexity of high-dimensional data when represented in a suitable domain. In particular we are interested in the learning of representation systems (dictionaries) providing compact (sparse) descriptions for high-dimensional data from a theoretical and algorithmic point of view, and in applying these systems to data processing and analysis tasks. Another direction is to employ tools from random matrix theory (RMT) that allow us to compute an approximation of such learning systems under a set of simplifying assumptions. Our previous studies have revealed that the representations obtained at different intermediate stages of a deep network carry complementary information, and that sparsity plays a very important role in the overall learning objective. In fact, the so-called non-linearities and pooling operations, which seem complicated, can be explained via constrained matrix factorisation problems along with hierarchical/sparsity-aware signal processing. In terms of the underlying building blocks of such systems, convolutional networks (popular for image applications) can be analysed via convolutional matrix factorisation and wavelet analysis, and fully connected networks via discrete Wiener and Volterra series analysis. Overall our main tools come from sparse approximation, RMT, geometric functional analysis, harmonic analysis and optimisation.
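As a small illustration of the sparsity and dictionary-learning theme, scikit-learn can learn a dictionary in which each data point is described by only a handful of atoms; the random data and the chosen sizes in this sketch are placeholders, not the models analysed in this research.

```python
# Learn a dictionary and sparse codes for some (placeholder) data: each sample
# is approximated using at most 5 of the 32 learned atoms.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))             # 200 samples of 64-dimensional data

learner = DictionaryLearning(n_components=32, transform_algorithm="omp",
                             transform_n_nonzero_coefs=5, random_state=0)
codes = learner.fit_transform(X)               # sparse representation of X

print(codes.shape)                             # (200, 32)
print(int((codes != 0).sum(axis=1).max()))     # at most 5 non-zeros per sample
```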

Wednesday, 20 March 2019

Automorphism groups of right-angled Artin groups

Oxford Mathematician Ric Wade explains how right-angled Artin groups, once neglected, are now central figures in low-dimensional topology.

"Right-angled Artin groups (colloquially known as a RAAGs or 'rags') form a family of finitely presented discrete groups that generalise free-abelian groups (where all the generators commute) and free groups (where no generators commute). Every finite graph $\Gamma$ defines a finitely presented group $A_\Gamma$ generated by the vertex set, with the restriction that $vw=wv$ if two vertices $v$ and $w$ are connected by an edge (in other words, adjacent vertices commute). If the graph is complete (so that any two vertices are connected by an edge) then $A_\Gamma$ is a free-abelian group, and if the graph $\Gamma$ has no edges, then $A_\Gamma$ is a free group. Loops and multiple edges between vertices do not affect the group that we get using this construction, so in graph-theoretic language we can assume that $\Gamma$ is simple.

 

    Figure 1: A graph $\Gamma$ and a topological space $X$ such that $\pi_1(X)=A_\Gamma$.

One example is given by the graph $\Gamma$ which consists of three edges joined in a line. In this case, the group $A_\Gamma$ that we get is obtained by taking the fundamental group of three tori, where the middle torus has its meridian and equator glued to the equators of the other two tori (see Figure 1).
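Writing down the presentation from the graph is mechanical, as this small sketch shows for the line graph of Figure 1: generators are the vertices and the only relations say that adjacent vertices commute.

```python
# Build the presentation of the right-angled Artin group A_Gamma from a graph:
# one generator per vertex, one commuting relation per edge.

def raag_presentation(vertices, edges):
    generators = list(vertices)
    relations = [f"{v}*{w} = {w}*{v}" for v, w in edges]
    return generators, relations

# The path with three edges from Figure 1 (vertices a, b, c, d).
gens, rels = raag_presentation("abcd", [("a", "b"), ("b", "c"), ("c", "d")])
print(gens)   # ['a', 'b', 'c', 'd']
print(rels)   # adjacent vertices commute; nothing forces e.g. a and c to commute
```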

Possibly because the definition of these groups sounds quite contrived, right-angled Artin groups saw relatively little interest until the 21st century. However, they have become central figures in recent progress in low-dimensional topology. I can highlight two reasons for this:

- RAAGs have a nice simple combinatorial description, which is helpful for proving group-theoretic results.

- RAAGs have a surprisingly diverse subgroup structure, which means that many groups that are harder to understand can be embedded in these 'simpler' groups.

The flagship result with respect to this viewpoint is a theorem of Agol (building upon work of Wise, Kahn-Markovic, Sageev, and others), which states that the fundamental group of every closed hyperbolic 3-manifold has a finite index subgroup which embeds into a right-angled Artin group in a particularly nice way. This was a key step in the resolution of Thurston's virtual fibering conjecture, which was one of the major open problems in the area not to fall to Perelman's solution of the Poincaré conjecture.

I'm interested in automorphism groups of right-angled Artin groups: the groups of their self-symmetries. Going back to thinking about RAAGs interpolating between free and free-abelian groups, these automorphism groups should have behaviour similar to $GL(n,\mathbb{Z})=Aut(\mathbb{Z}^n)$ from the free-abelian side, and automorphism groups of free groups on the free side. In practice, such interpolation results are much harder to obtain. In recent work with Matthew Day at the University of Arkansas, we have been studying the finiteness properties of these automorphism groups. Roughly speaking, this is the search for nice topological spaces (specifically, classifying spaces) that describe $Aut(A_\Gamma)$ for an arbitrary RAAG. For $GL(n,\mathbb{Z})$ the construction of such spaces goes back to work of Borel and Serre, who used deformation spaces of tori to build classifying spaces of certain finite index subgroups of $GL(n,\mathbb{Z})$ known as congruence subgroups. A classifying space for a finite-index subgroup of $Aut(F_n)$ was built by Culler and Vogtmann, using a deformation space of metric graphs (these appear as tropical curves in the world of tropical geometry). In a paper due to appear in the Journal of Topology, we prove the following theorem:

Theorem 1 (Day-W). For any graph $\Gamma$, the group $Aut(A_\Gamma)$ has a finite-index subgroup with a classifying space given by a finite cell complex.

Our classifying space is built by carefully splicing together deformation spaces of tori, deformation spaces of graphs, and classifying spaces for right-angled Artin groups called Salvetti complexes. This theorem gives restrictions on the algebraic invariants of $Aut(A_\Gamma)$; in particular it says that the cohomology ring $H^*(Aut(A_\Gamma);\mathbb{Q})$ is finitely generated as a $\mathbb{Q}$-vector space. Our construction is inductive rather than completely concrete, however the construction does allow for some calculations (via spectral sequence arguments, for instance). The next step for us in this programme is to give an algorithm to find the virtual cohomological dimension of any automorphism group $Aut(A_\Gamma)$, and we hope that other problems involving these automorphism groups can be understood through these classifying spaces as well."
