Oxford Mathematicians Tamsin Lee and Peter Grindrod discuss their latest research on the brain, part of our series focusing on the complexities and applications of mathematical research and modelling.

"The brain consists of many neurons arranged in small, strongly connected directed networks, which in turn are connected up by a few directed edges. Let us call these small, strongly connected directed networks of neurons 'subgraphs.' Each subgraph receives messages from some upstream subgraph, and sends messages out to downstream subgraphs. Within each subgraph, when a single neuron fires a 'message' it goes into a refractory period. That is, it cannot send nor receive a message for a given period of time. Additionally, each connection from one neuron to another has a unique delay time, that is, a message from neuron A fired to B and C at the same time, will arrive at B and C at different times.

Within these dynamics we find that the system settles to a quasi-periodic state with almost periodic cyclic firings. Taking a closer look at the differences in firing times, we find a quasi-periodic pattern. This time series can be embedded in an m-dimensional space using Takens' Theorem. A key example of Takens' Theorem uses the famous Lorenz attractor, which plots three variables over time. The theorem shows that by plotting only one of these variables against itself, shifted at three different time intervals, the result has the same topology as the original Lorenz attractor. To apply Takens' Theorem we create a matrix from our time series shifted at different intervals. Signal-to-noise separation can be obtained by simply locating a significant break in the ordered list of eigenvalues of this matrix (pink or white noise would produce a natural decay or plateau of the spectrum, without such large breaks). This break gives an upper bound on the number of dimensions required to 'plot' our time series, which is essentially a proxy for the complexity of the behaviour of a single subgraph system.
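A minimal sketch of this embedding step, again with assumed details: the matrix below holds the series shifted at m_max different intervals, and the 'significant break' is taken, purely for illustration, to be any drop of more than a factor of gap_factor between successive singular values (the eigenvalues of the matrix's lagged covariance are the squares of these).

```python
import numpy as np

def embedding_dimension(series, m_max=20, gap_factor=10.0):
    """Delay-embed a scalar series and look for a large break in the
    ordered spectrum of the resulting trajectory matrix."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    rows = len(x) - m_max + 1
    # Trajectory matrix: row k holds x[k], ..., x[k+m_max-1],
    # i.e. the series plotted against itself at m_max shifts.
    X = np.stack([x[k:k + m_max] for k in range(rows)])
    svals = np.linalg.svd(X, compute_uv=False)
    ratios = svals[:-1] / (svals[1:] + 1e-12)   # size of each successive drop
    k = int(np.argmax(ratios))
    return k + 1 if ratios[k] > gap_factor else m_max  # dims above the break

# A synthetic quasi-periodic series standing in for the firing-time differences:
t = np.arange(2000)
series = np.sin(0.10 * t) + 0.5 * np.sin(0.10 * np.sqrt(2.0) * t)
print(embedding_dimension(series))
```

With these two incommensurate sine waves the spectrum breaks after four singular values (each sinusoid contributes two dimensions), so the sketch reports an embedding dimension of 4.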

To recap: neurons in a subgraph receive a message from some upstream subgraph. This sets off a firing pattern across the subgraph that settles down, so that the differences in firing times can be embedded in an m-dimensional space, where m is a proxy for the complexity of the system.

Our work suggests that the complexity, m, of the subgraph dynamics increases only logarithmically with its size, n. This is a profound result, as it implies that a brain composed of many small, strongly connected subgraphs is considerably more efficient than one composed of a few large, strongly connected subgraphs: the complexities of many small subgraphs add up, while a single large subgraph gains only logarithmically from each extra neuron. And brains are of course limited in terms of both volume and energy. This is akin to a computer using several small core processors instead of one large core processor."
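A back-of-the-envelope illustration of why the logarithm matters, with an assumed constant c in m ≈ c log(n) and the assumption, made only for this example, that the complexities of separate subgraphs simply add:

```python
import math

N = 10_000          # total neuron budget (illustrative figure)
c = 1.0             # assumed constant in m ≈ c * log(n)
s = 100             # size of each small subgraph

one_big    = c * math.log(N)               # one subgraph using all N neurons
many_small = (N // s) * c * math.log(s)    # N/s subgraphs, complexities summed

print(f"one big subgraph of {N} neurons:   m ≈ {one_big:.1f}")
print(f"{N // s} subgraphs of {s} neurons:  total m ≈ {many_small:.1f}")
```

With these (assumed) numbers, a single large subgraph offers m ≈ 9.2, while a hundred subgraphs of a hundred neurons each offer a total of roughly 460 for the same neuron budget.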
