Nonlinear Dynamics on Networks

Today I want to talk about dynamical systems on networks; a good review article is the tutorial by Porter and Gleeson [1]. In this post I want to focus on the interplay between the topology of the underlying network and the asymptotic dynamics of the system, concentrating on two aspects of this area that I have found particularly interesting.

The first of these is the use of Master Stability Conditions, or in the more general setting, Master Stability Functions, in relating network topology to temporal dynamics. Under some broad assumptions, these results show how the stability of the synchronous state (where all nodal variables are equal) of a dynamical system depends on the adjacency structure of the network as well as the dynamics at each node. This is particularly useful, since for a given dynamical system we can say precisely when this steady state gains or loses asymptotic stability. Physically, this allows us to say that for a given physical process (e.g. neurons firing), some topologies permit perfect synchronization and others do not. A good review of synchronization phenomena in general is the article by Arenas et al. [2]. I will briefly describe the simpler Master Stability Condition, and refer to the above-cited reviews for more information and generalizations.

This is very useful, but it only gives information about this particular local equilibrium, and it often requires many assumptions about the homogeneity of the dynamics on the network. In general, we might be interested in the global behaviour of the system under changes in the network topology, rather than just the behaviour of the synchronous state. There are not many general results in this direction that I am familiar with, so instead I will discuss a relatively recent paper by Anca Rǎdulescu and Sergio Verduzco-Flores [3] just to give an idea of some things that can be done (an arXiv version is available if you do not have access to the journal). Among other things, their work explores how the dynamical behaviours the system exhibits change when the underlying graph structure is changed. This theme of connecting network structure to dynamics is something many people are interested in.

1. Master Stability Conditions

This presentation closely follows that of the tutorial article on dynamical systems on networks mentioned above [1]. We consider a simple unweighted undirected graph as the domain for our system. Denote our set of nodes by {V=\{v_i\}_{i=1}^n} and our set of edges by {E=\{e_{ij}\}}. Let {\boldsymbol{A}} be the adjacency matrix for this network, so that {A_{ij}=1} if there is an edge {e_{ij}\in E} between node {v_i} and node {v_j}, and {A_{ij}=0} otherwise. We denote the state variable at node {v_i} at time {t} by {x_i(t)}. In general we consider a dynamical system of the form,

\displaystyle  \frac{dx_i}{dt} \equiv \dot{x_i} = f_i(x_i) + \sum_{j=1}^n A_{ij}g_{ij}(x_i,x_j), \ \ \ \ \ (1)

where the state at each node evolves according to the nodal dynamics {f_i(x_i)} and its interaction with its neighbours {g_{ij}(x_i,x_j)}. The sum can be thought of as just adding {g_{ij}(x_i,x_j)} to the equation for {x_i} whenever node {v_i} is connected to node {v_j}.

For the sake of simplicity assume that each node evolves according to the same local dynamics ({f_i(x_i) \equiv f(x_i)}), and interacts with all of its neighbours in a consistent way that only depends on the adjacent nodal variables ({g_{ij}(x_i,x_j) \equiv g(x_j)}). Then we can write Equation (1) as,

\displaystyle  \dot{x_i} = f(x_i) + \sum_{j=1}^n A_{ij}g(x_j). \ \ \ \ \ (2)
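To make this concrete, here is a minimal Python sketch (my own illustration, not taken from any of the cited references) integrating Equation (2) on a 4-cycle, with the illustrative choices {f(x) = -x} and {g(x) = \tanh(x)}:

    # A minimal sketch: integrate Equation (2) on a small undirected graph,
    # with illustrative choices f(x) = -x and g(x) = tanh(x).
    import numpy as np
    from scipy.integrate import solve_ivp

    # Adjacency matrix of a 4-cycle (undirected, unweighted)
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

    f = lambda x: -x          # local dynamics at each node
    g = lambda x: np.tanh(x)  # coupling, depending only on the neighbour's state

    def rhs(t, x):
        # dx_i/dt = f(x_i) + sum_j A_ij g(x_j), vectorized over all nodes
        return f(x) + A @ g(x)

    x0 = np.random.default_rng(1).uniform(-1, 1, size=4)
    sol = solve_ivp(rhs, (0.0, 50.0), x0, rtol=1e-8)
    print(sol.y[:, -1])  # approximate asymptotic state of each node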

We now consider the linear stability of a uniform steady state. Let {x_i(t) \equiv x^*} for all {i} be our equilibrium state. We let {x_i(t) = x^* + \epsilon_i(t)} where we assume {|\epsilon_i| \ll 1}. Substituting this expansion into (2) and expanding each term in a Taylor series we have,

\displaystyle  \dot{\epsilon_i} = f(x^*) + \sum_{j=1}^n A_{ij}g(x^*) + f'(x^*)\epsilon_i + \sum_{j=1}^n A_{ij}\frac{\partial g}{\partial x_j}(x^*)\epsilon_j + O(\epsilon_i^2), \ \ \ \ \ (3)

where {f' \equiv \frac{df}{dx}}. The first two terms sum to zero since {x_i=x^*} is a steady state of Equation (2); note that for this to hold at every node we need {f(x^*) + k_i g(x^*) = 0} for each nodal degree {k_i}, which is guaranteed if, for example, {f(x^*) = g(x^*) = 0} or the graph is regular. We also neglect the nonlinear terms in {\epsilon} since we are doing linear stability analysis. Let {a = f'(x^*)} and {b=\frac{\partial g}{\partial x_j}(x^*)}. Then Equation (3) can be written in vector notation as,

\displaystyle  \dot{\boldsymbol{\epsilon}} = \left(a\boldsymbol{I} + b\boldsymbol{A}\right)\boldsymbol{\epsilon} \equiv \boldsymbol{M}\boldsymbol{\epsilon}. \ \ \ \ \ (4)

Since our graph is undirected, {\boldsymbol{M}} must be symmetric, and therefore diagonalizable with real eigenvalues. Let {\boldsymbol{v_k}} be an eigenvector of the adjacency matrix {\boldsymbol{A}} with eigenvalue {\lambda_k}. Then we have,

\displaystyle  \boldsymbol{M}\boldsymbol{v_k} = (a + b\lambda_k)\boldsymbol{v_k}, \ \ \ \ \ (5)

so that {\boldsymbol{v_k}} is also an eigenvector of {\boldsymbol{M}} with eigenvalue {a + b\lambda_k}.
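As a quick numerical sanity check of Equation (5), we can compare the spectra directly, with arbitrary illustrative values of {a} and {b} (continuing the small example above):

    # Check that the eigenvalues of M = aI + bA are exactly a + b*lambda_k.
    import numpy as np

    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)  # the 4-cycle again
    a, b = -1.0, 0.5

    M = a * np.eye(4) + b * A
    print(np.linalg.eigvalsh(M))          # eigenvalues of M, ascending
    print(a + b * np.linalg.eigvalsh(A))  # a + b*lambda_k: the same values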

In order for the equilibrium {x_i = x^*} to be linearly asymptotically stable, we need all eigenvalues of {\boldsymbol{M}} to be negative. That is,

\displaystyle  a + b\lambda_k < 0, \quad \forall \lambda_k \text{ s.t. } \boldsymbol{A}\boldsymbol{v_k} = \lambda_k \boldsymbol{v_k}. \ \ \ \ \ (6)

From this, we can deduce that {a < 0} is necessary for stability. The adjacency matrix {\boldsymbol{A}} has zero trace (no self-loops), and its eigenvalues are all real since {\boldsymbol{A}} is symmetric. The trace of a matrix is precisely the sum of its eigenvalues, so as long as the graph has at least one edge, {\boldsymbol{A}} must have both positive and negative eigenvalues. Whatever the sign of {b}, we can therefore pick an eigenvalue {\lambda_k} with {b\lambda_k \geq 0}, and Equation (6) then forces {a < 0}.

Now we consider the sign of {b}. If {b>0}, then we need {\lambda_k < -\frac{a}{b}} for every {k}. If this holds for the largest eigenvalue of {\boldsymbol{A}}, {\lambda_n}, then it holds for all other eigenvalues, and (recalling {a < 0} and {\lambda_n > 0}) it can be rewritten as {\frac{1}{\lambda_n} > -\frac{b}{a}}. If {b < 0}, then we need {\lambda_k > -\frac{a}{b}} for every {k}. If this holds for the smallest eigenvalue of {\boldsymbol{A}}, {\lambda_1}, then it holds for all eigenvalues of {\boldsymbol{A}}, and it can be rewritten as {\frac{1}{\lambda_1} < -\frac{b}{a}}. The reason for rewriting these conditions in terms of reciprocals is that they are then compatible regardless of the sign of {b}, since we never divide by {b}. Putting them together, {x_i=x^*} is an asymptotically stable steady state of Equation (2) if,

\displaystyle  \frac{1}{\lambda_1} < -\frac{b}{a} < \frac{1}{\lambda_n}. \ \ \ \ \ (7)

We now recall what {a} and {b} represented and write this as,

\displaystyle  \frac{1}{\lambda_1} < -\frac{\frac{\partial g}{\partial x_j}(x^*)}{f'(x^*)} < \frac{1}{\lambda_n}. \ \ \ \ \ (8)

This set of inequalities is known as a Master Stability Condition. It is powerful in that it relates the stability of the equilibrium state {x^*} to the topology of the network through the eigenvalues {\lambda_1} and {\lambda_n}, and the specified system dynamics at each node. In particular, the spectrum of the graph is sufficient information to tell us about the stability of a uniform equilibrium point associated to a given dynamical system. This illustrates one connection between the spectrum of a graph and dynamical systems on that graph that is frequently used in the literature to understand these kinds of systems.
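As an illustration, here is a small sketch checking this condition numerically; the values {a = -1} and {b = 1} correspond to linearizing the tanh example above about {x^* = 0}:

    # Check the Master Stability Condition (8): 1/lambda_1 < -b/a < 1/lambda_n.
    import numpy as np

    def master_stability_condition(A, a, b):
        eigs = np.linalg.eigvalsh(A)  # real, ascending; A is symmetric
        return 1.0 / eigs[0] < -b / a < 1.0 / eigs[-1]

    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

    print(master_stability_condition(A, a=-1.0, b=1.0))  # False

For the 4-cycle this returns False (indeed {a + b\lambda_n = -1 + 2 = 1 > 0}), consistent with the earlier simulation not settling at the uniform zero state.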

An interesting aspect of this result is that if we think only of equilibria which satisfy the steady-state dynamics independent of the graph (e.g. {f(x^*) = g(x^*) = 0}), then the stability of these points is independent of the details of graphs which are isospectral or cospectral, meaning they have the same eigenvalues. The study of eigenvalues as graph invariants leads to many interesting questions, but the spectrum does not constitute a complete invariant: graphs which are not isomorphic can have the same eigenvalues. As an aside, this is related to a question from spectral geometry with a rich history: Can one hear the shape of a drum? [4]
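To see cospectrality concretely, the smallest pair of cospectral simple graphs, the star {K_{1,4}} and the disjoint union of a 4-cycle with an isolated vertex, is easy to check numerically:

    # The star K_{1,4} and C_4 plus an isolated node share the spectrum
    # {2, 0, 0, 0, -2} despite not being isomorphic.
    import numpy as np

    star = np.zeros((5, 5))
    star[0, 1:] = star[1:, 0] = 1  # centre node 0 joined to nodes 1..4

    c4_k1 = np.zeros((5, 5))
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:  # 4-cycle; node 4 isolated
        c4_k1[i, j] = c4_k1[j, i] = 1

    print(np.round(np.linalg.eigvalsh(star), 6))
    print(np.round(np.linalg.eigvalsh(c4_k1), 6))  # identical spectra

Any stability conclusion drawn from the spectrum alone therefore cannot distinguish these two (very different) graphs.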

So a natural question is: what does the relationship between dynamics and graph structure look like for other kinds of behaviour? There are many useful generalizations and applications of this approach to the synchronous steady state, which can be found in the reviews mentioned at the beginning of this post. Rather than review those, I want to discuss the influence of graph structure on the global dynamics of a system. In particular, what kinds of asymptotic ({t \rightarrow \infty}) states or attractors can we expect in a network dynamical system, and how do they behave as the underlying network structure changes?

2. Nonlinear network dynamics under perturbations of the underlying graph

This is a very general and difficult question. Rather than try to survey the numerous contributions in this area, I will discuss a relatively simple model analyzed in the paper mentioned at the beginning [3], and talk about what the results mean in the context of dynamics and graph structure.

Consider a weighted directed graph consisting of two cliques (completely connected subgraphs), labelled {X} and {Y}, so that all nodes {x_i \in X} are mutually connected with edges of positive weight {g_{xx}} and all nodes {y_i \in Y} are mutually connected with edges of positive weight {g_{yy}}. Assume that each of these fully connected modules has {N} nodes. The connectivity between {X} and {Y} can be described by two {N\times N} blocks of the full adjacency matrix. Let {A_{ij} = g_{xy}} represent an edge of weight {g_{xy}} carrying input to node {x_i} from node {y_j}, and let {B_{ij} = g_{yx}} represent an edge of weight {g_{yx}} carrying input to node {y_i} from node {x_j}. The connections encoded in {A} enter the dynamics below with an explicit negative sign, making them inhibitory, while those encoded in {B} enter positively and are excitatory.

This corresponds to excitatory and inhibitory connections between these two modules, so that the nodes in {X} positively influence nodes in {Y}, and nodes in {Y} negatively influence nodes in {X}. This kind of excitation-inhibition modular structure is discussed in the neurophysiology literature, from which this model is motivated. In particular, Wilson–Cowan equations are models of coarse-grained brain regions with excitatory and inhibitory interactions, although they are often more complicated in practice. The model I will describe here is a simpler model of coupled nonlinear oscillators, but in their paper there is some discussion of how their results have implications for neuroscience research with more realistic equations.

These coupled oscillators are described using the following equations:

\displaystyle  \dot{x_k} = -x_k +(1-x_k)S_{b_x,\theta_x}\left (-\sum_{p=1}^N A_{kp}y_p + \sum_{p=1}^N g_{xx}x_p + P \right), \ \ \ \ \ (9)

\displaystyle  \dot{y_k} = -y_k +(1-y_k)S_{b_y,\theta_y}\left (\sum_{p=1}^N B_{kp}x_p + \sum_{p=1}^N g_{yy}y_p + Q \right), \ \ \ \ \ (10)

where the function {S_{b,\theta}} is a type of sigmoid function used to modulate the inputs to each node. It is defined as,

\displaystyle  S_{b,\theta}(Z) = \frac{1}{1+\exp(-b(Z-\theta))}-\frac{1}{1+\exp(b\theta)}, \ \ \ \ \ (11)

where the {\theta} and {b} parameters take different values for nodes in the {X} and {Y} modules (the second term in (11) simply shifts the sigmoid so that {S_{b,\theta}(0)=0}). All parameters are specified at fixed values in the model (see the paper for these values), except for the size of the modules {N}, the interaction matrices {A} and {B}, and the inter-module interaction strengths {g_{xy}} and {g_{yx}}, which are varied as bifurcation parameters. Specifically, we allow the between-module coupling strengths to vary between {0} and {30}.
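To fix ideas, here is a minimal sketch of Equations (9)–(10) for {N=2}. The within-module weights, external inputs, and sigmoid parameters below are placeholders of my own choosing for illustration; the actual values should be taken from the paper:

    # A sketch of Equations (9)-(10) for N = 2. Placeholder parameters only.
    import numpy as np
    from scipy.integrate import solve_ivp

    N = 2
    g_xy, g_yx = 10.0, 15.0                    # inter-module coupling strengths
    A = g_xy * np.array([[0, 1], [0, 1.0]])    # inhibitory block (to X from Y)
    B = g_yx * np.array([[1, 1], [0, 1.0]])    # excitatory block (to Y from X)
    g_xx = g_yy = 1.0                          # placeholder within-module weights
    P, Q = 1.0, 1.0                            # placeholder external inputs
    b_x, th_x, b_y, th_y = 2.0, 1.0, 2.0, 1.0  # placeholder sigmoid parameters

    def S(Z, b, th):
        # Sigmoid from Equation (11), shifted so that S(0) = 0
        return 1.0 / (1.0 + np.exp(-b * (Z - th))) - 1.0 / (1.0 + np.exp(b * th))

    def rhs(t, u):
        x, y = u[:N], u[N:]
        dx = -x + (1 - x) * S(-A @ y + g_xx * x.sum() + P, b_x, th_x)
        dy = -y + (1 - y) * S(B @ x + g_yy * y.sum() + Q, b_y, th_y)
        return np.concatenate([dx, dy])

    sol = solve_ivp(rhs, (0.0, 200.0), 0.1 * np.ones(2 * N), rtol=1e-8)
    print(sol.y[:, -1])  # asymptotic state (an equilibrium or a point on a cycle)

The 0/1 patterns in {A} and {B} here are the configuration that will reappear in Figure 1 below.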

In addition to varying the inter-module connection strengths, the paper explores the sensitivity of the dynamics to the density of connections between the modules (the number of nonzero entries of {A} and {B}) and to their specific configuration, for the low-dimensional cases {N=2} and {N=4}. Even in the low-dimensional case, the number of possible configurations is quite large: for a given {N}, there are {2^{2N^2}} possible configurations to choose from, so it is necessary to simplify matters by fixing a density of connections between the modules. Denoting the number of connections into {X} from {Y} as {M_{xy}} (the number of nonzero elements of {A}), and the number of connections into {Y} from {X} as {M_{yx}} (the number of nonzero elements of {B}), the total number of configurations for a given number of connections is {{N^2 \choose M_{xy}} {N^2 \choose M_{yx}}}. This is still combinatorially very large for fixed densities, but it is at least a starting point for understanding how structural variation can affect the dynamics.
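For instance, in the case {N=2} with {M_{xy}=2} and {M_{yx}=3} considered below, this formula gives {{4 \choose 2}{4 \choose 3} = 24} configurations:

    # Count inter-module configurations for fixed densities (formula above)
    from math import comb

    N, M_xy, M_yx = 2, 2, 3
    print(comb(N**2, M_xy) * comb(N**2, M_yx))  # 6 * 4 = 24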

As discussed in the paper, a comprehensive study of the behaviour of the system when all parameters are varied is essentially intractable. Instead we will look at one result that the paper contains: the bifurcation structure of the asymptotic behaviour of the system is influenced, but not fully determined, by the spectrum of the adjacency matrix describing the network topology. In other words, the adjacency spectrum alone is insufficient to predict the behaviour of the system. Conversely, however, the dynamics appear to completely determine the spectrum. This was shown numerically in some specific low-dimensional cases.

Before presenting this result, I will recall some elementary aspects of bifurcation theory that are relevant for the discussion (see e.g. [6]). Broadly speaking, a bifurcation is a qualitative change in the behaviour of a system due to a change in a parameter. Local bifurcations occur where an equilibrium point changes stability. This can coincide with new solution branches emerging, as in pitchfork bifurcations, the birth of limit cycles, as in Hopf bifurcations, or the disappearance of equilibria, as in saddle–node bifurcations (the canonical example of the latter is {\dot{x} = \mu - x^2}, which has two equilibria for {\mu > 0} that collide and annihilate as {\mu} decreases through zero). For certain kinds of systems (namely those that are structurally stable), these bifurcation points occur on boundaries in the parameter space between regions where the dynamics are topologically equivalent. Away from a bifurcation, the number and stability of equilibria and limit cycles remain the same; these things can change only at bifurcation points.

We can proceed to classify the dynamical regimes of the system in Equations (9)–(10) by considering their bifurcations in the {g_{xy}-g_{yx}} plane (at least in the range of {0} to {30} we are considering for numerical purposes). To illustrate this idea, we will construct this diagram for the case when {N=2} and {M_{xy}=2}, {M_{yx}=3}. Consider the configuration described by {A=g_{xy}\left [\begin{array}{cc}0 & 1 \\0 & 1 \\ \end{array}\right]} and {B=g_{yx}\left [\begin{array}{cc}1 & 1 \\0 & 1 \\ \end{array}\right]}. In Figure 1, we have plotted curves representing bifurcations that equilibrium points and limit cycles of this system undergo as the inter-module edge weights are varied. The curves represent codimension-one bifurcations that separate the plane into several regions. Away from these boundaries, the equilibrium states in each region can vary numerically, but their topological qualities (e.g. the number of steady states and limit cycles) do not change. Where these curves intersect, codimension-two points can occur, and these are plotted as red stars; the dynamics near such points generally have a very complicated structure. As more parameters are varied, bifurcations of higher codimension can occur, but they will always be small in a measure-theoretic sense (e.g. sets of measure zero in the parameter space).

Figure 1: Hopf bifurcations are shown as green curves, and limit point (saddle–node) bifurcations as blue curves. The red stars represent codimension-two bifurcations: CP for a cusp bifurcation, BT for Bogdanov–Takens, and GH for generalized Hopf. The labels {R_1} through {R_5} represent regions with different types of equilibria. {R_1} and {R_3} are regions with a unique stable equilibrium; {R_2} is a region with a unique stable limit cycle; {R_4} is a region of multiple stable equilibria, and {R_5} a region where a limit cycle and a stable equilibrium coexist.

This diagram was constructed using the MatCont toolbox for MATLAB [5]. There are several good hands-on tutorials for this software that I would encourage you to play with. To produce Figure 1, I initialized the system with {g_{xy}=0} and {g_{yx}=20} and ran the system forward in time until a steady state was found. This solution was then numerically continued for larger values of {g_{xy}} until a (Hopf) bifurcation was found. I then selected the Hopf Curve option, and MatCont traced the bifurcation curve until the maximum number of steps had been taken or it reached a codimension-two bifurcation. From there I selected the codimension-two points, and numerically continued Hopf and limit point curves from them. Repeating this procedure with different initial conditions and parameters does not necessarily give an exhaustive diagram of the bifurcation structure, but it is a good first approximation for simple enough systems. The internal structure of the various regions was computed in the paper using a particle-swarm optimization algorithm to determine the possible kinds of equilibria in each region. In my case, I simply verified their results by running simulations of the system in each region to check that the number and kind of equilibria were the same as reported in the paper. Regions 4 and 5 have several different possible equilibria but are not further subdivided for brevity.
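MatCont does this properly, with pseudo-arclength continuation and normal-form computations; as a crude stand-in to illustrate the idea, the following sketch (reusing S, B, and the placeholder parameters from the model sketch above) follows the equilibrium branch by brute force as {g_{xy}} is swept, and flags where the Jacobian's leading eigenvalue crosses the imaginary axis:

    # Crude parameter sweep: follow an equilibrium in g_xy and watch for a
    # stability change (e.g. a Hopf point when a complex pair crosses the axis).
    # Not a substitute for MatCont's continuation and normal-form analysis.
    import numpy as np
    from scipy.optimize import fsolve

    A0 = np.array([[0, 1], [0, 1.0]])  # unweighted inhibitory block from above

    def F(u, g):
        x, y = u[:2], u[2:]
        dx = -x + (1 - x) * S(-(g * A0) @ y + g_xx * x.sum() + P, b_x, th_x)
        dy = -y + (1 - y) * S(B @ x + g_yy * y.sum() + Q, b_y, th_y)
        return np.concatenate([dx, dy])

    def jac(u, g, eps=1e-6):
        # Finite-difference Jacobian of F at u
        J = np.zeros((len(u), len(u)))
        for j in range(len(u)):
            du = np.zeros(len(u)); du[j] = eps
            J[:, j] = (F(u + du, g) - F(u - du, g)) / (2 * eps)
        return J

    u, prev = 0.1 * np.ones(4), -1.0
    for g in np.linspace(0.0, 30.0, 601):  # sweep g_xy with g_yx held fixed
        u = fsolve(F, u, args=(g,))        # track the equilibrium branch
        re = np.linalg.eigvals(jac(u, g)).real.max()
        if prev < 0 <= re:
            print(f"stability change (candidate Hopf point) near g_xy = {g:.2f}")
        prev = re

This only detects where the tracked equilibrium loses stability; classifying the bifurcation (Hopf versus limit point) and continuing the resulting curves in two parameters is exactly the part that MatCont automates.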

It is worth mentioning that these bifurcations are all local, in the sense that an equilibrium or a limit cycle changes stability, which does not necessarily say anything about other equilibria. Nevertheless, these diagrams can give a rough idea of how the structure of the attractor changes as parameters are varied, since equilibria and periodic orbits change stability along these curves, and so those parts of the attractor must necessarily change. If we can show that these are the only equilibrium states, and we can exhaustively find all bifurcation curves, then we essentially know everything about the attractor; hence we know the asymptotic state of the system as parameters are varied. The paper demonstrates that this structure is unique to this adjacency class (graphs with the same adjacency eigenvalues) within the set of networks with {M_{xy}=2} and {M_{yx}=3}, but that the same class can give rise to other kinds of dynamical behaviour. For instance, consider the system with the off-diagonal blocks {A=g_{xy}\left [\begin{array}{cc}0 & 0 \\1 & 1 \\ \end{array}\right]} and {B=g_{yx}\left [\begin{array}{cc}1 & 0 \\1 & 1 \\ \end{array}\right]}. The full adjacency matrix has the same eigenvalues as in the previous example, but if we perform the same computations on this system, we get Figure 2, which appears to be a much simpler diagram.

Figure 2: Limit point (saddle–node) bifurcations are shown as blue curves. The red star is a cusp bifurcation.

Given this evidence, the authors conjecture that these bifurcation diagrams are unique to each adjacency class (i.e. graphs with different spectra cannot have precisely the same diagram), but that even within an adjacency class the variation can be significant. They explore this further in the paper using a statistical approach, since the number of possible configurations is quite large and grows extremely rapidly for even moderately larger graphs. They also discuss other aspects of bifurcations due to changes in the graph structure, comparing for instance densely interconnected modules with sparser ones, and showing that certain regions were more sensitive to global changes, such as adding or removing edges, when the edge weights were large. A complete picture is elusive, but I feel that this kind of exploratory analysis at least gives an idea of what tools may or may not be helpful for analyzing the effects of structural changes in network dynamical systems. The authors also suggest that finer metrics on the network, such as node degree distributions or clustering coefficients, may provide a more effective way of classifying dynamic complexity.

Practically speaking, this result helps us understand how useful the adjacency spectrum of a network is for classifying its behaviour. There has been a significant amount of recent work on using models like these, as well as statistical approaches, to understand how structural changes influence complex systems, such as in the process of learning in the brain. Restructuring seems to play a crucial role in learning. Additionally, understanding how structure shapes dynamical function seems important for understanding abnormal brain connectivity, both in terms of the efficiency of brain function and in the context of mental illness. As with most good areas of science, there are many more interesting questions to pursue.

Over the summer I hope to write a few more posts like this related to dynamical systems approaches to large coupled systems. In particular, I think a good layperson discussion of this paper would be interesting, as my recent research has been in the area of Differential–Algebraic Equations with an underlying network structure.

3. References and Further Reading

  1. Porter, Mason A., and James P. Gleeson. “Dynamical systems on networks: a tutorial.” arXiv preprint arXiv:1403.7663 (2014).
  2. Arenas, Alex, et al. “Synchronization in complex networks.” Physics Reports 469.3 (2008): 93-153.
  3. Rǎdulescu, Anca, and Sergio Verduzco-Flores. “Nonlinear network dynamics under perturbations of the underlying graph.” Chaos: An Interdisciplinary Journal of Nonlinear Science 25.1 (2015): 013116.
  4. Kac, Mark. “Can one hear the shape of a drum?” The American Mathematical Monthly 73.4 (1966): 1-23.
  5. Dhooge, Annick, Willy Govaerts, and Yu A. Kuznetsov. “MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs.” ACM Transactions on Mathematical Software (TOMS) 29.2 (2003): 141-164.
  6. Guckenheimer, John. “Bifurcation.” Scholarpedia 2.6 (2007): 1517.
