**1. A One–Parameter Function Can Fit An Elephant**

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” –John von Neumann

This recent preprint solidifies some intuition that I think many people in dynamical systems and related areas have about chaos and other complex systems – often there is a tremendous amount of room for these systems to exhibit arbitrarily complex behaviours. They show that given essentially any set of two–dimensional data, a single ‘nice’ function with a single free parameter can approximate this data arbitrarily well.

Specifically, they assume that you want to fit some data which I will denote via the set of ordered pairs $\{(n, y_n)\}$, where we will let $n = 1, 2, \dots, N$ for some positive integer $N$. This assumption on the $x$ values is really only notational – a subset of these integers can be taken, and hence the $x$ values can be scaled arbitrarily. Similarly, we let $y_n \in [0, 1]$, but again this can be scaled arbitrarily. Then, for any $\varepsilon > 0$, there exists a *fixed* $\tau$ such that for any such collection of ordered pairs, there is an $\alpha \in [0, 1]$ such that the function
$$f_\alpha(x) = \sin^2\left(2^{x\tau} \arcsin\sqrt{\alpha}\right)$$
satisfies $|f_\alpha(n) - y_n| < \varepsilon$ for all $n = 1, \dots, N$. So one can pick $\tau$ based purely on the desired accuracy, and then for any set of data, fit the above one–parameter function to the data. Of course this function will behave wildly (oscillating rapidly) between the integer points, and similarly for $x$ larger or smaller than the fitted range, so this approach is not useful for any kind of prediction or interpolation. It does suggest that the number of parameters in a model is not a sufficient way to characterize its complexity. It is also useful for one–upping von Neumann. You should read the paper for further details and discussion, and in particular to see the relatively elementary application of the logistic map used to construct this function.

Edit: A statistically–minded friend pointed out that this isn’t as surprising as it seems. This Theorem, and many generalizations, essentially exploit the denseness of ‘data’ in the reals. To use the above function, one typically has to specify hundreds or thousands of decimal places of $\alpha$, which just shifts the complexity into the knowledge of this parameter.
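To make my friend’s point concrete, here is a numerical sketch in this spirit (the notation and encoding details are mine, not necessarily the paper’s): $\tau$-bit approximations of $\arcsin(\sqrt{y_n})/\pi$ are concatenated into the binary expansion of a single number, from which the parameter $\alpha$ is built, using `mpmath` for the necessary extended precision.

```python
from mpmath import mp, mpf, sin, asin, sqrt, pi, floor

mp.prec = 400  # generous working precision; we need at least N*tau bits

tau = 16                              # bits of accuracy per data point
ys = [0.12, 0.5, 0.91, 0.33, 0.77]   # arbitrary data y_n in [0, 1]

# Encode: concatenate tau-bit truncations of arcsin(sqrt(y_n))/pi
# into the binary expansion of a single number w in [0, 1/2).
w = mpf(0)
for n, y in enumerate(ys):
    block = int(floor((asin(sqrt(mpf(y))) / pi) * 2**tau))  # tau-bit integer
    w += mpf(block) / mpf(2)**((n + 1) * tau)

alpha = sin(pi * w)**2                # the single parameter

def f(x):
    # f_alpha(x) = sin^2(2^{x tau} arcsin(sqrt(alpha)));
    # the bit-shift by 2^{x tau} plays the role of iterating the logistic map.
    return sin(2**(x * tau) * asin(sqrt(alpha)))**2

errs = [abs(f(n) - mpf(y)) for n, y in enumerate(ys)]
print([float(e) for e in errs])       # each error is bounded by ~pi * 2^(1-tau)
```

Even for five data points at sixteen bits each, $\alpha$ must already be specified to roughly eighty bits; the single parameter is doing all of the work.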

**2. Universal(ly Useless) Differential Equations**

A Universal Differential Equation is one that admits solutions that can arbitrarily approximate any given function (from the real numbers to the real numbers; often other settings are actually easier). There is quite a bit of theory developed for these equations and similar ones. See this recent paper for modern developments, which also contains a result I will discuss momentarily. First, I want to give an example from this paper which nicely demonstrates the intriguing nature of these equations. Consider continuous functions $y(t)$ which satisfy,

where $y'$ denotes the derivative of $y$ with respect to the independent variable ($t$, for instance). We also require $y$ to be sufficiently smooth. Then, for *any* continuous function $f(t)$, and any continuous positive function $\varepsilon(t)$, we have that one solution to (2) satisfies $|y(t) - f(t)| < \varepsilon(t)$ for all $t$. To drive the point home that this equation is itself not horrendously complex, I want to write (2) as a system of first–order equations. So we have that (2) is equivalent to,

which only involves cubic nonlinear terms, which are ubiquitous in mathematics. However, the form of the left side of (6) does show that this is a system of Differential–Algebraic Equations, which are in some sense more complex than ordinary differential equations.

Or at least, I thought so until I skimmed through the paper linked at the top of this section. It turns out there exists a *fixed* polynomial vector field (a system of ODEs) in finitely many variables, such that for any continuous function $f(t)$ and continuous positive error function $\varepsilon(t)$, one has computable initial data so that this system of ordinary differential equations has a unique solution whose first component $y_1$ satisfies $|y_1(t) - f(t)| < \varepsilon(t)$ for all $t$. Also, the integral curve of this vector field is analytic. They also discuss relationships between ODEs and Turing machines, in the context of universal computation. The really interesting bit this paper brings is that the solution, for computable initial data, is unique and analytic. Unfortunately they can only prove that this ODE system exists, and do not give a concrete example.

Almost on a whim, I applied for a postdoctoral position early in 2017, which meant that I frantically finished my DPhil (PhD). I submitted my thesis in July, and had my viva (defense) in September. Due to some delays I have not yet been awarded the degree officially, but I have been pushing on ahead as if it were all behind me. While I did really enjoy the field that I studied (bioactive porous media, with an eye on tissue engineering applications), I anticipate that once some of the work in the thesis has been published, I will primarily work in other areas, at least for some time. Two arXiv preprints related to this work have been posted, although due to reviewer comments some details in these versions will likely change. A modelling paper can be found here, and a dynamical systems paper can be found here. I have now become very preoccupied with several other areas of mathematical biology, and a few areas of physics.

In the summer of 2016, I co-supervised two summer students with Robert Van Gorder. These students considered ecological models, both of which incorporated dispersal of populations throughout a continuous spatial domain. One student studied interactions between predators, prey, and a subsidy, and found some interesting clustering behaviours induced by heterogeneous subsidy input in the domain. This heterogeneity mimics many biological scenarios where a predator subsists off of prey and subsidy in different geographical regions. We have continued to explore these kinds of models over the past year by incorporating additional temporal and spatial effects, so I may discuss these in a later post once they have been published.

Another student considered a generalized Lotka-Volterra type model in a two-dimensional domain with both random dispersal (e.g. diffusion) and directed motion of populations. This directed motion arose from environmental effects as well as inter- and intra-species interactions. This second kind of movement was modelled as an advection toward favourable regions of the domain, or away from unfavourable regions. This model then consisted of coupled reaction-advection-diffusion equations, with additional elliptic equations determining the direction and magnitude of the advection. While this model was mathematically more complicated than classical reaction-diffusion equations modelling population dispersal, we showed that it did not give rise to spatial clustering of the populations in a spatially homogeneous domain (subject to some technical restrictions). So we investigated explicit spatial heterogeneity, corresponding to sources of food, or hazards within the domain, and found some interesting interactions between spatial heterogeneity and the advection of these interacting populations. We are also pursuing several other directions from this project, which mathematically correspond to understanding the role of advection in reaction-diffusion equations (as well as coupled elliptic-parabolic systems), and biologically correspond to understanding the directed motion of populations. Some of these extensions have resulted in projects involving waves in biological media, investigation of Turing instabilities in this more general setting, and emergence of patterns on a variety of manifolds.

In addition to these extensions, I have also been involved in a Saturday study group that has explored a variety of applied mathematics projects. We started by considering models for chaotic rotation of rigid bodies orbiting more massive objects, motivated by chaotic rotation observed in some of Pluto’s recently discovered moons (this is a good visualization of the rotation of Nix). Using a Melnikov function approach, we could analytically demonstrate that chaos will occur in a particular idealization of this problem, and we confirmed this numerically via Poincaré sections (see the paper here if you’re interested). While Robert originally suggested this project, a large amount of the modelling and the analysis was developed by a fellow graduate student, James Kwiecinski. This was intended to be a one-off kind of project, but has now spawned at least one interesting extension which I may discuss early next year.

We also studied models involving coupled Complex Ginzburg-Landau Equations. These arise in a huge variety of physical contexts, from superconductivity to quantum field theory and nonlinear optics, and also give rise to a plethora of spatiotemporal behaviours. Despite being well studied, there are still many interesting phenomena that have not been thoroughly explored, and so we analytically and numerically investigated a particular generalization of the model (which can be thought of as a generalization of the cubic nonlinear Schrödinger equation). Our study was posted to the arXiv here, and along with another paper, was submitted to a journal last year.

There are a few other things that I was involved in last year, including student supervision of projects involving stochastic epidemiological models, as well as projects involving robots, but overall the above gives a flavour of the sorts of things that I have been up to. My postdoctoral work fits into a similar class of problems as those described above, and involves spatial and temporal heterogeneity in reaction-diffusion systems. It is especially nice as there are several theoretical projects, as well as an excellent collaboration with an experimental group in Edinburgh which is already producing some surprising connections (see here for some neat videos of size and wavelength modulation of spots in a growing domain). Once some of these projects have concluded, I hope to give a brief nontechnical summary of the results later next year.

I am incredibly indebted to all of the people who have helped me get to where I am now – my PhD and postdoc supervisors, my fellow students and colleagues, and my incredible wife and family have all made this an extraordinarily successful year. I plan to continue with the momentum from this year, and hopefully be able to give back to the world something useful for everything that I have been given. See you all in 2018!

I implemented the Geodesic or Zipper algorithm due to Don Marshall. The full code (in Matlab) can be found here, along with a few graphical tools to explore it. It has some precision problems for domains with “sharp” boundaries for the inverse map (that is, mapping the unit disk to a given domain) but overall I’m pretty happy with the results. I did not include all of the approximations one could to make the algorithm both more accurate and more efficient, but I may come back to this later and do that, as well as demonstrate some of the applications of these maps. If you’re interested, I would also check out this Thesis on the topic which I found to be useful. It also contains Python code for this algorithm as well as another approach using sphere packing. Below I will include a few examples of what this code can generate, but I encourage you to download it and play with it yourself. As always, comments and questions are appreciated!

Here is an example map from a Carleson grid, where each successive circle has additional radial lines and a radius approaching that of the unit circle. The domain is given by a polar function $r(\theta)$.

Here is an example of mapping a triangular domain to the unit disk.

Lastly, here is a mapping from a 4th-iteration Koch Snowflake to the unit disk.

The inverse map for this domain has numerical problems as mentioned above. These could likely be remedied via various approximations that I may implement later.

“…the equations we deal with are probably more complicated than even most physical scientists are accustomed to. This is because the phenomena we are attempting to describe are generally more complex than most physical systems, although it may reflect our own ineptness in perceiving their underlying simplicity.” -James Murray, Mathematical Biology

Due to the complexity of modelling biological phenomena, analytical and numerical approaches are often used together to give a more thorough understanding of a model than either could provide alone. This is particularly true in the area of mathematical physiology, where there has been a dual development of increasingly sophisticated models, as well as analytical and numerical tools to analyze these models. In this post I want to describe some visualizations of simple models in this area, as well as reference some useful tools to explore these simulations further. These visualizations complement the lecture notes for a corresponding course that I tutored last year, which can be found here for the moment, and a much older version of the notes can be found here.

Mathematically these models are small systems of ordinary differential equations, delay differential equations, and spatial variants consisting of coupled partial differential equations. I will not exhaustively discuss or describe these models but will instead link to relevant literature, and primarily discuss particular example simulations demonstrating relevant behaviours that complement the lecture notes. All of the simulations are performed using different variants of the explicit fourth-order Runge-Kutta method, which can be derived via Taylor series.
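For concreteness, here is what a single step of that scheme looks like (a generic Python sketch; the actual tools discussed below are written in JavaScript):

```python
import math

def rk4_step(f, t, y, dt):
    """One step of the classical fourth-order Runge-Kutta method.

    f(t, y) returns the time derivative; y may be a float or a NumPy array.
    """
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on y' = y, y(0) = 1, whose exact solution is e^t.
y, t, dt = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
print(abs(y - math.e))  # global error is O(dt^4)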

**1. Hodgkin-Huxley Equations for Transmembrane Ion Transport**

The Hodgkin-Huxley Equations were first proposed in a series of papers in the early 1950s concerned with the flow of electric current, mediated by the transport of ions, through the surface membrane of a giant axon from a squid (not to be confused with an axon from a giant squid). They have formed the basis for a wide range of mathematical and scientific work involving ion channels in cellular physiology in particular, and conductance-based excitable media in general. The equations, using the same notation as the lecture notes cited above, can be written as
$$\begin{aligned}
C_m \frac{dV}{dt} &= -g_{K} n^4 (V - V_{K}) - g_{Na} m^3 h (V - V_{Na}) - g_{L}(V - V_{L}) + I_{app}, \quad &(1)\\
\frac{dn}{dt} &= \frac{n_\infty(V) - n}{\tau_n(V)}, \quad &(2)\\
\frac{dm}{dt} &= \frac{m_\infty(V) - m}{\tau_m(V)}, \quad &(3)\\
\frac{dh}{dt} &= \frac{h_\infty(V) - h}{\tau_h(V)}, \quad &(4)
\end{aligned}$$
where $C_m$ is the capacitance of the membrane, $V$ is the voltage (potential difference), $n$ represents the proportion of open ion channels in the membrane for potassium (K), $m$ and $h$ are proportions for two different sodium (Na) channels, $g_{K}, g_{Na}$ and $V_{K}, V_{Na}$ are ion-specific conductances and Nernst-potentials respectively, $g_{L}$ and $V_{L}$ model a leakage current, $I_{app}$ is an externally-imposed current, and finally $\tau_n$, $\tau_m$, $\tau_h$, $n_\infty$, $m_\infty$, and $h_\infty$ represent the timescales and equilibrium values of the gating variables $n$, $m$, and $h$ respectively. Note that these last six parameters are functions of the membrane potential $V$, which models the voltage-dependence of the ion gates themselves, although typically these functions are determined experimentally by fitting physiologically sensible functional forms to data.

These equations have many interesting features important in quantitative and qualitative modelling in biophysics. Some of the original motivation came from modelling spiking behaviour in axons, where a sufficiently large external current prompted a large increase in the membrane potential known as an action potential. This corresponds in neurophysiology to individual neurons firing, and has analogues in other biological systems such as the cardiac action potential that gives rise to the periodic contractile behaviour of the heart.

You can observe this behaviour yourself using a JavaScript tool I wrote that solves Equations (1)–(4) using Runge-Kutta. The timescales and equilibrium values of the gating variables are given as rational functions involving the membrane potential $V$, as done in the paper cited at the beginning of this section. As an interesting historical aside, note that the figures produced in that paper were laboriously computed by hand using a tedious numerical procedure described on page 523. Thankfully, your web browser will allow you to reproduce their results in a fraction of a second, rather than several weeks of tedious arithmetic.

As an external current $I_{app}$ is added, a small deviation from the resting potential is observed for sufficiently small currents. Increasing the value of $I_{app}$ above a threshold causes a sharp rise in the membrane potential. For somewhat larger currents, two spikes are observed. Finally, for large enough currents, a continuous spiking behaviour is observed. This can be understood via Hopf bifurcations as the external current is varied, although the calculations for the full four-dimensional system become quite cumbersome, and so sophisticated mathematical and computational tools are typically used to analyze these bifurcations. Another common approach is to use typical values of the timescales and equilibrium values of the gating variables to asymptotically reduce Equations (1)–(4) to a system of two equations, where classical phase-plane analysis (as well as our intuition for two-dimensional systems) can be used. This is done in the lecture notes mentioned at the beginning, as well as in the book by Keener and Sneyd.
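To give a flavour of what the web application computes, here is a self-contained RK4 simulation of Equations (1)–(4). I use the classic 1952 rate functions and parameter values (voltage in mV measured relative to rest, written in the equivalent $\alpha/\beta$ form with $\tau_x = 1/(\alpha_x + \beta_x)$ and $x_\infty = \alpha_x/(\alpha_x + \beta_x)$); these values are assumptions on my part and may differ from the conventions of the lecture notes.

```python
import math

# Classic Hodgkin-Huxley rate functions (V in mV relative to rest). Note the
# removable singularities at V = 10 and V = 25, not hit by floats here.
def alpha_n(V): return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
def beta_n(V):  return 0.125 * math.exp(-V / 80)
def alpha_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def beta_m(V):  return 4 * math.exp(-V / 18)
def alpha_h(V): return 0.07 * math.exp(-V / 20)
def beta_h(V):  return 1 / (math.exp((30 - V) / 10) + 1)

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
VNa, VK, VL = 115.0, -12.0, 10.6

def rhs(state, I_app):
    V, n, m, h = state
    dV = (I_app - gK * n**4 * (V - VK) - gNa * m**3 * h * (V - VNa)
          - gL * (V - VL)) / C
    dn = alpha_n(V) * (1 - n) - beta_n(V) * n
    dm = alpha_m(V) * (1 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1 - h) - beta_h(V) * h
    return (dV, dn, dm, dh)

def rk4(state, I_app, dt):
    def add(s, k, c): return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = rhs(state, I_app)
    k2 = rhs(add(state, k1, dt / 2), I_app)
    k3 = rhs(add(state, k2, dt / 2), I_app)
    k4 = rhs(add(state, k3, dt), I_app)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c2 + d)
                 for s, a, b, c2, d in zip(state, k1, k2, k3, k4))

def simulate(I_app, T=50.0, dt=0.01):
    # Start at rest, with the gating variables at their V = 0 equilibria.
    V = 0.0
    state = (V, alpha_n(V) / (alpha_n(V) + beta_n(V)),
                alpha_m(V) / (alpha_m(V) + beta_m(V)),
                alpha_h(V) / (alpha_h(V) + beta_h(V)))
    Vmax = state[0]
    for _ in range(int(T / dt)):
        state = rk4(state, I_app, dt)
        Vmax = max(Vmax, state[0])
    return Vmax

print(simulate(0.0))   # stays near the resting potential
print(simulate(10.0))  # fires action potentials (large spikes)
```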

Some intuition for both the action potential itself, and sustained oscillations, can be gained from analyzing a caricature of this reduced two-dimensional model. However, there are qualitative behaviours in the full system that cannot be captured by a two-dimensional reduction or a caricature thereof. These include bifurcations not found in planar dynamical systems, such as the twisting of homoclinic orbits reported in this paper. Additionally, for small subsets of the parameter space, numerical evidence for intermittent deterministic chaos can be found, which cannot exist in any two-dimensional vector field (unless there is a non-autonomous forcing, such as a time-dependent external current).

You can use the web application linked above to explore some of these dynamics further. One way to do this is to introduce a noise term into the external current; a variable in the application does this by adding white noise to this current. Without discussing the technicalities involved in defining this noise term, you can think of it as follows: at every time step, the external current is modified by a normally distributed random variable with zero mean and variance proportional to the step size. Adding a deterministic constant shifts the mean of this process by the same amount, and multiplying by a constant multiplies the variance at each step. If you set the deterministic part of the external current below the firing threshold, and the random part to a small positive value, you can see that while the total external current rarely moves far from the mean, it can induce action potentials at random intervals of the time series. I encourage you to explore the effects this noise has on inducing (or collapsing) sustained oscillations by varying these two values. If the seed variable is set, then the simulation uses the same ‘realization’ of the noise each time the simulation is computed. If it is not set, then a different realization is used every time the simulation is run.

Another interesting effect can be seen by changing the resting potential. Increasing the resting potential from the physiological value by a factor of about three causes the membrane potential to spontaneously oscillate on its own. If you now introduce an external current, by slowly increasing it using the sliders, you will see that for some values of the current, the oscillations appear to collapse after the current is turned off. To see this more clearly, you can increase the range of the time series. Now, in this regime, if you vary the initial condition on the membrane potential (which corresponds to shifting the phase of the oscillation while the external current is present), you will find that for some initial conditions the membrane potential stops oscillating when the current is turned off, while for others the oscillation becomes self-sustained. This is due to the limit cycle and the steady state both being locally asymptotically stable, and changing the initial condition changes which basin of attraction the solution falls into as time goes on. This is an example of multistability, as the long-time dynamics of the system are dependent upon the initial conditions. Quasi-periodic behaviours can also be seen by increasing these values further.

While the above asymptotic behaviours can occur in systems of ODEs with fewer equations, it is important to remember that planar (two-dimensional) dynamical systems can only exhibit steady state behaviour or oscillations for asymptotically long periods of time. Adding non-autonomous effects (such as an explicitly time-dependent noise term) does increase the effective dimension of the dynamics as well, but unless this non-autonomous term induces a kind of resonant behaviour as $t \to \infty$, the long-time dynamics will still be either a limit cycle or a steady state.

**2. FitzHugh-Nagumo Equations and Phase Plane Analysis**

Next, I want to discuss a few tools to explore planar dynamical systems numerically, in order to help develop some intuition for these simplified systems. Here is a generic solver for two-dimensional systems of ODEs that I wrote to illustrate certain bifurcation behaviours for the FitzHugh-Nagumo equations, which are a caricature of Equations (1)–(4). The FHN equations can be written as,
$$\begin{aligned}
\frac{dv}{dt} &= v(1 - v)(v - a) - w + I_{app}, \quad &(5)\\
\frac{dw}{dt} &= \varepsilon(v - \gamma w), \quad &(6)
\end{aligned}$$
where $0 < a < 1$, $0 < \varepsilon \ll 1$, and $\gamma > 0$ is sufficiently large to guarantee that there is only one equilibrium point. Using the application above, you can see the emergence of oscillations by increasing the rightmost term in the first equation (corresponding to the applied current $I_{app}$) from zero through increasingly positive values.
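The same experiment can be done offline. The following sketch integrates a standard FHN caricature of this form, $\dot{v} = v(1-v)(v-a) - w + I_{app}$, $\dot{w} = \varepsilon(v - \gamma w)$, with a hypothetical parameter set of my choosing, and measures the late-time amplitude of $v$: essentially zero at rest for no current, and order one on the relaxation limit cycle for a moderate current.

```python
def fhn_amplitude(I_app, a=0.1, eps=0.01, gamma=0.5, T=1000.0, dt=0.05):
    """Integrate the FitzHugh-Nagumo system with RK4 and return the
    peak-to-peak amplitude of v over the second half of the run."""
    def rhs(v, w):
        return (v * (1 - v) * (v - a) - w + I_app, eps * (v - gamma * w))

    v, w = 0.5, 0.0   # an initial perturbation away from rest
    lo = hi = None
    n = int(T / dt)
    for i in range(n):
        k1 = rhs(v, w)
        k2 = rhs(v + dt / 2 * k1[0], w + dt / 2 * k1[1])
        k3 = rhs(v + dt / 2 * k2[0], w + dt / 2 * k2[1])
        k4 = rhs(v + dt * k3[0], w + dt * k3[1])
        v += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        w += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if i >= n // 2:   # only record the late-time behaviour
            lo = v if lo is None else min(lo, v)
            hi = v if hi is None else max(hi, v)
    return hi - lo

print(fhn_amplitude(0.0))   # the rest state attracts: amplitude near zero
print(fhn_amplitude(0.5))   # a relaxation limit cycle has appeared
```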

There is also this excellent JavaScript solver written by Daryl Nester that shows a direction field and plots trajectories of the system starting from a given initial point. Eventually I hope to extend this application to include plots of the nullclines, but you can find these easily for the FHN system. For now, you can get an idea of the role that these nullclines and their intersections (corresponding to steady states) have on the dynamics by varying the parameters and visualizing various orbits in the plane. Performing the same increase in $I_{app}$, you can see the emergence of a small amplitude limit cycle for intermediate currents, which becomes a large-amplitude limit cycle for larger external currents.

If you look under “Numerical solution, timeplots, and tables,” you will see a time series plot for the FHN system. Hovering over various sections of the time series will display where that point corresponds to in the phase plane diagram above, and so this can be used to see the “fast” dynamics along the horizontal part of the orbit, and the “slow” dynamics in the vertical parts of the trajectory, which correspond to the limit cycle crawling along a nullcline. The timescale arguments presented in the lecture notes (and elsewhere) can be generalized beyond the two-dimensional setting and applied to the original four-dimensional system above, but the details are more complicated, and it is not generically true that oscillations in these systems always separate into fast and slow components with a simple structure as in the two-equation case. See this paper for some of the details in this direction.

Alongside this system are several other two-dimensional models, such as the Two-pool model of cellular calcium dynamics and the Lotka-Volterra Predator-Prey equations that can be explored using phase-plane analysis. Murray’s first volume of Mathematical Biology goes through this analysis in detail for many of these biological systems, and analysis of the Two-Pool model can be found in the book by Keener and Sneyd mentioned above.

**3. Delay and Partial Differential Equations**

Finally, I want to discuss two other common kinds of equations used to model physiological phenomena, as well as other things in mathematical biology. The first of these are partial differential equations, often in the form of reaction-diffusion equations. The second kind, which has seen growing interest in the past few decades, is delay differential equations.

The former can sometimes be derived using tools from statistical mechanics or by analogy with well-understood physicochemical processes, such as diffusion in a stationary fluid. Often qualitative insight can be gained by phenomenologically adding a diffusion term to systems of ordinary differential equations, such as equations (1)–(4) or (5)–(6). Analytical and numerical analysis is typically more difficult to do for these systems, partly due to the infinite-dimensionality of PDEs, and the complicating effects that boundaries have on the dynamics. Nevertheless, several interesting phenomena can be found in these models, such as spiral waves, and their higher-dimensional analogues, scroll waves. Figure 1 of the article on scroll waves gives an interesting visualization of how these higher-dimensional waves can propagate and interact in non-intuitive ways.

Lastly, I wanted to briefly mention delayed dynamical systems. These kinds of equations are less well-studied than ODEs and PDEs, but there is a growing literature analyzing them from a theoretical point of view, as well as using them to model a variety of phenomena. One of these is the Mackey-Glass equation, which was originally proposed to demonstrate bifurcations that may be relevant to physiological systems, such as the change from a periodic heart-rate to an aperiodic arrhythmia, or the transition from periodic breathing to Cheyne-Stokes respiration. Note that there are several non-equivalent Mackey-Glass equations in the literature. As with PDEs, these equations are not as easy to simulate as ordinary differential equations. Partly this is due to the complicated phase space (which is also infinite-dimensional), and partly because these equations have historically not received as much attention as the other kinds of models. There are good numerical routines in Matlab, Octave, and Maple, however, so I would encourage you to explore some of these models. Here is a nice illustration of some of these delay models in Maple that I found useful.
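As a sketch of how a delay equation can be simulated with a fixed-step scheme and a history buffer, consider one common form of the Mackey-Glass equation, $x'(t) = \beta x(t-\tau)/(1 + x(t-\tau)^{10}) - \gamma x(t)$ (one of the several non-equivalent versions; the parameter values are the commonly quoted ones for complex behaviour, and the constant history is my own arbitrary choice):

```python
def mackey_glass(beta=0.2, gamma=0.1, n=10, tau=17.0, T=600.0, dt=0.01):
    """Forward-Euler integration of a Mackey-Glass delay equation.

    The delayed state x(t - tau) is read from a history buffer; the
    constant history x = 0.5 is used for t <= 0.
    """
    steps = int(T / dt)
    lag = int(tau / dt)
    x = [0.5] * (lag + 1) + [0.0] * steps   # x[k] approximates x(k*dt - tau)
    for k in range(lag, lag + steps):
        xd = x[k - lag]                     # delayed value x(t - tau)
        x[k + 1] = x[k] + dt * (beta * xd / (1 + xd**n) - gamma * x[k])
    return x[lag:]                          # solution from t = 0 onward

xs = mackey_glass()
late = xs[len(xs) // 2:]
print(min(late), max(late))  # a bounded, sustained, irregular oscillation
```

A fixed step with the lag an integer multiple of the step size sidesteps the interpolation of the history that production DDE solvers (such as Matlab’s) handle properly.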


**1. Reaction-Diffusion Equations and Turing Instability**

A reaction-diffusion system is a coupled set of partial differential equations of the form,
$$\frac{\partial \mathbf{u}}{\partial t} = D \nabla^2 \mathbf{u} + \mathbf{f}(\mathbf{u}), \qquad (1)$$
where $\mathbf{u}(\mathbf{x}, t) = (u_1, \dots, u_m)$ is the state vector typically denoting concentrations of chemicals or biological populations, $\mathbf{f}(\mathbf{u})$ are the (typically nonlinear) kinetics that describe interactions between the different species, the diagonal entries $D_i$ of the matrix $D$ are the diffusion rates, $\mathbf{x}$ is the space variable, and $t$ is time. As the $u_i$ represent population densities or concentrations of chemical species, we typically assume $u_i \geq 0$ for all $i$. We also need some initial distribution for each species, and either Neumann,
$$\nabla u_i \cdot \hat{\mathbf{n}} = 0 \quad \text{on } \partial \Omega, \qquad (2)$$
or Dirichlet boundary conditions,
$$u_i = 0 \quad \text{on } \partial \Omega, \qquad (3)$$
where $\hat{\mathbf{n}}$ is the outward normal vector at the boundary of the domain, which we denote by $\partial \Omega$. The former models ‘no-flux’ conditions where the $u_i$ cannot leave or enter the domain, and the latter models fixed boundaries where the value of each $u_i$ is fixed. Here we are writing homogeneous conditions (zero right-hand sides), but typically you can shift non-homogeneities in the boundary into the equation and reduce it to this case, potentially introducing spatially-dependent kinetics $\mathbf{f}(\mathbf{u}, \mathbf{x})$.

If the spatial effects are negligible, such as in well-mixed or homogeneous chemical systems, then we can neglect the effect of diffusion and write a system of ordinary differential equations instead,
$$\frac{d \mathbf{u}}{dt} = \mathbf{f}(\mathbf{u}), \qquad (4)$$
where all terms have the same meaning as in equations (1). Systems of ODEs can be found throughout chemical and biological systems, and have formed the basis of much of our theoretical understanding of gross reactions, such as in mass action kinetics of chemical reactions, ecological models of various kinds of populations, and epidemiological models of the spread of disease throughout a population. As these models are both amenable to elementary methods, and quite good at providing insight into real phenomena, many undergraduate courses in applied mathematics, and especially mathematical biology, spend a considerable amount of time studying the spatially homogeneous case for various choices of kinetic functions $\mathbf{f}$. ODEs are still heavily studied in areas of applied mathematics, as are their extensions to stochastic differential equations, delay differential equations, and partial differential equations. In all of these models there is often a discussion of what changes by considering these other phenomena: delays in time, randomness, and spatial effects.

Typically diffusion plays the role of smoothing out irregularities, as can be seen in the solution of the Heat Equation. Contrary to this intuition, however, Turing [2] famously showed that chemicals with different diffusivities $D_i$ in equations (1) can destabilize a stable solution of the spatially homogeneous equations (4). This often leads to non-uniform steady states or patterns that cannot be obtained or understood from the spatially homogeneous ODEs. There are many accounts of these patterns, and a huge amount of mathematical literature discussing them. This result is relatively easy to derive, and I highly encourage you to read Turing’s paper no matter what background you have in this material. It is wonderfully written, and implies that diffusion in some systems can drive an instability of the uniform solution, which must mean that more complex behaviour occurs over the spatial extent of the domain. These non-uniform states are often referred to as ‘patterns,’ and they admit some very beautiful structures.
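The instability is easy to check numerically. Linearizing (1) about a homogeneous steady state and looking for modes proportional to $e^{\lambda t + i \mathbf{k}\cdot\mathbf{x}}$ reduces the calculation to the eigenvalues of $J - k^2 D$, where $J$ is the Jacobian of the kinetics. A sketch using the Schnakenberg kinetics $f = a - u + u^2 v$, $g = b - u^2 v$ (my choice of example; any standard Turing-capable kinetics would do):

```python
import numpy as np

# Schnakenberg kinetics: f = a - u + u^2 v, g = b - u^2 v.
a, b = 0.1, 0.9
u0, v0 = a + b, b / (a + b)**2          # homogeneous steady state

# Jacobian of the kinetics at the steady state (stable without diffusion:
# its trace is negative and its determinant positive for these values).
J = np.array([[-1 + 2 * u0 * v0, u0**2],
              [-2 * u0 * v0, -u0**2]])

def max_growth_rate(Du, Dv, ks=np.linspace(0, 2.0, 400)):
    """Largest real part of the eigenvalues of J - k^2 D over wavenumbers k."""
    D = np.diag([Du, Dv])
    return max(np.linalg.eigvals(J - k**2 * D).real.max() for k in ks)

print(max_growth_rate(1.0, 1.0))    # negative: equal diffusion cannot destabilize
print(max_growth_rate(1.0, 40.0))   # positive: a band of unstable wavenumbers
```

The sharp disparity in diffusivities is exactly the ingredient Turing identified.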

**2. Non-Existence of Stable Patterns in One-Dimensional Scalar RD Equations**

Here I want to extend the discussion I presented before about Lyapunov functionals for the scalar reaction-diffusion equation, and briefly summarize some literature that extends these results. There are many technicalities, both practical and formal, when dealing with PDEs. Here I will consider Neumann boundary conditions on a simple domain. We consider the scalar equation on the interval $(0, L)$,
$$\frac{\partial u}{\partial t} = D \frac{\partial^2 u}{\partial x^2} + f(u), \qquad (5)$$
with boundary and initial data,
$$\frac{\partial u}{\partial x}(0, t) = \frac{\partial u}{\partial x}(L, t) = 0, \qquad u(x, 0) = u_0(x), \qquad (6)$$
where $u_0(x)$ is a given initial distribution of the quantity $u$. Note that the diffusivity $D$ can be set to unity by rescaling the space variable $x$. In the previous post I showed how one can see, via integration by parts, that the only long-time behaviour of equation (5) consists of steady states. So for a large enough time $t$, we anticipate that the solution to this equation will asymptotically approach a steady state, e.g. $u(x, t) \to U(x)$ for some spatial function $U$. As usual there are details about what we mean by approximation of functions, but formally this can be shown as convergence in a suitable function space. Practically this means that if we simulate equation (5) for a long enough time, things will settle onto some spatial function. We can show that non-uniform solutions to equation (5) are in fact unstable, so the only possible asymptotic solutions are uniform steady states satisfying the spatially homogeneous equation (4) for a scalar $u$. That is, the asymptotic solution is $u = c$ for a constant $c$ that satisfies $f(c) = 0$. This is a result in the opposite direction of Turing’s work, and implies that the long-time behaviour of scalar reaction-diffusion equations in one dimension is captured by an ordinary differential equation.
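A quick numerical illustration of this flattening, under assumed bistable kinetics $f(u) = u - u^3$ (my choice): starting from random initial data, a finite-difference simulation of (5) with Neumann conditions settles onto a constant state with $f(c) = 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 32                                # grid points on [0, 1]
dx = 1.0 / (M - 1)
dt = 0.2 * dx**2                      # explicit stability requires dt <~ dx^2/2
u = 0.1 * rng.standard_normal(M)      # random initial distribution u_0(x)

def step(u):
    # Second difference with zero-flux (Neumann) boundaries via reflection.
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] + u[:-2] - 2 * u[1:-1]
    lap[0] = 2 * (u[1] - u[0])
    lap[-1] = 2 * (u[-2] - u[-1])
    return u + dt * (lap / dx**2 + u - u**3)

for _ in range(int(20.0 / dt)):       # integrate to t = 20
    u = step(u)

print(u.max() - u.min())              # spatial variation: essentially zero
print(u.mean())                       # close to a root of f: -1, 0, or +1
```

Any non-uniform transient structure decays, in line with the instability result derived below (on a unit interval with unit diffusivity, even the slowest non-uniform linear mode decays rapidly).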

Let $U(x)$ denote a non-uniform steady state of (5), so that $U$ is not equal to any constant. Then we can perturb this solution by writing $u = U(x) + \epsilon e^{\lambda t} w(x)$ for some small parameter $\epsilon$. The sign of $\lambda$ will tell us if this perturbation grows or decays in time, and hence whether or not $U$ is a stable steady state. Substituting this in and rearranging, we have the equation involving the perturbation $w$,
$$\frac{d^2 w}{dx^2} + f'(U(x)) w = \lambda w, \qquad (7)$$
where $w$ satisfies the boundary conditions,
$$\frac{dw}{dx}(0) = \frac{dw}{dx}(L) = 0. \qquad (8)$$
This is a standard Sturm-Liouville eigenvalue problem, and so the eigenvalues $\lambda$ that satisfy it are discrete (e.g. countable), real and simple. They can be arranged in decreasing order as $\lambda_0 > \lambda_1 > \lambda_2 > \cdots$. Similarly we can consider the eigenvalues corresponding to the Dirichlet problem, $w(0) = w(L) = 0$, and note that with a similar ordering, the eigenfunction with index $k$ will have $k$ zeroes in the interval $(0, L)$. Let $W = U'$ and differentiate the steady-state version of equation (5) with respect to $x$ to get that
$$\frac{d^2 W}{dx^2} + f'(U(x)) W = 0, \qquad (9)$$
where $W$ satisfies the boundary conditions,
$$W(0) = W(L) = 0. \qquad (10)$$
This tells us that problem (7) with homogeneous Dirichlet boundary conditions instead of Neumann conditions has $\lambda = 0$ as an eigenvalue for some index in the ordering above. We will exploit the relationship between Dirichlet and Neumann eigenvalues to get the result $\lambda_0 > 0$. We use the following Lemma,

Lemma 1. Let be a given function and for be two solutions to the following eigenvalue problem,

with eigenvalues respectively. Assume , and that for and for . Then .

*Proof:* Multiply the equation (11) for by , and vice-versa, and subtract these two equations. We then have,

We integrate (12) from to and use integration by parts to find,

By assumption on , the second term of (13) vanishes. Likewise, the integral in the third term must be non-negative. By assumption on , the first term is non-negative as we have and , but not both. If , then for all , violating our assumptions. So the first term must be strictly positive, and hence .

As we are interested in the eigenvalue , which corresponds to a function with no zeros in the interval , we can immediately apply Lemma 1 with as the corresponding eigenfunction. So we conclude that , and hence the solution is unstable. Therefore, for scalar Neumann problems in one dimension, stable non-uniform solutions do not exist.
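
This instability can be observed numerically. Below is a rough sketch of mine (with an illustrative cubic nonlinearity f(u) = u - u^3, not anything taken from the papers above) that evolves a scalar reaction-diffusion equation with homogeneous Neumann conditions from a non-uniform initial condition; the spatial variation decays and the solution settles onto a uniform steady state.

```python
import numpy as np

# Explicit finite differences for u_t = u_xx + f(u) on [0, 1] with
# Neumann conditions, f(u) = u - u^3 (an illustrative choice).
N, dt, T = 100, 1e-5, 1.0
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
u = 1.0 + 0.3 * np.cos(np.pi * x)    # non-uniform perturbation of u = 1

for _ in range(int(T / dt)):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    lap[0] = 2 * (u[1] - u[0]) / h**2        # ghost-point Neumann condition
    lap[-1] = 2 * (u[-2] - u[-1]) / h**2
    u = u + dt * (lap + u - u**3)

print(u.max() - u.min())   # spatial variation decays toward zero
```

Here the solution collapses onto the uniform state u = 1, a stable root of f; starting instead near a non-uniform steady state, one would see the perturbation grow and the solution leave it, consistent with the instability argument above.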

We can also apply this analysis to show something about the Dirichlet problem. Assume is a non-constant steady state solution to equation (5) but with Dirichlet conditions (). We can show that if changes sign at least once on the interval , then it is unstable. Again consider . As U changes sign, there exist such that . So solves the boundary value problem,

We can then apply Lemma 1 with where we choose the sign to enforce positivity of . By restricting our domain to we again have that and hence is unstable if it changes sign on the interval . Section 2.7 of the second volume of Murray’s Mathematical Biology [3] gives a nice example of this problem that does have spatially non-uniform steady states, but these are all symmetric about and never change sign on the interval.
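
The Neumann-Dirichlet eigenvalue comparison underlying these arguments is also easy to explore numerically. The sketch below is my own finite-difference discretization of a generic model problem w'' + q(x)w = &sigma;w (a stand-in, not the specific linearization above); with q = 0 the Neumann eigenvalues should be 0, -&pi;&sup2;, -4&pi;&sup2;, ... and the Dirichlet ones -&pi;&sup2;, -4&pi;&sup2;, ....

```python
import numpy as np

def sl_eigenvalues(q, N=200, bc="neumann"):
    """Finite-difference spectrum of w'' + q(x) w = sigma w on [0, 1]."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    qx = q(x)
    if bc == "neumann":
        n = N + 1
        A = np.diag(-2.0 / h**2 + qx)
        A += np.diag(np.ones(n - 1) / h**2, 1)
        A += np.diag(np.ones(n - 1) / h**2, -1)
        # ghost-point reflection doubles the off-diagonal at each end
        A[0, 1] = 2.0 / h**2
        A[-1, -2] = 2.0 / h**2
    else:  # Dirichlet: keep interior grid points only
        n = N - 1
        A = np.diag(-2.0 / h**2 + qx[1:-1])
        A += np.diag(np.ones(n - 1) / h**2, 1)
        A += np.diag(np.ones(n - 1) / h**2, -1)
    return np.sort(np.linalg.eigvals(A).real)[::-1]   # decreasing order

neu = sl_eigenvalues(lambda x: 0.0 * x, bc="neumann")
dir_ = sl_eigenvalues(lambda x: 0.0 * x, bc="dirichlet")
print(neu[:3])    # approximately [0, -pi^2, -4 pi^2]
print(dir_[:2])   # approximately [-pi^2, -4 pi^2]
```

Swapping in a nonzero q shifts the spectra but keeps the discreteness and ordering that the argument above relies on.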

**3. Non-Existence of Stable Patterns for **

There are many extensions to the above results in the scalar case. One that I’ve come across recently is the paper by Kishimoto and Weinberger [4] that contains the following result,

Theorem 2. Let be a non-constant solution to equation (1) satisfying (2) in a convex domain . If the kinetic functions satisfy,

The condition (15) implies that the reaction terms are cooperative: each species helps every other in some sense. Similarly, competitive dynamics can be modelled by,

For the two species case of , the change of variables allows the application of Theorem 2 again, and so two-species competitive dynamics cannot exhibit spatial patterning either. Other kinds of kinetics, such as Gierer-Meinhardt, can give rise to spatial patterns in two dimensions with Neumann boundary conditions, as they are of the `activator-inhibitor' type. This is precisely the kind of kinetics that Turing considered.

The assumption about a convex domain is an interesting necessity. One can construct a dumbbell-shaped domain with bistable cooperative kinetics and a very thin `neck’ connecting the two subdomains such that instability occurs [5].

We can then broadly classify whether we expect patterns to exist based on both the spatial dimension and the number of constituents (chemical or biological species) we are considering.

- For and , we have by the results of section 2 that stable equilibria will always be the uniform constant solutions for Neumann boundary conditions, and Dirichlet conditions can only give certain kinds of patterns.
- For and , [4], [5],[6], and related works show that even for very general kinetics (e.g. where is periodic in time ), if the domain is convex and Neumann conditions are imposed, only spatially homogeneous steady states (or time-periodic solutions to the spatially homogeneous equation) are stable. This suggests that patterning is only possible for different boundary conditions, non-convex domains, or explicitly spatially-dependent kinetics.
- For and , Theorem 2 implies that for a convex domain and cooperative (and if competitive) kinetics, the Neumann problem also does not admit patterns. In [6], this result is extended to include time-periodic kinetics.

In [6] and elsewhere, there is some discussion of allowing for dependence on in these general non-existence results, but I have not found such a result in this direction that wasn’t buried in technicalities. In general, however, this gives some clear boundaries of where patterning is and is not possible. It is an interesting mathematical phenomenon in part because, like chaos, it only seems to exist when there is a balance between some kind of stabilizing force, and some kind of destabilizing force. This is true of classical Turing patterns, which do not satisfy either cooperative or competitive kinetics as one chemical activates the other, whereas the reverse interaction is inhibition. There are results that if diffusion is too strong or too weak, non-uniform steady states can also lose their stability in some sense.

Patterning is also an interesting modelling tool, as it allows us to determine when observed spatial heterogeneity in nature or an experiment is due to some combination of interactions and dispersal alone, or if there is necessarily an environmental heterogeneity behind what we see. In this sense, these non-existence results give some useful information about when additional effects, such as environmental heterogeneity, must be included in our models, and when simple models are sufficient. This is a key point about Turing’s original work on morphogenesis: Occam’s razor suggests that a simple diffusion-driven instability is a plausible explanation for patterning in developmental biology, but this is still a hotly debated question. Much of the difficulty of modelling is about finding ideal models that are only just complicated enough to explain the phenomenon of interest, and this is especially challenging in fields with so much complexity.

**4. References and Further Reading **

- Grindrod, P.: Patterns and waves: The theory and applications of reaction-diffusion equations. Oxford University Press, USA (1991).
- Turing, A.M.: The Chemical Basis of Morphogenesis, Phil. Trans. R. Soc. Lond. B, (1952), 237, 37-72.
- Murray, James D.: Mathematical Biology. II Spatial Models and Biomedical Applications, Interdisciplinary Applied Mathematics V. 18. Springer-Verlag New York Incorporated, 2001.
- Kishimoto, K and Weinberger, H.F.: The Spatial Homogeneity of Stable Equilibria of Some Reaction-Diffusion Systems on Convex Domains, Journal of Differential Equations, (1985), 58, 15-21.
- Matano, H.: Asymptotic behavior and stability of solutions of semilinear diffusion equations, Publ. Res. Inst. Math. Sci., (1979), 15, 401–458.
- Hess, P.: Spatial homogeneity of stable solutions of some periodic-parabolic problems with Neumann boundary conditions, Journal of Differential Equations, (1987), 68, Issue 3, 320-331.


**1. Long-time dynamics and Lyapunov Functions **

Let be the solution to the differential equation,

where is some (generally nonlinear) mapping of the state-space . We also assume some initial data to make the problem well-defined, such as . For the moment, consider , so the equation (1) is just a system of ordinary differential equations, and is just a vector-valued function. We can think of as being a particle in some -dimensional space moving around with a velocity . For the rest of this discussion, we assume the function is autonomous in that it only depends on time through the variable at the current time.

We will use geometric terms to describe how moves in the state space . For each point , we can think of setting this as the initial condition for our differential equation, and evolving time forward. This is called the flow of the dynamical system, which we write as with . We can then think of the geometric objects that this flow generates. Namely, for each , we define the orbit of it as , and the positive half-orbit as . Lastly, we can describe the asymptotic states that our flow generates using the language of -limit sets.

Definition 1. A point is an -limit point of if is defined for all and there exists an increasing sequence of times such that for . The set of all -limit points of is called the -limit set of and is denoted by .

We think of as the set of points that our dynamical system settles into as its long-time behaviour, for a given initial point . These sets can then be used to describe attractors, which contain information about the asymptotic dynamics for all bounded orbits. The idea behind all of this is to study long-time behaviour as a way of simplifying differential equations, which goes back some centuries and was significantly advanced by Poincaré as the qualitative theory of differential equations. How complex can these sets be, and what sorts of dynamics do they admit? I’ll give a few examples before describing some tools used to understand them.

A steady-state or equilibrium point is a solution to the equation,

Equilibria have the simplest possible dynamics, as a steady solution is preserved by the flow (e.g. for all ). For this reason they are also referred to as rest points or fixed points of the flow. Practically speaking, real physical systems may fluctuate slightly, and a system may not be precisely at a steady state, so there is an important discussion to be had about their stability with respect to small perturbations. Other (bounded) asymptotic behaviours can exist as well: periodic behaviour, or more complicated time-dependent evolution such as chaotic dynamics on a strange attractor. It is also possible that solutions will cease to exist in finite time, or become unbounded as time becomes larger. We will assume the function is sufficiently regular to prevent the former, and has a dissipative structure to prevent the latter. Still, even dissipative systems can admit arbitrarily complex asymptotic behaviours, depending on their dimension.

To motivate the definition of a Lyapunov function, consider equation (1) with , so that this is just a single ordinary differential equation. The only possible bounded asymptotic behaviour in one dimension is that the solution approaches a fixed point. Oscillations, or more complicated behaviours, are not possible as a continuous change in the sign of the velocity would require , and hence imply a steady state. We can understand this a different way by defining a function as a potential, and noting that , so that extremal points of correspond to steady states of Equation (1). Local minima correspond to asymptotically stable steady states, and all other extremal values to unstable steady states. So for a scalar first order ODE, we know that the -limit set of any point is a steady state if the orbit remains bounded. This can easily be generalized to show that a class of dynamical systems called gradient systems do not admit oscillating solutions.

Let , and for some continuous scalar function , where is the gradient with respect to the vector of dependent variables . Assume that is a periodic solution with period . Then by integrating Equation (1) we find,

with equality only if , which means that was a steady state. Therefore the only periodic solutions are steady states. This is true even in the case of being infinite-dimensional (e.g. for partial differential equations), although there are many technicalities involved in showing it. We can also generalize the idea of exploiting this potential function, even if the system is not the gradient of a potential, using Lyapunov functions.
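
To make this concrete, here is a small sketch of mine (using the illustrative double-well potential V(x, y) = (x&sup2; - 1)&sup2; + y&sup2;, not anything from the references) of a gradient system, checking that V is non-increasing along a forward-Euler orbit and that the orbit settles onto a minimum of V rather than oscillating.

```python
# Gradient system x' = -dV/dx, y' = -dV/dy for the (illustrative)
# double-well potential V(x, y) = (x^2 - 1)^2 + y^2.
def V(x, y):
    return (x * x - 1.0) ** 2 + y * y

def flow(x, y, dt=1e-3, steps=5000):
    vals = [V(x, y)]
    for _ in range(steps):
        x -= dt * 4.0 * x * (x * x - 1.0)   # -dV/dx
        y -= dt * 2.0 * y                   # -dV/dy
        vals.append(V(x, y))
    return x, y, vals

x_end, y_end, vals = flow(0.5, 1.0)
# V decreases monotonically along the orbit (up to the Euler error)...
assert all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))
# ...and the orbit settles at the local minimum (1, 0), not on a cycle.
print(round(x_end, 3), round(y_end, 3))    # prints 1.0 0.0
```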

Definition 2. A continuous function is a Lyapunov function if for all , we have , and if for all , , for some constant implies that for some steady state .

There are many other definitions of Lyapunov functions, not all of which are equivalent, so care should be taken in applying theorems built from them. One common condition for to be decreasing along orbits is for all . A sufficient (but not necessary) condition for rest points to be minima is that implies that . These definitions often give insight into the stability of rest points and the particular kind of stability depends on the exact nature of these conditions (e.g. if the inequalities for non-minimal points are taken to be strict or not).

We can think of these functions as tools to understand the stability of equilibria in a different way from local linearization. These functions generalize potentials in that they should be decreasing along an orbit and attain minima at rest points. One powerful tool that makes use of these functions is the LaSalle Invariance Principle. Here, we will state it using the condition that is compact in , but in finite dimensions this is equivalent to the orbit remaining bounded.

Theorem 3. Let be a Lyapunov function, and let with relatively compact. Then consists only of equilibria. If the only nonempty connected subsets of the set of equilibria are single points (for example, if there are only a finite number of equilibria) then for some equilibrium , and as .

So dynamical systems that admit Lyapunov functions have quite simple asymptotic dynamics, as long as orbits are compact and equilibrium points are isolated.
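
As a concrete illustration (a standard textbook example, not from the references above): for the damped oscillator x'' + cx' + x = 0, the energy E = (v&sup2; + x&sup2;)/2 is a Lyapunov function with dE/dt = -cv&sup2; &le; 0. The set where dE/dt vanishes is {v = 0}, whose only invariant subset is the origin, so the Invariance Principle gives convergence of every orbit to the rest point.

```python
# Damped oscillator as a first-order system: x' = v, v' = -x - c v.
def simulate(x, v, c=0.5, dt=1e-3, T=30.0):
    E0 = 0.5 * (v * v + x * x)              # initial energy
    for _ in range(int(T / dt)):
        x, v = x + dt * v, v + dt * (-x - c * v)
    return x, v, E0, 0.5 * (v * v + x * x)  # final state and energy

x, v, E0, E1 = simulate(1.0, 0.0)
# Energy has dropped and the orbit has spiralled in near (0, 0).
print(E1 < E0, abs(x) < 0.01, abs(v) < 0.01)   # prints True True True
```

Note that E is not strictly decreasing pointwise (it is flat whenever v = 0), which is exactly why LaSalle's formulation in terms of the largest invariant set is the right tool here.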

As a brief aside, the dimension of a dynamical system provides some hard limits on the possible dynamics. As we’ve seen, one-dimensional dynamics is more-or-less trivial (at least if we consider only the large time asymptotics). The Poincaré–Bendixson Theorem guarantees that dynamical systems of the form (1) in the plane (e.g. ) can only have steady states and oscillations in terms of bounded solutions. Unbounded solutions, or solutions that blow up in finite time, can of course exist but these are not interesting in terms of asymptotic dynamics. Three-dimensional systems (equivalently, two-dimensional nonautonomous systems) can exhibit chaotic behaviour in addition to numerous other phenomena not possible in planar dynamics. For this reason, dynamics is often broken into the trivial (1D), the planar (2D), and everything else (3+D). You can think of this as the Mathematicians’ One-Two-Many Trichotomy.

**2. Asymptotic Dynamics of Reaction-Diffusion Equations **

Here I will briefly describe applying the theorem above to the case of the Reaction-Diffusion equation in a domain that has a sufficiently smooth boundary . The equation is,

where is a given constant and a given function of space and the dependent variable. We also assume that satisfies Dirichlet boundary conditions,

We assume that is smooth enough to prove existence of bounded global solutions (that is, solutions remain bounded for all time ). From the dynamical systems perspective, this is an infinite-dimensional system. This can be motivated by thinking about how spatial functions (the state space of our equation) are more complex than vectors in finite dimensional systems. If this is not obvious, consider that the Taylor or Fourier series of a function has potentially infinitely many nonzero coefficients. There are many technical details about dynamical systems in infinite-dimensional spaces, but the same approach described for systems of ODE works as long as one is careful about these details. For example, this is what necessitates the use of compactness of orbits, as opposed to boundedness in Theorem 3, as boundedness only implies compactness in finite dimensions. The function space for solutions to this equation should be chosen carefully as well. For the purposes of this post I will just use the space as the Sobolev space of functions that satisfy the boundary condition (4) and are weakly-differentiable (in the spatial variable ). The second link gives some nice motivation for these technicalities, but you can ignore them and just think of the state space as a space of functions. Finally, steady states or rest points are functions that satisfy,

We now define a Lyapunov *functional* for Equation (3). We let be defined by,

where we define,

Formally we can then see that this function must be decreasing along orbits as,

where we have integrated by parts to move the spatial derivatives around, and have assumed sufficient regularity to justify this computation. Justifying it is somewhat technical. As an example, it is nontrivial to define a flow for these kinds of equations due, in part, to the Second Law of Thermodynamics (see the Clausius–Duhem inequality for a neat application of this irreversibility). Solving the backwards diffusion equation is not a well-posed problem (for the usual definitions necessary to discuss things like dynamics). For this reason, even the concept of flow has to be augmented so that we only allow flow forward in time.
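
One can at least verify the formal decrease numerically. The sketch below is my own discretization (the cubic f(u) = u - u&sup3; with antiderivative F(u) = u&sup2;/2 - u&#8308;/4 is an illustrative choice, and the sign convention E[u] = &int;(&frac12;|u_x|&sup2; - F(u))dx is assumed): it evolves the Dirichlet problem and samples the functional along the way, and the sampled values decrease monotonically.

```python
import numpy as np

# u_t = u_xx + f(u) on [0, 1], u(0) = u(1) = 0, f(u) = u - u^3.
N, dt = 100, 1e-5
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
u = 0.5 * np.sin(np.pi * x)               # satisfies the boundary conditions

def energy(u):
    ux = np.diff(u) / h                   # gradient term
    F = u**2 / 2 - u**4 / 4               # antiderivative of f
    return 0.5 * h * np.sum(ux**2) - 0.5 * h * np.sum(F[:-1] + F[1:])

energies = []
for step in range(20000):
    if step % 1000 == 0:
        energies.append(energy(u))
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    u = u + dt * (lap + u - u**3)
    u[0] = u[-1] = 0.0                    # enforce Dirichlet conditions

assert all(b < a for a, b in zip(energies, energies[1:]))   # E decreases
print(energies[0], energies[-1])
```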

If we assume that all of this can be justified, a *semi*flow can be defined, and we can apply Theorem 3 after some compactness of orbits arguments. We then get the following result.

Theorem 4. Suppose that there are only finitely many solutions to (5) that satisfy the boundary data (4). Then given any , the solution of Equation (3) satisfies,

for some equilibrium solution .

Additionally, the Lyapunov functional can be used to describe the stability of these steady states. Dynamically, this tells us that solutions are eventually damped to a rest state determined by the Nonlinear Poisson Equation (5). No oscillations or other time-dependent behaviour can be expected in the large-time limit, and the form of the steady states can also be relatively simple, depending on . This is in contrast to systems of Reaction-Diffusion Equations, where Turing Instabilities and other phenomena can lead to time-periodic behaviours that I may illustrate in a later post.

**3. References and Further Reading **

For a very nice gentle introduction to the formalism, we refer to John Ball’s notes on Dynamical Systems and Energy Minimization, or to the excellent book by James Robinson which extends much of this discussion, in particular to the concept of global attractors and their structure. This is also what my Master’s thesis was about, although admittedly the actual information gained about that PDE was very minimal compared to the above theorem characterizing -limit sets as points.

This paper contains many generalizations of these results to gradient dynamical systems and gradient-like systems in general spaces, although it is quite advanced. A very nice introductory book on finite-dimensional dynamical systems is Strogatz’s Nonlinear Dynamics and Chaos, which contains a good introduction to other aspects of dynamical systems as well, such as bifurcation theory.


The first of these is the use of Master Stability Conditions, or in the more general setting, Master Stability Functions, in relating network topology to temporal dynamics. Under some broad assumptions, these results show how the stability of the synchronous state (where all nodal variables are equal) of a dynamical system depends on the adjacency structure of the network as well as the dynamics at each node. This is particularly useful, since for a given dynamical system we can say precisely when this steady state gains or loses asymptotic stability. Physically, this allows us to say that for a given physical process (e.g. neurons firing), some topologies permit perfect synchronization and others do not. A good review of synchronization phenomena in general can be found here. I will briefly describe the simpler Master Stability Condition, and refer to the above-cited reviews for more information and generalizations.

This is very useful, but it only gives information about this particular local equilibrium, and often requires many assumptions about the homogeneity of the dynamics on the network. In general, we might be interested in the global behaviour of the system due to changes in the network topology, rather than just the behaviour of the synchronous state. There are not many general results in this direction that I am familiar with, so instead I will discuss a relatively recent paper by Anca Rǎdulescu and Sergio Verduzco-Flores just to give an idea of some things that can be done. The arXiv version of their paper can be found here if you do not have access to the journal. Among other things, their work explores how the dynamical behaviours the system exhibits change when the underlying graph structure is changed. This theme of connecting network structure to dynamics is something many people are interested in.

**1. Master Stability Conditions **

This presentation closely follows that of the tutorial article mentioned above on dynamical systems on networks. We consider a simple unweighted undirected graph as the domain for our system. Denote our set of nodes by and a set of edges between each node . Let be the adjacency matrix for this network so that if there is an edge from node to node , and otherwise. We denote our state variable at each node depending on time as . In general we consider a dynamical system of the form,

where the state at each node evolves according to the nodal dynamics and its interaction with its neighbours . The sum can be thought of as just adding to the equation for whenever node is connected to node .
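
In code, this general form can be sketched as follows (a toy example of mine, not from the tutorial: logistic nodal dynamics with weak diffusive coupling on a three-node path graph).

```python
# dx_i/dt = f(x_i) + sum_j A_ij g(x_i, x_j) on a 3-node path graph.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]                  # adjacency matrix of the path 1 - 2 - 3

def f(x):
    return x * (1.0 - x)         # logistic nodal dynamics (illustrative)

def g(xi, xj):
    return 0.1 * (xj - xi)       # diffusive coupling with each neighbour

def step(x, dt=0.01):
    n = len(x)
    rate = [f(x[i]) + sum(A[i][j] * g(x[i], x[j]) for j in range(n))
            for i in range(n)]
    return [x[i] + dt * rate[i] for i in range(n)]

x = [0.2, 0.9, 0.4]
for _ in range(5000):            # forward Euler up to t = 50
    x = step(x)
print([round(v, 3) for v in x])  # the nodes settle at the synchronous state x = 1
```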

For the sake of simplicity assume that each node evolves according to the same local dynamics (), and interacts with all of its neighbours in a consistent way that only depends on the adjacent nodal variables (). Then we can write Equation (1) as,

We now consider the linear stability of a uniform steady state. Let for all be our equilibrium state. We let where we assume . Substituting this expansion into (2) and expanding each term in a Taylor series we have,

where . The first two terms sum to zero as was a steady state of Equation (2). We also neglect the nonlinear terms in since we are doing linear stability analysis. Let and . Then Equation (3) can be written in vector notation as,

Since our graph was undirected, we know that must be symmetric, and therefore diagonalizable with real eigenvalues. Let be an eigenvector of the adjacency matrix with eigenvalue . Then we have,

so that is also an eigenvector of with eigenvalue .

In order for the equilibrium to be linearly asymptotically stable, we need all eigenvalues of to be negative. That is,

From this, we will show that is necessary for stability. The adjacency matrix has zero trace (no self-loops), and its eigenvalues must all be real since it is symmetric. The trace of a matrix is precisely the sum of its eigenvalues, so unless they all vanish, the adjacency matrix must have both positive and negative eigenvalues. Therefore we know that .

Now we consider the sign of . If , then we need . If we assume that this is true for the largest eigenvalue of , , then it must also be true for all other eigenvalues. So we have that is a necessary condition for asymptotic stability. If , then we need . Again, if we assume that this is true for the smallest eigenvalue of , , then this condition will be true for all eigenvalues of . So we have as another necessary condition for asymptotic stability. The reason for rewriting these necessary conditions in terms of reciprocals comes from the fact that they are compatible independent of the sign of as long as we do not divide by . So putting them together we have that is an asymptotically stable steady state of Equation (2) if,

We now recall what and represented and write this as,

This set of inequalities is known as a Master Stability Condition. It is powerful in that it relates the stability of the equilibrium state to the topology of the network through the eigenvalues and , and the specified system dynamics at each node. In particular, the spectrum of the graph is sufficient information to tell us about the stability of a uniform equilibrium point associated to a given dynamical system. This illustrates one connection between the spectrum of a graph and dynamical systems on that graph that is frequently used in the literature to understand these kinds of systems.
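
Numerically, checking the condition only requires the adjacency spectrum. In the sketch below (my own toy example), alpha and beta stand in for the derivatives of the nodal dynamics and the coupling at the equilibrium, so the Jacobian is alpha*I + beta*A and stability is equivalent to alpha + beta*lambda_i < 0 for every adjacency eigenvalue.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],      # a triangle with a pendant node
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lam = np.linalg.eigvalsh(A)      # real eigenvalues, ascending order
lam_min, lam_max = lam[0], lam[-1]

def stable(alpha, beta):
    # Master stability condition: every alpha + beta*lambda_i negative
    return bool(np.all(alpha + beta * lam < 0))

# Sanity check against the full Jacobian alpha*I + beta*A:
alpha, beta = -1.0, 0.3
J = alpha * np.eye(4) + beta * A
assert stable(alpha, beta) == bool(np.all(np.linalg.eigvalsh(J) < 0))
print(round(lam_min, 3), round(lam_max, 3), stable(alpha, beta))
```

Raising the coupling until alpha + beta*lam_max crosses zero is exactly how the uniform state loses stability as the network's largest eigenvalue grows, which is the content of the condition above.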

An interesting aspect of this result is that if we think only of equilibria which satisfy the steady-state dynamics independent of the graph (e.g. ), then the stability of these points is independent of the details of graphs which are isospectral or cospectral, meaning they have the same eigenvalues. The study of eigenvalues as graph invariants leads to many interesting questions, but they do not constitute a complete invariant in that graphs which are not isomorphic can have the same eigenvalues. As an aside, this is related to a question from spectral geometry with a rich history: Can you hear the shape of a drum?

So a natural question is: what does the relationship between dynamics and graph structure look like for other kinds of behaviour? There are many useful generalizations and applications of this approach to the synchronous steady state which can be found in the reviews mentioned at the beginning of this post. Rather than review those, I want to discuss the influence of graph structure on the global dynamics of a system. In particular, what kinds of asymptotic () states or attractors can we expect in a network dynamical system, and how do they behave as the underlying network structure changes?

**2. Nonlinear network dynamics under perturbations of the underlying graph **

This is a very general and difficult question. Rather than try and mention the numerous contributions in this area, I will discuss a relatively simple model analyzed in the paper mentioned at the beginning, and talk about what their results mean in the context of dynamics and graph structure.

Consider a weighted directed graph consisting of two cliques (completely connected subgraphs), labelled and , so that all nodes are mutually connected with edges of positive weight and all nodes are mutually connected with edges of positive weight . Assume that each of these fully connected modules has nodes. The connectivity between and can be described by two blocks of the full adjacency matrix. Let represent that there is an edge of positive weight from node to node . Let represent that there is an edge of negative weight from node to node .

This corresponds to excitatory and inhibitory connections between these two modules, so that the nodes in positively influence nodes in , and nodes in negatively influence nodes in . This kind of excitation-inhibition modular structure is discussed in the neurophysiology literature, from which this model is motivated. In particular, Wilson–Cowan equations are models of coarse-grained brain regions with excitatory and inhibitory interactions, although they are often more complicated in practice. The model I will describe here is a simpler model of coupled nonlinear oscillators, but in their paper, there is some discussion of how their results have implications for neuroscience research with more realistic equations.

These coupled oscillators are described using the following equations:

where the function is a type of sigmoid function used to modulate the inputs to each node. It is defined as,

where the and parameters are different for nodes in the and modules respectively. All parameters are specified at fixed values in the model (see the paper for these values), except for the size of the modules , the interaction matrices and , and the weight of the interaction strengths and , which are varied as bifurcation parameters. Specifically, we allow the between-module coupling strengths to vary between and .

In addition to varying the inter-module connection strengths, the sensitivity of the dynamics to the density of connections between the modules (the number of nonzero entries of and ) and their specific configuration was explored in the paper for low-dimensional values of and . Even in the low-dimensional case, the number of possible configurations is quite large. It is necessary to simplify this by specifying a density of connections between each module, as for a given , there are possible configurations to choose from. Denoting the number of connections from to as (the number of nonzero elements of ), and the number of connections from to as (the number of nonzero elements of ), the total number of configurations for a given number of connections can be computed as . This is still combinatorially very large for fixed densities, but it is at least a starting point for understanding how structural variation can affect the dynamics.
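
The counting itself is straightforward; here is a direct sketch of the combinatorics described above, assuming (as stated) that the off-diagonal blocks are n x n and that a configuration is a choice of which entries are nonzero.

```python
from math import comb

def n_configurations(n, c_ab, c_ba):
    # Choose which of the n*n entries of each off-diagonal block
    # are nonzero, independently for the A->B and B->A blocks.
    return comb(n * n, c_ab) * comb(n * n, c_ba)

n = 3
print(2 ** (n * n))                # 512 possible configurations per block
print(n_configurations(3, 4, 4))   # 15876 configurations at this density
```

Even at n = 3 the counts are large enough that exhaustive exploration is painful, which motivates the statistical sampling used in the paper.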

As discussed in the paper, a comprehensive study of the behaviour of the system when all parameters are varied is essentially intractable. Instead we will look at one result that the paper contains: The bifurcation structure of the asymptotic behaviour of the system is influenced, but not fully determined, by the spectrum of the adjacency matrix describing the network topology. This implies that the adjacency spectrum is insufficient to understand the behaviour of the system. Conversely, however, the dynamics seem to completely determine the spectrum. This was shown numerically in some specific low-dimensional cases.

Before presenting this result, I will reference some elementary aspects of bifurcation theory that are relevant for the discussion. Broadly speaking, a bifurcation is a qualitative change in the behaviour of a system due to a change in a parameter. Local bifurcations occur where an equilibrium point changes stability. This can coincide with new solution branches emerging, such as in pitchfork bifurcations, or the birth of limit cycles such as in Hopf bifurcations, or the disappearance of equilibria, such as in saddle–node bifurcations. For certain kinds of systems (namely those that are structurally stable), these bifurcation points occur on boundaries in the parameter space between regions where the dynamics are topologically equivalent. Away from a bifurcation the number and stability of equilibria and limit cycles remain the same, and these things can change only at bifurcation points.
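
As the simplest concrete example of this (standard normal-form material, not from the paper), the saddle–node x' = mu + x^2 annihilates its pair of equilibria as mu crosses zero:

```python
import math

def equilibria(mu):
    """Real roots of mu + x^2 = 0, i.e. rest points of x' = mu + x^2."""
    if mu > 0:
        return []                       # no equilibria past the bifurcation
    if mu == 0:
        return [0.0]                    # the degenerate saddle-node point
    r = math.sqrt(-mu)
    return [-r, r]                      # stable (-r) and unstable (+r) branches

print([len(equilibria(mu)) for mu in (-1.0, 0.0, 1.0)])   # [2, 1, 0]
```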

We can proceed to classify the dynamical regimes of the system in Equations (9)–(10) by considering their bifurcations in the plane (at least in the range of to we are considering for numerical purposes). To illustrate this idea, we will construct this diagram for the case when and , . Consider the configuration described by and . In Figure 1, we have plotted curves representing bifurcations that equilibrium points and limit cycles of this system experience as the inter-module edge weights are varied. The lines represent codimension 1 bifurcations that separate the plane into several regions. Away from these boundaries, the equilibrium states in each region can vary numerically, but their topological qualities (e.g. number of steady states/limit cycles) do not change. When these bifurcations intersect, codimension two points can occur, and are plotted as red stars. These generally have very complicated structure. As more parameters are varied, more complicated bifurcations of higher codimension can occur, but they will always be small in a measure-theoretic sense (e.g. of measure zero).

Figure 1: Hopf bifurcations are shown as green curves, and limit point (saddle-node) bifurcations as blue curves. The red stars represent codimension two bifurcations: CP for a cusp bifurcation, BT for Bogdanov-Takens, and GH for Generalized Hopf. The labels through represent regions with different types of equilibria. and are regions with a unique stable equilibrium; is a region with a unique stable limit cycle; is a region of multiple stable equilibria, and a region where a limit cycle and a stable equilibrium coexist.

This diagram was constructed using the MatCont toolbox for Matlab. There are several good hands-on tutorials for this software that I would encourage you to play with. To produce Figure 1, I initialized the system with and and ran time forward until a steady state was found. This solution was then numerically continued for larger values of until a (Hopf) bifurcation was found. I then selected the Hopf Curve option, and MatCont traced the bifurcation curve until the maximum number of steps had been taken or it reached a codimension-two bifurcation. From there I selected the codimension-two points, and numerically continued Hopf and limit point curves from them. Repeating this procedure with different initial conditions and parameters does not necessarily give an exhaustive diagram of the bifurcation structure, but it is a good first approximation for simple enough systems. The internal structure of the various regions was computed in the paper using a particle-swarm optimization algorithm to try and determine the possible kinds of equilibria in each region. In my case, I simply verified their results by running simulations of the system in each region to check that the number and kind of equilibria were the same as reported in the paper. Regions 4 and 5 have several different possible equilibria but are not further subdivided for brevity.

It is worth mentioning that these bifurcations are all local, in that an equilibrium or a limit cycle changes stability, which does not necessarily say anything about other equilibria. Nevertheless, these diagrams can give a rough idea of how the structure of the attractor changes as parameters are varied: equilibria and periodic orbits change stability at these curves, and so those parts of the attractor must necessarily change. If we can show that these are the only equilibrium states, and we can exhaustively find all bifurcation curves, then we essentially know everything about the attractor; hence we know the asymptotic state of the system as parameters are varied. The paper demonstrates that this structure is unique to this adjacency class (graphs with the same eigenvalues) within the set of networks with and , but that this class can give rise to other kinds of dynamical behaviour. For instance, consider the system with the off-diagonal blocks being and . The full adjacency matrix has the same eigenvalues as the previous example, but if we perform the same computations as above on this system, we get Figure 2, which appears to be a much simpler diagram.

Figure 2: Limit point (saddle-node) bifurcations are shown as blue curves. The red star is a cusp bifurcation.

Given this evidence, the authors conjecture that these bifurcation diagrams are unique to each adjacency class (e.g. graphs with different spectra could not have precisely the same diagram), but that even within an adjacency class, variation can be significant. They explore this further in the paper using a statistical approach, since the number of possible configurations is quite large, and it grows extremely rapidly for even moderately larger graphs. They also discuss other aspects of bifurcations due to changes in the graph structure, comparing for instance densely-interconnected modules with sparser ones, and showing that certain regions are more sensitive to global changes, such as adding or removing edges, when the edge weights are large. A complete picture is elusive, but I feel that this kind of exploratory analysis at least gives an idea of which tools may or may not be helpful for analyzing the effects of structural changes in network dynamical systems. The authors also suggest that finer metrics on the network, such as node degree distributions or clustering coefficients, may provide a more effective way of classifying dynamic complexity.

Practically speaking, this result helps us understand how useful the adjacency spectrum of a network is for classifying its behaviour. There has been a significant amount of recent work on using models like these, as well as statistical approaches, to understand how structural changes influence complex systems, such as in the process of learning in the brain. Restructuring seems to play a crucial role in learning. Additionally, understanding the importance of structure to dynamical function seems important for understanding abnormal brain connectivity, both in terms of the efficiency of brain function and its role in mental illness. As with most good areas of science, there are many more interesting questions to pursue.

Over the summer I hope to post a few more blogs like this related to dynamical systems approaches to large coupled systems. In particular, I think a good layperson discussion of this paper would be interesting as my recent research has been in the area of Differential–Algebraic Equations with an underlying network structure.

**3. References and Further Reading **

- Porter, Mason A., and James P. Gleeson. “Dynamical systems on networks: a tutorial.” arXiv preprint arXiv:1403.7663 (2014).
- Arenas, Alex, et al. “Synchronization in complex networks.” Physics Reports 469.3 (2008): 93-153.
- Rǎdulescu, Anca, and Sergio Verduzco-Flores. “Nonlinear network dynamics under perturbations of the underlying graph.” Chaos: An Interdisciplinary Journal of Nonlinear Science 25.1 (2015): 013116.
- Kac, Mark. “Can one hear the shape of a drum?” The American Mathematical Monthly 73.4 (1966): 1-23.
- Dhooge, Annick, Willy Govaerts, and Yu A. Kuznetsov. “MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs.” ACM Transactions on Mathematical Software (TOMS) 29.2 (2003): 141-164.
- Guckenheimer, John. “Bifurcation.” Scholarpedia 2.6 (2007): 1517.


There is quite a lot to say about the information presented in the links above. I mostly want to comment on the common perceptions people have about academics, and perhaps more importantly, about the tools that they produce. It is abundantly clear that we live in a world that is becoming more complex, and that we are starting to face challenges that previous generations could never have conceived of. Epigenetics has turned the concepts of DNA and hereditary information into a much more complicated picture, in which our daily choices may have deep impacts on the physiology of our children. Antibiotics changed so much of medicine, but they have come with a very unexpected new danger, antimicrobial resistance, which we still don’t have good ways to guard against. While we have increased our awareness as a species of our impact on the environment, climate change and other areas where the science is very nuanced show us how little we know about our meddling in natural affairs, and how very dangerous that can be. I highly recommend reading this article for a candid take on the nonsensical politics surrounding this issue.

It is for these reasons that I think people are, on average, beginning to appreciate complexity significantly more. Speaking anecdotally for a moment, as a child I knew very few people with the skills to make computers do what they wanted. Now I know many people capable of not only using a myriad of devices, but of programming them to various degrees. This isn’t a trivial skill, but it is becoming a much more valuable one to at least begin to understand. I would highly recommend reading this article in full when you have the time, to really begin to appreciate what programming is. In fact, I’d recommend spending some time just playing with that beautifully-interactive description of an increasingly-important area of human knowledge.

So in general, my outlook on the average person’s capabilities and intelligence is quite optimistic. I think people, at least those in economically prosperous countries, will continue to become more capable, and on average more intelligent. There is some debate here, especially about how we quantify intelligence and how our capabilities are changing; as usual, I encourage you to read some of the modern discussions, such as this or this, for a brief overview of the topic. I think, though, that it is a much harder question to determine whether our capabilities are increasing fast enough to match the increasingly complex and nuanced challenges we are beginning to face.

In particular, I find the abuse of mathematical and statistical tools, and the misuse of scientific theory in government or even within academia itself, very disheartening. Taleb’s article about how little good theory we have to deal with the majority of real-life probabilistic challenges is very sobering to a theoretician like myself. Most of us are well aware that we idealize reality in order to strive for more elegant and general theories, and often practitioners in the “real world” simply take our models and apply them, without carefully checking the assumptions.

It is difficult to read through dense and technical work in order to understand things like why GDP is used as an economic measure, and how viable it is. The literature is vast and contains an incredible amount of subtlety and technical analysis. Yet, as with many such ideas, we can find videos of politicians and newscasters comparing GDP to the total enterprise value of a company. This kind of comparison has many issues (discussed in some detail in the Street-Fighting Mathematics book), not the least of which is that it is *dimensionally inconsistent*: GDP is a flow of value per unit time, while an enterprise value is a stock of value at an instant. It would be similarly nonsensical to compare the position of the moon relative to the earth with how fast a car is moving, in order to make a statement about how slow the car is going. On the one hand, it is unreasonable to expect everyone to be well-versed in modern economic theory; despite its mathematical nature, I know basically nothing about it. On the other hand, wouldn’t it be nice if people with important jobs, like politicians and reporters, could remember enough of their high school science curriculum to make valid statements and comparisons?

I have very mixed feelings about all of this, and I will probably spend a long time thinking and reading before proposing any solutions to these problems. Nevertheless, I think a very important takeaway is how invaluable a *good* education is, and how often it is compromised, whether in favour of mere training or by otherwise terrible solutions to the difficult problem of education in an increasingly complex world.

Please do share your thoughts on these issues with me, and I very much would encourage you to read some of the items linked above.


Here I want to summarize some of the ideas presented in [1], and briefly discuss them in the context of multiple scales analysis. Specifically, I want to emphasize the similarities and differences between one-dimensional homogenization for ordinary differential equations (multiple scales), and the approaches to partial differential equations that are discussed in the paper. Throughout, emphasis will be on the perturbative assumptions and analysis, although other considerations (e.g. numerical difficulties) will be mentioned.

**1. Motivation **

There are a variety of paradigms used to model the influence of complex media on macroscopic physics, such as the effects of a porous medium on transport of some physical quantity. An interesting approach is that of homogenization, where separation of scales is exploited to simplify the behaviour of the system. To give an example, consider a bounded domain modelling some porous medium, so that to model the microscopic flow in we must account for the complex geometry of the pores. This problem can be very difficult for many reasons, especially since even sophisticated numerical approaches are often not capable of resolving this fluid flow.

Homogenization is a mathematical approach where we consider a suitable average flow through this domain, rather than the flow through the microstructure of the fluid domain itself. In many practical situations this is what we are actually interested in. Additionally, these approaches can sometimes be made rigorous, and at the very least they give some motivation for phenomenological relationships discovered empirically, such as Darcy’s Law. Rigorous justification is of course a spectrum, and research in areas like porous media falls throughout it. Nevertheless, it is useful to have a methodology that can be applied to a variety of microscopic geometries to determine useful macroscopic values.

**2. 1-D Homogenization **

The case of homogenization for Ordinary Differential Equations was presented as an application of the Method of Multiple Scales. I will briefly outline how this can be applied to the equation,

where and are smooth and periodic in with period one. This might be a model, for example, of steady state transport in a one dimensional medium with a rapidly varying microstructure, modelled by the micro or fast variable . The periodicity assumption warrants some discussion, but is a reasonable physical assumption for many problems of interest. See Technical Note 10 in [1] for more information. What we are interested in doing is averaging this equation in a suitable sense so that we don’t have to resolve the solution at the microscale, that is, when is order . What would be ideal is to recover an ODE in that has an average or effective diffusivity related to .
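For concreteness, a standard model problem of this type (written in notation of my own choosing, since the displayed equation above may not match) is

```latex
\frac{d}{dx}\left( D\!\left(x,\frac{x}{\varepsilon}\right)\frac{du}{dx} \right)
  = f\!\left(x,\frac{x}{\varepsilon}\right),
\qquad 0 < \varepsilon \ll 1,
```

with $D(x,y) > 0$ and $f(x,y)$ smooth and 1-periodic in the fast variable $y = x/\varepsilon$.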

We may proceed via multiple scales by formally treating and as different variables, and expanding as . We may then further assume that the are all periodic in with period one. Substituting this into (1), we find

Formally equating powers of we find at order ,

If we multiply this equation by , and integrate in , we find

through integration by parts. Since is positive and is periodic in , we have that does not depend on . We now look at order ,

Integrating once over gives us,

where is a constant of integration. Integrating this equation over , and using the periodicity of , we can find as

where we have used the brackets to denote averaging over the periodic cell. If we now look at the problem we have,

If we now average this equation over one period in , we find by periodicity that

Now using the average of (2), we can substitute for the second term in this equation to find,

so that the harmonic mean of is the correct average, or effective diffusivity, that appears in our final ODE for .
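As a quick numerical sanity check of the harmonic-mean result, consider a concrete periodic diffusivity (my own choice, purely for illustration) whose harmonic mean is known in closed form:

```python
import numpy as np

# Hypothetical 1-periodic diffusivity chosen so that 1/D(y) = 2 + sin(2*pi*y),
# whose average over one period is exactly 2; the effective (harmonic-mean)
# diffusivity is therefore <1/D>^{-1} = 1/2.
def D(y):
    return 1.0 / (2.0 + np.sin(2.0 * np.pi * y))

y = np.linspace(0.0, 1.0, 200000, endpoint=False)  # uniform grid on one period
harmonic = 1.0 / np.mean(1.0 / D(y))   # effective diffusivity, ~0.5
arithmetic = np.mean(D(y))             # naive average, larger than 0.5 here

print(harmonic, arithmetic)
```

By the AM-HM inequality the naive arithmetic average always overestimates the true 1-D effective diffusivity, which is one reason the multiple-scales calculation is worth doing rather than guessing the average.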

**3. Homogenization for Partial Differential Equations **

The above approach can fairly easily be extended to second order elliptic operators, but instead I want to compare it to the methods discussed in [1]. There, two techniques to homogenize PDE are compared with a general parabolic transport problem in mind. The first approach discussed is the use of volume averaging, which is a very physically-motivated technique to understand the relationship between the microstructure of a material, and its effective approximations. The second approach is by use of formal multiscale asymptotics, which is much closer to the 1-D method demonstrated above. For this reason I will focus the rest of the discussion on this second approach, and refer to the paper for details about the former. For the rest of this discussion we will use notation from the paper, recalling what seems appropriate. See section 4.1 of [1] for details if anything is unclear.

After a suitable nondimensionalization, many transport phenomena can be described by the following problem,

where for or . We also assume that exhibits only high-frequency oscillations, that is, on length scales of order . As in the 1-D case, we are letting this diffusion tensor model the microstructure of the medium. There is some discussion about when this is not applicable, such as when there are sharp phase boundaries, but it is a relatively good approximation for many physical problems. It should also be noted that is the ratio of the microscopic length scale, , and the macroscopic length scale, . That is, .

To proceed, several assumptions are made about the problem that allow us to treat it in a similar fashion to the 1-D problem above. Here, however, both the physical and mathematical ideas become more technical. The assumptions made, and some simplified discussion, are as follows.

: We homogenize the problem by considering the limit of a sequence of fictitious problems , rather than the original problem (5)–(7), which implicitly contains a fixed finite, but nonzero, value of the length scale ratio . This is in the spirit of zooming away from the microstructure in the limit, and so this limit should rightfully be considered an asymptotic approximation to the real geometry of the problem. Parameter fields for the problem use a similar subscript notation.

: We assume scales as component-wise, and that we can write . The original nondimensionalization was with respect to the macroscopic length scale , and so this assumption about scaling is focussing on the global, macroscopic behaviour. There are subtleties about the scalings chosen that are particular to each problem, and the paper has an important discussion of how these can lead to different macroscopic models. See Technical Note 12 in the paper for more details.

: We assume that we can use the formal two-scale expansion . A crucial point here is that for .

: We assume that the microstructure of our problem is either periodic, or can be well-approximated by a conceptual periodic setting. The fundamental unit of periodicity is called the unit-cell, and would be considered the interval in the earlier example. Finally, we recall that averaging is with respect to the unit-cell being considered in the problem. If is the unit-cell with centroid and corresponding volume , we define the average of a function as

Here, we are taking the unit-cells to be sequences of sets at each point , in order to define this average globally, and so the integral is only over . Think of this as a generalization of the earlier definition of average, .

I will briefly outline the method and results, and refer to the paper for the full details. Applying these assumptions sequentially, and following a process very similar to the 1-D example, we can arrive at a macroscale equation and a unit-cell problem. Doing so yields that is independent of , and the following equation for the unit-cell,

and the macroscale equation for ,

Note that these two equations are exactly the macroscale and unit-cell problems for the 1-D problem if in the ODE, and if we added the time-derivative term.

Motivated by separation of variables, we write

and substitute it into the cell-problem, (8). We can now use the unit cell geometry and the definition of to solve for , assuming it is periodic in . Substituting this separation of variables ansatz into (9), we get

where

Here, the average of the diffusion tensor directly takes the microscopic cell-problem into account. If we were to solve (2) by exploiting the linearity of the cell-problem, we could write the solution to that problem in an identical form. Instead, we went one step further since the geometry of the unit interval is so simple, and computed the effective diffusivity directly. The multidimensional problem has far more subtlety than the simple case, and even after averaging may be difficult to resolve numerically, but the similarities of the problems and their solutions are worth pointing out.
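For reference, the standard form of the cell problem and the resulting effective tensor (again in notation of my own choosing, since the paper's symbols may differ) is

```latex
\nabla_y \cdot \Big( \mathbf{D}(y)\,\big(\nabla_y \chi_j + \mathbf{e}_j\big) \Big) = 0
\quad \text{in the unit cell, with periodic boundary conditions},
\qquad
\mathbf{D}_{\mathrm{eff}} = \big\langle \mathbf{D}\,(\mathbf{I} + \nabla_y \chi) \big\rangle,
```

where $\chi_j$ is the periodic cell solution driven in the $j$-th coordinate direction $\mathbf{e}_j$. In one dimension this machinery collapses to the harmonic mean computed earlier.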

There are many important aspects that the paper covers that I did not discuss, as my intention was mostly to reflect on it in light of the Perturbation Methods lectures that I had attended. There are numerous other aspects worth discussing, such as efficient numerical coupling between the cell-problem and the macroscale equation, or formal convergence results of the asymptotic solutions. I would invite the interested reader to use [1] and the other links below as starting points for further reading on the topic.

**4. References and Useful Sources**

[1] – Y. Davit et al., “Homogenization via formal multiscale asymptotics and volume averaging: How do the two techniques compare?” Advances in Water Resources, 62, Part B:178–206, December 2013.

[2] – Some good lecture notes on Multiple Scales methods.

[3] – Another paper comparing the physical meaning of homogenization with the mathematical theory.

With this idea in mind, I want to briefly touch on a few areas where this spectrum can be observed. I will not give anything close to a full account of these areas, nor can I list the countless textbooks and online sources that discuss them, but I will litter the discussion below with some links to help make sense of things. Ideally, the reader will have some mathematical maturity, but I hope the discussion is illustrative even if the details are too abstract to follow. It should also be noted that this is perhaps a very subjective opinion, and that my experiences are primarily as a student, and less as a researcher.

**1. Partial Differential Equations and Shocks **

A huge amount of applied mathematics, and so physics and computer science, uses the framework of differential equations to study physical phenomena. They are a powerful tool that, since the discovery of Calculus, have given us an enormous amount of insight into the physical world. These equations form the basis of countless theories underlying modern economics, biology, engineering, and other disciplines. They are currently being used to push the boundaries of present knowledge in areas as diverse as firefighting, tissue engineering, materials science, and even understanding financial markets or the spread of information on Facebook. They also give rise to a variety of mathematical questions in order to analyze them, and often these mathematical questions are pursued for their own intrinsic beauty.

In particular, many phenomena of modern interest exhibit nonlinearity, and so the equations modelling them are almost always more complex than linear ones. Due to the difficulty in solving such equations, various analytical and numerical approximation schemes have been developed over the past few centuries. It is the validity of these approximations that seems to differentiate researchers by the rigour of their approaches. Whether or not solutions exist and are unique to these kinds of problems is a huge research endeavour, which is often taken for granted by scientists more interested in the phenomena that are being modelled. Beyond existence questions, many equations are ill-posed due to being very sensitive to parameters that we may not physically know perfectly, and hence their analysis must be done with care. See chaotic dynamics for some discussion of this point, which I may address in detail in a later post.

For a very illustrative example, consider the history of fluid dynamics. This is an area where mathematical modelling has been particularly successful, not only in achieving exact analytical results that are relevant to science and engineering, but also because computational simulations of the differential equations governing the physics of fluids (the Navier-Stokes equations) are often much cheaper than experiments with wind tunnels or other apparatuses. The validity of such approaches can be very questionable, however, and the attention paid to these details varies quite broadly in the literature.

In particular, we don’t yet know if the Navier-Stokes equations have a unique physical solution for general geometries. Simply proving that they do would solve a very difficult problem and net you $1,000,000 from the Clay Mathematics Institute. This question isn’t really about whether the assumptions made in deriving such equations (such as modelling fluids as continua) break down, but whether the model itself is predictive in all cases. Nevertheless, these equations are still used in almost all major fields of science, with varying levels of certainty.

A particularly important special case of these equations is Burgers’ equation. Let be the velocity of the fluid at position and time , and be the viscosity. Then this equation is,

As long as , solutions to this equation are smooth. When , which is a common (inviscid) approximation for gases, solutions to this equation can become multivalued after a finite amount of time. This can be seen since this is just advection where the speed of the solution is , so it is easy to write down an initial condition that would collapse on itself. This can be shown formally using the method of characteristics.
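To make the characteristics argument concrete: along each characteristic $x(t) = x_0 + u_0(x_0)\,t$ the solution is constant, and neighbouring characteristics first cross at $t^* = -1/\min_x u_0'(x)$ whenever the initial slope is somewhere negative. A small numerical check, with an initial condition of my own choosing:

```python
import numpy as np

# Inviscid Burgers' equation u_t + u u_x = 0: characteristics x(t) = x0 + u0(x0) t
# carry constant u, and first cross at t* = -1 / min u0'(x), provided min u0' < 0.
def shock_time(u0_prime, xs):
    m = u0_prime(xs).min()
    return -1.0 / m if m < 0 else np.inf  # no shock if the profile never steepens

xs = np.linspace(0.0, 2.0 * np.pi, 100001)
# Illustrative choice: u0(x) = -sin(x), so u0'(x) = -cos(x) with minimum -1 at x = 0,
# giving a shock-formation time of exactly 1.
t_star = shock_time(lambda x: -np.cos(x), xs)
print(t_star)  # 1.0
```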

Such a multivalued solution may be physical, as in the case of a water wave overtaking the surface of a body of water before breaking, but it usually isn’t. One approach to remedy this situation is to introduce a shock (or discontinuity) into the solution, so that it satisfies the differential equation everywhere but at the discontinuity. This can then be formally considered a weak solution of the equation. Here is an example of a shockwave in pressure due to this behaviour. An interesting fact is that the form of the shock solution is not uniquely determined by the equation, but requires an additional assumption of a conserved quantity. In particular, many physical models such as (1) (with ) can be written in a conservation form such as,

where the second term can be seen as conservation of kinetic energy. But this equation can be written in a variety of different forms, such as

for any constants and . Each choice of these constants would lead, in general, to a different form of the shock solution, and hence correspond to a different physical situation. This type of phenomenon is well known for systems of hyperbolic conservation laws, and is still a modern research area for many people. In particular, I hope it is clear that there are many technical considerations when one views these equations rigorously, and many physical ones when one chooses the conserved quantities for a given physical system.
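The dependence on the chosen conserved quantity can be made explicit through the Rankine–Hugoniot condition: a shock between left and right states $u_L$, $u_R$ travels at speed $s = [\,f\,]/[\,q\,]$, the jump in flux over the jump in conserved density. A quick comparison of two conservation forms of the same smooth equation (the states are my own illustrative choice):

```python
# Rankine-Hugoniot shock speed s = (f(uL) - f(uR)) / (q(uL) - q(uR)),
# where q is the conserved density and f its flux.
def shock_speed(q, f, uL, uR):
    return (f(uL) - f(uR)) / (q(uL) - q(uR))

uL, uR = 1.0, 0.0
# Conserve u with flux u^2/2: s = (uL + uR)/2 = 0.5
s1 = shock_speed(lambda u: u, lambda u: 0.5 * u**2, uL, uR)
# Conserve u^2 with flux (2/3) u^3: here s = 2/3
s2 = shock_speed(lambda u: u**2, lambda u: (2.0 / 3.0) * u**3, uL, uR)
print(s1, s2)  # 0.5 and 0.666..., i.e. the same smooth PDE, different shocks
```

Smooth solutions of the two forms coincide, but their weak (shock) solutions do not, which is exactly why the physically conserved quantity must be chosen on modelling grounds.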

**2. The Devil’s Invention: Asymptotics **

“Divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever.” – N. H. Abel

Another area where a variety of approaches in rigour exists is the development of asymptotic formulae, especially in connection with perturbation problems. These occur in countless settings, from quantum mechanics and astrophysics, to cancer biology and electrodynamics. These kinds of approaches also play key roles in the development and analysis of numerical approaches to solving differential equations.

The key insight in such methods is to identify small parameters that are either explicitly in the model, such as the viscosity in (1), or implicitly, such as the length or time scales of events we are interested in. Once such a parameter has been identified, we write an expansion for our unknown function in some sequence of this parameter, such as a power series, and then formally substitute this into the equations. For an example of this, see this previous post.
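Schematically, for an unknown $u$ and small parameter $\varepsilon$, one posits

```latex
u(x;\varepsilon) \sim u_0(x) + \varepsilon\, u_1(x) + \varepsilon^2 u_2(x) + \cdots ,
```

substitutes this into the governing equations, and equates like powers of $\varepsilon$ to obtain a hierarchy of simpler problems for the $u_n$.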

This kind of technique has been incredibly successful at elucidating important aspects of physical phenomena. In particular, it can quite quickly give insight into which effects or forces dominate the dynamics in a particular system. But it requires what one might consider a leap of faith, as these expansions are, in general, divergent series. Proving their convergence, or alternatively, that they are formally asymptotic approximations according to a suitable definition, is quite challenging. I will likely return to this point with a more detailed example in a future post about homogenization theory, and about how that community is divided into several groups working with different levels of rigour.

There are many other important instances where the very formal theoretical community of mathematicians has diverged from those interested in understanding physical problems, and a real spectrum of approaches has emerged. Another important example is stochastic calculus, which emerged to make sense of many earlier physical insights, such as Einstein’s model of Brownian motion. I may write more about examples of this kind later, but for now I hope this has given you some insight into the very different notions of certainty that theoretical scientists have. Rigour and certainty are somewhat subjective, but I believe they are crucial notions in the advancement of understanding. As we begin to question more complex phenomena, we must keep an eye on how careful we are in our investigations. I don’t believe there is a single correct place to be on such a spectrum of rigour, but rather that differences in approach lead to deeper insights, as progress in both deep theory and complicated applications reinforces our fundamental understanding.
