## Non-Existence of Patterns in Reaction-Diffusion Systems

In writing up several recent papers I have spent some time reading about reaction-diffusion systems and the kinds of behaviour they can have. I will likely blog about these kinds of equations more often, as well as their applications to chemical, biological, and other systems. Today I just want to record some interesting observations from the literature, concerning the existence of large-time asymptotic solutions to these equations. Roughly speaking, these must be either uniform or non-uniform steady states, or more exotic time-dependent solutions such as oscillations, spatiotemporal waves, and temporal or spatiotemporal chaos. Other interesting transient behaviours can occur, such as wave propagation, but here I’ll focus entirely on the long-time dynamics, and specifically on the existence of stable equilibria. For a broad overview of these models in some detail, see [1].

1. Reaction-Diffusion Equations and Turing Instability

A reaction-diffusion system is a coupled set of ${k}$ partial differential equations of the form,

$\displaystyle \frac{\partial u_i(\boldsymbol{x},t)}{\partial t} = f_i(\boldsymbol{u})+\delta_i \Delta u_i,\quad i=1,\dots,k, \ \ \ \ \ (1)$

where ${\boldsymbol{u} = (u_1,\dots,u_k)^T}$ is the state vector, typically denoting concentrations of chemicals or biological populations, ${f_i}$ are the (typically nonlinear) kinetics that describe interactions between the different species, ${\delta_i >0}$ are the diffusion rates, ${\boldsymbol{x} \in \Omega\subseteq \mathbb{R}^n}$ is the space variable, and ${t}$ is time. As the ${u_i}$ represent population densities or concentrations of chemical species, we typically assume ${u_i \geq 0}$ for all ${i}$. We also need some initial distribution ${u_i(\boldsymbol{x},0) = u_{i,0}(\boldsymbol{x})}$ for each species, and either Neumann,

$\displaystyle \boldsymbol{n}\cdot \nabla u_i =0 \quad \boldsymbol{x} \in \partial \Omega, \quad i=1,\dots,k \ \ \ \ \ (2)$

or Dirichlet boundary conditions,

$\displaystyle u_i =0 \quad \boldsymbol{x} \in \partial \Omega, \quad i=1,\dots,k \ \ \ \ \ (3)$

where ${\boldsymbol{n}}$ is the outward normal vector at the boundary of the domain, which we denote by ${\partial \Omega}$. The former models ‘no-flux’ conditions where the ${u_i}$ cannot leave or enter the domain, and the latter models boundaries where the value of ${u_i}$ is held fixed. Here we are writing homogeneous conditions (${=0}$), but typically one can absorb non-homogeneities in the boundary data into the equation and reduce to this case, potentially introducing spatially-dependent kinetics ${f_i}$.
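To make the no-flux conditions concrete, here is a minimal numerical sketch (my own illustration, not part of any of the cited works; all discretization choices are assumptions) of how conditions (2) are typically imposed in a one-dimensional finite-difference scheme, using reflecting ghost points:

```python
import numpy as np

def laplacian_neumann(u, dx):
    # Second-difference Laplacian on a 1D grid; the no-flux conditions
    # u_x = 0 at each end are imposed by reflecting ghost points,
    # i.e. the ghost values are u[1] at the left end and u[-2] at the right.
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    return lap

def rd_step(u, f, delta, dx, dt):
    # One explicit Euler step of the scalar equation u_t = f(u) + delta * u_xx.
    return u + dt * (f(u) + delta * laplacian_neumann(u, dx))
```

With no kinetics (${f=0}$), repeated steps simply smooth the initial data toward its spatial average, which is the smoothing role of diffusion discussed below.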

If the spatial effects are negligible, such as in well-mixed or homogeneous chemical systems, then we can neglect the effect of diffusion and write a system of ordinary differential equations instead,

$\displaystyle \frac{d u_i(t)}{d t} = f_i(\boldsymbol{u}),\quad i=1,\dots,k, \ \ \ \ \ (4)$

where all terms have the same meaning as in equation (1). Systems of ODEs can be found throughout chemistry and biology, and have formed the basis of much of our theoretical understanding of these systems, such as the mass-action kinetics of chemical reactions, ecological models of interacting populations, and epidemiological models of the spread of disease through a population. As these models are both amenable to elementary methods and quite good at providing insight into real phenomena, many undergraduate courses in applied mathematics, and especially mathematical biology, spend a considerable amount of time studying the spatially homogeneous case for various choices of kinetic functions ${f_i}$. ODEs are still heavily studied across applied mathematics, as are their extensions to stochastic differential equations, delay differential equations, and partial differential equations. In all of these models there is often a discussion of what changes when these other phenomena are considered: delays in time, randomness, and spatial effects.

Typically diffusion plays the role of smoothing out irregularities, as can be seen in the solution of the Heat Equation. Contrary to this intuition however, Turing [2] famously showed that chemicals with different diffusivities, ${\delta_i}$ in equations (1), can destabilize a stable solution of the spatially homogeneous equations (4). This often leads to non-uniform steady states or patterns that cannot be obtained or understood by the spatially homogeneous ODEs. There are many accounts of these patterns, and a huge amount of mathematical literature discussing them. This result is relatively easy to derive, and I highly encourage you to read Turing’s paper no matter what background you have in this material-It is wonderfully written, and implies that diffusion in some systems can drive an instability of the uniform solution that must mean more complex behaviour occurs over the spatial extent of the domain. These non-uniform states are often referred to as Patterns,’ and they admit some very beautiful structures.
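Turing’s criterion is easy to check numerically for two species. The following sketch (my own illustration; the Jacobian entries are arbitrary numbers chosen to satisfy the homogeneous stability conditions, not taken from any particular model) tests whether a steady state that is stable without diffusion can be destabilized by unequal diffusivities:

```python
import numpy as np

def turing_unstable(J, d1, d2, q_max=50.0, n_q=2000):
    # J is the kinetic Jacobian at a homogeneous steady state that is
    # stable without diffusion: tr(J) < 0 and det(J) > 0.
    assert np.trace(J) < 0 and np.linalg.det(J) > 0
    # With diffusion, mode q grows iff J - q^2 D is unstable (D = diag(d1, d2)).
    # Since the trace only becomes more negative, instability can only enter
    # through a negative determinant at some wavenumber q.
    for q in np.linspace(0.0, q_max, n_q):
        M = J - q**2 * np.diag([d1, d2])
        if np.linalg.det(M) < 0:
            return True
    return False
```

Equal diffusivities (`d1 == d2`) can never trigger the instability, which is why differing diffusion rates are essential to Turing’s mechanism.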

2. Non-Existence of Stable Patterns in One Dimensional Scalar RD Equations

Here I want to extend the discussion I presented before about Lyapunov functionals for the scalar reaction-diffusion equation, and briefly summarize some literature extending these results. There are many technicalities, both practical and formal, when dealing with PDEs. Here I will consider Neumann boundary conditions on a simple domain; namely, we consider the scalar equation on the interval ${\Omega=[0,L]}$,

$\displaystyle \frac{\partial u(x,t)}{\partial t} = f(u)+\frac{\partial^2 u}{\partial x^2},\quad t>0, \quad x \in \Omega = [0,L], \ \ \ \ \ (5)$

with boundary and initial data,

$\displaystyle u_x(0,t)=u_x(L,t)=0, \quad u(x,0) = u_0(x), \ \ \ \ \ (6)$

where ${u_0}$ is a given initial distribution of the quantity ${u}$. Note that the diffusivity ${\delta}$ can be set to unity by scaling the space variable ${x}$. In the previous post I showed, via integration by parts, that the only long-time behaviour of equation (5) consists of steady states. So for a large enough time ${T}$, we anticipate that the solution of this equation will asymptotically approach a steady state, e.g. ${u(x,T) \approx U(x)}$ for some spatial function ${U}$. As usual there are details about what we mean by approximation of functions, but formally this can be stated as convergence in a suitable function space. Practically, this means that if we simulate equation (5) for long enough, the solution will settle onto some spatial function. We can show that non-uniform steady states of equation (5) are in fact unstable, so the only possible stable asymptotic solutions are uniform steady states satisfying the spatially homogeneous equation (4) for ${k=1}$; that is, ${U(x)=U_0}$ for a constant ${U_0}$ satisfying ${f(U_0)=0}$. This is a result in the opposite direction of Turing’s work, and implies that the long-time behaviour of scalar reaction-diffusion equations in one dimension is captured by an ordinary differential equation.
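As a quick numerical sanity check of this claim, here is a small simulation sketch (my own; the logistic kinetics ${f(u)=u(1-u)}$ and all numerical parameters are illustrative choices) of equation (5) with Neumann conditions, starting from non-uniform initial data:

```python
import numpy as np

# Explicit Euler simulation of u_t = f(u) + u_xx with no-flux boundaries,
# for the illustrative choice f(u) = u(1 - u), which has stable root u = 1.
L, N = 5.0, 101
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.4 * dx**2                        # diffusive stability restriction
u = 0.5 + 0.4 * np.cos(np.pi * x / L)   # non-uniform, positive initial data

for _ in range(int(20.0 / dt)):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2      # ghost point: u_x(0) = 0
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2   # ghost point: u_x(L) = 0
    u = u + dt * (u * (1.0 - u) + lap)
```

By the end of the run the solution has flattened onto the constant state ${U_0=1}$, a root of ${f}$, exactly as the non-existence result predicts.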

Let ${U(x) \neq c}$ for any constant ${c \in {\mathbb R}}$ denote a non-uniform steady state of (5). We perturb this solution by writing ${u(x,t) = U(x) + \epsilon e^{\lambda t}V(x)}$ for some small parameter ${\epsilon}$. The sign of ${\lambda}$ will tell us if this perturbation grows or decays in time, and hence whether or not ${U}$ is a stable steady state. Substituting this into (5), expanding ${f(U+\epsilon e^{\lambda t}V) \approx f(U)+\epsilon e^{\lambda t}f'(U)V}$, using the steady-state relation ${f(U)+U_{xx}=0}$, and keeping terms of order ${\epsilon}$, we obtain the equation for the perturbation ${V}$,

$\displaystyle (f'(U)-\lambda)V+V_{xx}=0, \ \ \ \ \ (7)$

where ${V}$ satisfies the boundary conditions,

$\displaystyle V_x(0)=V_x(L)=0. \ \ \ \ \ (8)$

This is a standard Sturm-Liouville eigenvalue problem, and so the eigenvalues that satisfy it are discrete (e.g. countable), real, and simple. They can be arranged in increasing order as ${-\infty < \dots < \lambda^N_1 < \lambda^N_0}$, where the eigenfunction corresponding to ${\lambda^N_i}$ has exactly ${i}$ zeroes in the interval ${(0,L)}$; in particular, the eigenfunction corresponding to the principal eigenvalue ${\lambda^N_0}$ has no zeroes, and can be taken positive on ${[0,L]}$. Similarly we can consider the eigenvalues corresponding to the Dirichlet problem, ${\lambda^D_i}$, and note that, with the same ordering, the eigenfunction with index ${i}$ will have ${i}$ zeroes in the open interval ${(0,L)}$.

Now let ${W=U_x}$ and differentiate the steady-state version of equation (5) with respect to ${x}$ to get that

$\displaystyle f'(U)W+ W_{xx}=0, \ \ \ \ \ (9)$

where ${W}$ satisfies the boundary conditions,

$\displaystyle W(0)=W(L)=0. \ \ \ \ \ (10)$

This tells us that problem (7) with homogeneous Dirichlet boundary conditions instead of Neumann conditions has ${\lambda^D_i=0}$ as an eigenvalue for some ${i}$. We will exploit the relationship between Dirichlet and Neumann eigenvalues to get the result ${\lambda^N_0 > 0}$. We use the following Lemma,

Lemma 1 Let ${h=h(x)}$ be a given function and ${g_i(x)}$ for ${i=1,2}$ be two solutions to the following eigenvalue problem,

$\displaystyle (h(x)-\lambda_i)g_i+ g_i''=0, \ \ \ \ \ (11)$

with eigenvalues ${\lambda_i}$ respectively. Assume ${g_1(0)=g_1(L)=0}$, that ${g_1(x) >0}$ for ${x \in (0,L)}$, and that ${g_2(x) >0}$ for ${x \in [0,L]}$. Then ${\lambda_1 < \lambda_2}$.

Proof: Multiply the equation (11) for ${g_1}$ by ${g_2}$, and vice-versa, and subtract these two equations. We then have,

$\displaystyle g_1'' g_2 - g_1g_2'' + (\lambda_2-\lambda_1)g_1 g_2=0. \ \ \ \ \ (12)$

We integrate (12) from ${x=0}$ to ${x=L}$ and use integration by parts to find,

$\displaystyle g_1' g_2|_0^L - g_1g_2'|_0^L + (\lambda_2-\lambda_1)\int_0^L g_1 g_2\,dx=0. \ \ \ \ \ (13)$

As ${g_1(0)=g_1(L)=0}$, the second term of (13) vanishes, and the integral in the third term is strictly positive since ${g_1>0}$ on ${(0,L)}$ and ${g_2>0}$ on ${[0,L]}$. For the first term, the positivity of ${g_1}$ on ${(0,L)}$ together with ${g_1(0)=g_1(L)=0}$ gives ${g_1'(0)\geq 0}$ and ${g_1'(L) \leq 0}$, and in fact both inequalities are strict: if, say, ${g_1'(0)=0}$, then ${g_1(0)=g_1'(0)=0}$ and uniqueness of solutions to the ODE (11) would force ${g_1(x)=0}$ for all ${x}$, violating our assumptions. Hence the first term ${g_1'(L)g_2(L)-g_1'(0)g_2(0)}$ is strictly negative, and so ${(\lambda_2-\lambda_1)>0}$. $\Box$

As we are interested in the eigenvalue ${\lambda^N_0}$, whose eigenfunction has no zeroes in ${[0,L]}$, we can apply Lemma 1 with ${g_2}$ this positive Neumann eigenfunction and ${g_1}$ the Dirichlet eigenfunction corresponding to ${\lambda^D_0}$, which is positive on ${(0,L)}$. We conclude that ${\lambda^N_0 > \lambda^D_0 \geq \lambda^D_i=0}$, and hence the solution ${U(x)}$ is unstable. Therefore, for scalar Neumann problems in one dimension, stable non-uniform steady states do not exist.
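The eigenvalue comparison ${\lambda^N_0 > \lambda^D_0}$ can also be seen numerically. The sketch below (my own; the potential ${h(x)=\sin(x)}$, standing in for ${f'(U(x))}$, is an arbitrary illustrative choice) discretizes the operator ${V \mapsto V_{xx} + h(x)V}$ under both kinds of boundary conditions:

```python
import numpy as np

# Discretize V'' + h(x) V = lambda * V on [0, L]; h(x) = sin(x) is an
# arbitrary stand-in for the potential f'(U(x)).
Lx, N = np.pi, 200
x = np.linspace(0.0, Lx, N)
dx = x[1] - x[0]
h = np.sin(x)

# Dirichlet conditions: the operator acts on interior nodes, V(0) = V(L) = 0.
A_dir = (np.diag(h[1:-1] - 2.0 / dx**2)
         + np.diag(np.full(N - 3, 1.0 / dx**2), 1)
         + np.diag(np.full(N - 3, 1.0 / dx**2), -1))

# Neumann conditions: reflecting ghost points double the end couplings.
A_neu = (np.diag(h - 2.0 / dx**2)
         + np.diag(np.full(N - 1, 1.0 / dx**2), 1)
         + np.diag(np.full(N - 1, 1.0 / dx**2), -1))
A_neu[0, 1] *= 2.0
A_neu[-1, -2] *= 2.0

# Eigenvalues, sorted in decreasing order (index 0 = principal eigenvalue).
lam_dir = np.sort(np.linalg.eigvals(A_dir).real)[::-1]
lam_neu = np.sort(np.linalg.eigvals(A_neu).real)[::-1]
```

One finds that the computed eigenvalues are simple and that the principal Neumann eigenvalue exceeds the principal Dirichlet one, consistent with Lemma 1.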

We can also apply this analysis to the Dirichlet problem. Assume ${U(x)}$ is a non-constant steady state of equation (5) but with Dirichlet conditions (${U(0)=U(L)=0}$). We can show that if ${U(x)}$ changes sign on the interval ${[0,L]}$, then it is unstable. Again consider ${W=U_x}$. As ${U}$ changes sign, it has an interior zero ${x^* \in (0,L)}$, and applying Rolle’s theorem on ${[0,x^*]}$ and ${[x^*,L]}$ gives points ${0 < x_1 < x_2 < L}$ such that ${W(x_1) = W(x_2) = 0}$. So ${W}$ solves the boundary value problem,

$\displaystyle f'(U)W+ W_{xx}=0,\quad W(x_1)=W(x_2)=0. \ \ \ \ \ (14)$

We can then apply Lemma 1 on the restricted domain ${[x_1,x_2]}$ (taking ${x_1,x_2}$ to be consecutive zeroes of ${W}$), with ${g_1 = \pm W}$, the sign chosen so that ${g_1>0}$ on ${(x_1,x_2)}$, and with ${g_2}$ the principal Dirichlet eigenfunction on ${[0,L]}$, which is positive on ${(0,L)}$ and hence on ${[x_1,x_2]}$. This again gives ${\lambda^D_0>0}$, and hence ${U(x)}$ is unstable if it changes sign on the interval ${[0,L]}$. Section 2.7 of the second volume of Murray’s Mathematical Biology [3] gives a nice example of this problem that does have spatially non-uniform steady states, but these are all symmetric about ${x=L/2}$ and never change sign on the interval.

3. Non-Existence of Stable Patterns for ${n,k\geq 2}$

There are many extensions of the above results beyond the scalar one-dimensional case. One that I’ve come across recently is the paper by Kishimoto and Weinberger [4], which contains the following result,

Theorem 2 Let ${U(x)}$ be a non-constant steady state solution of equation (1) satisfying (2) in a convex domain ${\Omega}$. If the kinetic functions ${f_i}$ satisfy,

$\displaystyle \frac{\partial f_i}{\partial u_j}(U(x))>0,\quad i \neq j, \ \ \ \ \ (15)$

then ${U(x)}$ is unstable.

The condition (15) implies that the reaction terms are cooperative: each species ${u_i}$ helps every other ${u_j}$ in some sense. Similarly, competitive dynamics can be modelled by,

$\displaystyle \frac{\partial f_i}{\partial u_j}(U(x))<0,\quad i \neq j. \ \ \ \ \ (16)$

For the two-species case ${k=2}$, the change of variables ${v_1=-u_1,v_2=u_2}$ transforms competitive kinetics into cooperative ones, allowing the application of Theorem 2 again, and so two-species competitive dynamics cannot exhibit spatial patterning either. Other kinds of kinetics, such as Gierer-Meinhardt, can give rise to spatial patterns for two species with Neumann boundary conditions, as they are of the ‘activator-inhibitor’ type. This is precisely the kind of kinetics Turing considered.
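The change of variables is easy to verify directly. A quick sketch (my own; the competitive Lotka-Volterra kinetics and parameter values are hypothetical choices for illustration) checks that the transformed Jacobian has positive off-diagonal entries, i.e. condition (15):

```python
import numpy as np

A, B = 0.5, 0.7  # hypothetical competition strengths

def jac_competitive(u1, u2):
    # Jacobian of f1 = u1*(1 - u1 - A*u2), f2 = u2*(1 - u2 - B*u1);
    # the off-diagonal entries are negative wherever u1, u2 > 0.
    return np.array([[1 - 2*u1 - A*u2, -A*u1],
                     [-B*u2, 1 - 2*u2 - B*u1]])

def jac_transformed(v1, v2):
    # Under v1 = -u1, v2 = u2 the Jacobian becomes S J S with S = diag(-1, 1),
    # which flips the sign of both off-diagonal entries while keeping the
    # diagonal (and hence the spectrum) unchanged.
    S = np.diag([-1.0, 1.0])
    return S @ jac_competitive(-v1, v2) @ S
```

So any competitive two-species system is cooperative in the ${(v_1,v_2)}$ variables, and Theorem 2 applies.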

The assumption of a convex domain is an interesting necessity. One can construct a dumbbell-shaped domain, with bistable kinetics and a very thin ‘neck’ connecting the two subdomains, on which stable non-uniform steady states do exist [5].

We can then broadly classify whether we expect patterns to exist based on both the spatial dimension and the number of constituents (chemical or biological species) we are considering.

• For ${k=1}$ and ${n=1}$, we have by the results of section 2 that stable equilibria are always the uniform constant solutions for Neumann boundary conditions, while Dirichlet conditions admit only patterns that do not change sign.
• For ${k=1}$ and ${n>1}$, [4], [5], [6], and related works show that even for very general kinetics (e.g. ${f=f(t,u,\nabla u)}$ where ${f}$ is periodic in time ${t}$), if the domain is convex and Neumann conditions are imposed, only spatially homogeneous steady states (or time-periodic solutions of the spatially homogeneous equation) are stable. This suggests that patterning is only possible with different boundary conditions, non-convex domains, or explicitly spatially-dependent kinetics.
• For ${k>1}$ and ${n>1}$, Theorem 2 implies that for a convex domain and cooperative (or, if ${k=2}$, competitive) kinetics, the Neumann problem also does not admit patterns. In [6], this result is extended to include time-periodic kinetics.

In [6] and elsewhere, there is some discussion of allowing for dependence on ${\nabla u_i}$ in these general non-existence results, but I have not found a result in this direction that wasn’t buried in technicalities. In general, however, this gives some clear boundaries of where patterning is and is not possible. It is an interesting mathematical phenomenon in part because, like chaos, it only seems to exist when there is a balance between some kind of stabilizing force and some kind of destabilizing force. This is true of classical Turing patterns, which satisfy neither cooperative nor competitive kinetics, as one chemical activates the other while the reverse interaction is inhibition. There are also results showing that if diffusion is too strong or too weak, non-uniform steady states lose their stability in some sense.

Patterning is also an interesting modelling tool, as it allows us to determine when observed spatial heterogeneity in nature or an experiment is due to some combination of interactions and dispersal alone, or whether there is necessarily an environmental heterogeneity behind what we see. In this sense, these non-existence results give some useful information about when additional effects, such as environmental heterogeneity, must be included in our models, and when simple models are sufficient. This is a key point about Turing’s original work on morphogenesis: Occam’s razor suggests that a simple diffusion-driven instability is a plausible explanation for patterning in developmental biology, but this is still a hotly debated question. Much of the difficulty of modelling is about finding models that are only just complicated enough to explain the phenomenon of interest, and this is especially challenging in fields with so much complexity.