Lyapunov Functions and Reaction-Diffusion Equations

I want to review some aspects of dynamical systems theory for a particularly simple class of dissipative systems: those that possess a Lyapunov function. Dissipative systems are mathematical objects that evolve in time but remain in some bounded region of state-space for all time. Below I will define the concept of a Lyapunov function, explain how it relates to the long-time dynamics of differential equations, and finally develop some intuition for why the (scalar) Reaction-Diffusion equation is relatively simple in terms of dynamics, while Reaction-Diffusion systems can have much richer structure.

1. Long-time dynamics and Lyapunov Functions

Let {u(t)\in X} be the solution to the differential equation,

\displaystyle  \frac{d}{dt} u(t) \equiv \dot{u}(t)= A(u(t)), \ \ \ \ \ (1)

where {A:X\rightarrow X} is some (generally nonlinear) mapping of the state-space {X}. We also assume some initial data to make the problem well-defined, such as {u(0)=u_0 \in X}. For the moment, consider {X={\mathbb R}^n}, so that equation (1) is just a system of ordinary differential equations, and {A} is just a vector-valued function. We can think of {u} as a particle in an {n}-dimensional space moving around with velocity {\dot{u}}. For the rest of this discussion, we assume the function {A} is autonomous, in that it depends on time {t} only through the current value of {u}.
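
To make this concrete, here is a minimal numerical sketch in Python (the particular vector field {A} is an arbitrary illustrative choice, not fixed by anything above) that approximates the solution of (1) by forward Euler time stepping:

    import numpy as np

    def A(u):
        # An illustrative autonomous vector field on R^2 (a damped oscillator).
        x, y = u
        return np.array([y, -x - 0.5 * y])

    def solve(u0, t_final, dt=1e-3):
        # Approximate u(t_final) with u(0) = u0 by forward Euler steps.
        u = np.array(u0, dtype=float)
        for _ in range(int(t_final / dt)):
            u = u + dt * A(u)
        return u

    print(solve([1.0, 0.0], 20.0))        # spirals in towards the origin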

We will use geometric terms to describe how {u(t)} moves in the state space {X}. For each point {x \in X}, we can think of setting it as the initial condition for our differential equation and evolving time forward. This is called the flow of the dynamical system, which we write as {\varphi(x,t) = u(t)} with {u_0 = x}. We can then consider the geometric objects that this flow generates. Namely, for each {x \in X}, we define its orbit as {\Gamma(x) = \{\varphi(x,t) : t \in {\mathbb R}\}}, and its positive half-orbit as {\Gamma^{+}(x) = \{\varphi(x,t) : t \geq 0\}}. Lastly, we can describe the asymptotic states that the flow generates using the language of {\omega}-limit sets.

Definition 1 A point {y \in X} is an {\omega}-limit point of {x \in X} if {\varphi(x,t)} is defined for all {t \geq 0} and there exists an increasing sequence of times {\{t_i\}\rightarrow\infty} such that {\varphi(x,t_i)\rightarrow y} as {i \rightarrow \infty}. The set of all {\omega}-limit points of {x} is called the {\omega}-limit set of {x} and is denoted by {\omega(x)}.

We think of {\omega(x)} as the set of points that our dynamical system settles into as its long-time behaviour, for a given initial point {x}. These sets can then be used to describe attractors, which contain information about the asymptotic dynamics for all bounded orbits. The idea behind all of this is to study long-time behaviour as a way of simplifying differential equations, which goes back some centuries and was significantly advanced by Poincaré as the qualitative theory of differential equations. How complex can these sets be, and what sorts of dynamics do they admit? I’ll give a few examples before describing some tools used to understand them.

A steady-state or equilibrium point is a solution {u^*} to the equation,

\displaystyle  \frac{d}{dt} u^* = 0 \implies A (u^*)=0. \ \ \ \ \ (2)

Equilibria have the simplest possible dynamics, as a steady solution is preserved by the flow (i.e. {\varphi(u^*,t) = u^*} for all {t}). For this reason they are also referred to as rest points or fixed points of the flow. Practically speaking, real physical systems fluctuate slightly and may never sit precisely at a steady state, so there is an important discussion to be had about stability with respect to small perturbations. Other (bounded) asymptotic behaviours can exist as well: periodic behaviour, or more complicated time-dependent evolution such as chaotic dynamics on a strange attractor. It is also possible that solutions cease to exist in finite time, or become unbounded as time grows. We will assume the function {A} is sufficiently regular to prevent the former, and has a dissipative structure to prevent the latter. Still, even dissipative systems can admit arbitrarily complex asymptotic behaviours, depending on their dimension.

To motivate the definition of a Lyapunov function, consider equation (1) with {X={\mathbb R}}, so that it is just a single ordinary differential equation. The only possible bounded asymptotic behaviour in one dimension is that the solution approaches a fixed point. Oscillations, or more complicated behaviours, are not possible, as a continuous change in the sign of the velocity would require {\dot{u}=0}, and hence imply a steady state. We can understand this a different way by defining a potential {V(u) = -\int_{0}^{u}A(s)ds}, and noting that {A(u) = -\frac{dV}{du}}, so that extremal points of {V(u)} correspond to steady states of Equation (1). Local minima correspond to asymptotically stable steady states, and all other extremal values to unstable steady states. So for a scalar first-order ODE, we know that the {\omega}-limit set of any point is a steady state if the orbit remains bounded. This can easily be generalized to show that a class of dynamical systems called gradient systems do not admit oscillating solutions.
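
To make the scalar picture concrete before passing to that generalization, take {A(u) = u - u^3} (a purely illustrative choice), so that {V(u) = -u^2/2 + u^4/4} is a double-well potential: the extremal points {u = 0, \pm 1} are exactly the steady states, and the two minima {u = \pm 1} attract all nonzero initial data. A short Python sketch confirms this numerically:

    def A(u):
        return u - u**3                   # steady states at u = 0, +1, -1

    def V(u):
        return -u**2 / 2 + u**4 / 4       # V(u) = -int_0^u A(s) ds

    # Forward Euler from several initial conditions: every nonzero datum
    # settles into one of the minima of V at u = +1 or u = -1.
    for u0 in [-2.0, -0.1, 0.1, 2.0]:
        u, dt = u0, 1e-3
        for _ in range(20000):
            u += dt * A(u)
        print(f"u0 = {u0:+.1f}  ->  u(20) = {u:+.4f},  V = {V(u):+.4f}")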

Let {X={\mathbb R}^n}, and {A(u) = -\nabla_u V(u)} for some continuously differentiable scalar function {V(u)}, where {\nabla_u} is the gradient with respect to the vector of dependent variables {u}. Assume that {u(t) = u(t+T)} is a periodic solution with period {T}. Then by integrating Equation (1) we find,

\displaystyle  0 = V(u(T)) - V(u(0)) = \int_0^{T}\frac{d}{dt}V(u(t))dt = \int_0^{T}(\nabla_u V(u))\cdot\dot{u}(t)dt= -\int_0^{T}\dot{u}(t)\cdot \dot{u}(t)dt \leq 0,

with equality only if {\dot{u}=0}, which means that {u} is a steady state. Therefore the only periodic solutions are steady states. This remains true when {X} is infinite-dimensional (e.g. for partial differential equations), although there are many technicalities involved in showing it. We can also generalize the idea of exploiting this potential function, even when the system is not the gradient of a potential, using Lyapunov functions.
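
Before doing so, here is a quick numerical confirmation of the monotonicity argument (the double-well potential below is again an arbitrary illustrative choice): we integrate the gradient system {\dot{u} = -\nabla_u V(u)} in the plane and check that {V} never increases along the orbit.

    import numpy as np

    def V(u):
        x, y = u
        return (x**2 - 1)**2 + y**2       # a double-well potential on R^2

    def grad_V(u):
        x, y = u
        return np.array([4 * x * (x**2 - 1), 2 * y])

    u = np.array([0.3, 1.5])
    dt = 1e-3
    values = [V(u)]
    for _ in range(10000):
        u = u - dt * grad_V(u)            # gradient flow: du/dt = -grad V(u)
        values.append(V(u))

    print("V non-increasing:", all(a >= b - 1e-12 for a, b in zip(values, values[1:])))
    print("final state:", u)              # approaches the minimum (1, 0)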

Definition 2 A continuous function {V:X\rightarrow {\mathbb R}} is a Lyapunov function if for all {y \in X}, {t \geq 0} we have {V(\varphi(y,t)) \leq V(y)}, and if {V(\varphi(p,t)) = c} for some constant {c}, all {p \in \Gamma(x)}, and all {t \geq 0}, implies that {\omega(x)=\{z\}} for some steady state {z}.

There are many other definitions of Lyapunov functions, not all of which are equivalent, so care should be taken in applying theorems built from them. One common condition for {V} to be decreasing along orbits is {A(x) \cdot \nabla V(x) \leq 0} for all {x}. A sufficient (but not necessary) condition for rest points to be minima is that {A(x) \cdot \nabla V(x) = 0} implies that {A(x)=0}. These definitions often give insight into the stability of rest points, and the particular kind of stability depends on the exact nature of these conditions (e.g. whether the inequalities at non-minimal points are taken to be strict).
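
To make these conditions concrete, consider the damped oscillator {\dot{x} = y}, {\dot{y} = -x - cy} with energy {V(x,y) = (x^2 + y^2)/2}, a standard textbook example rather than one from the discussion above. The orbital derivative is {A \cdot \nabla V = -cy^2 \leq 0}, which vanishes on the entire line {y = 0} rather than only at the rest point, so the stronger sufficient condition fails; nevertheless the invariance principle below still yields convergence to the origin. A quick numerical check in Python:

    import numpy as np

    c = 0.5                               # damping coefficient

    def A(u):
        x, y = u
        return np.array([y, -x - c * y])

    def grad_V(u):
        # V(x, y) = (x^2 + y^2) / 2, so grad V = (x, y).
        return np.array([u[0], u[1]])

    # A . grad V = -c * y^2: never positive, but zero on the whole line y = 0.
    rng = np.random.default_rng(0)
    samples = rng.normal(size=(1000, 2))
    print("max of A.grad V over samples:",
          max(float(A(u) @ grad_V(u)) for u in samples))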

We can think of these functions as tools to understand the stability of equilibria in a different way from local linearization. These functions generalize potentials in that they should be decreasing along an orbit and attain minima at rest points. One powerful tool that makes use of these functions is the LaSalle Invariance Principle. Here, we will state it using the condition that {\Gamma^{+}(p)} is relatively compact in {X}; in finite dimensions this is equivalent to the orbit remaining bounded.

Theorem 3 Let {V} be a Lyapunov function, and let {p \in X} with {\Gamma^{+}(p)} relatively compact. Then {\omega(p)} consists only of equilibria. If the only nonempty connected subsets of the set of equilibria are single points (for example, if there are only finitely many equilibria) then {\omega(p)=\{z\}} for some equilibrium {z}, and {\varphi(p,t) \rightarrow z} as {t \rightarrow \infty}.

So dynamical systems that admit Lyapunov functions have quite simple asymptotic dynamics, as long as orbits are compact and equilibrium points are isolated.

As a brief aside, the dimension of a dynamical system places some hard limits on the possible dynamics. As we’ve seen, one-dimensional dynamics is more-or-less trivial (at least if we consider only the large-time asymptotics). The Poincaré–Bendixson Theorem guarantees that dynamical systems of the form (1) in the plane (i.e. {X = {\mathbb R}^2}) can only have steady states and oscillations as bounded asymptotic behaviours. Unbounded solutions, or solutions that blow up in finite time, can of course exist, but these are not interesting in terms of asymptotic dynamics. Three-dimensional systems (equivalently, two-dimensional nonautonomous systems) can exhibit chaotic behaviour, in addition to numerous other phenomena not possible in planar dynamics. For this reason, dynamics is often broken into the trivial (1D), the planar (2D), and everything else (3+D). So you can think of this as the Mathematicians’ One-Two-Many Trichotomy.

2. Asymptotic Dynamics of Reaction-Diffusion Equations

Here I will briefly describe applying the theorem above to the case of the Reaction-Diffusion equation in a domain {\Omega \subset {\mathbb R}^n} that has a sufficiently smooth boundary {\partial \Omega}. The equation is,

\displaystyle  \frac{\partial}{\partial t} u(t,\boldsymbol{x}) \equiv \dot{u}(t,\boldsymbol{x})= \nabla^2 u(t,\boldsymbol{x})-f(u(t,\boldsymbol{x}),\boldsymbol{x}),\quad \boldsymbol{x} \in \Omega, \ \ \ \ \ (3)

where {f} is a given function of the dependent variable and of space. We also assume that {u} satisfies homogeneous Dirichlet boundary conditions,

\displaystyle  u(t,\boldsymbol{x}) = 0,\quad \boldsymbol{x} \in \partial \Omega. \ \ \ \ \ (4)

We assume that {f} is smooth enough to guarantee existence of bounded global solutions (that is, solutions remain bounded for all time {t\geq 0}). From the dynamical systems perspective, this is an infinite-dimensional system. This can be motivated by thinking about how spatial functions (the state space {X} of our equation) are more complex than vectors in finite-dimensional systems. If this is not obvious, consider that the Taylor or Fourier series of a function has potentially infinitely many nonzero coefficients. There are many technical details about dynamical systems in infinite-dimensional spaces, but the same approach described for systems of ODEs works as long as one is careful about these details. For example, this is why Theorem 3 requires compactness of orbits rather than mere boundedness: bounded sets are relatively compact only in finite dimensions. The function space for solutions to this equation should be chosen carefully as well. For the purposes of this post I will just use {X=H^1_0(\Omega)}, the Sobolev space of functions that satisfy the boundary condition (4) and are weakly differentiable (in the spatial variable {\boldsymbol{x}}). You can ignore these technicalities and just think of the state space as a space of functions. Finally, steady states or rest points are functions {w(\boldsymbol{x}) \in X} that satisfy,

\displaystyle  f(w(\boldsymbol{x}),\boldsymbol{x})= \nabla^2 w(\boldsymbol{x}),\quad \boldsymbol{x} \in \Omega. \ \ \ \ \ (5)

We now define a Lyapunov functional for Equation (3). We let {I:X \rightarrow {\mathbb R}} be defined by,

\displaystyle  I(u)= \int_{\Omega} F(u,\boldsymbol{x})+\frac{1}{2}\left (\nabla u\cdot \nabla u\right)d\boldsymbol{x},

where we define,

\displaystyle  F(u,\boldsymbol{x})= \int_{0}^u f(s,\boldsymbol{x})ds.

Formally, we can then see that this functional is non-increasing along orbits, as,

\displaystyle  \dot{I}= \int_{\Omega} \dot{u}f(u,\boldsymbol{x})+ \nabla \dot{u} \cdot \nabla u d\boldsymbol{x} = \int_{\Omega} \dot{u}(f(u,\boldsymbol{x})- \nabla^2 u)d\boldsymbol{x} = -\int_{\Omega} \dot{u}^2d\boldsymbol{x} \leq 0,

where we have integrated by parts to move the spatial derivatives around, and have assumed sufficient regularity to justify this computation. Justifying it rigorously is somewhat technical. As an example, it is nontrivial even to define a flow for these kinds of equations, due in part to the Second Law of Thermodynamics (see the Clausius–Duhem inequality for a neat application of this irreversibility): solving the backwards diffusion equation is not a well-posed problem (under the usual definitions needed to discuss dynamics). For this reason, even the concept of a flow has to be modified so that we only allow evolution forward in time.
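
Setting those technicalities aside, the monotone decrease of {I} can be observed numerically. The sketch below is a crude explicit finite-difference scheme in Python, with the purely illustrative choices {\Omega = (0,1)} and {f(u) = u^3 - u} (neither is fixed by the discussion above); it evolves Equation (3) with the boundary condition (4) and checks that the discretized functional decreases in time:

    import numpy as np

    # Explicit finite differences for u_t = u_xx - f(u) on (0,1) with
    # homogeneous Dirichlet data, using the illustrative nonlinearity
    # f(u) = u^3 - u (an Allen-Cahn type equation).
    N = 101
    dx = 1.0 / (N - 1)
    dt = 0.2 * dx**2                       # explicit stability constraint
    x = np.linspace(0.0, 1.0, N)

    f = lambda u: u**3 - u
    F = lambda u: u**4 / 4 - u**2 / 2      # F(u) = int_0^u f(s) ds

    def I(u):
        # Discretization of I(u) = int_Omega F(u) + (1/2)|grad u|^2 dx.
        ux = np.diff(u) / dx
        return np.sum(F(u)) * dx + 0.5 * np.sum(ux**2) * dx

    u = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)   # zero on the boundary
    values = [I(u)]
    for _ in range(50000):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (lap - f(u))
        u[0] = u[-1] = 0.0                 # enforce the Dirichlet condition (4)
        values.append(I(u))

    print("I non-increasing:", all(a >= b - 1e-10 for a, b in zip(values, values[1:])))
    print("sup |u| at final time:", float(np.abs(u).max()))

For this particular choice of {f} and domain, the solution relaxes to the trivial steady state {w \equiv 0}, and {I} decreases towards {I(0) = 0}.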

If we assume that all of this can be justified, a semiflow can be defined, and we can apply Theorem 3 after arguing that orbits are relatively compact. We then get the following result.

Theorem 4 Suppose that there are only finitely many solutions {w\in C^2(\Omega)} to (5) that satisfy the boundary data (4). Then given any {u_0 \in H^1_0(\Omega)}, the solution {u(t,\boldsymbol{x})} of Equation (3) satisfies,

\displaystyle  u(t,\cdot) \rightarrow w(\cdot) \text{ in } H^1_0(\Omega), \ \ \ \ \ (6)

for some equilibrium solution {w}.

Additionally, the Lyapunov functional {I} can be used to describe the stability of these steady states. Dynamically, this tells us that solutions are eventually damped to a rest state determined by the nonlinear Poisson equation (5). No oscillations or other time-dependent behaviour can be expected in the large-time limit, and the form of the steady states can also be relatively simple, depending on {f}. This is in contrast to systems of Reaction-Diffusion equations, where Turing instabilities and other phenomena can lead to time-periodic behaviours that I may illustrate in a later post.

3. References and Further Reading

For a very nice gentle introduction to the formalism, we refer to John Ball’s notes on Dynamical Systems and Energy Minimization, or to the excellent book by James Robinson, which extends much of this discussion, in particular to the concept of global attractors and their structure. This is also what my Master’s thesis was about, although admittedly the actual information gained about that PDE was very minimal compared to the above theorem characterizing {\omega}-limit sets as points.

This paper contains many generalizations of these results to gradient dynamical systems and gradient-like systems in general spaces, although it is quite advanced. A very nice introductory book on finite-dimensional dynamical systems is Strogatz’s Nonlinear Dynamics and Chaos, which also contains a good introduction to other aspects of dynamical systems, such as bifurcation theory.


