The Spectrum of Rigour: PDE Theory, Shocks, and the Devil’s Invention

An overly simplistic (and hence incorrect) view of science is one in which all scientists have the same level of certainty in their descriptions of natural phenomena. One might suggest that different scientific fields could be arranged by their level of rigour, but even this misrepresents how certain we can be of a scientific result. The reality is that rigour and careful methodology don’t depend in any simple way on the complexity of the field, or even of the problem at hand. Nevertheless, we can make some progress by trying to place the rigour of theoretical results on some linear spectrum.

With this idea in mind, I want to briefly touch on a few areas where this spectrum can be observed. I will not give anything close to a full account of these areas, nor can I list the countless textbooks and online sources that discuss them, but I will litter the discussion below with some links to help make sense of things. Ideally the reader will have some mathematical maturity, but I hope the discussion is illustrative even if the details are too abstract to follow. It should also be noted that this is a rather subjective opinion, and that my experience is primarily as a student rather than as a researcher.

1. Partial Differential Equations and Shocks

A huge amount of applied mathematics, and with it much of physics and computer science, uses the framework of differential equations to study physical phenomena. They are a powerful tool that, since the discovery of calculus, has given us an enormous amount of insight into the physical world. These equations form the basis of countless theories underlying modern economics, biology, engineering, and other disciplines. They are currently being used to push the boundaries of present knowledge in areas as diverse as firefighting, tissue engineering, materials science, and even understanding financial markets or the spread of information on Facebook. Analyzing them also gives rise to a variety of mathematical questions, and these questions are often pursued for their own intrinsic beauty.

In particular, many phenomena of modern interest are nonlinear, and so the equations modelling them are almost always more complicated than linear ones. Due to the difficulty of solving such equations exactly, various analytical and numerical approximation schemes have been developed over the past few centuries. It is the validity of these approximations that seems to differentiate researchers by the rigour of their approaches. Whether solutions to these kinds of problems even exist, and whether they are unique, is a huge research endeavour in itself, one which is often taken for granted by scientists more interested in the phenomena being modelled. Beyond existence questions, many equations are ill-posed because they are very sensitive to parameters that we may not know precisely, and hence their analysis must be done with care. See chaotic dynamics for some discussion of this point, which I may address in detail in a later post.

For a very illustrative example, consider the history of fluid dynamics. This is an area where mathematical modelling has been particularly successful, both because it has produced exact analytical results relevant to science and engineering, and because computational simulations of the differential equations governing the physics of fluids (the Navier-Stokes equations) are often much cheaper than experiments with wind tunnels or other apparatus. The validity of such approaches can be very questionable, however, and the attention paid to these details varies widely across the literature.

In particular, we don’t yet know whether the Navier-Stokes equations have a unique, physically reasonable solution for general geometries and initial data. Simply proving that they do would settle a very difficult problem and net you $1,000,000 from the Clay Mathematics Institute. The question here isn’t even whether the assumptions made in deriving the equations (such as modelling fluids as continua) break down, but whether the model itself is predictive in all cases. Nevertheless, these equations are used throughout almost all major fields of science, with varying levels of certainty.

A particularly important special case of these equations is Burgers’ equation. Let {u(x,t)} be the velocity of the fluid at position {x \in {\mathbb R}} and time {t \geq 0}, and let {\nu} be the viscosity. Then the equation reads,

\displaystyle \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}. \ \ \ \ \ (1)

As long as {\nu > 0}, solutions to this equation remain smooth. When {\nu = 0}, which is a common (inviscid) approximation for gases, solutions can become multivalued after a finite amount of time. This can be seen by noting that the equation then describes advection in which the local speed of propagation is {u} itself, so it is easy to write down an initial condition whose profile steepens and collapses on itself. This can be shown formally using the method of characteristics.
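
To make this concrete, here is a minimal sketch of the characteristics argument for smooth initial data {u(x,0) = u_0(x)}. When {\nu = 0}, the solution is constant along the characteristic lines {x = x_0 + u_0(x_0)t}, and so is given implicitly by

\displaystyle u(x,t) = u_0\left(x - u(x,t)\,t\right).

Differentiating this relation shows that the spatial gradient is {u_x = u_0'(x_0)/(1 + u_0'(x_0)\,t)}, where {x_0} is the foot of the characteristic through {(x,t)}. This blows up in finite time wherever {u_0' < 0}, so (assuming {u_0'} is negative somewhere) the solution first becomes multivalued at the breaking time {t_b = -1/\min_{x_0} u_0'(x_0)}.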

Such a multivalued solution may be physical, as in the case of a water wave overturning before it breaks, but usually it is not. One approach to remedying this situation is to introduce a shock (a discontinuity) into the solution, so that it satisfies the differential equation everywhere except at the discontinuity. The result can then be formally considered a weak solution of the equation. Here is an example of a pressure shockwave arising from this behaviour. An interesting fact is that the form of the shock solution is not uniquely determined by the equation itself, but requires an additional choice of conserved quantity. In particular, many physical models such as (1) (with {\nu=0}) can be written in a conservation form such as,

\displaystyle \frac{\partial u}{\partial t} + \frac{1}{2} \frac{\partial }{\partial x}u^2 = 0, \ \ \ \ \ (2)

where {u} is the conserved density and {\frac{1}{2}u^2} is its flux. But this equation can be written in a variety of different forms, such as

\displaystyle u^{n-2}\frac{\partial u}{\partial t} + \frac{1}{n} \frac{\partial }{\partial x}(u^n+C) = 0, \ \ \ \ \ (3)

for any constants {n} and {C}. For smooth solutions each of these is equivalent to (2), but the additive constant {C} drops out of the shock conditions, while each choice of {n} leads, in general, to a different form of the shock solution, and hence corresponds to a different physical situation, as the short worked example below illustrates. This type of phenomenon is well known for systems of hyperbolic conservation laws, and it is still an active research area for many people. In particular, I hope it is clear that there are many technical considerations when one views these equations rigorously, and many physical ones when one chooses the conserved quantities for a given physical system.
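
To make the non-uniqueness concrete, here is a small worked example (standard material on scalar conservation laws, not specific to this post). A shock joining a left state {u_L} to a right state {u_R} must travel at the Rankine-Hugoniot speed {s = [\text{flux}]/[\text{density}]}, where {[\cdot]} denotes the jump across the shock. For the conservation form (2), with density {u} and flux {\frac{1}{2}u^2}, this gives

\displaystyle s = \frac{\frac{1}{2}u_L^2 - \frac{1}{2}u_R^2}{u_L - u_R} = \frac{u_L + u_R}{2},

while the {n = 3} case of (3), rearranged into the conservation form with density {\frac{1}{2}u^2} and flux {\frac{1}{3}u^3}, gives

\displaystyle s = \frac{\frac{1}{3}u_L^3 - \frac{1}{3}u_R^3}{\frac{1}{2}u_L^2 - \frac{1}{2}u_R^2} = \frac{2}{3}\,\frac{u_L^2 + u_L u_R + u_R^2}{u_L + u_R}.

With {u_L = 2} and {u_R = 0}, for example, the first form predicts a shock moving at speed {1} while the second predicts speed {4/3}, even though both equations agree wherever the solution is smooth.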

2. The Devil’s Invention: Asymptotics

“Divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever.” – N. H. Abel

Another area where a wide variety of approaches to rigour exists is the development of asymptotic formulae, especially in connection with perturbation problems. These occur in countless settings, from quantum mechanics and astrophysics to cancer biology and electrodynamics. These kinds of approaches also play key roles in the development and analysis of numerical methods for solving differential equations.

The key insight in such methods is to identify a small parameter, either one appearing explicitly in the model, such as the viscosity {\nu} in (1), or one that is implicit, such as a ratio of the length or time scales of the events we are interested in. Once such a parameter has been identified, we write an expansion of our unknown function {u} in some sequence of functions of this parameter, such as a power series, formally substitute this into the equations, and collect terms at each order. For an example of this, see this previous post.
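
As a minimal illustration here as well (a standard toy problem, not tied to any particular model), consider finding a root of {\epsilon x^2 + x - 1 = 0} for small {\epsilon}. Substituting the expansion {x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots} and collecting powers of {\epsilon} gives {x_0 = 1}, {x_1 = -1}, and {x_2 = 2}, so that

\displaystyle x \approx 1 - \epsilon + 2\epsilon^2 + \cdots.

The second root of the quadratic escapes to infinity as {\epsilon \rightarrow 0} and is invisible to this expansion, a first hint of the singular perturbation problems that make this subject delicate.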

This kind of technique has been incredibly successful at elucidating important aspects of physical phenomena. In particular, it can quite quickly give insight into which effects or forces dominate the dynamics of a particular system. But it requires what one might consider a leap of faith, as these expansions are, in general, divergent series. Proving that they converge, or alternatively that they are genuinely asymptotic approximations according to a suitable definition, is quite challenging. I will likely return to this point with a more detailed example in a future post about homogenization theory, and about how the community working on it is divided into several groups operating at different levels of rigour.
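
To give a flavour of what “divergent but useful” means, here is a small Python sketch using the Stieltjes function {S(\epsilon) = \int_0^\infty e^{-t}/(1+\epsilon t)\, dt}, a standard textbook example rather than anything from the discussion above. Its asymptotic expansion {\sum_{n \geq 0} (-1)^n n!\, \epsilon^n} diverges for every {\epsilon > 0}, yet truncating it at the right place approximates the integral remarkably well.

    import numpy as np
    from scipy.integrate import quad

    eps = 0.1

    # "Exact" value of the Stieltjes function S(eps) = int_0^inf exp(-t) / (1 + eps*t) dt
    exact, _ = quad(lambda t: np.exp(-t) / (1.0 + eps * t), 0.0, np.inf)

    # Partial sums of the asymptotic series sum_{n>=0} (-1)^n n! eps^n,
    # which diverges for every eps > 0 but is asymptotic as eps -> 0.
    term, partial = 1.0, 0.0
    for n in range(25):
        partial += term
        print(f"N = {n:2d}   partial sum = {partial: .8f}   error = {abs(partial - exact):.2e}")
        term *= -(n + 1) * eps  # next term of the series

For {\epsilon = 0.1} the error shrinks until roughly {N = 10} and then grows without bound: the series is useless as a convergent object but excellent as an asymptotic one, which is precisely the tension that Abel found so objectionable.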

There are many other important instances where the very formal theoretical community of mathematicians has diverged from those interested in understanding physical problems, and a real spectrum of approaches has emerged. Another important example is stochastic calculus, which emerged to make rigorous sense of many earlier physical insights, such as Einstein’s model of Brownian motion. I may write more about examples of this kind later, but for now I hope this has given you some insight into the very different notions of certainty that theoretical scientists hold. Rigour and certainty are somewhat subjective but, I believe, crucial notions in the advancement of understanding. As we begin to question more complex phenomena, we must keep an eye on how careful we are in our investigations. I don’t believe there is a single correct place to be on such a spectrum of rigour, but rather that differences in approach lead to deeper insights, as progress in both deep theory and complicated applications reinforces our fundamental understanding.
