## Sauro Succi

Print publication date: 2018

Print ISBN-13: 9780199592357

Published to Oxford Scholarship Online: June 2018

DOI: 10.1093/oso/9780199592357.001.0001


# Boltzmann’s Kinetic Theory

Chapter:
(p.17) 2 Boltzmann’s Kinetic Theory
Source:
The Lattice Boltzmann Equation
Publisher:
Oxford University Press
DOI:10.1093/oso/9780199592357.003.0002

# Abstract and Keywords

Kinetic theory is the branch of statistical physics dealing with the dynamics of non-equilibrium processes and their relaxation to thermodynamic equilibrium. Established by Ludwig Boltzmann (1844–1906) in 1872, his eponymous equation stands as its mathematical cornerstone. Originally developed in the framework of dilute gas systems, the Boltzmann equation has spread its wings across many areas of modern statistical physics, including electron transport in semiconductors, neutron transport, quantum-relativistic fluids in condensed matter and even subnuclear plasmas. In this Chapter, a basic introduction to the Boltzmann equation in the context of classical statistical mechanics shall be provided.

I am conscious of being only an individual struggling weakly against the stream of time. But it still remains in my power to contribute in such a way that, when the theory of gases is again revived, not too much will have to be rediscovered.

(L. Boltzmann)

# 2.1 Atomistic Dynamics

Let us consider a collection of N molecules moving in a box of volume V at temperature T and mutually interacting via a two-body intermolecular potential $V(\vec r)$, $\vec r$ being the intermolecular separation between two generic molecules.

If the linear size s of the molecules, basically the effective range of the short-range interaction potential, is much smaller than their mean interparticle separation $d=(V/N)^{1/3}$, the molecules can, to a good approximation, be treated as point-like, structureless particles.

To the extent that the De Broglie length $\lambda=\hbar/mv$ of these particles is much smaller than any other relevant length scale, their dynamics is governed by classical Newton's equations:

(2.1)
$$\frac{d\vec x_i}{dt} = \vec v_i, \qquad m\,\frac{d\vec v_i}{dt} = \vec F_i, \qquad i=1,\dots,N,$$

where $x⃗i$ is the position coordinate of the i-th particle, $v⃗i$ its velocity and $F⃗i$ is the force experienced by the i-th particle as a result of intermolecular interactions and possibly external fields (gravity, electric field, etc.).

Upon specifying initial and boundary conditions, equations (2.1) can in principle be solved in time, to yield a fully exhaustive knowledge of the state of the system, namely a set of $6N$ functions of time $\{\vec x_i(t),\vec v_i(t)\}$, $i=1,\dots,N$.

This programme is totally unviable and, fortunately, needless as well. Unviability stems from two main reasons: first, N is generally of the order of the Avogadro number $N_{Av}\sim 10^{23}$, far too big for any foreseeable computer. Second, even if one could store it, tracking so much information for sufficiently long times would be utopia, since any tiny uncertainty in the initial conditions would blow up in the long run because of the dynamical instability of phase space. By dynamical instability, we refer to the fact that any uncertainty $\delta_0$ on the initial positions and/or momenta grows exponentially in time as $\delta(t)=\delta_0 e^{\lambda t}$. The coefficient $\lambda$, known as the Lyapunov exponent, is a measure of the temporal horizon of deterministic behavior of the N-body system, in that at times greater than $\lambda^{-1}$, the growth of uncertainty is such as to prevent any deterministic prediction of the state of the system. It is estimated that a cubic centimeter of Argon in standard conditions (300 K, 1 Atm) produces as much as $10^{29}$ digits of information per second. This means that in order to keep an exact record of the state of the system over a 1 s lifespan, we would need a number with nothing less than $10^{29}$ digits. Fortunately, we manage to survive with less than that, the reason being that we are much larger than the molecules our body is made of!
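The exponential amplification of uncertainty is easy to visualize with a toy model. The sketch below (an illustrative stand-in, not molecular dynamics) follows two trajectories of the fully chaotic logistic map started $10^{-12}$ apart and extracts the growth rate $\lambda$ from $\delta(t)=\delta_0 e^{\lambda t}$; for this map the known Lyapunov exponent is $\ln 2 \approx 0.69$.

```python
import math

def logistic(x):
    # Fully chaotic logistic map: a toy stand-in for unstable N-body dynamics.
    return 4.0 * x * (1.0 - x)

def separations(x0, delta0=1e-12, steps=30):
    """Distance between two trajectories started delta0 apart."""
    a, b = x0, x0 + delta0
    seps = []
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        seps.append(abs(b - a))
    return seps

seps = separations(0.3)
# delta(t) ~ delta0 * exp(lambda * t): fit lambda over the still-linear regime.
lam = math.log(seps[19] / seps[0]) / 19
print(f"estimated Lyapunov exponent ~ {lam:.2f}")
```

Past the deterministic horizon $\lambda^{-1}$ (here, roughly forty iterates for $\delta_0=10^{-12}$), the two histories are completely decorrelated.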

The physical observables we are interested in, say the fluid pressure, temperature and visible flow, originate from a statistical average over a large number of individual molecular histories.

A rigorous definition of what is meant by statistical average is not trivial, but here we shall be content with the intuitive notion of spatial average over a thermodynamic volume, namely a region of space sufficiently small with respect to the global dimensions of the macroscopic domain, and yet large enough to contain a statistically meaningful sample of molecules.

Typical numbers help in getting the picture. The density of air in standard conditions is about $n_L = 2.687\times 10^{25}$ molecules/m$^3$ (known as the Loschmidt number). Hence, a cubic centimeter of air contains about $2.7\times 10^{19}$ molecules, corresponding to a statistical error of less than one part per billion.
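These back-of-the-envelope figures are quickly checked; the only inputs are the cm³-to-m³ conversion and the $1/\sqrt{N}$ estimate for relative statistical fluctuations:

```python
import math

n_L = 2.687e25                 # Loschmidt number: molecules per m^3 of air
N = n_L * 1e-6                 # molecules in one cubic centimeter (1 cm^3 = 1e-6 m^3)
rel_err = 1.0 / math.sqrt(N)   # relative fluctuation of an N-particle sample

print(f"molecules per cm^3: {N:.2e}")
print(f"relative statistical error: {rel_err:.1e}")
```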

# 2.2 Statistical Dynamics: Boltzmann and the BBGKY Hierarchy

Given the huge numbers involved, it appears therefore wise to approach the collective behavior of the ensemble of molecules from a statistical point of view.

This can be done at various levels of complexity, but, for a start, we shall begin with the simplest one: the single-particle kinetic level.

The chief question of single-particle kinetic theory is: What is the probability of finding a molecule around position $x⃗$ at time t with velocity $v⃗$?

Let $f(\vec x,\vec v,t)$ be the probability density, more often simply called the distribution function.

The quantity $ΔN=fΔx⃗Δv⃗$ represents the mean number of molecules in a finite volume $Δx⃗Δv⃗$, centered about $(x⃗,v⃗)$ in the so-called single-particle phase space:

$$\Gamma_1 = \{(\vec x, \vec v)\}.$$

Integration upon the velocity degrees of freedom delivers the number of particles per unit volume, i.e., the number density of the system at any given time t:

$$n(\vec x, t) = \int f(\vec x, \vec v, t)\, d\vec v,$$

which recovers the continuum density $n(x⃗,t)$ in the limit $ΔV→0$.

As a result, integration upon the entire phase space delivers the total number of molecules in the system at any given time t,

$$N(t) = \int f(\vec x, \vec v, t)\, d\vec x\, d\vec v = N.$$

The distribution function $f(x⃗,v⃗;t)$ is the pivotal object of Boltzmann’s kinetic theory.

In 1872, Ludwig Boltzmann (1844–1906) was able to derive an equation describing the evolution of $f(x⃗,v⃗;t)$ in terms of the underlying microdynamic interactions. This is the celebrated Boltzmann equation (BE), one of the greatest achievements of theoretical physics of the nineteenth century (1).

The BE represents the first quantitative effort to attack the grand issue of why time goes "one-way only" on a macroscopic scale while the underlying microdynamics is apparently perfectly reversible. In this book, we shall not be much concerned with fundamental issues, but rather keep the focus on the BE as a mathematical tool to investigate, analytically or numerically, the properties of fluid flows far from equilibrium.

The kinetic equation for the one-body distribution function in the presence of an external force $\vec F(\vec x)$ reads as follows (2):

(2.2)
$$\partial_t f + \vec v \cdot \partial_{\vec x} f + \vec a \cdot \partial_{\vec v} f = C_{12},$$

where $a⃗=F⃗/m$ is the particle acceleration due to external and internal forces.

The left-hand side represents the streaming of the molecules along the trajectories associated with the force field $\vec F$ (straight lines if $\vec F=0$), while $C_{12}$ represents the effects of intermolecular (two-body) collisions taking molecules in and out of the streaming trajectories.

Let us comment on the two sides separately.

Once it is accepted that the cloud of N molecules moves like a lump of fluid in the single-particle phase space $\Gamma_1$, the streaming term reduces to a mere mirror of Newtonian mechanics.

To convince oneself, simply rewrite the streaming term as a Lagrangian derivative along the trajectory $x⃗(t)$,

$$\frac{df}{dt} \equiv \partial_t f + \frac{d\vec x}{dt} \cdot \partial_{\vec x} f + \frac{d\vec v}{dt} \cdot \partial_{\vec v} f.$$

Using Newton's equations, $\frac{d\vec x}{dt}=\vec v$, $\frac{d\vec v}{dt}=\vec F/m$, this returns precisely the left-hand side of the Boltzmann equation. The streaming term carries the information contained in the distribution function untouched from place to place in phase space.

Indeed, the solution of the collisionless Boltzmann equation $\frac{df}{dt}=0$, with initial conditions $f(\vec x,\vec v,t=0)=f_0(\vec x,\vec v)$, is simply

(2.3)
$$f(\vec x(t), \vec v(t), t) = f_0(\vec x, \vec v),$$

where $\vec x(t)$ and $\vec v(t)$ are the solutions of Newton's equations with initial conditions $\vec x(t=0)=\vec x$ and $\vec v(t=0)=\vec v$. The key physical point is the following: the streaming term moves the distribution function in phase space with no loss of information, hence no loss of memory of the initial conditions: reversible motion.

# 2.3 The Born–Bogoliubov–Green–Kirkwood–Yvon (BBGKY) Hierarchy

The right-hand side of the BE, on the other hand, takes care of exchanging information across different trajectories, through intermolecular interactions.

The collision operator encodes two-body collisions, between, say molecule one, sitting at point $x⃗1$ with speed $v⃗1$ and molecule two, sitting at $x⃗2$ with speed $v⃗2$, both at time t.

Formally, this information is stored in the two-body distribution function

$$f_{12} = f_{12}(\vec x_1, \vec v_1, \vec x_2, \vec v_2; t),$$

Figure 2.1 Sketch of the two-body distribution function f12. The two molecules are correlated to each other, so that if one moves the other must move too. In the one-body representation, each molecule moves independently, although it feels the effects of the other molecules through short-range collisions described by the collision operator C12.

expressing the joint probability of finding molecule one around $\vec x_1$ with speed $\vec v_1$ and molecule two at $\vec x_2$ with speed $\vec v_2$, both at time t (see Fig. 2.1).

More precisely, the quantity

$$dN_{12} = f_{12}(\vec z_1, \vec z_2; t)\, d\vec z_1\, d\vec z_2$$

gives the average number of pairs of molecules sitting at points $z⃗1≡(x⃗1,v⃗1)$ and $z⃗2≡(x⃗2,v⃗2)$ of phase space at time t.

Living as it does in a $(6+6=12)$-dimensional phase space (13 with time), it goes without saying that f12 is a very heavy-duty object to work with.

The one-body distribution is recovered by integrating over the second particle phase-space coordinates:

(2.4)
$$f(\vec z_1; t) = \frac{2}{N-1} \int f_{12}(\vec z_1, \vec z_2; t)\, d\vec z_2,$$

where the factor 2 accounts for the fact that there are $N(N−1)/2$ symmetric pairs out of a pool of N particles.

Clearly, this projection from 13 to 7 dimensions erases a huge amount of information, the two-body correlations. The loss of this information prevents an exact reconstruction of f12 from $f1≡f(z⃗1;t)$ and $f2≡f(z⃗2;t)$, separately.

However, as we shall see shortly, educated guesses on the physical nature of the system under consideration can (partially) make up for this fundamental limitation. In principle, it is not difficult to write down the dynamic equation for $f_{12}$; the only trouble is that this equation calls into play the three-body distribution function $f_{123}$, which in turn depends on $f_{1234}$ and so on, down an endless line known as the BBGKY hierarchy, after Bogoliubov, Born, Green, Kirkwood and Yvon (4).

The physical origin of such an open structure is that an N-body system can in principle host molecular collisions at all orders, from binary onward up to order N. If one could solve the BBGKY hierarchy, one would obtain a complete statistical knowledge of the full N-body problem described by the Newtonian equations for the N molecules. This is again utopia, only in statistical rather than dynamical guise!

Consequently, one must settle for less ambitious goals, i.e., approximate descriptions. The loss of information inevitably associated with such approximations is responsible for irreversibility, to be literally intended as our inability to reconstruct the initial conditions exactly (loss of memory).

Fortunately, powerful heuristics are available to guide the search for sensible approximations to the BBGKY hierarchy.

Indeed, in actual practice, the probability of a simultaneous interaction between, say, k molecules, decays very fast with k, approximately like $(s/d)3k$, where

$$d = n^{-1/3}$$

is the mean intermolecular separation and n is the number density, the two being related via $nd^3=1$, i.e., one particle on average in a cubelet of volume $d^3$.

The ratio

(2.5)
$$\tilde n = \left(\frac{s}{d}\right)^3 = n s^3,$$

sometimes called "granularity," provides a direct measure of the degree of diluteness of the system. Indeed, in a system at density n, each molecule inhabits a volume $d^3$, and $\tilde n$ is the fraction of that volume occupied by the molecule itself (here a cubelet of side s).

From its very definition, it is clear that $n˜$ controls the strength of many-body interactions, which fade away as $n˜→0$. For instance, air at standard conditions features $n˜∼10−3$, so that many-body interactions are largely negligible. Water in standard conditions, on the other hand, provides $n˜∼1$, which surely calls for careful consideration of many-body effects. Nevertheless, as we shall see in Chapter 3, the structure of the Navier–Stokes equations of continuum fluid dynamics is to a large extent independent of many-body effects. This is a great gift of mother nature, known as Universality.
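The figure $\tilde n \sim 10^{-3}$ for air can be reproduced in a couple of lines; the molecular size $s \approx 0.3$ nm below is an assumed order of magnitude, not a tabulated value:

```python
n = 2.687e25                 # number density of air, m^-3 (Loschmidt number)
s = 0.3e-9                   # molecular size, m (assumed order of magnitude)

d = n ** (-1.0 / 3.0)        # mean intermolecular separation, from n d^3 = 1
n_tilde = (s / d) ** 3       # granularity, eqn (2.5)

print(f"d ~ {d:.2e} m, granularity ~ {n_tilde:.1e}")
```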

# 2.4 Back to Boltzmann

The simple yet basic considerations outlined previously set the stage for Boltzmann's clever way out of the BBGKY hierarchy.

To close equation (2.2), Boltzmann made a few stringent assumptions on the nature of the physical system: a dilute gas of point-like, structureless molecules interacting via a short-range two-body potential.

Under such conditions, intermolecular interactions can be described solely in terms of localized binary collisions, with molecules spending most of their lifespan on free trajectories (in the absence of external fields), merrily unaware of each other.

Within this picture, the collision term splits into Loss and Gain components:

(2.6)
$$C_{12} = G - L = \int \left( f_{1'2'} - f_{12} \right) v_r\, \sigma(v_r, \Omega)\, d\Omega\, d\vec v_2,$$

Figure 2.2 Symbolic diagram of direct and inverse collisions. Inverse collisions (Gain) place particles in state one, while direct collisions (Loss) take them away from it.

corresponding to direct (inverse) collisions taking molecules out of (into) the volume element $d\vec v_1\, d\vec v_2$, respectively (see Fig. 2.2).

The right-hand side requires a number of detailed comments.

First, the shorthand f12 stands for $f12(z⃗1,z⃗2;t)$.

In the above, $v_r$ is the magnitude of the relative velocity between particle 1 and particle 2 and $\Omega$ denotes the solid angle associated with the scattering event (see Fig. 2.3). The symbol $\sigma$ denotes the differential cross section, i.e., the effective area presented by a particle in the plane across its center and perpendicular to the relative velocity.

Likewise, $f_{1'2'}$ stands for $f_{12}(\vec z_1', \vec z_2'; t)$, where primes denote the molecular positions and velocities after a direct collision.

These two factors are purely statistical in nature. Note that all four spatial coordinates, both pre- and post-collisional, lie within a sphere of radius $s \ll d$, on account of the diluteness assumption. Therefore, in the Boltzmann limit $s/d \to 0$, they can be brought back to the same spatial location $\vec x = \vec x_1 = \vec x_1' = \vec x_2 = \vec x_2'$.

This is a major simplification of the Boltzmann equation, with far-reaching consequences on theoretical as well as computational aspects.

The pre- and post-collisional velocities $v⃗1,v⃗2$ and $v′⃗1,v′⃗2$ are related through the three basic Mass–Momentum–Energy conservation laws, that is

(2.7)
$$m_1 + m_2 = m_1' + m_2', \qquad m_1 \vec v_1 + m_2 \vec v_2 = m_1' \vec v_1' + m_2' \vec v_2', \qquad \frac{1}{2} m_1 v_1^2 + \frac{1}{2} m_2 v_2^2 = \frac{1}{2} m_1' v_1'^2 + \frac{1}{2} m_2' v_2'^2.$$

Since mass can be assumed invariant across a collision, $m_1'=m_1$ and $m_2'=m_2$, the first equation is basically a statement of number conservation, $2=2$: two molecules before the collision, two molecules after.

The other two conservation laws, however, deliver a great deal of information, as we shall see in the sequel.

## 2.4.1 Two-Body Scattering

The two-body collision problem is best treated as the scattering of a single particle of reduced mass $m_r$ impinging on a target particle of mass $M=m_1+m_2$, sitting at the center-of-mass position $\vec X$, with center-of-mass velocity $\vec V$, defined as follows:

(2.8)
$$\vec X = \frac{m_1 \vec x_1 + m_2 \vec x_2}{M}, \qquad \vec V = \frac{m_1 \vec v_1 + m_2 \vec v_2}{M}.$$

The reduced mass is given by

(2.9)
$$m_r = \frac{m_1 m_2}{m_1 + m_2},$$

and it is seen to coincide with the lightest mass, say $m_1$, in the limit $m_2 \gg m_1$. For equal-mass molecules, the case assumed hereafter, $m_r=m/2$ and $M=2m$.

The two-body scattering problem is best treated in a frame with the origin located at $X⃗$.

Using a polar representation $(r,θ)$ for the interparticle separation

(2.10)
$$\vec r \equiv \vec x_1 - \vec x_2 = (r \cos\theta,\; r \sin\theta),$$

the total energy writes as

(2.11)
$$E = \frac{1}{2} m_r \dot r^2 + \frac{J^2}{2 m_r r^2} + V(r),$$

where

(2.12)
$$J = m_r v_r b$$

is the angular momentum and V(r) the interparticle potential.

In (2.12), b is the so-called impact parameter, i.e., the distance of the colliding molecule from the origin perpendicular to its relative velocity (see Fig. 2.2).

After noting that mass-momentum conservation yields

$$\vec V' = \vec V,$$

it is readily appreciated that energy conservation implies that the relative velocity vector

(2.13)
$$\vec v_r = \vec v_1 - \vec v_2$$

is conserved in magnitude.

As a result, the only effect of the collision is to rotate the relative velocity by an angle χ‎ in the scattering plane defined by $r⃗$ and $v⃗r$.

The kinematic identities $\vec v_1 = \vec V + (m_2/M)\,\vec v_r$ and $\vec v_2 = \vec V - (m_1/M)\,\vec v_r$ deliver the following mapping between post- and pre-collisional velocities:

(2.14)
$$\vec v_1' = \vec V + \frac{m_2}{M}\, \vec v_r', \qquad \vec v_2' = \vec V - \frac{m_1}{M}\, \vec v_r', \qquad |\vec v_r'| = |\vec v_r|.$$

It can be checked that this one-to-one mapping preserves the volume element in velocity space, i.e.,

(2.15)
$$d\vec v_1'\, d\vec v_2' = d\vec v_1\, d\vec v_2,$$

a property which shall prove very useful in the sequel.
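The conservation properties of this mapping are easy to verify numerically. The sketch below assumes equal masses and simply rotates the relative velocity onto an arbitrary new direction of equal magnitude (the choice of direction is illustrative, standing in for the scattering angles); momentum and kinetic energy then balance to round-off:

```python
import math
import random

def collide(v1, v2, nhat):
    """Equal-mass binary collision: keep the center-of-mass velocity, rotate
    the relative velocity onto the unit vector nhat at fixed magnitude."""
    V = [(a + b) / 2.0 for a, b in zip(v1, v2)]      # center-of-mass velocity
    vr = [a - b for a, b in zip(v1, v2)]             # relative velocity
    vmag = math.sqrt(sum(c * c for c in vr))
    v1p = [V[i] + 0.5 * vmag * nhat[i] for i in range(3)]
    v2p = [V[i] - 0.5 * vmag * nhat[i] for i in range(3)]
    return v1p, v2p

random.seed(0)
v1 = [random.gauss(0.0, 1.0) for _ in range(3)]
v2 = [random.gauss(0.0, 1.0) for _ in range(3)]
raw = [random.gauss(0.0, 1.0) for _ in range(3)]
norm = math.sqrt(sum(c * c for c in raw))
nhat = [c / norm for c in raw]                       # random unit vector

v1p, v2p = collide(v1, v2, nhat)
mom_err = max(abs(a + b - c - d) for a, b, c, d in zip(v1, v2, v1p, v2p))
e_err = abs(sum(c * c for c in v1) + sum(c * c for c in v2)
            - sum(c * c for c in v1p) - sum(c * c for c in v2p))
print(mom_err, e_err)   # both zero to round-off
```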

With the two-body kinematics in place, one can compute the number of molecules, dN, scattered around the solid angle $d\Omega = \sin\chi\, d\chi\, d\alpha$, where $\alpha$ fixes the orientation of the scattering plane in three-dimensional space.

The relation

$$dN = \Phi\, \sigma(\chi, v_r)\, d\Omega,$$

with $\Phi$ the flux of impinging particles, defines the so-called differential cross section $\sigma$.

The next task is to compute the scattering angle $χ=χ(b,vr)$ as a function of the impact parameter and the relative velocity (see Fig. 2.3).

To this purpose, let us consider all the impinging particles sitting in the annulus of radius b and thickness db. The conservation of the number of these particles implies:

(2.16)
$$2\pi b\, |db| = \sigma(\chi, v_r)\, 2\pi \sin\chi\, |d\chi|,$$

(2.17)
$$\sigma(\chi, v_r) = \frac{b}{\sin\chi}\, \left|\frac{db}{d\chi}\right|.$$

Figure 2.3 Scattering angle associated with a binary collision. The collision takes place in the plane defined by the interparticle separation $x⃗1−x⃗2$ and the relative speed $v⃗r=v⃗1−v⃗2$. The solid angle $Ω⃗$ is defined by the scattering angle χ‎ in the collisional plane and by the azimuthal angle ϕ‎ around the collisional plane (not shown).

This reveals that the differential cross section is fixed by the functional relation $b=b(χ,vr)$, which in turn depends on the details of the scattering potential V(r).

A quantity of major interest is the cross section:

(2.18)
$Display mathematics$

and its integral version:

(2.19)
$$\sigma_T(v_r) = \int \sigma\, d\Omega = 2\pi \int_0^{\pi} \sigma(\chi, v_r)\, \sin\chi\, d\chi,$$

also known as total cross section. Note that $σT$, as defined above, is generally a function of the temperature T, so that one can write

(2.20)
$$\sigma_T = \kappa(T)\, s^2,$$

s being the size of the molecule, identified with the range of the potential. For short-range potentials one can further assume $\kappa(T)\sim O(1)$, i.e., the effective cross section does not differ drastically from the geometrical one.

The total cross section defines the molecular mean-free path (see Fig. 2.4):

(2.21)
$$l_\mu = \frac{1}{n\, \sigma_T}$$

and the associated collisional timescale

(2.22)
$$\tau_c = \frac{l_\mu}{v_T} = \frac{1}{n\, \sigma_T\, v_T},$$

where

$$v_T = \sqrt{\frac{k_B T}{m}}$$

is the thermal speed.

Figure 2.4 Geometrical representation of the mean-free path. Molecule one travels a distance $l_\mu$ before colliding with molecule two. The associated cross section defines the collisional cylinder, whose volume is $V_{col}=\sigma_T\, l_\mu$. In the dilute-gas limit $\tilde n \to 0$, the collisional cylinder collapses to a needle-like shape with $\sigma_T/l_\mu \to 0$.

The mean-free path is the mean distance traveled by a molecule before colliding with another molecule and represents the pivotal lengthscale of kinetic theory and transport phenomena.
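An order-of-magnitude estimate for air, with an assumed effective diameter $s \approx 0.37$ nm and a nitrogen-like molecular mass (illustrative inputs, not tabulated data):

```python
import math

n = 2.687e25                      # number density, m^-3
s = 0.37e-9                       # effective molecular diameter, m (assumed)
sigma_T = math.pi * s * s         # total cross section ~ geometrical one

l_mu = 1.0 / (n * sigma_T)        # mean free path, eqn (2.21)

kB = 1.38e-23                     # Boltzmann constant, J/K
m = 4.65e-26                      # nitrogen-like molecular mass, kg
v_T = math.sqrt(kB * 300.0 / m)   # thermal speed at T = 300 K
tau = l_mu / v_T                  # collisional timescale, eqn (2.22)

print(f"l_mu ~ {l_mu:.1e} m, v_T ~ {v_T:.0f} m/s, tau ~ {tau:.1e} s")
```

The mean-free path comes out around a tenth of a micron and the collision time around a fraction of a nanosecond, the familiar orders of magnitude for a dilute gas.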

## 2.4.2 Spatial Ordering in Dilute Gases

Based on the definitions (2.19) and (2.21), one obtains

$$n\, \sigma_T\, l_\mu = 1,$$

indicating that by construction, the so-called collisional cylinder of volume $σTlμ$ contains just a single colliding molecule.

Recalling the definition of the mean interparticle distance, $nd^3=1$, the previous relation yields

$$\frac{l_\mu}{d} = \frac{d^2}{\sigma_T}.$$

Based on the relation (2.20) with $κ∼1$, one further obtains

$$\frac{l_\mu}{d} = \left(\frac{d}{s}\right)^2 = \tilde n^{-2/3} \gg 1,$$

which is the typical scale ordering of dilute gases.

It is of interest to note that the proper meaning of "dilute gas" in the Boltzmann framework does not correspond at all to a gas in the ordinary sense, i.e., a fluid of vanishingly small density. Quite on the contrary, the proper limit is the one where density is formally sent to infinity! The point is that, at the same time, the size s is sent to zero, in such a way as to keep the product $ns^2$ constant. In other words, size goes to zero, density goes to infinity and the mean-free path is left constant. Putting it all together, the mean intermolecular distance scales like $d\sim n^{-1/3}$ while the molecular size scales like $s\sim n^{-1/2}$, so that the diluteness parameter $\tilde n=(s/d)^3$ scales like $n^{-1/2}$ and goes to zero in the limit $n\to\infty$ (see Fig. 2.5).
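The scaling argument is easily checked numerically: halving s while quadrupling n leaves $ns^2$ (hence the mean-free path) untouched, while the granularity is cut in half at each step, i.e., $\tilde n \sim n^{-1/2}$:

```python
rows = []
n, s = 1.0, 1.0                  # arbitrary units
for _ in range(5):
    d = n ** (-1.0 / 3.0)        # mean intermolecular separation
    rows.append((n * s * s, (s / d) ** 3))
    print(f"n s^2 = {rows[-1][0]:.3f}   granularity = {rows[-1][1]:.3e}")
    s /= 2.0                     # shrink the molecules...
    n *= 4.0                     # ...while packing four times as many
```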

Summarizing, the dilute gas limit corresponds to the following limiting scenario:

(2.23)
$$s \to 0, \qquad n \to \infty, \qquad n s^2 = const.$$

Figure 2.5 Geometrical representation of the dilute gas limit in Boltzmann kinetic theory. By halving the size s, the density quadruples and the total area $ns^2$, inversely proportional to the mean-free path, stays the same (at least, this is the intention of the picture).

This further underscores the crucial role of the mean-free path as the fundamental length scale of Boltzmann kinetic theory and shows that $\tilde n$ is the appropriate smallness parameter describing many-body effects in dense gases and liquids. We shall return to these matters in Chapter 7, devoted to the kinetic theory of dense fluids.

## 2.4.3 Two-Body Scattering Problem

To sort out the explicit dependence $χ=χ(b,vr)$, one needs to solve the two-body scattering problem. To this aim, it proves expedient to move to polar coordinates in the scattering plane.

The equations of motion resulting from conservation of energy E and angular momentum J read as follows:

(2.24)
$$\frac{1}{2} m_r \dot r^2 + \frac{J^2}{2 m_r r^2} + V(r) = \frac{1}{2} m_r v_r^2,$$

(2.25)
$$m_r r^2 \dot\theta = m_r v_r b,$$

where the right-hand side corresponds to the limit $r→∞$.

Dividing the two, one obtains

(2.26)
$$\frac{dr}{d\theta} = \pm \frac{r^2}{b} \sqrt{1 - \frac{b^2}{r^2} - \tilde V(r)},$$

where

(2.27)
$$\tilde V(r) = \frac{V(r)}{\frac{1}{2} m_r v_r^2}$$

is the ratio of potential to kinetic energy in the rest frame.

For purely repulsive potentials, there exists a minimum-approach distance $r_{min}$, defined by the condition $\frac{dr}{d\theta}=0$.

Some algebra delivers the implicit relation:

(2.28)
$$1 - \frac{b^2}{r_{min}^2} - \tilde V(r_{min}) = 0.$$

Note that for repulsive potentials, $V(r)>0$, one has $r_{min}>b$, while the opposite is true for attractive ones.

Integrating upon r from rmin to infinity, one obtains

(2.29)
$$\theta_0 = \int_{r_{min}}^{\infty} \frac{b\, dr}{r^2 \sqrt{1 - \frac{b^2}{r^2} - \tilde V(r)}},$$

which is known as the apse angle.

The scattering angle is finally derived as

(2.30)
$$\chi = \pi - 2\theta_0.$$

This procedure shows that the dependence of the scattering angle on the potential is generally pretty involved. As a general rule, however, small impact parameters correspond to large scattering angles.
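For hard spheres the apse-angle integral (2.29) can be evaluated both in closed form and numerically, which makes a handy sanity check. The quadrature below uses the substitution $u = b/r$ (an implementation choice) and compares against the textbook result $\chi = \pi - 2\arcsin(b/R)$:

```python
import math

def apse_angle_hard_sphere(b, R, n=200000):
    """Apse angle for a hard sphere of radius R, impact parameter b < R.
    Outside the core V = 0, so with u = b/r the integral (2.29) becomes
    int_0^{b/R} du / sqrt(1 - u^2), evaluated here by the midpoint rule."""
    umax = b / R
    h = umax / n
    return sum(h / math.sqrt(1.0 - ((i + 0.5) * h) ** 2) for i in range(n))

R, b = 1.0, 0.5
theta0 = apse_angle_hard_sphere(b, R)
chi = math.pi - 2.0 * theta0                 # scattering angle, eqn (2.30)
chi_exact = math.pi - 2.0 * math.asin(b / R)
print(f"chi = {chi:.6f}, exact = {chi_exact:.6f}")
```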

## 2.4.4 Distinguished Potentials

Once the atomistic potential is known, the procedure just outlined permits us to compute the differential cross section $\sigma$, hence the collisional relaxation time and the mean-free path. This accomplishes the fundamental task of transferring information from the atomistic world of trajectories and intermolecular potentials to the kinetic world of statistical distributions, scattering cross sections and mean-free paths.

Symbolically, the kinetic micro-meso bridge reads as follows:

(2.31)
$$V(r) \;\longrightarrow\; \sigma(\chi, v_r) \;\longrightarrow\; (l_\mu, \tau_c).$$

Given that some specific potentials stand out for their importance, either from the mathematical point of view or for their applicability to realistic fluids, in the sequel we provide a cursory coverage of such potentials.

### 2.4.4.1 Hard Spheres

A particularly important and analytically solvable case is provided by the hard-sphere potential,

(2.32)
$$V(r) = \begin{cases} \infty, & r < R, \\ 0, & r \ge R, \end{cases}$$

Figure 2.6 The hard-sphere potential. For graphical purposes the potential step has a finite amplitude, leading nonetheless to an infinite force (arrow) at $r=R$. This is basically a solid wall, bouncing back the molecules which impinge on it.

where R is the sphere radius (see Fig. 2.6).

Detailed calculations yield $σ=πR2$, i.e., the geometrical area of the particle cross section, as it should be.

Despite their simplicity, hard-sphere potentials have played a major role in the kinetic theory of fluids and continue to provide valuable information for molecular dynamics simulations with hard-core repulsive interactions.

### 2.4.4.2 Lennard-Jones potential

Another potential which plays a prominent role in the physics of non-ideal fluids is the so-called 12-6 Lennard-Jones potential, after the British physicist John Edward Lennard-Jones (1894-1954):

(2.33)
$$V(r) = 4\epsilon \left[ \left(\frac{R}{r}\right)^{12} - \left(\frac{R}{r}\right)^{6} \right],$$

Figure 2.7 The Lennard-Jones 12-6 potential. In the figure, $\sigma$ represents the range of interaction, called R in the text to avoid confusion with the cross section. From http://chemistry.stackexchange.com/questions/34214/physical-significance-of-double-well-potential-in-quantum-bonding.

This potential consists of a hard-core repulsion (the -12 branch) plus a soft-core attraction (the -6 branch) (see Fig. 2.7). The former stems from the strong repulsion due to the incipient overlap of electronic orbitals, when nuclei get seriously close together, at distances around one third of a nanometer and below. The latter is due to cohesive forces arising from screened multipole electrostatic interactions (Van der Waals interactions) and plays a defining role in the thermodynamic properties of the fluid.

The competition between short-range repulsion and long-range attraction leads to a minimum of depth $−ϵ$ at a distance $r∗=21/6R$, which fixes the typical scale of intermolecular separation in the fluid.
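The location and depth of the minimum are quickly confirmed numerically, assuming the standard $4\epsilon[(R/r)^{12}-(R/r)^{6}]$ form of the potential:

```python
eps, R = 1.0, 1.0    # illustrative units

def V(r):
    # 12-6 Lennard-Jones potential
    return 4.0 * eps * ((R / r) ** 12 - (R / r) ** 6)

# Brute-force scan of r in [0.9, 2.1):
rs = [0.9 + 0.0001 * i for i in range(12000)]
r_min = min(rs, key=V)
print(f"numerical minimum: r ~ {r_min:.4f}, V ~ {V(r_min):.4f}")
print(f"analytic:          r* = {2 ** (1 / 6):.4f}, depth = {-eps:.4f}")
```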

The Lennard-Jones potential provides a microscopic basis for the celebrated van der Waals equation of state of non-ideal fluids,

(2.34)
$$\left( P + \frac{a}{V^2} \right) (V - b) = N k_B T,$$

where N is the number of molecules in the volume V.

The attractive branch $a/V^2$ echoes the soft-core tail $(r/R)^{-6}$, and the covolume b is related to the spatial scale, $b^{1/3}$, of the hard-core repulsion $(r/R)^{-12}$.

### 2.4.4.3 Maxwell molecules

A special case is provided by the so-called Maxwell molecules, characterized by a $−4$ power-law decay:

(2.35)
$$V(r) = \frac{A}{r^4},$$

Calculations show that for such a power-law potential $v_r\, \sigma(v_r)=const$, so that the collision timescale is a constant; see eqn (2.22).

This constitutes a major simplification of the Boltzmann collision integral, whence the special role of this potential in kinetic theory.

Even though Maxwell molecules do not appear to have any realistic counterpart in the physical world, they nonetheless provide a very fruitful theoretical idealization for several mathematical developments in kinetic theory.

In particular, under appropriate simplifications, they permit us to obtain exact solutions of the Boltzmann equation.

Calculations in three spatial dimensions for inverse power-law potentials of the form

(2.36)
$$V(r) = \frac{A}{r^\alpha},$$

show that

(2.37)
$$\sigma(v_r) \sim v_r^{-4/\alpha}.$$

This highlights that Maxwell molecules, $\alpha=4$, mark a qualitative borderline: for $\alpha<4$, i.e., slower decay than for Maxwell molecules, the collision rate

(2.38)
$$\omega = n\, \sigma(v_r)\, v_r \sim v_r^{1 - 4/\alpha},$$

turns from an increasing to a decreasing function of the relative speed vr, i.e., essentially the fluid temperature.

A moment's thought reveals that a collision frequency decreasing with the molecular speed implies that fast molecules experience less friction than slow ones, which is clearly a portal to collective instability.

Indeed, this opens up non-hydrodynamic scenarios, whereby particles accelerated beyond a given critical speed by, say, a constant external field, do not experience a sufficient collisional drag to be drained back to the bulk distribution. As a result, no local equilibrium can be established and the system enters various sorts of unstable regimes, some of which are of great relevance to fusion and astrophysical plasmas and other states of matter typically governed by long-range microscopic interactions.
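The borderline role of $\alpha = 4$ is made explicit by tabulating the exponent $1 - 4/\alpha$ of the collision rate:

```python
def rate_exponent(alpha):
    # omega = n * sigma(v_r) * v_r ~ v_r**(1 - 4/alpha) for V(r) ~ r**(-alpha)
    return 1.0 - 4.0 / alpha

for alpha in (2, 4, 12):
    e = rate_exponent(alpha)
    trend = "constant" if e == 0 else ("decreasing" if e < 0 else "increasing")
    print(f"alpha = {alpha:2d}: omega ~ v_r^{e:+.2f} ({trend} with v_r)")
```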

### 2.4.4.4 Long-range potentials

An important example of strongly non-hydrodynamic conditions is provided by long-range potentials, such as the $r^{-1}$ potential of unscreened Coulomb electrostatics or gravitation, formally corresponding to $\alpha=1$.

For such potentials, the calculations provide a divergent cross section, due to the unbounded accumulation of many small-angle deflections (grazing collisions).

This is not surprising: the mean-free path is virtually zero because, owing to the infinitely long range of the acting force, the molecules are constantly interacting, and the accumulation of very numerous small deflections leads to a logarithmic divergence of the cross section.

In practice, such an infrared divergence is regularized by imposing a long-range cutoff, typically via so-called Debye screening, after the Dutch chemist Peter Debye (1884-1966):

(2.39)
$$V(r) = \frac{A}{r}\, e^{-r/\lambda}.$$

The Debye length, $\lambda$, marks the scale above which electrostatic interactions are screened out due to polarization effects, a condition typical of quasi-neutral plasmas, composed of a mixture of oppositely charged species, say ions and electrons.

The kinetic theory of such screened systems is described by a different collision operator, due to Landau and Balescu-Lenard, after the Hungarian-Belgian physicist Radu Balescu (1932-96) and the American physicist Andrew Lenard (one "n" only, not to be confused with Lennard-Jones!) (5).

This takes the following form:

(2.40)
$$C(f) = \frac{\partial}{\partial \vec v} \cdot \int \vec{\vec B}(\vec v, \vec v\,') \cdot \left[ f(\vec v\,')\, \frac{\partial f(\vec v)}{\partial \vec v} - f(\vec v)\, \frac{\partial f(\vec v\,')}{\partial \vec v\,'} \right] d\vec v\,',$$

where $\vec{\vec B}(\vec v, \vec v\,')$ is a suitable tensorial collision kernel.

This expression is obtained from the Boltzmann collision operator by expanding upon the velocity change, $Δv⃗=v′⃗−v⃗$, under the assumption of small deflections:

(2.41)
$$|\Delta \vec v| \ll |\vec v|,$$

as is appropriate for soft-core grazing collisions.

The Balescu-Lenard collision operator belongs to the general class of Fokker-Planck kinetic equations, which we shall discuss in Chapter 9. For the case of unscreened long-range interactions, say self-gravitating systems, the derivation of a suitable collision operator is still an open issue in modern statistical mechanics, with important implications for plasma physics, astrophysics and cosmology.

## 2.4.5 Molecular Chaos (Stosszahlansatz)

Having discussed the details of the two-body scattering problem inherent to Boltzmann’s collision operator, we next move on to consider the all-important statistical aspects of this operator.

In the first place, in order to derive a closed equation, one has to express the two-body distributions $f_{1'2'}$ and $f_{12}$ in terms of the one-body distributions $f_1$ and $f_2$.

The simplest such closure, which is precisely the one taken by Boltzmann, reads as follows:

(2.42)
$$f_{12}(\vec z_1, \vec z_2; t) = f(\vec z_1; t)\, f(\vec z_2; t),$$

and similarly for $f_{1'2'}$.

This closure is tantamount to assuming no correlations between molecules entering a collision (molecular chaos or Stosszahlansatz).

This assumption is fairly plausible for a dilute gas with short-range interactions, in which molecules spend most of their lifetime traveling in free space, only to meet occasionally for very short-lived, in fact instantaneous, interactions.

Note that molecules are assumed to be uncorrelated only prior to the collision; after the collision, they become strongly correlated on account of mass, momentum and energy conservation.

Within this picture, the probability for two molecules that met at time t to meet again at some subsequent time $t+\tau$, with the same velocities $\vec v_1$ and $\vec v_2$, decays exponentially with $\tau$.

More precisely, this probability scales like $e^{-\tau/\tau_{int}}$, where $\tau_{int}$ is the duration of a collisional event. Since in Boltzmann's theory $\tau_{int} \sim s/v_T$ (the thermal speed $v_T$ is taken to be a typical particle speed and s a typical effective molecular diameter) is negligibly small, so is the (auto)correlation function at time $\tau$.

The situation is obviously completely different in a liquid, where, due to the much higher density, the molecules are in constant interaction.

Violations of Boltzmann's molecular chaos can occur due to the onset of nonlinear correlations. A most notable example is the famous long-time tails, first detected by Alder and Wainwright (6), where molecular correlations exhibit anomalous persistence due to self-sustained vortices generated by the molecular motion itself.4

Summarizing, in view of the molecular chaos assumption, the Boltzmann equation takes the following form:

(2.43)
$$\left(\partial_t + v_{1a}\, \partial_{x_a} + \frac{F_a}{m}\, \partial_{v_{1a}}\right) f_1 = \int \left(f_{1'}\, f_{2'} - f_1\, f_2\right) |\vec v_1 - \vec v_2|\, \sigma(\Omega)\, d\Omega\, d\vec v_2$$

The left-hand side is a mirror of reversible Newtonian single-particle dynamics, while the right-hand side describes intermolecular interactions, under the Stosszahlansatz approximation.

# 2.5 Local and Global Equilibria

Given that the collision operator naturally splits into Gain minus Loss components, it is only natural to ask under what conditions the two antagonists come to an exact balance.

This singles out a very special distribution function, characterizing the attainment of local equilibrium, a notion which proves central to the purpose of deriving hydrodynamic equations from Boltzmann’s kinetic theory.

Mathematically, the local equilibrium is defined by the condition

(2.44)
$$C\left(f^e, f^e\right) = 0,$$

where superscript “e” denotes “equilibrium.”

(p.34) The identification of Gain and Loss follows straight from the expression of the collision operator:

(2.45)
$$G = \int f_{1'}\, f_{2'}\, |\vec v_1 - \vec v_2|\, \sigma(\Omega)\, d\Omega\, d\vec v_2,$$

(2.46)
$$L = \int f_1\, f_2\, |\vec v_1 - \vec v_2|\, \sigma(\Omega)\, d\Omega\, d\vec v_2.$$

This leads to the so-called detailed balance condition:

(2.47)
$$f^e_{1'}\, f^e_{2'} = f^e_1\, f^e_2,$$

which holds regardless of the details of the molecular interactions.

This is a strong statement of universality: microscopic details affect the rate at which local equilibrium is reached, not the equilibrium itself, which depends only on conserved quantities.

Of course, detailed balance by no means implies that molecules sit idle, but rather that any direct (inverse) collision is dynamically balanced by an inverse (direct) partner collision.

For instance, in a room at standard temperature, with no appreciable macroscopic flow, the typical molecule moves at the speed of sound, that is, about three times faster than a Ferrari!

For a fluid at rest, for any molecule moving along a given spatial direction there is, on average, another molecule doing exactly the same along the opposite direction, so that no net macroscopic flow results.

The detailed balance condition has far-reaching consequences on the shape of the equilibrium distribution in velocity space.

To appreciate this, let us first take the logarithm of eqn (2.47), to obtain

(2.48)
$$\log f^e_{1'} + \log f^e_{2'} = \log f^e_1 + \log f^e_2.$$

This shows that the quantity $\log f$ is an additive collision invariant, i.e., a microscopic additive property which does not change under the effect of collisions.

The immediate consequence is that, at thermodynamic equilibrium, $\log f$ must be a function of the five collision invariants

$$1, \qquad v_a \quad (a = x, y, z), \qquad \frac{v^2}{2},$$

associated with the conservation of number (mass), momentum and energy.

This yields (repeated Latin indices, denoting spatial directions, are summed upon):

(2.49)
$$\log f^e = A + B_a v_a + C\, v^2,$$

(p.35) where $A$, $B_a$, $C$ are five Lagrange multipliers, carrying the entire dependence on the space-time coordinates through the conjugate hydrodynamic fields

$$\rho(\vec x, t), \qquad \rho u_a(\vec x, t), \qquad \rho e(\vec x, t),$$

namely the densities of mass, momentum and energy.

These Lagrange multipliers can be computed by imposing conservation of mass, momentum and energy:

(2.50)
$$m\int f^e\, d\vec v = \rho, \qquad m\int f^e\, v_a\, d\vec v = \rho u_a, \qquad m\int f^e\, \frac{v^2}{2}\, d\vec v = \rho e,$$

where $\rho = nm$ is the mass density, $u_a$, $a = x, y, z$, is the macroscopic flow speed and $\rho e$ is the energy density.5

Elementary quadrature of Gaussian integrals delivers the celebrated Maxwell–Boltzmann equilibrium distribution.

In D spatial dimensions, this reads as follows:

(2.51)
$$f^e(\vec x, \vec v; t) = \frac{n}{\left(2\pi v_T^2\right)^{D/2}}\; e^{-c^2/2v_T^2},$$

where c is the magnitude of the peculiar speed

(2.52)
$$\vec c = \vec v - \vec u(\vec x, t),$$

namely the relative speed of the molecules with respect to the fluid, and

(2.53)
$$v_T = \sqrt{\frac{k_B T}{m}}$$

Figure 2.8 From long-range to short-range potentials: top, repulsive bare Coulomb ($1/r$), Debye-screened Coulomb ($e^{-r/\lambda}/r$), Maxwell molecules ($1/r^4$) and the repulsive branch of the 6–12 Lennard-Jones. The prefactors have been adjusted to keep all four potentials on the same scale.

is the thermal speed associated with the fluid temperature T, $k_B$ being the Boltzmann constant. With this definition, each direction carries $k_B T/2$ units of energy.
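For the reader's convenience, the quadrature behind eqn (2.51) can be sketched as follows (a standard completion of the square; the intermediate symbol $\tilde A$ is introduced here for illustration and is not part of the book's notation):

```latex
% Completing the square in eqn (2.49), with C < 0 for normalizability:
\log f^e = A + B_a v_a + C v^2
         = \underbrace{\left(A - \frac{B_a B_a}{4C}\right)}_{\tilde A}
           + C \left(v_a + \frac{B_a}{2C}\right)\left(v_a + \frac{B_a}{2C}\right).
% Identifying the Maxwellian parameters,
%   u_a = -\frac{B_a}{2C}, \qquad C = -\frac{1}{2 v_T^2},
% the remaining constant is fixed by the normalization
%   \int f^e \, d\vec{v} = n \quad\Longrightarrow\quad
%   e^{\tilde A} = \frac{n}{(2\pi v_T^2)^{D/2}},
% which reproduces eqn (2.51).
```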

Figure 2.9 Top: Maxwell–Boltzmann distribution at unit temperature, $T=1$, at rest, $U=0$, and with flow, $U=1$. Bottom: Maxwell–Boltzmann distribution at rest with $T=1$ and $T=2$.
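The equipartition statement above is readily checked numerically. The following sketch (illustrative parameter values, not from the book; units with $m = k_B = 1$) samples the Maxwell–Boltzmann equilibrium and verifies that each velocity direction carries a variance $v_T^2 = k_B T/m$:

```python
import numpy as np

# Sample the D = 3 Maxwell-Boltzmann equilibrium: each Cartesian component
# of v is an independent Gaussian of width v_T, shifted by the mean flow u.
rng = np.random.default_rng(0)
D, N = 3, 2_000_000
vT = 1.5                                  # thermal speed (assumed value)
u = np.array([0.3, 0.0, 0.0])             # mean flow (assumed value)

v = u + vT * rng.normal(size=(N, D))
c = v - v.mean(axis=0)                    # peculiar velocities

# Equipartition: variance v_T^2 per direction, i.e. k_B T/2 of energy each
assert np.allclose(c.var(axis=0), vT**2, rtol=1e-2)
# The first moment recovers the macroscopic flow speed
assert np.allclose(v.mean(axis=0), u, atol=1e-2)
```

The sampling works because the Maxwellian (2.51) factorizes into independent Gaussians, one per Cartesian direction, each of variance $v_T^2$.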

## 2.5.1 Local Equilibria and Equation of State

Local equilibria are associated with perfect (or inviscid) fluids, i.e., fluids in which dissipative effects can be neglected.

At equilibrium, density, temperature and pressure are related through an equation of state:

(2.54)
$$P = P(\rho, T).$$

(p.36) From a kinetic point of view, the temperature is defined as the variance of the equilibrium distribution:

(2.55)
$$k_B T = \frac{m}{D\, n} \int f^e\, c^2\, d\vec v,$$

where c is the magnitude of the peculiar velocity and $ρ=nm$ is the mass density.

The definition shows that temperature is basically the variance of the kinetic distribution function, or, in different terms, the peculiar kinetic energy of the molecules with respect to the mean-flow motion (see Figs. 2.8 and 2.9).

Another quantity of chief interest for hydrodynamics is the momentum-flux tensor, often called pressure tensor for short:

(2.56)
$$P_{ab} = m \int f\, c_a\, c_b\, d\vec v.$$

The definition indicates that the component $P_{ab}$ of the pressure tensor represents the amount of momentum $m v_a$ along direction $x_a$, fluxing across the unit surface with normal oriented along direction $x_b$.

The ordinary pressure is given by the diagonal components of the pressure tensor, evaluated at zero-flow conditions $ua=0$.

(p.37) For an isotropic fluid at rest ($ua=0$), each component gives the same result, namely

(2.57)
$$P = \frac{m}{D} \int f\, c^2\, d\vec v = n\, k_B T.$$

It is now instructive to compute the value of the macroscopic quantities corresponding to the local equilibrium distribution.

Elementary Gaussian integration yields

(2.58)
$$P^e_{ab} = n\, k_B T\, \delta_{ab}.$$

(p.38) This shows that local equilibrium only supports diagonal components of the pressure tensor, with a corresponding ideal equation of state

$$P = n\, k_B T = \rho\, v_T^2.$$

It is also of interest to note that, under such conditions, the thermal speed corresponds exactly to the sound speed, defined as the derivative of the pressure with respect to the density at a constant temperature, namely

(2.59)
$$c_s^2 = \left(\frac{\partial P}{\partial \rho}\right)_T = \frac{k_B T}{m} = v_T^2.$$

Knowledge of the ratio of thermal to sound speed, $\theta \equiv v_T/c_s$, as a function of density and temperature is just another way of specifying the equation of state of the fluid, $\theta = 1$ denoting the ideal gas.
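The ideal equation of state can be recovered directly from the moments of the equilibrium distribution. The following sketch (assumed parameter values, units with $m = k_B = 1$, not from the book) estimates the pressure tensor from equilibrium samples and verifies that it is diagonal with $P = n k_B T$, so that $c_s = v_T$:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 3, 500_000
T, n = 2.0, 1.0                      # temperature and number density (assumed)
vT = np.sqrt(T)                      # thermal speed, m = k_B = 1

# Peculiar velocities sampled from the local Maxwellian (zero mean flow)
c = vT * rng.normal(size=(N, D))

# Monte Carlo estimate of P_ab = n m <c_a c_b>
P = n * (c[:, :, None] * c[:, None, :]).mean(axis=0)

assert np.allclose(np.diag(P), n * T, rtol=2e-2)              # P = n k_B T
assert np.allclose(P - np.diag(np.diag(P)), 0.0, atol=2e-2)   # off-diagonals vanish
```

The vanishing off-diagonal components reflect the statistical independence of the Cartesian velocity components at equilibrium; any residual is pure sampling noise.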

## 2.5.2 The Evershifting Battle

As shown in Section 2.5.1, local Maxwell equilibria are the result of a statistical balance between forward and backward collisions.

This balance between gain and loss terms annihilates the effect of the collision operator on the distribution function.

Quite remarkably, such balance holds true independently of whether or not the macroscopic fields exhibit a variation in space and/or time (whence the label “local”), as long as such variation occurs on scales longer than the mean-free path.

This property stems directly from the assumption that collisions take place in the limit $s \to 0$, or, more precisely, $s/l_\mu \to 0$.

A natural, and indeed often-asked question is:

Do local equilibria annihilate the effects of the streaming operator too?

A moment’s thought reveals that this is not the case, unless the macroscopic fields are totally flat, i.e., constant in space and time, a condition which defines global equilibria, hence equilibrium thermodynamics.

To appreciate the point in a little more detail, let us compute the effect of the streaming operator on local equilibria.

A simple application of the chain rule yields

(2.60)
$$\left(\partial_t + v_a\, \partial_a\right) f^e = \frac{\partial f^e}{\partial \rho}\, D_t \rho + \frac{\partial f^e}{\partial u_b}\, D_t u_b + \frac{\partial f^e}{\partial T}\, D_t T, \qquad D_t \equiv \partial_t + v_a\, \partial_a.$$

By evaluating the partial derivatives of fe with respect to ρ‎, $u⃗$ and T, simple algebra delivers

(2.61)
$$\left(\partial_t + v_a\, \partial_a\right) f^e = f^e \left[\frac{D_t \rho}{\rho} + \frac{\hat c_b\, D_t u_b}{v_T} + \left(\frac{\hat c^2 - D}{2}\right) \frac{D_t T}{T}\right],$$

(p.39) where

(2.62)
$$\hat c_a = \frac{c_a}{v_T}$$

is the peculiar speed in units of the thermal speed.

Expression (2.61) clearly shows that, by construction, local equilibria are not preserved upon streaming: they can only be conserved if all macroscopic fields are constant in space and time, which, by definition, makes them no longer local, but global.
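This breakdown is easy to exhibit numerically. The following one-dimensional sketch (illustrative parameters, not from the book) streams a local Maxwellian with a sinusoidal density modulation and shows that, after streaming, the distribution at a given point no longer matches the Maxwellian built from its own local moments:

```python
import numpy as np

def maxwellian(rho, u, T, v):
    """1D Maxwellian with density rho, flow u, temperature T (m = k_B = 1)."""
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

v = np.linspace(-5.0, 5.0, 201)        # velocity grid
x0, t = 0.25, 0.3                      # observation point and streaming time
rho0 = lambda x: 1.0 + 0.1 * np.sin(2 * np.pi * x)   # modulated density

# Exact solution of d_t f + v d_x f = 0: f(x, v, t) = f(x - v t, v, 0)
f = maxwellian(rho0(x0 - v * t), 0.0, 1.0, v)

# Local moments of the streamed distribution at x0
dv = v[1] - v[0]
rho = np.sum(f) * dv
u = np.sum(f * v) * dv / rho
T = np.sum(f * (v - u) ** 2) * dv / rho

# The streamed f is no longer the Maxwellian of its own local moments
residual = np.max(np.abs(f - maxwellian(rho, u, T, v)))
assert residual > 1e-4
```

The nonzero residual is precisely the non-equilibrium component fed by spatial inhomogeneity; it vanishes only when the density modulation is switched off.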

Of course, this is anything but a coincidence. The broken invariance of local equilibria upon streaming reflects a profound physical mechanism: space-time inhomogeneity is the source of non-equilibrium.

Differently restated, collisions act so as to achieve detailed balance, thereby molding the distribution function into the universal local Maxwell–Boltzmann form.

Streaming, on the other hand, works exactly in the opposite direction; it destroys the delicate (detailed) balance established by collisions, and revives non-equilibrium through inhomogeneity.

This is the famous “evershifting battle” between equilibrium and non-equilibrium, as evoked by Boltzmann, a battle which is not over until a featureless uniform macroscopic scenario is attained.

For those versed in philosophical aspects of science, the evershifting battle between equilibrium and non-equilibrium can be seen as a sort of metaphor of Life itself, which depends crucially on the ability to function far from equilibrium in local and temporary elusion of the Second Law (“life on borrowed time”).

It is sometimes heard that the depth of any given equation is measured by the conceptual distance between its left- and right-hand side. If such criterion is anything to go by, there is little doubt that the Boltzmann equation scores very highly indeed.

Back to the ground. The broken invariance of local equilibria under streaming connects to the basic symmetries of classical mechanics. To this purpose, we note that Maxwellian equilibria inherit two basic symmetries of Newtonian mechanics, namely (7):

Space and time translational invariance

(2.63)
$$x_a \to x_a + \lambda_a, \qquad t \to t + \tau,$$

where $\tau$ and $\lambda_a$ are arbitrary constants.

The invariance under such transformations reflects the homogeneity of space and time. For the case where $\lambda_a = V_a \tau$, $V_a$ being a constant velocity, (2.63) reflects invariance under Galilean transformations.

Rotational invariance

(2.64)
$$x_a \to R_{ab}\, x_b, \qquad v_a \to R_{ab}\, v_b,$$

(p.40) where $R_{ab}$ is an orthogonal (norm-preserving) rotation matrix.

Rotational invariance, which applies to the particle velocities as well, is ensured by the fact that the peculiar speed $c⃗$ appears through its magnitude alone, so that any sense of preferential direction is erased.
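This invariance is immediate to verify. The following sketch (not from the book) builds an orthogonal rotation about the z axis and checks that the Maxwellian weight, which depends on the peculiar velocity only through its magnitude, is unchanged under $c_a \to R_{ab} c_b$:

```python
import numpy as np

def weight(c, vT=1.0):
    """Maxwellian weight exp(-c^2 / 2 v_T^2), a function of |c| alone."""
    return np.exp(-np.dot(c, c) / (2 * vT**2))

theta = 0.7                                    # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# R is orthogonal (norm-preserving): R^T R = I
assert np.allclose(R.T @ R, np.eye(3))

rng = np.random.default_rng(3)
for _ in range(5):
    c = rng.normal(size=3)
    assert np.isclose(weight(R @ c), weight(c))   # f(R c) = f(c)
```

Any dependence of the equilibrium on the direction of $\vec c$, rather than its magnitude alone, would break this test, which is exactly the concern arising when velocity space is discretized.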

These symmetries are built into continuum kinetic theory, but it would be a gross mistake to take them for granted once space, time and velocity are made discrete.

Actually, this is the leading theme of Discrete Kinetic Theory.

The hydrodynamic probe of rotational symmetry is the momentum-flux tensor Pab, which plays a pivotal role in Discrete Kinetic Theory and most notably in Lattice Gas Cellular Automata and Lattice Boltzmann theories.

# 2.6 Summary

Summarizing, Boltzmann kinetic theory describes the dynamics of dilute gases in terms of a probability distribution function which includes, besides space and time, the molecular velocities. The result is a complicated quadratic integro-differential equation describing the competition between free streaming and interparticle collisions. For sufficiently well-behaved (short-range) atomistic potentials, such competition ultimately results in a universal local equilibrium, which depends only on the locally conserved fluid quantities: mass, momentum and energy. This local equilibrium plays a crucial role in the derivation of hydrodynamics from Boltzmann's kinetic theory.

References

Bibliography references:

1. L. Boltzmann, Lectures on Gas Theory, University of California Press, California, 1964.

2. C. Cercignani, Theory and Application of the Boltzmann Equation, Elsevier, New York, 1975.

3. C. Cercignani, Mathematical Methods in Kinetic Theory, Plenum Press, New York, 1969.

4. K. Huang, Statistical Mechanics, Wiley, New York, 1987.

5. R. Balescu, Equilibrium and Non-Equilibrium Statistical Mechanics, John Wiley and Sons, New York, 1975.

6. B. J. Alder and T. E. Wainwright, Decay of the velocity autocorrelation function, Phys. Rev. A, 1, 18, 1970.

7. H. Goldstein, Classical Mechanics, Addison–Wesley, London, 1959.

# (p.41) Exercises

1. Prove the relation (2.15).

2. Prove the Maxwellian expression (2.51).

3. What fraction of molecules move faster than $2 v_T$ in a local Maxwellian? And what fraction faster than $5 v_T$?

## Notes:

(2) The symbol $V(\vec r)$ denotes the interparticle potential, not to be confused with plain $V$, the volume of the system, and $\vec V$, the barycentric velocity of the two-body problem.

(3) We shall stick to the common tenet that microscopic equations, either classical or quantum, are invariant under time reversal.

(4) This is amazingly reminiscent of the mechanism invoked by Aristotle to explain the motion of arrows in air!

(5) The Latin subscript a denotes Cartesian components of a vector, so that the notation $v_a$ is an equivalent substitute for $\vec v$ and shall be used interchangeably throughout the text; see the Appendix on Notation.