An Exploration of Physics by Dimensional Analysis
The diversity of dimensions of physical quantities, and the way this diversity shapes the behavior of physical systems, plays an essential role throughout all theories of physics.
This has been well known in principle for a long time. However, the explanatory power of such considerations is usually neglected in the practice of many physics courses. Dimensions are there, qualifying each quantity. We are told that we must respect them, and we have some equations of physical laws using them. Quantities appear in formulas. Formulas give results which come somehow as a surprise for each particular problem.
Let us not keep this as a black box giving surprising results for each particular problem. Let us more systematically explore how the fundamental laws of physics relate diverse quantities in very general classes of situations, and let us present this as a stimulating introduction to these laws of physics and to how things go in the universe.
(To make things clean, a first step should be to present the mathematical foundations of dimensional analysis: what the concept of "different dimensions" for quantities means mathematically, and how operations between quantities can be defined in principle. Such foundations will be developed on this site later. In what follows, this will be assumed as known. The focus here will be on physics.)
List of independent physical quantities
Here we shall qualify different physical quantities as "independent" if we can effectively find a class of different physical systems with a similar behavior, between which these quantities vary independently of each other.
The resulting classification of quantities is a little different from the traditional conventions inherited from history. And the very interpretation of this physical definition of independence between quantities is not always clear and simple.
Typically, physical quantities are least independent in more fundamental theories (indeed the fundamental constants c and G of General Relativity, together with h of quantum physics, provide absolute units for all physical quantities, reducing them to real numbers); we get a wider independence between quantities in more phenomenological theories (classical macroscopic physics), where the constants of fundamental physics which link the different quantities are considered "very small" or "very big", so that their being somewhat bigger or smaller than they are would make no noticeable difference.
Let us first give the list in bulk (or rather, one possible presentation of this list; two lists are equivalent if they form different bases of the same free abelian group of dimensions):
- Amount of substance
- Electric current
The amount of substance counts the very large number of atoms or molecules that appear in macroscopic situations. Thus its deep meaning is that of natural numbers, but too big for the elementary unit (an individual atom or molecule) to be of any significance.
This quantity was introduced from the observation that chemical reactions happen in definite proportions of the ingredients, a fact established from the beginning of the 19th century, while its explanation in terms of atoms was only clearly established later that century.
The standard unit for amounts of substance is the mole (mol), whose "true value" is given by the Avogadro number N = 6.022×10²³ mol⁻¹. Generally the definition is that 1 mol of some pure substance contains about 6.022×10²³ molecules of this substance, so that 1 mol of carbon-12 weighs 12 grams; thus, roughly, 1 mol of hydrogen atoms weighs 1 gram.
Usual lists also include the temperature as another independent quantity. However the gas constant R is always involved in any experiment with temperature, even when considering solids instead of gases. Thus, through this constant, we can consider temperature as a composite of other quantities. Namely, a temperature is an energy per amount of substance.
In fact, the thermal energy contained in an object usually has the same order of magnitude as the product of its temperature by the amount of substance it contains, but is not identical to it. This will be detailed later.
Different quantities are naturally produced in different contexts. They may be universal constants associated with specific processes, or they may be quantities describing the specific case of a given experiment, such as the acceleration of gravity on the ground, which depends on the mass and size of the Earth.
Let us explore different constants and their effects.
(Some sections of this exploration have been moved to separate pages.)
The Planck constant
This constant governs physical phenomena at the atomic level, together with the specific constants describing the particles involved (masses and charges). It is the natural unit of action, since the deep nature of quantities of action is that of numbers of oscillations: the principle of least action that governs classical mechanics comes from the fact that scenarios with similar action (those near an extremum of the action) benefit from constructive interference from a quantum viewpoint. Depending on context, we may use ℏ or h = 2πℏ = 6.626×10⁻³⁴ J·s, as ℏ is used for writing the partial differential equations of the wavefunctions, while h is used for counting numbers of oscillations. The Planck constant:
- relates the energy E to the frequency ν by E = hν;
- with space instead of time, it relates the momentum p to the angular wavenumber k = 2π/λ, where λ is the wavelength, by p = ℏk, in other words λ = h/p;
- dictates the angular momentum of particles (as an angular momentum is homogeneous to a quantity of action, namely the action exchanged when turning the particle one whole turn around an axis).
On the atomic scale, things can oscillate. On the one hand, electrons can somehow "oscillate" between several positions (visit several orbitals), or even leave an atom. On the other hand, a molecule can oscillate as its atoms have an elastic movement with respect to each other.
However, for the same thing oscillating in one same direction, 2 kinds of "oscillations" should be distinguished depending on their origin. We shall need to compute and compare both, and tell which one dominates in each circumstance.
- The ground state oscillation, due to the quantum indetermination on position and momentum; its amplitude is found by applying the Heisenberg inequality with the given potential.
- Some larger oscillations of higher energies, whose probabilities are determined by the temperature (by Boltzmann's law, as we will see).
Many cases of oscillations can be approximately described as a harmonic oscillation: that is when the potential function is approximated as a 2nd degree polynomial, so that the oscillation period does not depend on the amplitude of oscillation, but only on the mass and the coefficient of the 2nd degree term of this polynomial.
Take a particle with mass m in a potential energy field E = k·x²/2. A mass times a speed squared equals k times a distance squared; thus k is a mass divided by a time squared. So, the classical oscillation period is t = 2π√(m/k).
In the phase space (x, p), where p = mv is the momentum, the particle with energy E follows the ellipse with equation k·x² + p²/m = 2E. The area of this ellipse is 2Eπ√(m/k) = Et.
Quantum physics identifies any oscillation period t to a quantity of energy (a quantum of energy, that is the difference between 2 energy levels)
E₀ = h/t = ℏ√(k/m).
Quantum physics does not allow the energy of a harmonic oscillator to take any value, as classical mechanics would, but only values separated by this quantum of energy. More precisely, the possible values of the energy are E = (n + 1/2)E₀, where n is a natural number that we shall call the number of phonons of this oscillation (though the word "phonon" is traditionally reserved for essentially the same concept but applied to the oscillations of a crystal instead of a single oscillating mass), so that the energy difference between any two states is a multiple of E₀:
(m + 1/2)E₀ − (n + 1/2)E₀ = (m − n)E₀.
Between energies 0 and nE₀ there are the n possible states of oscillation, with energies (m + 1/2)E₀ for any number of phonons m < n, so with values from E₀/2 to (n − 1/2)E₀. In the phase space, they "occupy" the inside of the above ellipse with energy nE₀, with area 2nE₀π√(m/k) = nh.
The uncertainty d on position for the ground state (zero phonons, energy E = E₀/2) corresponds to the amplitude of a classical oscillation with energy E₀:
d² = E₀/k = ℏ/√(km) = ℏ²/(mE₀).
More precisely, the wavefunction as a function of the position x is proportional to exp(−x²/2d²), so that its expression as a function of the momentum p (the Fourier transform of the latter) is proportional to exp(−p²/2(ℏ/d)²), making it fit in the same way with the kinetic energy function p²/2m as the wavefunction of position fits with the potential energy function: (ℏ/d)²/m = ℏ√(k/m) = kd².
The probability density function, square of the wavefunction, is proportional to exp(−x²/d²), thus with variance d²/2 = ℏ/(2√(km)).
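These relations can be checked numerically; here is a minimal sketch (the function name `oscillator` and the sample values are ours, chosen for illustration):

```python
import math

hbar = 1.054572e-34  # reduced Planck constant, J.s
h = 2 * math.pi * hbar

def oscillator(m, k):
    """Period, quantum of energy and ground-state spread for E = k*x^2/2."""
    t = 2 * math.pi * math.sqrt(m / k)       # classical oscillation period
    E0 = hbar * math.sqrt(k / m)             # quantum of energy, equal to h/t
    d = math.sqrt(hbar / math.sqrt(k * m))   # ground-state spread, d^2 = E0/k
    return t, E0, d

# example: an electron-mass particle in a 1 N/m potential well
t, E0, d = oscillator(9.109e-31, 1.0)
print(E0 * t / h)   # = 1 : one quantum of energy per period is exactly h
```

The identity E₀·t = h holds exactly, whatever m and k, which is the point of calling E₀ a "quantum per oscillation".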
The parameters of the atomic structure
Atoms, in their chemical properties, are dominated by the electrostatic interaction of electrons with other electrons and with the nucleus, in the framework of quantum physics.
So let us take the constant ε₀ = 8.854×10⁻¹² F/m of electrostatics, the elementary charge e = 1.60218×10⁻¹⁹ C, and the Planck constant h = 2πℏ = 6.62607×10⁻³⁴ J·s. From them let us form the quantity (does it have any standard name and notation?)
ve = e²/(4πε₀ℏ) = e²/(2hε₀) = (1.60218×10⁻¹⁹)²/(2 × 6.62607×10⁻³⁴ × 8.8542×10⁻¹²) = 2187.7 km/s.
This is a high speed, not so far from the speed of light; it is the typical speed of electrons in atoms.
Its ratio to the speed of light is named the fine-structure constant (which has some small effects on atoms...):
α = ve/c ≈ 1/137.04.
The Rydberg unit of energy is formally defined as the kinetic energy of an electron, with mass me = 9.10938×10⁻³¹ kg, going at the speed ve:
1 Ry = meve²/2 = me(e²/hε₀)²/8 = 13.6057 eV = 2.1799×10⁻¹⁸ J,
as 1 eV = 1.60218×10⁻¹⁹ J.
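A quick numerical check of these definitions (variable names are ours):

```python
e = 1.60218e-19       # elementary charge, C
eps0 = 8.8542e-12     # vacuum permittivity, F/m
h = 6.62607e-34       # Planck constant, J.s
me = 9.10938e-31      # electron mass, kg
c = 2.99792458e8      # speed of light, m/s

ve = e**2 / (2 * h * eps0)   # typical speed of electrons in atoms
alpha = ve / c               # fine-structure constant
Ry = me * ve**2 / 2          # Rydberg unit of energy

print(ve / 1000)   # ≈ 2187.7 km/s
print(1 / alpha)   # ≈ 137.04
print(Ry / e)      # ≈ 13.606 eV
```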
We can first describe the hydrogen-like atoms, made of one electron linked to one nucleus with charge (atomic number) Z. Their energy levels are the quantum mechanical versions of the classical orbits: an electron with mass me in the orbital with quantum numbers ℓ < n may be figuratively understood as being in a fuzzy and undetermined elliptical Kepler orbit, with semi-major axis a = (ℏ/meve)(n²/Z) and energy −Ry Z²/n².
Indeed we can verify in the classical case of a circular orbit that its speed is v = veZ/n (as the energy is proportional to v² and it has the right value for n = Z = 1), thus its angular momentum is L = a·me·v = nℏ.
We may also compute, in the interpretation by Kepler orbits, the orbital period T, independent of eccentricity and thus computable from the circular case:
T = 2πa/v = (2πa/ve)(n/Z) = (h/meve²)(n³/Z²),
to be compared with the difference between 2 nearby energy levels for large values of n: En+1 − En = Ry Z²(1/n² − 1/(n+1)²) ≈ 2Ry Z²/n³ = h/T.
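The convergence of h/T toward the level spacing for large n can be verified numerically (a sketch; the helper names `kepler_period` and `level_spacing` are ours):

```python
import math

h = 6.62607e-34       # Planck constant, J.s
me = 9.10938e-31      # electron mass, kg
ve = 2.1877e6         # typical electron speed computed above, m/s
Ry = me * ve**2 / 2   # Rydberg unit of energy, J

def kepler_period(n, Z):
    """Orbital period T = (h/(me*ve^2)) * n^3/Z^2 of the classical orbit."""
    return h / (me * ve**2) * n**3 / Z**2

def level_spacing(n, Z):
    """E_{n+1} - E_n = Ry * Z^2 * (1/n^2 - 1/(n+1)^2)."""
    return Ry * Z**2 * (1 / n**2 - 1 / (n + 1)**2)

for n in (2, 10, 100):
    # ratio of the level spacing to h/T; tends to 1 as n grows
    print(n, level_spacing(n, 1) * kepler_period(n, 1) / h)
```

The ratio works out to n(2n+1)/2(n+1)², which indeed tends to 1.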
Indeed we can understand that an electron in a circular orbit has a sinusoidal movement and thus can only emit a photon with a definite frequency, the one that will reduce its energy level by only one unit. This photon will also carry the one unit of angular momentum that the electron must lose, since the lower energy level must have a lower angular momentum too.
On the other hand, the movement in an elliptical orbit is not sinusoidal but has a range of other harmonics; these correspond to the different possible energies of the first photon that might be emitted. These higher harmonics are specifically due to the faster movement of the electron near the nucleus. Photons emitted from this origin specifically take their energy from the speed of the electron in this section of the orbit. The electron, being slowed down there, is driven into a less eccentric orbit. This is normal because each photon can only carry one unit of angular momentum.
The Rydberg unit of energy would characterize the interaction between 2 charges both equal to e. However this is not exactly the dominant case of interaction between charges. Indeed the Pauli exclusion principle, which does not allow 2 electrons to be very close to each other, usually behaves in atoms like a repelling force that is stronger than the electrostatic force: it keeps electrons with the same spin orientation far enough from each other to make the electrostatic force between them small and irrelevant.
As for electrons with opposite spins, they can somehow still meet by tunnelling: the probability density of their relative position does not even cancel near the zero vector; still of course it is lower there, so that this interaction lowers its own effects. Anyway, electrons naturally stay together in pairs of opposite spins because the rest of the atomic structure, which determines the state of lowest energy for any electron, usually offers them both the same state.
The dominant phenomenon that governs the behavior of atoms is the attractive electrostatic interaction between the electrons on the higher (external) energy levels and the nucleus; so, between a charge −e and a charge that is many times e. Still not as many times as Z, because of the shielding effect of the negative charge of the electrons on lower energy levels (and a partial shielding from those on the same level) on the behavior of the electrons farther from the nucleus, which determine chemical interactions. And in the formula of a, proportional to n²/Z, the higher value of Z, which would shrink the size of the atom, is balanced by the higher value of n (which roughly measures how much the quantum mechanical properties can be approximated into classical ones, by the least action principle with an action equal to n·h).
Now let us give the value of the coefficient in the formula of a:
a₀ = ℏ/meve = 5.292×10⁻¹¹ m = 0.5292 Å (angstrom).
This is the fundamental unit of distance from which all sizes of atoms are derived. For example in a water molecule H₂O, the distance between the O and H atoms is 0.958 Å = 1.810 a₀.
We can also look at the volume taken by each molecule of H₂O in liquid water: its molar mass is 18.015 g/mol, and its density is 1 g/cm³, thus 1 cm³ contains N/18.015 = 3.343×10²² molecules. The volume per molecule is that of a cube with a side of 3.1 Å.
In the case of diamond, the molar mass is 12.01 g/mol and the density is 3.52 g/cm³. Each atom thus takes the volume of a cube of side 1 cm × (12.01/(3.52 N))^(1/3) = 1.783 Å. The distance of each atom to each of its 4 closest neighbors is 1.544 Å.
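These cube sides can be recomputed directly (the helper name is ours):

```python
N = 6.022e23   # Avogadro number, 1/mol

def cube_side_angstrom(molar_mass_g, density_g_cm3):
    """Side of the cube whose volume is the volume per molecule."""
    volume_cm3 = molar_mass_g / (density_g_cm3 * N)   # cm^3 per molecule
    return volume_cm3 ** (1 / 3) * 1e8                # 1 cm = 1e8 angstrom

print(cube_side_angstrom(18.015, 1.00))   # liquid water ≈ 3.10 Å
print(cube_side_angstrom(12.01, 3.52))    # diamond ≈ 1.78 Å
```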
Let us explain why the nucleus is much smaller than the atom: both sizes are determined by the wavelength ℏ/mv of a particle with mass m at a speed v. The difference is mainly that the mass involved for the size of atoms is the mass of an electron, which is much smaller than the mass of protons and neutrons: mp/me = 1,836.15. (There is currently no explanation for this value.)
A smaller contribution to the ratio is that the typical speed of electrons in atoms, 2,187.7 km/s, is slower than the typical speed of protons and neutrons inside the nucleus, which we calculated above as 85,000 km/s. (The ratio between these speeds also determines the amplitude of how neutrons become more numerous than protons in heavy atoms, and finally the fact that too heavy atoms are unstable.)
The compressibility of condensed matter
Let us now compute how strongly condensed matter (liquids and solids) can resist compression. We saw in the case of diamond that the size of the semi-major axis, roughly expressed by a = (ℏ/meve)(n²/Z), keeps values close to ℏ/meve (a distance of 3×ℏ/meve between atoms in a covalent bond can be understood as 2 eccentric orbits meeting in the middle), while n = 2, so that the shielding effects may have decreased the effective value of Z to something like Z = 4 despite the basic value Z = 6 of carbon. But for the value of the energy −Ry Z²/n² we have a further factor of Z, so that we have several times Ry per electron. Moreover, usually each binding electron binds 2 atoms but each atom is bound by more than 2 electrons. Thus the total binding energy per atom may be several Ry. But in the absence of covalent bonds holding all atoms together on the large scale, the effective binding energy can be lower. Still, the resistance to compression can remain, as it is just a matter of volume per molecule and does not depend on the precise configuration between molecules.
The compressibility of condensed matter is the proportionality coefficient between the compression rate (an infinitesimal dimensionless quantity) and the pressure. Its inverse, called the bulk modulus, is a pressure. A pressure is homogeneous to an energy density (an energy per volume). We can get such a coefficient by multiplying the binding energy per atom by the number of atoms per volume.
According to this site, diamond's bulk modulus has almost the highest value of all materials. It is 443 gigapascals, thus 4.43×10¹¹ Pa = 4.43×10¹¹ J/m³. This value corresponds to an energy per atom equal to 2.511×10⁻¹⁸ J = 1.15 Ry. A more precise calculation taking account of the exact meaning of things is presented below.
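The energy-per-atom figure follows from the bulk modulus and the volume per atom found above (a sketch with our own variable names):

```python
K = 443e9           # bulk modulus of diamond, Pa = J/m^3
z = 1.783e-10       # side of the cube occupied by one atom, m
Ry = 2.1799e-18     # Rydberg unit of energy, J

energy_per_atom = K * z**3     # bulk modulus times volume per atom, J
print(energy_per_atom)         # ≈ 2.51e-18 J
print(energy_per_atom / Ry)    # ≈ 1.15 Ry
```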
[From Wikipedia]: The compressibility of water is a function of pressure and temperature... At the zero-pressure limit, the compressibility reaches a minimum of 4.4×10⁻¹⁰ Pa⁻¹ around 45 °C before increasing again with increasing temperature. The bulk modulus is thus 2.2×10⁹ Pa, corresponding to an energy per molecule of 6.8×10⁻²⁰ J = 0.031 Ry. As the pressure is increased, the compressibility decreases, being 3.9×10⁻¹⁰ Pa⁻¹ at 0 °C and 100 MPa (= 1000 atmospheres). The low compressibility of water means that even in the deep oceans at 4 km depth, where pressures are 40 MPa, there is only a 1.8% decrease in volume.
Seen in this way, water appears as quite a compressible substance; the bulk moduli of glass (35 to 55 GPa) and steel (160 or 170 GPa) take intermediate values between those of water and diamond.
The speed of the sound in condensed matter
Dividing the bulk modulus by the density of mass (in kg/m³) gives an energy per mass, thus the square of a speed. The square root of this quantity gives the speed of sound in that substance. The more compressible a substance, the slower the speed of sound in it.
Examples: in water, the speed of sound is 1,484 m/s ≈ √(2.2×10⁹/1000) = 1,483 m/s; it is 1,497 m/s at 25 °C in fresh water, and 1,560 m/s in sea water (without bubbles or other things), decreasing to a minimum of 1,480 m/s at about 800 m depth, then increasing again.
In steel, with density 7,700 kg/m³, we have a longitudinal velocity of 6,000 m/s. In granite it is 5,000 m/s.
However there is a lower speed for transversal waves (S-waves), as solids are usually more "compressible" by distortions (change of shape) than by uniform compression (change of volume).
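A sketch of this estimate for water and diamond (helper name ours; diamond's density 3,520 kg/m³ and bulk modulus are taken from the values above):

```python
import math

def sound_speed(bulk_modulus_pa, density_kg_m3):
    """Speed of sound estimated as sqrt(B/rho)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

print(sound_speed(2.2e9, 1000))     # water ≈ 1483 m/s
print(sound_speed(4.43e11, 3520))   # diamond ≈ 11.2 km/s
```

The diamond figure, 11.2 km/s, is the one quoted in the seismic-wave comparison below.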
The velocity of seismic waves tends to increase with depth, and ranges from approximately 2 to 8 km/s in the Earth's crust (starting in the Tibetan Plateau with a 16 km thick upper crust with P-wave velocity 5.55 km/s and S-wave velocity 3.25 km/s, up to 8 km/s of P-waves in the upper mantle), and values go up to 13 km/s down in the deep mantle, thus even larger than the speed of sound in diamond deduced from its bulk modulus mentioned above (11.2 km/s). This is because at very high pressures, the precise configuration of atoms becomes irrelevant, as the atoms just resist being smashed into smaller volumes: all electrons in the external layers contribute to resist pressure as they need their space (because of Pauli's exclusion principle); while at low pressures, molecules which had all the space could take their "comfort" away from each other and be only weakly bonded to their neighbors in subtle ways.
This speed is much slower than the typical speed of electrons in atoms, ve = 2,187.7 km/s. This is because the kinetic energy Ry = meve²/2 of electrons at ve is replaced by some energy E per atom with the same order of magnitude, but now converted back into a speed v by the formula E = mv², where m is the much heavier mass of the whole atom instead of the mass of the electron.
Now we can explain why it is at the size of the Earth that rocky planets undergo significant compression by their own weight. Take the velocity of the seismic waves in the deep mantle, 13 km/s, which gives the order of magnitude for the compression of matter at high pressures, and multiply it by the gravitational time of condensed matter, which was for the Earth 805 s. The result is 10,500 km, comparable with the Earth's radius of 6,356 km. So the Earth undergoes significant compression because its radius is not small compared to 10,500 km. (There is the superficial difference of pressure, and thus compression, from the surface to the mantle, that drives the velocity of seismic waves from small to large values; then there is the compression we speak about, which happens farther deep, governed by the velocity of 13 km/s.)
Now what about the speed of sound in the air? In dry air at 20 °C (68 °F), the speed of sound is 343.2 metres per second, thus about 4 times smaller than in water. But this comes from a very different law, which we shall present now.
At the macroscopic level, a temperature is an energy per entropy. At the microscopic level (from the viewpoint of a few particles), a temperature is an energy per quantity of information, because the microscopic nature of entropy is that of a quantity of information.
To understand this, let us present Boltzmann's law, a general description of physical systems in thermodynamical equilibrium with an environment at a temperature T (this law can itself be deduced from the first principles of quantum physics):
Boltzmann's law: Any physical system whose range of possible states (configuration space) can be described in the form of (an abstract split of its configuration space into) a list of its possible distinct elementary states with definite values of their energies, is said to have temperature T if it is fully described as obeying the probability law on such a list of states for which the probability of each state is proportional to exp(−E/kT), where E is the energy of this state, and k is the Boltzmann constant (which does the conversion between units of temperature and energy).
(This definition is actually independent of the choice of such a list; this choice is no longer unique when there exists more than one elementary state with the same definite value of the energy; the energy of a system is no longer definite if it has nonzero probabilities to be in elementary states with different definite values of the energy.)
Thus, between 2 states A and B with energies E and E' such that E' − E = kT·ln(2), the state A is twice as probable as B. But if we have another state B' with the same energy as B, then being in state (B or B') will be as probable as being in state A. So, the state (B or B'), where B and B' each have probability 1/2, has more energy than A (the difference is kT·ln(2)), but it has entropy = 1 bit of information = ln(2), while A, as a pure state, has zero entropy.
Finally, A is equiprobable with (B or B') because both have the same free energy E − TS.
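The three-state example can be checked directly (state names as in the text; kT set to 1 for convenience):

```python
import math

kT = 1.0                       # choose energy units so that kT = 1
E_A = 0.0
E_B = E_A + kT * math.log(2)   # B and B' both lie kT*ln(2) above A

def weight(E):
    """Unnormalized Boltzmann probability exp(-E/kT)."""
    return math.exp(-E / kT)

print(weight(E_B) / weight(E_A))                  # 0.5 : B alone is half as probable as A
print((weight(E_B) + weight(E_B)) / weight(E_A))  # 1.0 : (B or B') is as probable as A
```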
However in many cases, the quantity of information is just given by the quantity of matter, and can thus be just replaced by it.
For example, if you compress a gas by dividing its volume by 2 and then cool it to keep its initial temperature, then you have just subtracted 1 bit of indetermination on the position of each molecule of gas: the cooling must have taken away a quantity of entropy equal to nN·ln(2) in natural units, where n is the quantity of gas in moles (the number of molecules divided by the Avogadro number N). It went away as heat, thus with a quantity of energy that is the product of this quantity of entropy by the temperature.
For the temperature to stay constant, a decrease of the volume V of n moles of gas by a small fraction dV/V must be accompanied by a release of entropy which, counted in natural units of information, equals nN·dV/V, where N is the Avogadro number. Thus, as a quantity of heat with temperature T, it goes with a quantity of energy dE = kTnN·dV/V = TnR·dV/V, where R = kN is the gas constant (8.314 J·K⁻¹·mol⁻¹). This energy that goes out came in as a work of the pressure, P·dV; hence the equation of ideal gases, PV = nRT.
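A worked example: halving the volume of one mole at constant temperature (the sample values 1 mol and 300 K are illustrative):

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
N = 6.02214076e23     # Avogadro number, 1/mol
R = k * N             # gas constant ≈ 8.314 J/(K.mol)

n, T = 1.0, 300.0     # one mole at 300 K
# halving the volume removes 1 bit of position information per molecule:
entropy_removed = n * N * math.log(2) * k      # in J/K
heat_released = T * entropy_removed            # in J, equal to n*R*T*ln(2)
print(heat_released)                           # ≈ 1729 J
```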
Macroscopically, quantities of entropy S are counted so that for a quantity of heat (flowing from one place to another) at temperature T, its energy E and its entropy S are related by E = ST. This unit of entropy is converted into units of information by the Boltzmann constant k: 1 conventional unit of entropy (1 joule per kelvin) = 1/k natural units of information, or 1/(k·ln(2)) bits of information, so that the number of natural units of information, entering Boltzmann's law, is E/kT = (1/k) times the number of conventional units of entropy.
Energy of a quantum harmonic oscillator with a given temperature
If the typical energy kT of a given temperature is large compared to the quantum of energy of oscillation, then the mean "amplitude" of the movement is approximated by classical mechanics: the density of probability in the phase space is proportional to exp(−E/kT). For a one-dimensional oscillator, regions in the phase space limited by values of the energy of oscillation have their area proportional to the interval of energy (difference between the chosen max and min of energies). Thus the number of states is just proportional to this interval, and the mean value of the energy is equal to kT.
Let us compute exactly the mean value of the energy of a quantum oscillator (with a probability distribution of the phonon number given by Boltzmann's law) above its ground energy. It is the product of the quantum of energy E₀ with the average value of the number of phonons.
This mean number can be computed as the sum, for all natural numbers j > 0, of the probabilities pj to have n no smaller than j. Indeed, in this way, the probability of having each possible number n of phonons is counted n times in this sum, once for every 0 < j ≤ n.
Then we find pj = exp(−jE₀/kT). Indeed p₀ = 1, and this formula satisfies Boltzmann's law that the probability (pn − pn+1) of having exactly n quanta of energy is proportional to exp(−nE₀/kT).
Then the sum of the pj for all nonzero values of j gives 1/(exp(E₀/kT) − 1), leading to a mean energy E₀/(exp(E₀/kT) − 1).
If E₀ is very small compared to kT then this is approximately E₀/(E₀/kT) = kT, or more precisely kT − E₀/2, corresponding to the classical oscillator with thermal agitation (neglecting the quantum effects).
If E₀ is very large compared to kT then this is approximately E₀·exp(−E₀/kT): this is as if we just had a possibility of the first phonon with probability exp(−E₀/kT).
Let us compare these formulas, for E₀ = 1 and variable kT. In particular for E₀ = 5kT, the oscillating energy 0.00678 is only 3.39% of the classical oscillating energy of kT = 0.2, and can thus be considered very small (shut down by the quantization of movement).
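Both limits can be checked numerically (the function name is ours):

```python
import math

def mean_oscillation_energy(E0, kT):
    """Mean energy above the ground state: E0/(exp(E0/kT) - 1)."""
    return E0 / math.expm1(E0 / kT)   # expm1 computes exp(x)-1 accurately

print(mean_oscillation_energy(1.0, 0.2))    # E0 = 5kT → 0.00678 : mode shut down
print(mean_oscillation_energy(0.01, 1.0))   # E0 << kT → ≈ kT - E0/2 = 0.995
```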
The speed of sound in the air
The most obvious effect of temperature is its determination of the speed of sound in the air. Namely, it has the same magnitude as the average speed of molecules; or more precisely, as the average value of the component of the speed in the direction of propagation.
The distribution of probabilities of the component vx of the speed of a molecule with mass m in a given direction x is given by Boltzmann's law: exp(−mvx²/2kT). This gives an order of magnitude for the speed of sound of vx = √(kT/m).
But let us look for a more exact formula. The exact formula is based on the compressibility of the air: if a volume of air decreases by some proportion, how much does its pressure increase? We have the formula of ideal gases: PV = nRT. The variation of pressure itself depends on 2 things: the variation of volume (directly given) and the variation of temperature. The problem is to compute the variation of temperature.
The compression is adiabatic: it preserves the quantity of entropy. The decrease of entropy of positions is given by the given relative decrease of volume. Now this entropy is converted into other possible forms corresponding to the increase of temperature. Any given relative increase of temperature dT/T goes together with entropy increases as follows:
- For every dimension of free movement, the entropy increase (as a quantity of information) is dT/2T, because the kinetic energy is the square of the speed: the relative increase is dT/T for the mean energy but dT/2T for the mean speed, thus an absolute increase of dT/2T of the entropy of the speed distribution.
- For every dimension of elastic movement, the entropy increase is dT/T, as both the position and the speed undergo that entropy increase.
In the case of a diatomic molecule such as O₂ or N₂, which are the two main components of air, this count gives:
- 5 dimensions of free movement: 3 for the speed of the molecule, and 2 for the rotation of one atom around the other;
- 1 dimension of elastic movement, that is an oscillation of the distance between both atoms.
With the free movements alone, a temperature increase dT/T of n moles of gas "contains" an entropy increase dS = (5/2)nR·dT/T. If we count the elastic movement, then it is dS = (7/2)nR·dT/T.
Thus the entropy decrease of positions, dS = −nR·dV/V (as dV/V is negative, this quantity is positive), goes with a temperature increase (without elastic movement) dT/T = −(2/5)dV/V.
Finally the relative pressure increase is dP/P = dT/T − dV/V = −(7/5)dV/V. The bulk modulus is B = dP·(V/−dV) = γP, where γ = 7/5 = 1.4 is the heat capacity ratio. The speed of sound is √(γPV/m), where m is the mass of the volume V of gas. With vibrations, γ = 9/7 ≈ 1.2857.
The proportion of water vapor is quite variable. Let us take air with about 1.2% of water vapor (an ordinary value), for its further dimensions of movement to cancel the effects of the missing ones of the 0.93% part of monoatomic argon (3 dimensions of free movement only).
Let us compute the molar mass of air. Taking for example 77.1% of nitrogen with molecular mass 28; then 20.75% of oxygen with molecular mass 32; then 0.93% of argon with atomic mass 40; finally, 1.22% of water vapor with molecular mass 18. We get m/n = 28.82 g/mol (the value is 28.96 g/mol for dry air).
The above becomes v = √(γRT/0.02882), where R = 8.314 J·K⁻¹·mol⁻¹, so v = √(288.48 γT).
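Numerically (a sketch; the dry-air molar mass 0.02896 kg/mol is included for comparison with the 343.2 m/s figure quoted earlier):

```python
import math

R = 8.314   # gas constant, J/(K.mol)

def air_sound_speed(T, gamma=1.4, molar_mass=0.02882):
    """v = sqrt(gamma*R*T/M) for an ideal gas at temperature T (kelvin)."""
    return math.sqrt(gamma * R * T / molar_mass)

print(air_sound_speed(293.15))                      # humid air at 20 °C ≈ 344 m/s
print(air_sound_speed(293.15, molar_mass=0.02896))  # dry air at 20 °C ≈ 343 m/s
```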
The vibration does not take place at usual temperatures, but only at
higher temperatures. Let us investigate this now.
The strength of bonds between neighbor atoms or molecules
The rigidity of the covalent bonds as oscillators (oscillations of the value of the distance between 2 atoms, due to the form of the potential energy as a function of this distance) was already involved in the former study of the speed of sound in solids.
Take a crystal, or any sort of condensed matter. The rigidity of
bonds between neighbors can be deduced from a measure of the speed
of sound there, by the following reasoning.
Imagine a crystal with atoms or molecules configured in horizontal slices:
First slice is steady
Second slice oscillates up-down
Third slice is steady
Fourth slice oscillates down-up
Fifth slice is steady
and so on.
This situation can be equivalently interpreted in 2 ways. One way is that each moving slice of atoms oscillates as blocked by both neighbor slices. The other way is to notice that this forms a stationary wave of sound (= a superposition of 2 waves in opposite directions).
Thus, the classical oscillation period for a typical bond between neighbor atoms or molecules there is the time for the wave to go through 4 slices (it is π√2 = 4.44 for a more exact analysis). But there are several bonds there, on each side (up and down). To represent the effect of only 1 bond, the period is longer (2π).
The rigidity of bonds in diamond
But let us make a separate computation for the case of diamond. It
has a bulk modulus of K = 443 GigaPascals, defined as the ratio of
an infinitesimal pressure increase to the resulting relative
decrease of the volume. Take a cubic piece of diamond with size x,
and compress it to a size x(1−ε) with a small ε. Its volume V is
compressed to V(1−3ε). The pressure after compression is 3Kε. The
potential energy of compression is 9VKε²/2.
We said, each atom takes the volume of a cube of side z = 1.783 Å.
Thus the potential energy per atom is 9z³Kε²/2.
Each atom has 4 bonds, while each bond connects 2 atoms, so that
there are 2 bonds per atom. We calculated z³K = 1.15 Ry.
So the potential energy per bond is 9z³Kε²/4.
The distance of each atom from each of its 4 closest neighbors is
d = 1.544 Å, and this compression shortens each bond by x = dε.
So in the formula E = k·x²/2 for a covalent bond in diamond, we
have k = (9/2)z³K/d² = 2.17 Ry/Å².
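The chain of small computations above can be checked in a short Python sketch (only the numbers already given in the text are used):

```python
# Bond rigidity in diamond from the bulk modulus, following the text:
# energy per bond 9*z^3*K*eps^2/4, with bond compression x = d*eps,
# gives E = k*x^2/2 where k = (9/2)*z^3*K/d^2.
K = 443e9        # bulk modulus of diamond, Pa
z = 1.783e-10    # side of the cube occupied by one atom, m
d = 1.544e-10    # inter-atom distance in diamond, m
Ry = 2.1799e-18  # rydberg energy, J
A = 1e-10        # one ångström, m

z3K = z**3 * K               # energy scale per atom, J
print(z3K / Ry)              # ≈ 1.15 Ry, as stated in the text
k = 4.5 * z3K / d**2         # bond rigidity, N/m
print(k * A**2 / Ry)         # ≈ 2.17 Ry/Å²
```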
The vibrations of the nitrogen molecule
Now let us come back to the vibrations of a diatomic molecule, with
potential energy E = k·x²/2.
The quantum of vibration has energy ℏ√(k/m).
On the one hand, the potential energy of the bond is rather strong,
as we saw. There are several reasons for this: a bond has 2
electrons, and the effective charge of each nucleus is several times
the elementary charge.
This (influence on k) is somewhat balanced by the fact that this
potential is counted along an inter-atom distance of 1.544 Å, that
is about 3 times the fundamental distance a₀ = 0.5292 Å.
But on the other hand, the mass m involved here is the reduced
mass, = half the mass of an atom, instead of the mass of an
electron. An electron has a mass that is approximately 1/1836 that
of the proton, thus 1/22030 times that of the carbon atom, or
1/29380 times that of the oxygen atom (1/14690 times the reduced
mass for dioxygen).
Then we need to take the square root of this, which gives a
fraction of the order of 1/100.
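These ratios (which the text rounds to 22030, 29380 and 14690) are quick to verify:

```python
from math import sqrt

# Mass ratios between the electron and the nuclei / reduced masses above,
# using the rounded proton/electron mass ratio 1836 as in the text.
print(1836 * 12)           # carbon atom in electron masses: 22032
print(1836 * 16)           # oxygen atom in electron masses: 29376
print(1836 * 16 // 2)      # reduced mass of dioxygen: 14688
print(1 / sqrt(1836 * 16 / 2))  # √(m_e/μ) ≈ 0.0083, indeed of order 1/100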
Now let us look at the exact strength of the bond for the nitrogen
molecule.
It is observed in the form of its absorption spectrum: the main
wavelength of absorption in the infrared forms a peak around 4.3 μm
(or 4.24 ?). (For carbon monoxide it is 4.8 μm.)
So the classical oscillation period for nitrogen is
t = 2π√(m/k) = (4.3 μm)/c.
The reduced mass m for nitrogen is about 14/2 = 7 times the
mass of a proton, = 1.16293 × 10⁻²⁶ kg.
Thus k = m(2πc/4.3 μm)².
From google calculator (with 1 Ry = 2.1799×10⁻¹⁸ J):
k = (14.0067 u × ((2π·c × (1e-10 m) / (4.3 micrometers))²))
/ (2 × 2.1799e−18 J) = 10.24 Ry/Å²
That is, nearly 5 times the value of the rigidity of the bond
between carbon atoms in diamond that we obtained above.
The distance between nuclei is 1.10 Å for N₂, but 1.21 Å
for O₂.
For O₂ the observed vibration wavenumber is 1580 cm⁻¹, that is a
frequency of 1580·c per cm, so the period is (6.33 μm)/c.
k = (16 u × ((2π·c × (1e-10 m) / (6.33 micrometers))²))
/ (2 × 2.1799e−18 J) = 5.40 Ry/Å²
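Both calculator evaluations above amount to k = μ·(2πc/λ)² expressed in Ry/Å²; a small Python sketch makes this explicit:

```python
from math import pi

# Bond rigidity k = mu*(2*pi*c/lambda)^2 from the vibration wavelength,
# expressed in Ry/Å², for the values quoted in the text.
u  = 1.66054e-27      # atomic mass unit, kg
c  = 2.99792458e8     # speed of light, m/s
Ry = 2.1799e-18       # rydberg energy, J
A  = 1e-10            # one ångström, m

def rigidity(mu_u, lam):
    """Rigidity in Ry/Å² from the reduced mass (in u) and wavelength (in m)."""
    k = mu_u * u * (2 * pi * c / lam) ** 2   # N/m
    return k * A**2 / Ry

print(rigidity(14.0067 / 2, 4.3e-6))   # N₂ : ≈ 10.24 Ry/Å²
print(rigidity(16 / 2, 6.33e-6))       # O₂ : ≈ 5.40 Ry/Å²
```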
This is hardly more than half of the value for nitrogen. The
difference of distances between nuclei, and the fact that N₂
has a triple bond while O₂ only has a double bond, would
not seem sufficient to explain this difference.
We can explain it by the fact that N₂ is a very stable
molecule, where electrons happen to remarkably fit at a low energy
level for the precise value of the distance between atoms, while
they do not fit so well in O₂ (which does not have such a low
energy level, so that O₂ is quite reactive).
You may say: the value 5.40 Ry/Å² is still quite strong
compared to the 2.17 Ry/Å² we found for diamond.
But not only is the bond in diamond a single bond, its inter-atom
distance (1.544 Å) is also significantly larger (than 1.21 Å);
and the typical energy of electrons is quite sensitive to such
distances: for a free electron, doubling its wavelength means a
division by 2 of its momentum, thus a division by 4 of its kinetic
energy. And a potential energy well with its height divided by 4 in
this way but spread over twice the space, has its rigidity divided
by 16.
Namely, the kinetic energy of an electron is 1 Ry when its
half-wavelength is πa₀ = h/(2mₑvₑ) = 1.66 Å.
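This last statement is easy to check numerically, relating the momentum p = √(2mₑE) of a 1 Ry electron to its half-wavelength h/(2p):

```python
from math import pi, sqrt

# An electron of kinetic energy 1 Ry has half-wavelength h/(2p) = pi*a0.
h  = 6.62607015e-34   # Planck constant, J·s
me = 9.1093837e-31    # electron mass, kg
Ry = 2.1799e-18       # rydberg energy, J
a0 = 0.5292e-10       # Bohr radius, m

p = sqrt(2 * me * Ry)            # momentum from E = p²/(2m_e)
half_wavelength = h / (2 * p)    # = h/(2*m_e*v_e)
print(half_wavelength / 1e-10)   # ≈ 1.66 Å
print(pi * a0 / 1e-10)           # ≈ 1.66 Å as well
```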
Now let us compute the quantum uncertainty on the inter-atom
distance (d² = ℏ/√(km) = ℏ²/(mE) = ℏλ/(2πc·m), where E is the
quantum of vibrational energy and λ is the wavelength):
For N₂ it is d² = ℏ·4.3 μm/(2πc·7u),
that is d = (ℏ·4.3 micrometers/(2π·c·7·u))^0.5 = 0.0455 Å
For O₂ it is d = (ℏ·6.33 micrometers/(2π·c·8·u))^0.5 = 0.0516 Å
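The same formula d = √(ℏλ/(2πc·μ)) can be evaluated in a short sketch:

```python
from math import pi, sqrt

# Quantum position spread d = sqrt(hbar*lambda/(2*pi*c*mu)) of the
# inter-atom distance, in ångströms.
hbar = 1.054571817e-34  # reduced Planck constant, J·s
c    = 2.99792458e8     # speed of light, m/s
u    = 1.66054e-27      # atomic mass unit, kg

def spread(mu_u, lam):
    """Spread d in Å from the reduced mass (in u) and wavelength (in m)."""
    return sqrt(hbar * lam / (2 * pi * c * mu_u * u)) / 1e-10

print(spread(7, 4.3e-6))    # N₂ : ≈ 0.0455 Å
print(spread(8, 6.33e-6))   # O₂ : ≈ 0.0516 Å
```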
Finally, let us see at which temperature the vibrational states
become thermally excited.
For this, all we need is to compare the vibrational frequencies
with the energies corresponding to the temperatures, which also
appear in the form of the spectrum of the black-body radiation at
these temperatures; but there is a significant constant numerical
factor between them.
Each oscillation period t = λ/c defines a quantum of energy E =
h/t = hc/λ that corresponds to a characteristic temperature T =
hc/kλ (where k is the Boltzmann constant).
For λ = 4.3 μm we have T = hc/(k × 4.3 micrometers) = 3346 K.
For λ = 6.33 μm we have T = 2273 K.
The temperature of the surface of the Sun where the visible light
comes from, T = 5778 K, corresponds to the wavelength 2.490 μm.
That of melting ice, T = 273.15 K, corresponds to the wavelength
52.67 μm.
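The correspondence T = hc/(k·λ) used in these four examples can be sketched in a few lines:

```python
# Wavelength <-> temperature correspondence T = h*c/(k_B*lambda).
h  = 6.62607015e-34   # Planck constant, J·s
c  = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def temperature(lam):
    """Characteristic temperature (K) of a wavelength (m)."""
    return h * c / (kB * lam)

def wavelength(T):
    """Characteristic wavelength (m) of a temperature (K)."""
    return h * c / (kB * T)

print(temperature(4.3e-6))        # ≈ 3346 K (N₂ vibration)
print(temperature(6.33e-6))       # ≈ 2273 K (O₂ vibration)
print(wavelength(5778) / 1e-6)    # ≈ 2.49 μm (surface of the Sun)
print(wavelength(273.15) / 1e-6)  # ≈ 52.67 μm (melting ice)
```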
The black-body radiation
You may wonder why, if the temperature of the surface of the Sun
represents an energy corresponding to the wavelength 2.490 μm, the
main radiation from there, namely visible light, has much shorter
wavelengths: 390 nm (violet) to 700 nm (far red), and even shorter
ones: ultraviolet (which is visible for birds, and also affects us
otherwise). This seems surprising, especially as at temperature T,
the probability of a state as a function of its energy E is
proportional to exp(−E/kT), suggesting that its presence quickly
becomes insignificant as E takes values larger than kT.
So let us explain the gap here.
It is due to the fact that, while higher energy states are
individually less probable, they are more numerous, so that large
collections of possibilities with small individual probabilities
can take a greater share of importance than lower energy states
that are individually more probable but fewer.
Let us describe this for the black-body radiation, that is the
radiation from an ideally black body at a given temperature.
(By a simple reasoning of thermodynamic equilibrium, we can see that
a body with another color, absorbing a proportion a < 1 of the light
of a given wavelength and diffusing the remaining 1−a, would also
radiate a times the black-body intensity at this wavelength; the
black color means a = 1.)
For the concerns of radiation, the important measure is the volume
in the configuration space of photons, which has an absolute unit
(number of locations). We previously made such a count for the
configuration space of protons and neutrons in an atomic nucleus.
But here we consider photons, which are bosons (while protons and
neutrons are fermions), so that the Pauli exclusion principle does
not apply: each location in the configuration space has its state
described by the number of photons it contains. Indeed, this number
of photons is the number of quanta of oscillation of the
electromagnetic field at this location.
Like with protons and neutrons, there are in fact 2 effective
locations in the configuration space per space-time location,
because there are 2 states of spin (polarizations).
We calculated the mean energy of a quantum oscillator with energy
E₀ at temperature T to be E₀/(exp(E₀/kT)−1).
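The two limits of this mean energy formula (classical equipartition for E₀ ≪ kT, exponential freezing for E₀ ≫ kT) are easy to check numerically:

```python
from math import exp

# Mean thermal energy of a quantum oscillator, E0/(exp(E0/kT)-1),
# here expressed in units of kT as a function of x = E0/kT.
def mean_energy(x):
    return x / (exp(x) - 1)

print(mean_energy(0.01))  # ≈ 1   : classical limit, mean energy ≈ kT
print(mean_energy(10))    # ≈ 0.00045 : a frozen (unexcited) oscillator
```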
A particle has 3 coordinates of position and 3 coordinates of
momentum, forming 3 pairs of coordinates (x,px), (y,py), (z,pz),
each made of 1 position and 1 momentum; the unit of area inside
each pair is given by the Planck constant h that, in terms of
waves, represents a phase of 2π (one period of oscillation).
Now it is the same for the volume in the phase space of photons.
Two of the coordinates of momentum define the direction of
propagation; the corresponding "position coordinates" represent the
position of the photon on a screen orthogonal to this direction of
propagation.
Take a photon whose direction of propagation is close to the z
axis, and consider one of the pairs of coordinates in the phase
space: (x,θ), where θ measures in radians the deviation of the
propagation axis in the x direction: x ≈ θ·z.
The "unit of area" (number of locations) in the surface of the phase
space limited by a given small interval dθ of values of θ,
and an interval Dx of values of x, is then = Dx·dθ/λ where λ
is the wavelength of the photon (because on a screen making an angle
dθ with the direction of propagation, the picture of the
field at a given time is that of a wave with length λ/dθ).
The last pair of coordinates can be expressed as (t,ν) (t = time,
ν = frequency), a choice that makes its unit of area (its 2π of
phase) directly equal to 1. In other words, the number of locations
= (interval of time) × (interval of frequencies).
Now let us deduce the function that compares the contributions of
different wavelengths λ (or frequencies ν = c/λ) in the
black-body radiation at a temperature T.
In each unit of surface S and each small solid angle Ω (in
steradians, roughly orthogonal to the surface S), the number of
"locations" of the electromagnetic field with frequencies between ν
and ν+dν (with dν much smaller than ν) radiating away in each time
interval dt, equals
(2S·Ω·dt/c²)·ν²dν = (2S·Ω·dt/λ²)·dν
There are different ways to express the contributions of different
wavelengths.
We may either be interested in visibility (number of photons), or in
flow of energy.
The mean number of photons per location is 1/(exp(hν/kT)−1).
The mean energy per location is hν/(exp(hν/kT)−1).
We may count them either per interval of frequency dν, or per
interval of logarithm of frequency (dν/ν).
Finally, up to constant multiplicative factors, 2 functions can be
considered relevant for comparing the contributions of wavelengths:
- The function ν⁴/(exp(hν/kT)−1) measures the flow of energy per
logarithmic interval of frequencies
- The function ν³/(exp(hν/kT)−1) measures either the flow of
photons per logarithmic interval of frequencies, or the flow of
energy per interval of frequencies.
Note that the total flow (summed over all wavelengths) of radiated
photons (per surface per time) is proportional to T³, and the
total flow of energy is proportional to T⁴.
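The T³ and T⁴ laws for the total flows can be checked numerically: substituting x = hν/kT turns the totals into (kT/h)³ and (kT/h)⁴ times fixed dimensionless integrals, whose known values are 2ζ(3) ≈ 2.404 and π⁴/15 ≈ 6.494. A sketch:

```python
from math import exp, pi

# Midpoint integration of x^power/(e^x - 1) over (0, upper); these are the
# dimensionless integrals behind the T^3 (photons) and T^4 (energy) laws.
def integral(power, upper=50.0, steps=200_000):
    dx = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += x**power / (exp(x) - 1) * dx
    return total

print(integral(3))  # ≈ 6.4939 = π⁴/15   (total energy flow ∝ T⁴)
print(integral(2))  # ≈ 2.4041 = 2·ζ(3)  (total photon flow ∝ T³)
```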
Now let us fix the temperature and conventional units so that
kT/h = 1, and compare the values of the above functions for
different frequencies.
In particular, contrary to the previous context, the contributions
from the values around E₀ = hν = 5kT cannot be considered "very
small" but belong to the range of the main contributions to the
black-body radiation.
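With kT/h = 1, we can locate the maxima of the two functions numerically and compare their values at ν = 5 (that is hν = 5kT), confirming that this is still a major contribution:

```python
from math import exp

# The two black-body spectral functions, in units where kT/h = 1.
def f3(x): return x**3 / (exp(x) - 1)   # photons per log interval
def f4(x): return x**4 / (exp(x) - 1)   # energy per log interval

xs = [i / 1000 for i in range(1, 20000)]  # grid over (0, 20)
peak3 = max(xs, key=f3)
peak4 = max(xs, key=f4)
print(peak3, peak4)       # maxima near 2.82 and 3.92
print(f3(5) / f3(peak3))  # ≈ 0.60 of the maximum at hν = 5kT
print(f4(5) / f4(peak4))  # ≈ 0.89 of the maximum at hν = 5kT
```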
[More developments on the properties of matter will be written
later.]