Concepts of truth in mathematics
Let us review four distinct concepts of «truth» for a mathematical
formula, from the simplest to the subtlest.
We first saw relative truth, that is, the value of a
formula interpreted in a supposedly given model (like an implicit
free variable, ignoring any difficulty of specifying an example). In
this sense, a given formula may be either true or false depending
on the model, and on the values of its free variables there.
Provability
Then comes the quality of being relatively true in all models of a given
axiomatic theory, which coincides with provability in this theory,
i.e. deduction from its axioms by the rules of first-order logic. Namely,
there are known formal systems of proof for first-order
logic, with known proof verification algorithms, universally applicable to
any first-order theory while preserving this quality (the ability to prove
exactly all universally true formulas).
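As a rough illustration of what a proof verification algorithm does, here is a minimal Python sketch: a toy checker for propositional proofs using only modus ponens. The encoding (tuples for implications) and the function name are hypothetical simplifications, not any standard formal system; real checkers handle full first-order rules the same mechanical way.

```python
# Toy proof checker: each proof line must be an axiom (assumed given)
# or follow from two earlier lines by modus ponens.
# Formulas are nested tuples: ("->", A, B) encodes «A implies B».

def check_proof(axioms, lines):
    """Return True iff every line is an axiom or follows by modus ponens."""
    proved = []
    for f in lines:
        ok = f in axioms or any(
            a == ("->", b, f)                # some earlier line a is «b -> f» ...
            for a in proved for b in proved  # ... with b also already proved
        )
        if not ok:
            return False
        proved.append(f)
    return True

A, B = "A", "B"
axioms = [A, ("->", A, B)]
print(check_proof(axioms, [A, ("->", A, B), B]))  # True: B follows by modus ponens
print(check_proof(axioms, [B]))                   # False: B is not justified
```

The point is that verification is purely mechanical: no insight is needed to check a proof, only to find one.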
This remarkable property of first-order logic, together with the
fact that all mathematics is expressible there (what is not
directly there can be made so by insertion into set theory, itself
formalized as a first-order theory), gives this framework a
central importance in the foundations of mathematics: it
reconciles Platonism and formalism, while giving a clear, natural
sense to the concepts of «proof», «theorem» and «consistency».
Different systems can do this job, but all such formalisms, when
correct, are equivalent to each other: any proof according to one is
automatically convertible into a proof according to any other.
The completeness
theorem, which ensures this and was first expressed as stating the
existence of a model of any consistent first-order theory, will be
proven by constructing these models out of the infinite set of all
ground expressions in a language built from the theory
(the language of the theory plus more symbols extracted from its
axioms). As the set of all ground expressions in a language can
itself be constructed from this language together with the set ℕ
of natural numbers, the validity of this theorem only depends on the
axiom of infinity, that is, the existence of ℕ as an actual infinity,
which suffices for all theories (ignoring the diversity of infinities in set
theory).
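To make this construction concrete, here is a small Python sketch (the function name is mine, not standard) generating the ground expressions of a language up to a given nesting depth. With one constant 0 and one unary symbol S, the ground terms are 0, S(0), S(S(0)), ..., a copy of ℕ built from the language alone:

```python
from itertools import product

def ground_terms(constants, functions, depth):
    """All ground terms of nesting depth ≤ depth over the given language:
    constants, plus function symbols applied to previously built terms.
    The full set of ground terms is the union of these over all depths."""
    terms = set(constants)
    for _ in range(depth):
        new = set(terms)
        for f, arity in functions:
            for args in product(terms, repeat=arity):
                new.add((f,) + args)  # a term is a symbol applied to terms
        terms = new
    return terms

# Language of one constant "0" and one unary symbol "S":
# depth 3 yields 0, S(0), S(S(0)), S(S(S(0))).
print(sorted(ground_terms({"0"}, [("S", 1)], 3), key=str))
```

This is why the set of all ground expressions can be built from the language together with ℕ: each expression is a finite combination, and the depths range over ℕ.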
However, these are only theoretical properties, assuming a
computer with unlimited (potentially infinite) available time and
resources, able to find proofs of any size. Not only may the precise size of
a proof depend on the particular formalism, but some
relatively simple formulas may only have «existing» proofs that «cannot be
found» in practice, as they would be too long, even larger than the number of atoms
in the visible physical Universe (as illustrated by Gödel's
speed-up theorem).
Within limited resources, there may be no way to distinguish whether a
formula is truly unprovable or a proof has simply not yet been found.
To cover such cases, the universal concept of provability (the existence of a proof)
has to be defined in the abstract. Namely, it can be expressed as a formula of first-order arithmetic
(the first-order theory of natural numbers with the operations of
addition and multiplication), made of one existential quantifier
that is unbounded in the sense of arithmetic (∃p ∈ ℕ, ...),
followed by a formula where all quantifiers are bounded, i.e. have
finite range (∀x < (...), ...). For example, p may
be an encoding of the proof, or the time needed by a proof-searching
algorithm to find it.
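This quantifier shape can be sketched in Python (with toy stand-in predicates, not an actual provability formula): a bounded quantifier is just a finite loop, hence decidable, while the single unbounded ∃ turns the whole formula into a search that halts exactly when the formula is true.

```python
def bounded_all(limit, pred):
    """∀x < limit, pred(x): a bounded quantifier is a finite loop."""
    return all(pred(x) for x in range(limit))

def is_prime(n):
    """Bounded-quantifier (hence decidable) example:
    n ≥ 2 and ∀d < n, (d < 2 or d does not divide n)."""
    return n >= 2 and bounded_all(n, lambda d: d < 2 or n % d != 0)

def exists_unbounded(pred):
    """∃p ∈ ℕ, pred(p): a search with no a-priori bound.  This loop
    halts when the formula is true, but runs forever when it is false."""
    p = 0
    while True:
        if pred(p):
            return p
        p += 1

# Toy formula of this shape: «there exists a prime greater than 100».
# It is true, so the unbounded search halts.
print(exists_unbounded(lambda p: p > 100 and is_prime(p)))  # 101
```

In the real provability formula, the role of `is_prime` is played by the (bounded, decidable) check that p encodes a valid proof of the given formula.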
However, once given an arithmetical formula known to be a correct
expression of the provability predicate (all such formulas being
provably equivalent to each other), it still needs to be interpreted.
Arithmetic truths
This involves the third concept of mathematical
truth, that is, the realistic
truth in first-order arithmetic. This is the ideally intended
interpretation of arithmetic: the interpretation of ground formulas
of first-order arithmetic in «the true set ℕ of all, and only all,
really finite natural numbers», called the standard model of
arithmetic. But any axiomatic formalization of arithmetic in
first-order logic is incomplete, in both of the following senses:
- Due to the incompleteness
theorem, the set of all real truths of arithmetic (formulas
true in ℕ) cannot be exhaustively produced by any algorithm (using
unlimited resources but a finite amount of initial information). Thus,
they cannot all be logical consequences of any algorithmically produced
axiomatic theory either (since logical deduction from given axioms is
algorithmic).
- Even if we abstractly considered taking all real truths as axioms,
this would still not suffice to determine the model, as it cannot exclude
non-standard
models. (This last form of incompleteness does not have any
name because it equally affects the first-order description of any
infinite system, according to the Löwenheim-Skolem
theorem, and is thus unremarkable.)
This incompleteness affects the provability predicate itself, though
only on one side, as follows.
On the one hand, if the formula p(A) of provability
of a formula A is true, then it is provable: a proof of p(A)
can in principle be produced by the following method in two
steps:
- Find a proof of A (as it is assumed to exist);
- Process it by some automatic converter able to formally
convert any proof of A into a proof that a proof of A
exists.
On the other hand, it is not always refutable when false: no matter
the time spent seeking in vain a proof of a given unprovable formula,
we might still never be able to formally refute the possibility of
finally finding a proof by searching longer, because of the risk that a formula
is only provable by unreasonably long proofs.
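This practical one-sidedness can be sketched as follows (a toy Python illustration with trivial stand-in predicates; real unprovability is of course not detectable this way): a resource-limited search can confirm existence, but its failure only ever means «unknown», never a refutation.

```python
from enum import Enum

class Status(Enum):
    FOUND = "found"
    UNKNOWN = "unknown"   # never «refuted»: the search is one-sided

def bounded_search(pred, budget):
    """Search for a witness p < budget.  Failure within the budget cannot
    distinguish «no witness exists» from «the witness is just larger»."""
    for p in range(budget):
        if pred(p):
            return Status.FOUND, p
    return Status.UNKNOWN, None

# «∃p, p² = 144» is confirmed quickly; «∃p, p² = 2» has no witness,
# but the bounded search can only report UNKNOWN, not a refutation.
print(bounded_search(lambda p: p * p == 144, 1000))
print(bounded_search(lambda p: p * p == 2, 1000))
```

Extending the budget may flip UNKNOWN to FOUND, but no finite budget can turn it into a disproof.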
In the absence of any possible fixed ultimate algorithm to produce all truths of
arithmetic, we can be interested in partial solutions: algorithms
producing endless lists of ground arithmetic formulas with both of the
following qualities:
- Infallible (the output stays included in the set of truths, describing
the standard model of arithmetic);
- Large (compared by inclusion with other algorithmically
producible sets of truths).
A natural method to progress in the endless (non-algorithmic) search
for better and better algorithms for the second quality without breaking
the first consists in developing formalizations of set theory describing
larger and larger universes beyond the infinity of ℕ, where properties
of ℕ can be deduced as particular cases. Indeed, if a set theory T'
requires its universe to contain, as a set, a model U of a
set theory T, then the arithmetic formula of the consistency
of T will be provable in T' but not in T,
while all arithmetic theorems of T remain provable in T'
if T' describes U as standard.
Set theoretical truths
The above can be read as an indispensability argument for our last
concept of truth, which is the truth of set theoretical statements.
To progress beyond logical deduction from already accepted axioms, more
set theoretical axioms need to be introduced, motivated by
Platonist arguments for the real existence of some standard universes where they
are true; the validity of such arguments needs to be assessed in intuitive,
not purely formal ways, precisely in order to do better than any predefined
algorithm. Arguments for a given axiomatic set theory lead to two kinds of
arithmetic conclusions:
1. The formal consistency of this set theory;
2. The arithmetic theorems provable in its framework.
These two conclusions should not be confused:
- 1. cannot come as a direct particular case of 2. (unless it is
false), even though it must be true for 2. to be consistent and thus
of any interest;
- the reason for the truth of 2. refers to the existence of a
standard model of this theory, while 1. only means that
non-standard models exist.
But as the objects of these conclusions are mere properties of finite systems
(proofs), their meaning stays unaffected by any ontological assumptions
about infinities, including the finitist ontology (denying the reality of any
actual infinity, whatever such a philosophy might mean). It seems hard to
figure out, then, how their reliability could be meaningfully challenged by
philosophical disputes on the «reality» of the abstractions beyond them
(universes), just because they were motivated by these abstractions.
But then, the claim of consistency (1.), together with the mere
existence of ℕ, suffices to let models of this theory really
exist (non-standard ones, but working the same as standard ones).
For logical deduction from set theoretical axioms to be a good arithmetic truth
searching algorithm, these axioms must be:
- Sane = keeps accepting some standard universes (to keep the
output infallible);
- Strong = rejects «small» standard universes = sets a high
minimum «size» on the universe's hierarchy of sub-universes
(to make the output large).
But for a collection of such axioms to keep these qualities when put
together in a common theory, they need to be compatible,
in the sense that their conjunction remains sane. Two such statements
might be incompatible, either if one of them limits the size of the
universe (which it shouldn't), or if each statement (using both kinds of open
quantifiers when written in prenex
form) endlessly alternates between truth and falsity as the
universe expands, in such a way that they would no longer be true together in any
standard universe beyond a certain size (their conjunction must not
limit the size of the universe either). The question is: on what sort of big standard
universes might good axioms more naturally be true together?
A standard universe U' might be axiomatically described
as very big by setting it a little bigger than another very big
one U; but the size of this U would need a
different description (as it cannot be proven to satisfy the same
axioms as U' without contradiction). But of what kind?
Describing U as also a little
bigger than a third universe, and so on, would require the axioms to keep
track of successive differences. This would rapidly run into inefficient
complications with incompatible alternatives, with no precise reason to
prefer one version over the others.
The natural solution, both for philosophical elegance and for the efficiency
and compatibility of axioms, is to focus on the opposite case: universes
described as big by how much bigger they are than any smaller one (like how
we conceived
an ultimate universe as the union of a standard multiverse):
axioms must be
- Open
= describing universes where eternity is a very long time,
especially towards the end (= open universes).
This is also convenient because such descriptions are indeed
expressible by axioms interpreted inside the universe, with no
need of any external object. Indeed, if a property were only
expressible using an external object (regarding this universe as a
set), we could replace it by describing instead our universe as
containing a sub-universe of this kind (without limiting its size
beyond it), and why not also endlessly many sub-universes of this
kind, forming a standard multiverse: stating that every object is
contained in such a sub-universe. This is axiomatically expressible
using objects outside each of these sub-universes, but inside our
big one; and such axioms will fit all three above qualities.
Finally, the properly understood meaning of set theory is neither
axiomatic nor realistic, but a sort of fuzzy intermediate between
both: its axioms aim to approach all three qualities (strong and open but still
sane), selecting universes with the corresponding three qualities
(big and open but still standard); but these qualities are all fuzzy to interpret,
and any specific list of axioms (resp. universe) can only aim to approach them,
while this quest can never end.
Fortunately, rather simple theories such as ZF already satisfy these
qualities to a very high degree, describing much larger realities than
usually needed. This is how a Platonic view of set theory (seeing the
universe of all mathematical objects as a fixed, exhaustive reality) can
work as a good approximation, though it cannot be an exact, absolute fact.
Alternative logical frameworks
The description we made of the foundations of mathematics
(first-order logic and set theory) is essentially just an
equivalent, clarified expression of the widely accepted one (a
different introduction to the same mathematics). Section 3 will
present other logical frameworks that are either restricted
versions of first-order logic, or anyway naturally expressible in
set theory. But other, more radically different frameworks
(concepts of logic and/or sets), called non-classical
logics, might be considered. Examples:
- Some logicians developed «intuitionistic logic», which
lets formulas keep a possible indefiniteness, as we mentioned for
open quantifiers, but treated as a modification of pure
Boolean logic (the rejection of the excluded middle, where ¬(¬A)
does not imply A), without any special mention of
quantifiers as the source of this uncertainty. It might also be
seen as a formal confusion between truth and provability. In
this framework, {0} ∪ ]0,1] ⊂ [0,1], without equality. I could
not personally find any interest in this formalism, but only
heard that theoretical computer scientists found it useful.
- When studying measure
theory (which mathematically defines probabilities in
infinite sets), I was inspired to interpret its results as
simpler statements on another concept of set, with the following
intuitive property. Let x be a variable randomly
taken in [0,1], by successively flipping a coin for each of its
(infinitely many) binary digits. Let E be the domain of x,
the set of all random numbers in [0,1]. It is nonempty
because such random numbers can be produced. Now another
similarly random number, with the same range (y ∈ E)
but produced independently of x, has no
chance of being equal to x. Thus, ∀x ∈ E, ∀y
∈ E, x ≠ y. This way, x ∈ E
is no longer always equivalent to ∃y ∈ E, x
= y.
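This coin-flipping picture can be simulated in Python (truncated to finitely many digits, which is all a simulation can do): two independent draws almost surely differ, and in fact typically differ within their first few digits, since each digit agrees only with probability 1/2 and all n digits agree with probability 2^-n.

```python
import random

def random_bits(rng, n):
    """First n binary digits of a number drawn uniformly from [0,1],
    flipping a fair coin for each digit."""
    return [rng.randrange(2) for _ in range(n)]

rng = random.Random(0)      # fixed seed for reproducibility
x = random_bits(rng, 64)
y = random_bits(rng, 64)    # an independent draw with the same distribution

# Agreement on all 64 digits has probability 2**-64: the draws differ.
print(x == y)
# The first disagreement is typically found after very few digits
# (the number of agreeing initial digits has expected value 1).
first_diff = next(i for i in range(64) if x[i] != y[i])
print(first_diff)
```

The infinite idealization (all digits, x ≠ y with certainty rather than overwhelming probability) is exactly what the set-concept described above is meant to capture.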
We will keep classical logic in all following sections, ignoring
such alternatives.