# Concepts of truth in mathematics

Let us review four distinct concepts of «truth» for a mathematical formula, from the simplest to the most subtle.

We first saw relative truth, that is, the value of a formula interpreted in a supposedly given model (treated like an implicit free variable, ignoring any difficulty in specifying an example). In this sense, a given formula may be either true or false depending on the model, and on the values of its free variables there.

#### Provability

Then comes the quality of being relatively true in all models of a given axiomatic theory, which coincides with provability in this theory, i.e. deduction from its axioms by the rules of first-order logic. Namely, there are known formal proof systems for first-order logic, with known proof verification algorithms, universally applicable to any first-order theory while keeping this quality (the ability to prove exactly the universally true formulas).
This equivalence between universal truth (in all models) and provability is ensured by the completeness theorem, which was originally Gödel's thesis. It gives the concepts of «proof», «theorem» and «consistency» (a priori vague intuitive concepts) perfectly suitable mathematical definitions, and practically eliminates any object of disagreement between realism (Platonism) and formalism for first-order logic. This remarkable property of first-order logic, together with the fact that all mathematics is expressible there (what is not directly expressible can be made so by insertion into set theory, itself formalized as a first-order theory), gives this framework a central importance in the foundations of mathematics.

The proof of the completeness theorem, first expressed as the existence of a model of any consistent first-order theory, proceeds by constructing such models out of the infinite set of all ground expressions in a language built from the theory (the language of the theory plus further symbols extracted from its axioms). As the set of all ground expressions in a language can itself be constructed from this language together with the set ℕ of natural numbers, the validity of this theorem only depends on the axiom of infinity, that is, the existence of ℕ as an actual infinity, which suffices for all theories (ignoring the diversity of infinities in set theory).

However, these are only theoretical properties, assuming a computer with unlimited (potentially infinite) time and resources, able to find proofs of any size. Not only may the precise size of a proof depend on the particular formalism, but some relatively simple formulas may only have «existing» proofs that «cannot be found» in practice because they would be too long, larger even than the number of atoms in the visible physical Universe (as illustrated by Gödel's speed-up theorem). Within limited resources, there may be no way to distinguish whether a formula is truly unprovable or a proof has simply not yet been found.

To include such cases, the universal concept of provability (the existence of a proof) has to be defined in the abstract. Namely, it can be expressed as a formula of first-order arithmetic (the first-order theory of natural numbers with the operations of addition and multiplication), made of one existential quantifier that is unbounded in the sense of arithmetic (∃p, …), where p is an encoding of the proof, applied to a formula where all quantifiers are bounded, i.e. with finite range (∀x < (...), ...), expressing a verification of this proof.
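This logical shape (one unbounded ∃ over an encoded proof, wrapping a purely bounded verification) can be sketched with a toy stand-in for a formal system. The system here is hypothetical, chosen only for brevity: its single axiom is the string "A", and its single inference rule derives s + "I" from any derived string s.

```python
# Toy illustration of the shape Prov(A) = "there EXISTS a proof p such that
# verify(p, A)", where verify performs only bounded, mechanical checks.
# Hypothetical system: axiom "A"; rule: from s derive s + "I".

AXIOMS = {"A"}

def follows(s: str, t: str) -> bool:
    """Bounded check: does t follow from s by the (only) inference rule?"""
    return t == s + "I"

def verify(proof: list[str], formula: str) -> bool:
    """Purely bounded verification of a candidate proof: every check here
    ranges over the finite proof itself, never over all integers."""
    if not proof or proof[-1] != formula:
        return False
    if proof[0] not in AXIOMS:
        return False
    return all(follows(proof[i], proof[i + 1]) for i in range(len(proof) - 1))

# Provability of "AII" is witnessed by one concrete proof:
print(verify(["A", "AI", "AII"], "AII"))   # True: a valid derivation
print(verify(["A", "AII"], "AII"))         # False: a skipped step
```

The unbounded existential quantifier corresponds to searching over all candidate values of `proof`; only the inner verification is algorithmically bounded.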

However, once given an arithmetical formula known to be a correct expression of the provability predicate (all such formulas being provably equivalent to each other), it still needs to be interpreted.

#### Arithmetic truths

This involves the third concept of mathematical truth: realistic truth in first-order arithmetic. This is the ideally meant interpretation of arithmetic: the interpretation of ground formulas of first-order arithmetic in «the true set ℕ of all, and only all, really finite natural numbers», called the standard model of arithmetic. But any axiomatic formalization of arithmetic in first-order logic is incomplete, in both of the following senses:
• Due to the incompleteness theorem, the set of all real truths of arithmetic (formulas true in ℕ) cannot be exhaustively produced by any algorithm (using unlimited resources but a finite amount of initial information). Thus, they cannot all be logical consequences of any algorithmically produced axiomatic theory either (since logical deduction from given axioms is algorithmic). This first incompleteness result can be easily deduced from truth undefinability, as we already saw: since provability is definable, it cannot coincide with truth.
• Even if all real truths were abstractly taken as axioms, this still would not suffice to determine the model, as it cannot exclude non-standard models. (This last form of incompleteness has no name because it equally affects the first-order description of any infinite system, according to the Löwenheim-Skolem theorem, and is thus unremarkable.)
This incompleteness affects the provability predicate itself, though only on one side, as follows.
On the one hand, if the formula p(A) expressing the provability of a formula A is true, then it is provable: a proof of p(A) can in principle be produced by the following method in two steps:
1. Find a proof of A (as it is assumed to exist);
2. Process it by some automatic converter able to formally convert any proof of A into a proof that a proof of A exists.

On the other hand, it is not always refutable when false: no matter the time spent seeking in vain a proof of a given unprovable formula, we might still never be able to formally refute the possibility of finally finding a proof by searching longer, because of the risk that a formula is only provable by unreasonably long proofs.
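This one-sidedness can be illustrated with a budgeted proof search in a hypothetical toy system (axiom "A", rule s → s + "I"): finding a proof settles provability for good, but exhausting the budget proves nothing, since the formula might be unprovable, or provable only by a proof exceeding the budget.

```python
# Semi-decision procedure with a resource budget. A positive answer is
# definitive; "unknown" is NOT a refutation of provability.

def search(formula: str, max_steps: int) -> str:
    """Enumerate derivations of the toy system up to max_steps."""
    current = "A"                  # the only axiom
    for _ in range(max_steps):
        if current == formula:
            return "proved"        # a proof was found: definitive
        current += "I"             # apply the only inference rule
    return "unknown"               # unprovable, or proof too long?

print(search("AII", 10))   # proved
print(search("B", 10))     # unknown (in fact unprovable, but the
                           # search alone cannot tell)
print(search("AII", 2))    # unknown (provable, yet the budget was too small)
```

The two "unknown" outcomes are indistinguishable from inside the search, which is exactly the situation described above for real formal systems under limited resources.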

In the absence of any possible fixed ultimate algorithm to produce all truths of arithmetic, we may look for partial solutions: algorithms producing endless lists of ground arithmetic formulas with both of the following qualities:
• Infallible (the output stays included in the set of truths describing the standard model of arithmetic);
• Large (as compared by inclusion with other algorithmically producible sets of truths).
A natural method to progress in the endless (non-algorithmic) search for better and better algorithms for the second quality, without breaking the first, consists in developing formalizations of set theory describing larger and larger universes beyond the infinity of ℕ, in which properties of ℕ can be deduced as particular cases. Indeed, if a set theory T' requires its universe to contain, as a set, a model U of a set theory T, then the arithmetic formula of the consistency of T will be provable in T' but not in T, while all arithmetic theorems of T remain provable in T' if T' describes U as standard.
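The two qualities can be pictured with a toy enumerator, under wholly hypothetical conventions: "truths" are all strings of the form "A" + n·"I" or "B" + n·"I", a theory is a set of axiom strings, and its theorems are the axioms closed under the rule s → s + "I". A theory with more axioms plays the role of a stronger set theory: its output remains infallible but grows strictly larger.

```python
from itertools import islice

# Hypothetical "intended model": true strings are A, AI, AII, ... and
# B, BI, BII, ...
TRUE_PREFIXES = {"A", "B"}

def is_true(s: str) -> bool:
    return s[:1] in TRUE_PREFIXES and set(s[1:]) <= {"I"}

def theorems(axioms):
    """Endless, algorithmic enumeration of all theorems of the theory."""
    layer = list(axioms)
    while True:
        yield from layer
        layer = [s + "I" for s in layer]

weak   = list(islice(theorems(["A"]), 6))        # analogue of a theory T
strong = list(islice(theorems(["A", "B"]), 6))   # analogue of a stronger T'

print(weak)    # ['A', 'AI', 'AII', 'AIII', 'AIIII', 'AIIIII']
print(strong)  # ['A', 'B', 'AI', 'BI', 'AII', 'BII']
# Both outputs are infallible (only truths), but the stronger theory
# eventually proves truths the weak one never reaches (all the B-strings).
```

Of course, real set theories are not reached by merely adding axioms mechanically; the point is only the inclusion relation between infallible outputs.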

#### Set theoretical truths

The above can be read as an indispensability argument for our last concept of truth: the truth of set theoretical statements. To progress beyond logical deduction from already accepted statements, more set theoretical axioms need to be introduced, motivated by Platonist arguments for the real existence of some standard universes where they are true; the validity of such arguments needs to be assessed in intuitive, not purely formal ways, precisely in order to do better than any predefined algorithm. Arguments for a given axiomatic set theory lead to arithmetic conclusions:
1. The statement of the formal consistency of this set theory;
2. The arithmetic theorems we can deduce in its framework.

These two conclusions should not be confused:

• By the second incompleteness theorem, 1. cannot come as a particular case of 2. (unless it is false), even though it must be true for 2. to be consistent and thus of any interest. We can already see how the naive attempt to deduce 1. from 2. fails: consistency is logically deducible from the existence of a model, but truth undefinability prevents the theory from using its own current model as an example of an existing model in which to interpret its formulas and thus witness their consistency.
• The reason for the truth of 2. refers to the existence of a standard model of this theory, while 1. only ensures the existence of models which may be non-standard.
But as the objects of these conclusions are mere properties of finite systems (proofs), their meaning stays unaffected by any ontological assumptions about infinities, including the finitist ontology (which denies the reality of any actual infinity, whatever such a philosophy might mean). It is then hard to see how their reliability could be meaningfully challenged by philosophical disputes on the «reality» of the abstractions beyond them (universes), just because they were motivated by these abstractions.
But then, the statement of consistency (1.), together with the mere existence of ℕ, suffices to let models of this theory really exist (non-standard ones, but working the same as standard ones).

For logical deduction from set theoretical axioms to be a good arithmetic truth searching algorithm, these axioms must be:
• Sound = keeps accepting some standard universes (to keep the output infallible);
• Strong = rejects "small" standard universes = sets a high minimum "size" on the hierarchy of its sub-universes (to make the output large).
But for a collection of such axioms to keep these qualities when put together in a common theory, they need to be compatible, in the sense that their conjunction remains sound. Two such statements might be incompatible either if one of them limits the size of the universe (which it shouldn't), or if each statement (using both kinds of open quantifiers when written in prenex form) endlessly alternates between truth and falsity as the universe expands, in such a way that they are no longer true together in any standard universe beyond a certain size (their conjunction must not limit the size of the universe either). The question is: on what sort of big standard universes might good axioms most naturally be true together?

A standard universe U' might be axiomatically described as very big by setting it a little bigger than another very big one U, but the size of this U would need a different description (as it cannot be proven to satisfy the same axioms as U' without contradiction), but of what kind ? Describing U as also a little bigger than a third universe and so on, would require the axioms to keep track of successive differences. This would rapidly run into inefficient complications with incompatible alternatives, with no precise reason to prefer one version against others.

The natural solution, both for philosophical elegance and for the efficiency and compatibility of axioms, is to focus on the opposite case: universes described as big by how much bigger they are than any smaller one (like how we conceived an ultimate universe as the union of a standard multiverse). Axioms must be

• Open = satisfied by any universe where eternity is a sufficiently long time especially towards the end (= open universes).

This is also convenient because such descriptions are indeed expressible by axioms interpreted inside the universe, with no need for any external object. Indeed, if a property were only expressible using an external object (regarding this universe as a set), we could replace it by describing instead our universe as containing a sub-universe of this kind (without limiting its size beyond it), and why not also endlessly many sub-universes of this kind, forming a standard multiverse: stating that every object is contained in such a sub-universe. This is axiomatically expressible using objects outside each of these sub-universes, but inside our big one; and such axioms will fit all three above qualities.

Finally, the properly understood meaning of set theory is neither axiomatic nor realistic, but some fuzzy intermediate between both: its axioms aim to approach all three qualities (strong and open but still sound), selecting universes with the corresponding three qualities (big and open but still standard); but these qualities are all fuzzy, and any specific list of axioms (resp. universe) can only approach them, while this quest can never end. Fortunately, rather simple set theories such as ZF already satisfy these qualities to a high degree, describing much larger realities than usually needed. This is how a Platonic view of set theory (seeing the universe of all mathematical objects as a fixed, exhaustive reality) can work as a good approximation, though it cannot be an exact, absolute fact.

