1.B. Introduction to incompleteness
Truth undefinability
The incompleteness phenomenon (time order of interpretations) appears, in the case of the interpretation of statements, in the form of the Truth Undefinability Theorem. Let us
introduce some details about it (sketches of full proofs will only be given in 5.7, still skipping
some tedious steps).
Any theory 1TT able to describe a theory (beyond the most trivial ones)
can also be developed to describe any well-describable theory. Namely, we can start
with a 1TT having a general notion of "expression" (in any language),
then define there the class of expressions using only some specific language; then
provability from given axioms is simply definable from logical validity.
Now let T be any theory which contains 1TT, or which can be
developed to produce it. For example it can be some set theory (even without the axiom of infinity,
replaced by some tools to work with classes instead of sets...).
Then such a T is able to describe itself precisely when it is well-describable. But for now we only need
general notions of expressions and statements which will contain images of those of T.
The possibility to specifically describe T will only be used later, for the Incompleteness Theorems.
Let S be the meta-set of statements of T, and S the class defined in T to
play the role of S in any model M of T.
From any statement F∈S we can form a ground term of T we shall denote
as its quotation "F", which aims to designate the object
playing the role of that statement : M⊨"F"∈S.
Let S_{1} be the meta-set of all formulas of T with only one free variable
(with type S but this type detail may be ignored),
meant to define invariant sub-classes of S (candidate classes of "true statements").
Truth Undefinability Theorem (weak version). For any model M of T,
the meta-class {A∈S | M⊨A} of statements true
in M, cannot agree with any invariant class of objects "statements" in M :
∀C∈S_{1}, ∀M⊨T, ∃A∈S, M⊨(A ⇎ C("A"))
Truth Undefinability Theorem (strong version). ∀C∈S_{1},
∃A∈S, T ⊢ (A ⇎ C("A")).
The tradition focuses on proving the strong version (details in Part 5) : the proof using the liar paradox
provides an explicit A, defined as ¬C("A"), where
the quote "A" is obtained in an explicit (finitistic) though complex way. So it
gives the "pure" information of a "known unknown" : a necessary error on an explicit A
"simply" made of ¬C, with only the more complex part of the argument providing the quote
"A" by an explicit (finitistic) process of self-reference, tricky to form and understand.
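As a loose illustration (not the arithmetical construction itself), the self-referential quoting step can be mimicked by string diagonalization, with Python's `repr` playing the role of quotation; the function name `diagonalize` and the template are my own illustrative choices:

```python
def diagonalize(template):
    """Apply a one-placeholder template to the quotation of
    the very expression that performs this application."""
    # d is an expression which, when evaluated, rebuilds the result
    d = "diagonalize(" + repr(template) + ")"
    return template.format(q=repr(d))

# A liar-style sentence: not C("...") where the quoted expression,
# if evaluated, reproduces the whole sentence itself
A = diagonalize("not C({q})")
```

Evaluating the expression quoted inside A yields A itself, which is the essence of obtaining a sentence A of the form ¬C("A").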
But a different proof of the weak version is possible (strangely unknown in the literature), from the Berry
paradox (technical details,
involving subtleties in the foundations of arithmetic, were moved to Part 4) : from a definition of the truth of statements
one can define which formula defines which number, and thus define a number in the
style of "the smallest number not definable in less than twenty words" which would lead to contradiction.
This proof is not only somewhat simpler and more obvious to accept, it also provides different
information: it shows the
pervasiveness of the "unknown unknown", giving a range of statements A which are "less pure" in their
kind but less complex in size, among which an "error" must exist, without specifying where
(it depends on which number would be so "defined").
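As a toy rendering of the Berry argument: if a decidable predicate `defines(formula, n)` were available (which is exactly what a definition of truth would grant), the following sketch could compute a Berry-style number; the function names, the formula alphabet and the search bound are all illustrative assumptions:

```python
from itertools import product

def berry_number(defines, max_len, search_bound):
    """Smallest number not defined by any formula of fewer than
    max_len symbols, given a (hypothetical) decidable predicate
    defines(formula, n). That this would be computable is the
    source of the contradiction."""
    alphabet = "0123456789+*<=()x"   # toy formula alphabet (assumption)
    definable = set()
    for length in range(1, max_len):
        for chars in product(alphabet, repeat=length):
            formula = "".join(chars)
            for n in range(search_bound):
                if defines(formula, n):
                    definable.add(n)
                    break
    # return the smallest number defined by no short formula
    n = 0
    while n in definable:
        n += 1
    return n
```

With the toy interpretation where a formula defines n exactly when it is the decimal numeral of n, formulas of fewer than 3 symbols define exactly 0..99. With a genuine truth predicate, this procedure would itself be a short definition of its own output, hence the paradox.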
Let us draw a first direct consequence. Consider some "almost complete" foundational theory T, i.e.
a first-order theory containing TT and, for instance, approaching by second-order universal elimination a second-order
theory which is complete, i.e. fully defining its intended standard model M. Even then, no invariant
predicate defined by T in M (such as "truth" predicates mentioned in 1.A) can ever
be the "right" truth predicate over the class of "statements of T", i.e. giving to all statements of
T the values they take in the same M. Therefore, any model of T defined by
T in M and matching a T-invariant "truth" predicate (such models must exist thanks to
consistency) will necessarily be non-standard, differing from M in its interpretations of statements.
The hierarchy of formulas : existence and provability
To decipher the facts of truth undefinability and incompleteness, formulas can be analyzed syntactically to specify
the sources of possible undecidability or other kinds of indefiniteness.
So to speak, binders are the main source of indefiniteness. Thus, the most definite formulas are those
without binder ; then, intuitively, the definiteness of a given binder is a matter of how set-like its range is: in set theories,
open quantifiers (ranging over the universe or a given class) are less definite than bounded quantifiers
(∃x∈ and ∀x∈ ) and other binders whose range is a set.
For FST, the condition for a class to be a set is to be finite (skipping more technical details). Similarly
in arithmetic, quantifiers can be bounded using the order : (∃x<k, ) and (∀x<k, )
respectively abbreviate
(∃x, x<k ∧ ) and (∀x, x<k ⇒ ) and the same for ≤.
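In computational terms, bounded quantifiers are terminating searches; a minimal Python rendering (function names are mine):

```python
def exists_below(k, A):
    """(∃x<k, A(x)) — a search over finitely many values,
    guaranteed to terminate."""
    return any(A(x) for x in range(k))

def forall_below(k, A):
    """(∀x<k, A(x)) — likewise a finite, terminating check."""
    return all(A(x) for x in range(k))
```

An open quantifier, by contrast, has no such finite range to exhaust, which is the source of its lesser definiteness.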
Let us qualify a statement as existential if it is made of an open ∃ as root, followed by a formula without open
quantifier. Existential statements of FST or equivalent theories (TT and Z_{1}) are essentially the same as
halting conditions of algorithms (processed by
an abstract "computer" with unlimited available time and resources):
- Any existential statement ∃x, A(x) of FST is the halting condition of an algorithm
that enumerates every x, interprets A over each, and stops when it finds A true somewhere;
- The halting condition of any algorithm is expressible as an existential statement
∃t, A(t) of FST where, roughly speaking (details are of course more complex), A claims
that this algorithm will halt at time t.
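The first bullet can be sketched as a semi-decision procedure, where `A` stands for a decidable property and the enumeration stands for the range of all objects (the code is illustrative):

```python
from itertools import count

def semi_decide_exists(A):
    """Halting condition of (∃x, A(x)): enumerate all candidates
    and halt exactly when a witness is found; run forever otherwise."""
    for x in count():   # stands for an enumeration of all objects
        if A(x):
            return x    # halting <=> the statement is true
```

The procedure halts if and only if the existential statement is true, which is the announced correspondence.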
They are also crucially linked to the concept of provability, in both of the following ways.
- If an existential statement (∃x, A(x)) of FST is true in its standard model then
it is also provable in FST (so both conditions are equivalent), i.e. also true in all models of FST. Indeed any
such x from the standard model can in principle be explicitly designated (quoted), thus also designating
an element satisfying A in any other model (elements so designated there are themselves called
standard), and giving a proof of this statement by existential introduction. Similarly, standard
numbers are those which can in principle be written as some 1+...+1.
- The concept of provability in any explicit theory is expressible as an existential statement of FST. Indeed,
this is what any concept of proof for any logical framework is expected to look like. In practice, usual
examples consist of
- giving a first-order theory with a (TT-invariant) class of axioms defined without open quantifier
(but a definition by an existential formula would fit too, which amounts to the range of all outputs of
a given algorithm);
- deducing theorems from these axioms by first-order logic: the class of these theorems is finally expressible by
an existential formula.
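Accordingly, provability itself takes the shape of a semi-decidable search over candidate proofs; a sketch assuming a hypothetical decidable verifier `check_proof`:

```python
def provable(statement, check_proof, max_steps=None):
    """Semi-decide provability: enumerate candidate proofs p = 0, 1, 2, ...
    (under some encoding) and halt when one is a valid proof of the
    statement. A bounded search that fails is inconclusive, since a
    proof may simply be larger than the bound."""
    p = 0
    while max_steps is None or p < max_steps:
        if check_proof(p, statement):
            return True
        p += 1
    return None   # unknown within the allotted search
```

The `None` outcome reflects that a vain bounded search refutes nothing: absence of short proofs does not exclude longer ones.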
Incompleteness theorems
The famous two incompleteness theorems by Gödel are respectively deducible from the weak and strong
versions of the truth undefinability theorem. The hardest step is to show that Z_{1} suffices to
develop TT, so that incompleteness properties of TT also affect Z_{1}.
This difficulty can be skipped by ignoring arithmetic and simply expressing incompleteness as
affecting TT and thus FST (admitting the development of TT from FST, which is easier).
The first theorem says that no algorithm can produce exactly all truths of arithmetic.
The deduction is very simple : for any algorithm, the class of objects it can produce is invariant. Therefore it
cannot coincide with the class of true statements of a theory to which the weak truth undefinability theorem applies,
namely the theory 1TT and any other theory containing it. Applying it to arithmetic also requires using
Gödel's work of developing 1TT from arithmetic. Without it we can still deduce the incompleteness of
second-order logic (its lack of complete proof algorithm) by a formalization of 1TT in second-order logic
which completely determines its truths.
(to be completed)
The size (complexity) of a proof reflects the number of elements
that need to be described in an attempted construction of a counter-example to a given
theorem, to observe the necessary failure of this construction (while the choice of formalization
only affects the size of proofs in more moderate proportions); this is not a priori
predictable from, say, the size of the statement, for reasons which will be developed later.
As will be shown
by the incompleteness
theorem, unclarities remain about provability and existence of models:
- The proven equivalence between consistency and existence of a model does not tell
whether these are both true, or both false:
- Provability, if true, can stay
practically unknowable: if a proof exists, nothing says how to find it; it may be
too big to be ever given.
- The consistency of a theory may be true but unprovable: completeness only ensures
that, in this case (and it may be undecidable whether it is the case or not), there also exist
universes in which, equivalently, this theory is inconsistent and it has no models.
For any two well-defined theories (in an algorithmic sense)
T, T' which encompass arithmetic (such as set theories), no formalization
suffices to make their provability formulas decidable for every value
of s, as soon as these aim to encode provability in a theory which also contains arithmetic.
But any axiomatic formalization of arithmetic in
first-order logic is incomplete, in both following senses of the question:
- The set of all real truths of arithmetic (formulas
true in ℕ) cannot be exhaustively produced by any algorithm (using
unlimited resources but a finite amount of initial information). Thus,
they cannot be all logical consequences of any algorithmically produced
axiomatic theory either (since logical deduction from given axioms is
algorithmic). This first incompleteness result can be easily deduced
from truth
undefinability, as we already saw : since provability is definable, it
cannot coincide with truth.
- Even when abstractly considering taking all real truths as axioms,
it still would not suffice to determine the model, as it cannot exclude
non-standard
models. (This last form of incompleteness does not have any
name because it equally affects the first-order description of any
infinite system, according to the Löwenheim-Skolem
theorem, and is thus unremarkable.)
This incompleteness affects the provability predicate itself, though
only on one side, as follows.
On the one hand, if the formula p(A) of provability
of a formula A is true, then it is provable: a proof of p(A)
can in principle be produced by the following method in 2
steps:
- Find a proof of A (as it is assumed to exist);
- Process it by some automatic converter able to formally
convert any proof of A into a proof that a proof of A
exists.
On the other hand, it is not always refutable when false : no matter
the time spent seeking in vain a proof of a given unprovable formula,
we might still never be able to formally refute the possibility to
finally find a proof by searching longer, because of the risk for a formula
to be only provable by unreasonably long proofs.
The time of proving
Among the possible formalizations of proofs are formulas V(p,s)
meaning that the number p encodes a valid proof of a
statement encoded by the number s, depending on unary predicate symbols
t,l,a which encode the classes of components of T
(its types, symbols and axioms ; the induction axioms of arithmetic would need to
admit the use of symbols t,l,a if these symbols
were not definable, while they usually are since we only consider definable theories).
The resulting provability formula (∃p, V(p,s)) is
independent of the chosen sound and complete way to define
V, in the sense that all such provability formulas are provably equivalent to each other
in first-order arithmetic. More formally, for any two such arithmetical formulas V, V' encoding
proof verification in the same theory,
(Arithmetic + symbols and axioms for t,l,a) ⊢ ∀s, (∃p, V(p,s)) ⇔ (∃p, V'(p,s))
However, to complete the definition of provability once written as a formula of this kind,
there remains the issue of interpreting the ∃p realistically,
in the standard model of arithmetic ℕ, intuitively described as the system
of only all actually finite natural numbers, as will be clarified in Part 4. This use of
the standard ℕ can be intuitively understood as the use of actual infinity, needed by
lack of predictable limits on the needed (minimal) sizes of proofs for given statements.
Indeed as will be
explained in 1.A, incompleteness theorems will undermine this definiteness in both
ways : establishing both the unpredictability of the size of needed proofs,
and the irreducible undecidability of some provability predicates in any algorithmic
formalization of arithmetic (while the realistic view sees them all as definite).
If no proof of a statement could be found within given
limited resources, it may still be a theorem whose proofs escaped the search because they may be
arbitrarily longer. Whether this is the case is often unpredictable, for deep theoretical reasons which will appear from the study of the
incompleteness theorem and related ones such as Gödel's
speed-up theorem :
- It is unlikely to be predicted in advance: intuitively,
a reliable prediction of existence of proof of A would be a small proof of existence of a
larger proof of A, and thus look like a small proof of A itself (though this implication
between provability properties cannot be itself a theorem);
- No amount of vain search can justify to give up, as a theorem may require
indescribably large proofs (thus beyond the resources of any computer or even all the information
that our visible universe may contain): by lack of inverse method (construction of proofs from the absence of counter
examples), no limit can be described for the size of the smallest needed proofs for given theorems
(or even logically valid statements beyond tautologies) that would only depend on the size of the
statement, beyond the simplest cases.