Theorem. Let T be a theory able to describe itself, in the sense that we can define in it (and thus find in every model M it may have) a system T′ representing an image of the first two levels of T (types, structure symbols, and the notion of expression made of them) in this sense:
Proof. Let H(y) be the expression F(y : J(y)), with the variable y ranging over the class of expressions of T′ with the same type (in T′) as F (in T), except for the intended type of their distinguished variable (which will range over expressions with one more variable; but such a difference in the intended type of a variable does not affect the formal processes over expressions using it), and actually the same type as H.
Then let X be the term (t_H : J(t_H)). Thus F(X), that is F(t_H : J(t_H)), can also be abbreviated as H(t_H).
Thus T ⊢ X = (t_H : t_{t_H}) = t_{F(X)} ∎

This is itself a theorem of finitistic mathematics, i.e. it refers to a specific method (algorithm) which, given as input the data of some finite systems (expressions, proofs), provides related systems as output after finitely many operations (not unreasonably more than the size of the input). It is therefore valid in the framework of any theory able to describe expressions (such as arithmetic, through Gödel's constructions), which may differ from (be weaker than) the theory T it describes (such as a set theory). The next theorems will also be of this kind.
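In computing terms, this diagonal construction is that of a quine. As a minimal sketch (the names diag and fixed_point are illustrative, not from the text), it can be played out on Python strings, with repr in the role of the quoting function J:

```python
def diag(s):
    """Diagonalization: substitute into s the quote of s itself
    (the analogue of forming H(t_H) from H)."""
    return s % repr(s)

def fixed_point(F):
    """Given a one-hole template F (the analogue of the predicate F),
    return a string X whose hole contains an expression that evaluates
    back to X itself: X "speaks about" its own quote."""
    H = F % "diag(%s)"   # H(y): apply F to "y applied to its own quote"
    return diag(H)       # X = H(t_H)

# Demo with a "predicate" about a quoted sentence:
F = "len(%s) > 10"
X = fixed_point(F)
inner = X[len("len("):-len(") > 10")]  # the expression filling F's hole
assert eval(inner) == X                # it denotes X itself
```

The hole of X ends up filled by an expression denoting X itself, just as X above is a term denoting the expression F(X).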
Theorem. Under the above assumptions on the theory T, for any invariant unary predicate v with domain the class of ground formulas of T′, there is a ground formula G of T on which v necessarily gives the wrong value: T ⊢ G ⇎ v(t_G).

Proof: by the self-quotation theorem, taking ¬v as F. ∎
Among unary predicates v over all ground formulas, the logically consistent ones, which we can call interpretations, represent classes of models for the relation of elementary equivalence (bijectively, if we consider all unary relations disregarding their definability). The truth undefinability theorem implies that any invariant interpretation can only describe a class of models not containing the current one.

In particular, arithmetic cannot express the interpretation of all its own ground formulas in the standard model (the current interpretation, trivially interpreting the object symbols by those of the framework). Namely, while it can interpret by recursion all algebraic terms by a single formula, this cannot be extended to ground formulas with an unlimited number of quantifiers. Still, it can do things the other way round:
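The contrast between terms and quantified formulas can be illustrated by a toy encoding (hypothetical, not from the text): a single recursion, one clause per symbol, evaluates every variable-free term, whereas no such finite recursion can evaluate ∀n φ(n), which would require checking infinitely many instances:

```python
# Toy encoding of variable-free arithmetical terms as nested tuples:
# ("0",), ("S", t), ("+", t, u), ("*", t, u)
def eval_term(t):
    """One recursive clause per symbol suffices for all terms,
    mirroring the single arithmetical formula interpreting them."""
    op = t[0]
    if op == "0": return 0
    if op == "S": return eval_term(t[1]) + 1
    if op == "+": return eval_term(t[1]) + eval_term(t[2])
    if op == "*": return eval_term(t[1]) * eval_term(t[2])
    raise ValueError(op)

two = ("S", ("S", ("0",)))
assert eval_term(("+", two, ("*", two, two))) == 6  # 2 + 2*2
```

No analogous eval_formula exists for ground formulas with unbounded quantifier depth: each ∀ would demand infinitely many recursive calls, which is the obstruction stated above.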
Theorem. No algorithm can produce a consistent interpretation of all ground formulas of arithmetic.

Here is a proof a bit different from (simpler than?) Gödel's:
An algorithm that stops, deciding a formula as true or false (in standard time), keeps its result when interpreted in a non-standard model (the finiteness of its operations keeps it unaffected by non-standard numbers). The contradiction with the truth undefinability theorem refutes the existence of a deciding algorithm. ∎

Thus, the above described kind of arithmetical definition of an interpretation cannot be reduced to an algorithm, for the following reason. Each step of its "construction" involves a search for a proof (a refutation of a formula from previous axioms); some of these searches will necessarily be endless, and their results are interpreted differently between models: an algorithm (here a proof search) which does not stop in standard time may be reinterpreted in a non-standard model as stopping (after a non-standard number of operations, finding a proof of non-standard size).
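The asymmetry between halting and non-halting searches can be sketched as a semi-decision procedure (illustrative code, not the text's construction): when it halts, it does so after a standard finite number of steps, a fact every model must agree on; when it does not, its behaviour is exactly what a non-standard model can reinterpret:

```python
from itertools import count

def semi_decide_exists(phi):
    """Semi-decision of ∃n φ(n): halts (returning a witness) iff such
    an n exists, otherwise searches forever.  A halt after k standard
    steps is a finite, checkable fact; a run with no standard halting
    time is what a non-standard model may reinterpret as halting at a
    non-standard stage."""
    for n in count():
        if phi(n):
            return n

assert semi_decide_exists(lambda n: n * n == 49) == 7
```

A proof search for a given theory has exactly this shape: enumerate candidate proofs and stop at the first hit.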
Theorem. If T′ is built like T, which can express the provability predicate p in T′, and F is a ground formula, then:
1) (T⊢ F) ⇒ (T⊢ p[F])
2) (T⊢ ¬p) ⇔ (T⊢ 0) (Gödel's second incompleteness theorem: no consistent theory can prove its own consistency)
3) (T⊢ p[F] ⇒ F) ⇔ (T⊢ F) (Löb's theorem, from which «This claim is provable» is deduced)
Proof:
1) The proof of F in T can be copied into T′, or converted into a proof of the existence of a proof.
2) Let T⊢ (G ⇎ p[G]). The proof (T⊢ G) ⇔ (T⊢ G ∧ p[G]) ⇔ (T⊢ 0), formalized in T, gives T⊢ (p[G] ⇔ p). Thus (T⊢ ¬p) ⇔ (T⊢ ¬p[G]) ⇔ (T⊢ G) ⇔ (T⊢ 0).
3) From 2) applied to the theory (T + ¬F). Finally take F = «F is provable», i.e. T⊢ (F ⇔ p[F]). ∎
So, while the rules of provability are formally well-defined (as they have been effectively expressed by Gödel in the language of arithmetic), the interpretation of these rules, and thus the very question of the provability of some formulas from some specified theory, will sometimes remain undecidable, because they depend on ℕ (or equivalently, on the concept of finiteness), which admits non-standard models. Meanwhile, "proofs with non-standard length" are not real valid proofs: an abstract "proof" needs to be of standard length to be valid, but while the notion of standardness seems philosophically clear, it is finally relative from a mathematical viewpoint.
One might object that the countable model made by the construction of the completeness theorem only simulates a powerset, not of the true ℕ, but of a non-standard model of ℕ. Indeed, the models provided by the precise construction we described in the proof of the completeness theorem all contain non-standard natural numbers.
However, the argument of Skolem's paradox still holds, in two ways.

On the one hand, such a model is still, anyway, a countable simulation of the powerset of a countable set: as (inside a fixed model of set theory) bijections between sets provide bijections between their powersets, the oddity of this being lost when comparing the interpretations of countability between different models (thus getting a "powerset" that cannot be the "true" one) remains intact. We can also consider the correspondence given by the embedding of the standard (external) ℕ into the internal (non-standard) one: being externally countable, the internal "P(ℕ)" cannot exhaust the external powerset, not only of the internal ℕ, but even (by restriction of the ∈ predicate) of its (external) subset of standard numbers.
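Both remarks rest on Cantor's theorem that no countable family exhausts the powerset of ℕ; its diagonal argument fits in a few lines (illustrative code, not from the text):

```python
def diagonal_complement(subsets):
    """Given an enumeration n ↦ subsets[n] of sets of naturals
    (as characteristic functions), return a set differing from each
    subsets[n] at the point n: Cantor's diagonal argument."""
    return lambda n: not subsets[n](n)

evens = lambda n: n % 2 == 0
squares = lambda n: int(n ** 0.5) ** 2 == n
d = diagonal_complement([evens, squares])
assert d(0) != evens(0) and d(1) != squares(1)
```

Externally, a countable model's "P(ℕ)" is such an enumeration, so it misses subsets; internally, the enumeration itself does not exist, which is how the paradox dissolves.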
On the other hand, for example, in the framework of set theory with the axiom of choice, there is another construction of a countable model of (the first-order theory of) second-order arithmetic, with the "true" (standard) ℕ but a different (countable) P(ℕ), with an elementary embedding from this model (ℕ, P(ℕ)) to the "true" one.
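That construction can be sketched as a standard downward Löwenheim–Skolem argument (the details here are the usual ones, not spelled out in the text):

```latex
\text{For each formula } \varphi(x,\bar{y}) \text{, choose (by AC) a Skolem function } h_\varphi \text{ satisfying}\\
\exists x\,\varphi(x,\bar{a}) \;\Rightarrow\; \varphi(h_\varphi(\bar{a}),\bar{a}).\\
\text{Let } M \text{ be the closure of } \mathbb{N} \text{ under all the } h_\varphi.\\
\text{Then } M \text{ is countable, contains the true } \mathbb{N}, \text{ and by the Tarski--Vaught test}\\
\text{the inclusion } M \preccurlyeq (\mathbb{N}, P(\mathbb{N})) \text{ is an elementary embedding.}
```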