Kurt Gödel's famous incompleteness theorem comes from finding a way to write a formula saying "This formula is unprovable", then analyzing its status. The resulting paradox affects arithmetic (and thus any theory containing it), as he showed through the very hard work of developing proof theory inside arithmetic itself.

Let us focus on the most interesting aspects of this result and its proof, by directly assuming the framework of proof theory in which they are naturally expressed. Two ingredients play the key roles:

- the possibility for a formula to quote itself as an object;
- the truth predicate over ground formulas taken as objects.

To avoid having multiple copies of "the same expression" among objects, let us assume a formalization generalizing the concept of term algebra to the use of all needed symbols (connectives and quantifiers included). (Another method would be to work with equivalence classes of expressions, perhaps implicitly, by only using operations compatible with this equivalence.)

**Theorem.** Let *T* be a theory able to describe
itself, in the sense that we can define in it (and thus find in every
model *M* it may have) a system *T*′ representing an image of the first 2 levels of *T* (types, structure symbols,
and the notion of expression made of them) in the following sense:

- For any expression *F* in *T*, there is a ground term [*F*] in *T*, which is the quotation of *F*: in any model, [*F*] designates the copy of *F* in *T*′. *T* can describe this quotation process in *T*′ by a unary term *J*, taking argument in some class of expressions of *T*′, and values in the class of ground terms of *T*′: for any expression *F* in *T* we have *T* ⊢ *J*[*F*] = [[*F*]].
- It can define formal substitutions (compositions) in *T*′: between variables *A* (ranging over expressions of *T*′ with a distinguished free variable and a given list of others) and *u* (ranging over ground terms of *T*′), the operation (*A* : *u*) gives the expression in *T*′ obtained by replacing the variable of *A* by *u*. So, for any unary formula *F* and ground term *K* of *T* we have *T* ⊢ [*F*] : [*K*] = [*F*(*K*)].

Proof.

Let *H*(*y*) be the expression *F*(*y* : *J*(*y*)), with variable *y* ranging over the class of expressions of *T*′ with the same type (in *T*′) as *F* (in *T*), except for the intended type of their distinguished variable (which will range over expressions with one more variable, but such a difference on the intended type of a variable does not affect the formal processes over expressions using it); and actually the same type as *H*.

Then let *X* be the term ([*H*] : *J*[*H*]). Thus *F*(*X*), that is *F*([*H*] : *J*[*H*]), can also be abbreviated as *H*[*H*].

Thus *T* ⊢ *X* = ([*H*] : [[*H*]]) = [*F*(*X*)]. ∎

This is itself a theorem of finitistic mathematics, i.e. it refers to a specific method (algorithm) which, given as input the data of some finite systems (expressions, proofs), provides related systems as output after finitely many operations (not unreasonably more than the size of the input). Therefore it is valid in the framework of any theory able to describe expressions (such as arithmetic, through Gödel's constructions), which may differ from (be weaker than) the theory *T* under study.
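In a toy setting, the two clauses of the theorem can be checked mechanically. The following sketch (a hypothetical encoding, far simpler than Gödel's arithmetization) represents expressions as nested tuples, quotation as a wrapping constant, and the composition `:` as formal substitution performed on quotes, so that the identity [*F*] : [*K*] = [*F*(*K*)] holds by computation:

```python
def quote(expr):
    """[F]: encode an expression as a ground term (here: wrap it in 'q')."""
    return ('q', expr)

def subst(expr, name, value):
    """Replace the variable `name` by `value` throughout the expression."""
    if expr == ('var', name):
        return value
    if not isinstance(expr, tuple) or expr[0] == 'var':
        return expr
    return (expr[0],) + tuple(subst(c, name, value) for c in expr[1:])

def compose(qF, qK, name='x'):
    """([F] : [K]): formal substitution carried out on quoted expressions."""
    return quote(subst(qF[1], name, qK[1]))

F = ('not', ('P', ('var', 'x')))   # the unary formula F(x) = ¬P(x)
K = ('0',)                         # a ground term
# The key identity  [F] : [K] = [F(K)]  holds by construction:
assert compose(quote(F), quote(K)) == quote(subst(F, 'x', K))
print(compose(quote(F), quote(K)))
# → ('q', ('not', ('P', ('0',))))
```

The point of such a representation is that syntactically equal expressions are equal objects, matching the term-algebra formalization assumed above.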

**Theorem.** Under the above assumptions on theory *T*,
for any invariant unary predicate *v* with domain the class of
ground formulas of *T*′, there is a ground formula *G* of *T* on which *v*
necessarily gives the wrong value: *T* ⊢ (*G* ⇎ *v*[*G*]).
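This follows by applying the self-quotation theorem to the unary formula ¬*v*; a sketch of the derivation, in the notations above:

```latex
% Self-quotation applied to F := \neg v gives a ground term X with
T \vdash X = [\neg v(X)]
% Let G be the ground formula \neg v(X), so that T \vdash X = [G]; then
T \vdash \big( G \Leftrightarrow \neg v[G] \big),
\quad\text{i.e.}\quad
T \vdash \big( G \not\Leftrightarrow v[G] \big). \qquad \blacksquare
```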

Among unary predicates *v* over all ground formulas, the logically
consistent ones, which we can call *interpretations*, represent classes of models under the relation
of elementary equivalence
(bijectively, if we consider all unary relations regardless of their definability).
The truth undefinability theorem implies that any invariant interpretation
can only describe a class of models not containing the current one.
Somehow, any "construction" of a model depends on the framework of a preexisting
universe (with its model of arithmetic), whose standardness cannot be defined
in the absolute.

Such a construction can be done in two steps:

- define a formal interpretation of all ground formulas of a given consistent theory (assuming its consistency, but not adding this assumption as an axiom to the object-theory thus assumed to be consistent), by adding to the axioms, one by one, each ground formula consistent with the previous ones (like in the proof of the completeness theorem);
- then define a model which fits with this interpretation.
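The first step can be sketched as follows; here `consistent` is a hypothetical oracle deciding whether a finite list of axioms admits no refutation, which (as the theorem below shows) cannot be an algorithm for arithmetic. A toy propositional case, where a refutation is just a directly opposite pair, serves as a runnable illustration:

```python
def build_interpretation(ground_formulas, consistent):
    """Completeness-style construction: accept each ground formula that
    remains consistent with those accepted so far; rejected formulas are
    interpreted as false (their negation follows from the accepted ones)."""
    accepted = []
    for f in ground_formulas:
        if consistent(accepted + [f]):
            accepted.append(f)
    return accepted

# Toy illustration (hypothetical, purely propositional): "formulas" are
# signed letters (name, sign); a list is consistent iff no letter occurs
# with both signs. For arithmetic, `consistent` would require an endless
# proof search and cannot be implemented by any algorithm.
def toy_consistent(axioms):
    return not any((p, not s) in axioms for (p, s) in axioms)

formulas = [('a', True), ('a', False), ('b', False), ('b', True)]
print(build_interpretation(formulas, toy_consistent))
# → [('a', True), ('b', False)]
```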

This affects provability in arithmetic, which is a particular use of second-order logic, admitting no complete rules of proof similar to what the completeness theorem gives to first-order logic. The use of strong axiomatic systems contributes to reducing the margin of undecidability, by the ability of bigger universes to process the definition of some subsets of given sets that smaller universes could have ignored, thus forcing these subsets to be present in the considered universe.

We can actually build countable models of set theory whose set of
integers remains standard: in a set theoretical framework with the
axiom of choice, we can take a standard sub-universe *U*
then construct a countable sub-universe of *U* with an elementary
embedding into *U*, by using the axiom of choice to interpret in
*U* the operator symbols from all existence formulas true in *U*.
This keeps the "true" (standard) ℕ but gives a different
(countable) ℘(ℕ).

**Theorem.** No algorithm can produce
any consistent interpretation of all ground formulas of arithmetic.

An algorithm that stops, deciding a formula as true or false (in standard time), will keep its result when interpreted in a non-standard model (the finiteness of its operations keeps it unaffected by non-standard numbers). The contradiction with the truth undefinability theorem refutes the existence of such a deciding algorithm. ∎

Thus, the above described kind of arithmetic definition of an interpretation cannot be reduced to an algorithm, for the following reason. Each step of its "construction" involves a search for a proof (a refutation of a formula from the previous axioms); among these searches there will necessarily be endless ones, whose result is interpreted differently between models: an algorithm (here a search for a proof) which does not stop in standard time may be reinterpreted in a non-standard model as stopping (after a non-standard number of operations, finding a proof of non-standard size).

**Theorem.** If *T*′ is built like *T*, which can express the provability predicate *p*
in *T*′, and *F* is a ground formula, then:

1) (*T*⊢ *F*) ⇒ (*T*⊢ *p*[*F*])

2) (*T*⊢ ¬*p*[0]) ⇔ (*T*⊢
0) (Gödel's second incompleteness theorem: no consistent theory
can prove its own consistency)

3) (*T*⊢ *p*[*F*] ⇒ *F*) ⇔ (*T*⊢
*F*) (Löb's theorem, from which «This claim is provable» is
deduced)

Proofs:

1) The proof of *F* in *T* can be copied into *T*′, or converted into a proof of the existence of a
proof.

Contrapositive viewpoint, if *T* can describe models:
every model of some *T*′ is a model
of *T*.

2) Let *T*⊢ (*G* ⇎ *p*[*G*]). The proof (*T*⊢
*G*) ⇔ (*T*⊢ *G*∧*p*[*G*]) ⇔ (*T*⊢
0) formalized in *T* gives *T*⊢(*p*[*G*]⇔*p*[0]).
Thus (*T*⊢ ¬*p*[0]) ⇔ (*T*⊢ ¬*p*[*G*]) ⇔
(*T*⊢ *G*) ⇔ (*T*⊢ 0).

3) From 2) applied to the theory (*T* + ¬*F*). Finally,
take *F* = «*F* is provable», i.e. *T*⊢ (*F* ⇔ *p*[*F*]).
∎
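The reduction in 3) can be unfolded: writing *p*′ for provability in *T*′ + ¬*F*, a refutation of ¬*F* amounts to a proof of *F*, so clause 2) applied to the theory *T* + ¬*F* reads:

```latex
% Clause 2) for T + \neg F :
(T + \neg F \vdash \neg p'[0]) \Leftrightarrow (T + \neg F \vdash 0)
% with the two translations
T \vdash (p'[0] \Leftrightarrow p[F]),
\qquad (T + \neg F \vdash 0) \Leftrightarrow (T \vdash F)
% which together give Löb's theorem:
(T \vdash p[F] \Rightarrow F) \Leftrightarrow (T \vdash F)
```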

So, while the rules of provability are formally well-defined (as they have been effectively expressed by Gödel in the language of arithmetic), the interpretation of these rules, and thus the very question of the provability of some formulas from some specified theory, will sometimes remain undecidable (relative to a universe with its interpretation of standardness).

A version of self-quotation can be expressed for the case of algorithms.
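For instance, a *quine* (a program printing its own source code) realizes the fixed point *X* = ([*H*] : *J*[*H*]) algorithmically: a template applied to the quote of that very template. A minimal sketch in Python:

```python
import io, contextlib

# The string s plays the role of the expression H; s % repr(s) plays the
# role of the substitution ([H] : [[H]]): H applied to its own quotation.
s = 's = %s\nprint(s %% repr(s))'
src = s % repr(s)   # the full two-line program, built from its own quote

# Running that program prints exactly its own source code:
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(src)
assert buf.getvalue() == src + '\n'
print(src)
```

Here `repr` plays the role of the quotation [·], and the `%` substitution that of the composition (*A* : *u*).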

Is it worth generalizing the self-quotation theorem by taking a *T*′ with an explicitly richer language than *T*, where *F* only exists in *T*′, named by a ground formula of *T* which may not be the quote of a formula of *T*? Should we then re-write the proof for this case (and how), or can it just be deduced by re-applying the self-quotation theorem already formulated at the next level of the hierarchy? Might this provide a way to simplify the proof of the second incompleteness theorem? While it is already rather short, we may still wonder whether it can be simplified further by doing each thing at its right level of the hierarchy, in case that is not so yet...

In this second incompleteness theorem we might distinguish two external frameworks: the most external one being totally finitistic, while the more internal one would be arithmetic with its concept of provability involving a quantifier (the existence of a proof); the third, most internal level being any theory (containing arithmetic).

We may also introduce other theorems: the speed-up theorem and Chaitin's theorem are particularly interesting.
