# Book Review: The Annotated Gödel

## A Gentle Guide Through Some Hairy Math

"This sentence is false." Anyone with a grade-school-level understanding of logic can see the contradiction there: if the sentence is true, it's false, and vice versa. But are there deeper implications to the fact that you can make paradoxical statements like that? Is it a problem for logic in general?

Turns out it's a pretty big problem! And not just for logic, but for most of mathematics. That's (more or less) what Kurt Gödel showed in his Incompleteness Theorems, published in 1931 in a paper titled "On Formally Undecidable Propositions of Principia Mathematica and Related Systems".

I've known the gist of Gödel's Incompleteness Theorems for a while, but I was a bit fuzzy on the details and Wikipedia only goes so far. It was tempting to go to the source, but reading a 1931 math paper translated from German seemed like a daunting prospect.

Thankfully, Hal Prince (a retired software engineer) has written a sort of "Idiot's Guide" to the 1931 paper, with the original 26-page paper interspersed with commentary and explanation designed for non-mathematicians (for a total of about 150 pages). The approach works beautifully: I have a pretty rudimentary logic background, but I understood about 70% of the paper thanks to the commentary. And you get a good sense of the beauty and genius of Gödel's approach.

(Caveat for what follows: I am bad at advanced math so it's very likely that there are mistakes.)

## The Big Question

Prince does a good job situating Gödel's paper in the broader historical
context, which makes it easier to understand the goals of the paper. Gödel was
directly analyzing the *Principia Mathematica*, a philosophical tome that aimed
to build the foundations of mathematics from first principles, which in turn fit
into the goal of underpinning mathematics with a complete and consistent set of
axioms (known as Hilbert's Program). What does that mean exactly?

**Consistency** in a formal mathematical system means that you can't prove a statement to be both true and false. Any "good" mathematical system should be consistent, because an inconsistent system is basically meaningless (since you can prove anything you want).

**Completeness** means that all statements in the system can be proved true or false. This sounds like a nice property to have! But it's not at all obvious which mathematical systems are complete.

There's a related property of mathematical systems called **soundness** which says
that all the system's axioms are true and you can construct formulas by using
rules of inference that preserve truth. Soundness pops up in the introduction to
the paper and Prince does a nice job describing the relationship between
soundness and consistency—soundness implies consistency but not vice versa—but
it's not all that relevant to the meat of the paper since Gödel uses
$\omega$-consistency (a pretty technical definition) instead of soundness for
his rigorous proof.

So the big question Gödel is asking is: for mathematical systems that are
**consistent**, which ones are **complete**? He frames this as a question about PM
(the formal system described in *Principia Mathematica*), but his arguments end
up being general enough that they describe most formal mathematical systems.

## The Big Idea

Gödel's big idea is amazingly simple: he makes a mathematical "sentence" that says "this isn't provable", then shows that it can't be proved or disproved in a consistent system (which makes sense intuitively). Of course if it were that easy someone would have done it long before Gödel; the hard part is the rigorous construction of the "sentence".

He starts from the ground up by designing his own mathematical system called P. At first, P looks like a pretty familiar formal logic system:

- There are symbols like $\neg$ and $\forall$
- There are variables like $x_1$ and $z_3$ (where the subscript indicates the type)
- There are rules for combining variables and symbols into expressions
- There are axioms like $p \lor p \implies p$
- There are rules of inference for constructing theorems out of axioms

The variable types were a bit confusing to me at first, but the commentary clarifies it: type 1 includes natural numbers, type 2 includes classes of numbers (like "odd numbers"), type 3 includes classes of classes, and so forth. (As Prince explains, the type system is necessary to prevent certain contradictions that could invalidate proofs that use the system.) Without going into too much detail, P has enough axioms to do basic operations on natural numbers using Peano arithmetic. So far so good; nothing earth-shattering per se.

The magic happens when Gödel introduces numeric mappings into the mix:

> We now assign unique natural numbers to the symbols of the system P as follows:
>
> "0"…1, "$f$"…3, "$\neg$"…5, "$\lor$"…7, "$\forall$"…9, "("…11, ")"…13
>
> We also assign unique numbers of the form $p^n$ (where $p$ is a prime > 13) to the variables of type $n$. In this way, every finite sequence of symbols (and so every formula) corresponds to a unique finite sequence of natural numbers. We now map the finite sequences of natural numbers to the natural numbers, again in 1-1 fashion, by letting the sequence $n_1,n_2,...,n_k$ correspond to the number $2^{n_1} \cdot 3^{n_2} \cdots p_k^{n_k}$, where $p_k$ is the $k$th prime (in order of size). Thus there is a unique natural number assigned not only to each symbol, but also to each finite sequence of symbols.

Quite a mouthful! It took me a while to digest what this all meant, but about 10
pages later it struck me: *Gödel just specified a programming language, complete
with a compiler!*

OK, he didn't *actually* write a programming language because modern computers
didn't exist in 1931. But his system translates symbolic expressions (like
$\exists z(z \leq y \land y = x \cdot z)$) into (very large) numbers, which is
directly analogous to the relationship between source code (e.g. collections of
symbolic expressions) and executable binaries (strings of 1s and 0s, a.k.a.
large numbers).

I got so excited that I couldn't help but hack together a quick partial P compiler:

```python
import re
import sys

# Map a formula (symbols separated by spaces) to its Gödel number:
# the nth symbol's code becomes the exponent of the nth prime.
def convert_formula(f):
    symbols = [convert_symbol(s) for s in f.split(' ')]
    result = 1
    for i, n in enumerate(symbols, start=1):
        result *= kth_prime(i) ** n
    return result

var_symbol_re = re.compile('([xyz])([0-9])')

def convert_symbol(s):
    # A variable like x2 gets the code p**n, where p is a prime > 13
    # assigned to the variable and n is its type.
    m = var_symbol_re.match(s)
    if m:
        return var_base(m[1]) ** int(m[2])
    match s:
        case '0':
            return 1
        case 'f':
            return 3
        case '¬':
            return 5
        case '∨':
            return 7
        case '∀':
            return 9
        case '(':
            return 11
        case ')':
            return 13
        case _:
            raise ValueError(f'unrecognized symbol {s}')

def var_base(v):
    match v:
        case 'x':
            return 17
        case 'y':
            return 19
        case 'z':
            return 23
        case _:
            raise ValueError(f'unrecognized variable {v}')

primes = [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71]

def kth_prime(k):
    return primes[k-1]

if __name__ == '__main__':
    print(convert_formula(sys.argv[1]))
```

The Gödel numbers (as the translated numbers are referred to nowadays) get big pretty fast:

```
$ python godelize.py '∀ x2 ( x2 ( 0 ) )'
510754105381279972974126665511493085266748272181514834722050277438707995396139651747342177188476046892898586220701172368813885194955271313011965397023293392487875907564387718818431906892451483534240841453528342548081437901959752551414803666716847136316647242745039162081008389566297654201552981336724297412610402251211400520638405898888252386385449355226221209434129104092622676412250347196787790435131199063471374781055191182225000000000
```

What's the deal with encoding everything as primes raised to powers? Unlike
a real programming language, where the resulting binary is optimized for
execution, Gödel is never going to execute his "programs"; he's only going to analyze them
to prove things. And he can't even use a computer to analyze them (only his
formidable brain), so the mathematical properties of the numbers are much more
important than the ability to actually perform computations on them. (Gödel
never writes a specific Gödel number in the rest of the paper; he just assumes
that they *can* be computed in theory.)

The most important property of Gödel's numbering system is uniqueness: every
expression in P uniquely maps 1-1 to a number thanks to his clever "namespacing"
scheme and the properties of primes (Prince does a nice job laying out the gory
details). The 1-1 mapping includes expressions *about* other expressions
(because expressions are just numbers!), which is essential to the rest of the
proof.
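To convince myself that the mapping really is invertible, I hacked together the corresponding decoder (my own sketch, assuming the same symbol codes as my toy compiler above):

```python
# My own sketch (not from the book): decoding a Gödel number back into its
# symbols. By the fundamental theorem of arithmetic the factorization is
# unique, which is exactly why the mapping is 1-1.
SYMBOLS = {1: '0', 3: 'f', 5: '¬', 7: '∨', 9: '∀', 11: '(', 13: ')'}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

def decode_symbol(n):
    # Basic symbols map directly; a variable's code is base**type for a
    # prime base > 13 (17 for x, 19 for y, 23 for z in my toy compiler).
    if n in SYMBOLS:
        return SYMBOLS[n]
    for base, name in ((17, 'x'), (19, 'y'), (23, 'z')):
        t, m = 0, n
        while m % base == 0:
            m //= base
            t += 1
        if m == 1 and t > 0:
            return f'{name}{t}'
    raise ValueError(f'not a symbol code: {n}')

def decode_formula(g):
    # Peel off each prime's exponent in order; stop when nothing is left.
    out = []
    for p in PRIMES:
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(decode_symbol(e))
    return ' '.join(out)
```

Feeding it the output of `godelize.py` should round-trip back to the original formula.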

### Building Up "Subroutines"

With his numerical mapping system defined, Gödel starts to build up a whole set of recursive functions that he can use to prove the things he cares about. He starts with basic definitions like the definition of primality:

$$ isPrim(x) \equiv \neg\exists z (z \leq x \land z \neq 1 \land z \neq x \land z \mid x) \land x > 1 $$

Which translates to something like "define $isPrim(x)$ as 'there doesn't exist a
$z$ such that $z$ divides $x$'" (with some extra conditions). Because he
previously defined everything that the expression uses, $isPrim(x)$ is *itself*
a number that can be used in further definitions.
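What makes this definition "primitive recursive" is that the search for $z$ is bounded by $x$, so it always terminates. A direct Python transcription (my own sketch, not from the book) makes that bounded search explicit:

```python
# Transcription of Gödel's isPrim: x is prime iff x > 1 and no z with
# 1 < z < x divides x. The search for z is bounded by x, which is what
# makes the definition primitive recursive (it can't loop forever).
def is_prim(x):
    return x > 1 and not any(x % z == 0 for z in range(2, x))
```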

Gödel then starts building up more complicated functions that operate on sequences of numbers (i.e. expressions). For instance, here's his definition of concatenation of two sequences (defined as the $*$ operator):

$$ \begin{align}x * y \equiv \mu z \{ z \leq nthPrim(len(x) + len(y))^{x+y} \land \\ \forall n [n \leq len(x) \implies mem(n,z) = mem(n,x)] \land \\ \forall n [0 < n \leq len(y) \implies mem(n + len(x), z) = mem(n, y)]\}\end{align} $$

(Here $\mu$ roughly translates to "the smallest number such that…" and $mem(n, x)$ is the $n$th member of the sequence $x$.) The definition translates to something like "$x * y$ is the smallest $z$ that matches $x$ in its first $len(x)$ positions and matches $y$ in its next $len(y)$ positions." It looks complicated, but that definition is itself just a number!
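In Python terms (again my own sketch), $len$, $mem$, and $*$ are just operations on the exponent vectors. I compute the concatenation directly, whereas Gödel's definition instead pins it down as the smallest number with the right members:

```python
# My sketch of len, mem, and * as operations on Gödel-numbered sequences.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

def exponents(g):
    # The sequence encoded by g: the exponent of each prime, in order.
    out = []
    for p in PRIMES:
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(e)
    return out

def seq_len(g):
    return len(exponents(g))

def mem(n, g):
    # Gödel's sequences are 1-indexed.
    return exponents(g)[n - 1]

def concat(x, y):
    # x * y: re-encode the members of x followed by the members of y.
    result = 1
    for i, e in enumerate(exponents(x) + exponents(y), start=1):
        result *= PRIMES[i - 1] ** e
    return result
```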

And finally, after building up a whole collection of "subroutines" for about 20 pages, we get to a function that tells us if a sequence is a proof of something:

$$ \begin{align} isProof(x) \equiv \forall n \{0 < n \leq len(x) \implies \\ isAx(mem(n,x)) \lor \\ \exists p,q[0 < p, q < n \land \\ isImmConseq(mem(n,x), mem(p,x), mem(q,x))]\} \\ \land len(x) > 0 \end{align} $$

Which means that for a sequence $x$, $x$ is a proof if every member of $x$ is an axiom or an immediate consequence of previous members of the sequence. From there it's not hard to get to a function that determines which formulas are provable:

$$ isProv(x) \equiv \exists y(isProofOf(y,x)) $$

(Translation: "set $isProv(x)$ to mean 'there exists a $y$ such that $y$ is a proof of $x$.'")
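The structure of $isProof$ maps naturally onto code. Here's a toy version (my own sketch, working on a list of formulas rather than Gödel numbers, with hypothetical `is_axiom` and `is_imm_conseq` predicates standing in for Gödel's real definitions):

```python
# Toy sketch of isProof / isProofOf: a "proof" is a non-empty list of
# formulas in which each entry is an axiom or an immediate consequence
# of two earlier entries.
def is_proof(seq, is_axiom, is_imm_conseq):
    if not seq:
        return False
    return all(
        is_axiom(f) or any(is_imm_conseq(f, seq[p], seq[q])
                           for p in range(n) for q in range(n))
        for n, f in enumerate(seq))

def is_proof_of(seq, formula, is_axiom, is_imm_conseq):
    # A proof of a formula is a proof whose last entry is that formula.
    return is_proof(seq, is_axiom, is_imm_conseq) and seq[-1] == formula
```

With a made-up system whose only inference rule is string concatenation, `['A', 'B', 'AB']` counts as a proof of `'AB'`, while `['AB']` on its own does not.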

This is crazy if you think about it! Gödel has figured out a way to turn abstract concepts like "this sequence of symbols is provable" into boring and easy-to-reason-about natural numbers. (Maybe it's not so crazy when you're used to dealing with programming computers, but it still blows my mind that it works.)

### The Proof

Once Gödel has built up his "subroutines", the actual incompleteness proof is almost anti-climactic (and not quite as elegant as what precedes it in my opinion). He sets up a relation:

$$ Q(x,y) \equiv \neg isProofOf_{\kappa}(x, y[\boldsymbol{y_1} \gets \overline{y}]) $$

Which roughly translates to "$x$ is not a proof of expression $y$" (with a bunch
of conditions and technicalities that I won't go over here). This is *not* an
expression in P, but with a bit of manipulation Gödel comes up with a way to
express it in P (it's a bit hard to follow but Prince does a good job filling in
the blanks):

$$ \neg isProofOf_{\kappa}(x, y[\boldsymbol{y_1} \gets \overline{y}]) \implies isProv_{\kappa}(q[\boldsymbol{x_1} \gets \overline{x}, \boldsymbol{y_1} \gets \overline{y}]) $$

$$ isProofOf_{\kappa}(x, y[\boldsymbol{y_1} \gets \overline{y}]) \implies isProv_{\kappa}(not(q[\boldsymbol{x_1} \gets \overline{x}, \boldsymbol{y_1} \gets \overline{y}])) $$

Here $q$ is basically the way to express $Q$ in P; the weird-looking implications come from the definition of decidability. Then there's some more alphabet soup manipulation:

$$ p = forall(\boldsymbol{x_1}, q) $$

$$ \ldots $$

$$ r = q[\boldsymbol{y_1} \gets \overline{p}] $$

$$ \ldots $$

$$ \neg isProofOf_{\kappa}(x, forall(\boldsymbol{x_1},r)) \implies isProv_{\kappa}(r[\boldsymbol{x_1} \gets \overline{x}]) $$

$$ isProofOf_{\kappa}(x, forall(\boldsymbol{x_1},r)) \implies isProv_{\kappa}(not(r[\boldsymbol{x_1} \gets \overline{x}])) $$

From this, Gödel finds that $forall(\boldsymbol{x_1}, r)$ is not provable and $not(forall(\boldsymbol{x_1}, r))$ is also not provable, which means $forall(\boldsymbol{x_1}, r)$ is undecidable, which means that P is incomplete. (Actually more than just P, which is where the $\kappa$ business comes in, but I won't go into that here.) This result is called Theorem VI in the paper, but nowadays it's called Gödel's First Incompleteness Theorem.
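The substitution $y[\boldsymbol{y_1} \gets \overline{y}]$ (plugging a formula's own Gödel number into itself, as a numeral) is the heart of the self-reference trick. On symbol sequences it's easy to picture; here's my own sketch, where the numeral $\overline{n}$ is written as $n$ copies of the successor symbol $f$ applied to $0$:

```python
# Sketch (mine, not from the book) of the substitution y[y1 ← n̄]:
# replace every occurrence of a variable in a formula (a list of symbols)
# with the numeral for n, written in P as n applications of f to 0.
def numeral(n):
    return ['f'] * n + ['0']

def subst(formula, var, n):
    out = []
    for symbol in formula:
        out.extend(numeral(n) if symbol == var else [symbol])
    return out
```

Gödel's $r = q[\boldsymbol{y_1} \gets \overline{p}]$ is this operation performed arithmetically on Gödel numbers, so the result is itself just a number.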

If you didn't follow that, don't worry—just buy the book, it explains it much better! (I'm still not at 100% comprehension after having stared at it a lot, but I think I get the gist thanks to Prince's hand-holding.)

## So What?

No one cares about P per se—it's just a system that Gödel made up. So after proving that P is incomplete, the rest of the paper explores the implications for other mathematical systems. It turns out that Gödel didn't really rely on anything special about P for the proof and the results are very general:

> In proving Theorem VI, the only properties of the system P that we used were the following:
>
> 1. The class of axioms and the rules of inference (that is, the relation "immediate consequence") are primitive recursively definable (once we substitute natural numbers for the symbols in some way).
> 2. Every primitive recursive relation is definable within the system P (in the sense of Theorem V).
>
> Therefore, in every formal system that satisfies Requirements 1 and 2 and is $\omega$-consistent, there are undecidable propositions of the form $\forall x F(x)$, where $F$ is a primitive recursively defined property of the natural numbers.

There are actually a whole lot of mathematical systems that Gödel proved are incomplete! He specifically mentions Zermelo-Fraenkel set theory but it doesn't stop there. Ever the over-achiever, in section 3.1 of the paper Gödel proves that first-order arithmetic is incomplete and in 3.2 he proves that first-order predicate calculus is incomplete. (I had some trouble following these sections because they involve more number theory and Greek letters than I'm comfortable with.) In general, you don't need much mathematical power before Gödel catches up to you—most formal mathematical systems worth their salt are incomplete.

And it gets even worse! For most of the paper Gödel is only talking about completeness, which is a nice-to-have, not consistency, which is a must. But there are implications for consistency as well: namely the famous Second Incompleteness Theorem (Theorem XI in the paper), which says that the formal systems that Gödel cares about can't prove their own consistency. Gödel's proof is pretty short (relying on leftover stuff from the proof of Theorem VI):

$$ isConsist(\kappa) \implies \neg isProv_{\kappa}(forall(\boldsymbol{x_1}, r)) $$

(If $\kappa$ (a superset of P) is consistent, then $forall(\boldsymbol{x_1}, r)$ isn't provable, which is what Gödel showed in Theorem VI.)

$$ isConsist(\kappa) \implies \forall x \neg isProofOf_{\kappa}(x, forall(\boldsymbol{x_1}, r)) $$

$$ isConsist(\kappa) \implies \forall x \neg isProofOf_{\kappa}(x, p[\boldsymbol{y_1} \gets \overline{p}]) $$

$$ isConsist(\kappa) \implies \forall x Q(x,p) $$

(Which comes from plugging in some previous formulas.)

This is translated to an expression in P:

$$ implies(w, forall(\boldsymbol{x_1}, r)) $$

(where $w$ is a way to express "P is consistent" in P).

We already know that $forall(\boldsymbol{x_1}, r)$ isn't provable, which means that $w$ isn't provable either, so P can't prove its own consistency.

At this point, Gödel just drops the mic instead of going into any of the huge implications for math and philosophy.

## The Verdict

When it comes to math and science, I'm often unsure whether it's worth going to
the source of the Big Ideas or it's better to read about them in forms that have
been digested over time. *The Annotated Gödel* strikes a nice compromise: you
get the full flavor of Gödel's unique approach and thought process while
benefiting from the understanding built up in the decades since the paper was
published. I was surprised at how digestible and enjoyable it was, and I'd
highly recommend reading it if you're a non-mathematician who wants to dive into
the foundations of math.