Laws and theories in science have traditionally been enunciated in terms of three guiding principles (Chaitin 2006):
1. Ockham’s razor, which favours
the smallest number of assumptions, variables, parameters, and equations
needed for formulating hypotheses. In other words, if there are two competing theories
that explain the same set of experimental data, the simpler theory is likely to
be the better one. In terms of computer algorithms (see below), the best theory is
likely to be the one that requires the smallest computer program for calculating
(and hence explaining) the observations.
2. Comprehension is compression: If the observations point to
an underlying law, it should be possible to explain them in terms of an
algorithm that requires fewer bits than the number of bits
required to represent the observed data. Even the most random (i.e. 'lawless')
set of data can be 'explained' in terms of an algorithm of the same size as the
data themselves. But we intuitively feel that we 'understand' something better if it is
explained by a 'simple' theory, i.e. by an algorithm requiring fewer
bits than those needed for describing the entire set of data (a small
demonstration follows this list).
3. Leibniz’s principle of
sufficient reason: 'Everything happens for a reason'. If something is true,
it must be true for a reason (however, see below). This is why mathematicians
try to discover the proof for the general case (i.e. prove theorems), rather
than accepting something as true just because a large amount of data seems to
indicate it.
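To make the 'comprehension is compression' idea in point 2 concrete, here is a minimal sketch in Python (my illustration, not Chaitin's): data generated by a simple rule compress to far fewer bits than random data of the same length, mirroring the difference between 'lawful' and 'lawless' observations.

```python
import os
import zlib

# A minimal sketch of 'comprehension is compression' (illustrative only):
# data that obey a simple rule compress well; random data do not.
regular = bytes(range(256)) * 40   # 10,240 bytes following a trivial rule
lawless = os.urandom(10240)        # 10,240 bytes with no underlying law

for name, data in (("regular", regular), ("lawless", lawless)):
    compressed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} bytes -> {len(compressed)} bytes")

# Typically the regular sequence shrinks to a tiny fraction of its size,
# while the random bytes remain essentially incompressible.
```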
As I mentioned
in Part 64, Hilbert tried to make the rules of symbolic logic so
precise and 'artificial' that even a mechanical
proof checker could be used to see if a proof based on the formal axioms is
correct or not. The idea was to make mathematics completely certain and
objective, with no room for human interpretations or subjectivity.
Hilbert’s goal
was to put all pure mathematics on a formal and strictly objective footing. His
idea was to formalize mathematics in terms of axioms and artificial language,
so that everybody could agree on whether a proof is correct or incorrect. He
hoped that mathematics would then consist of absolute truths, and that there would be a theory of everything in mathematics. He and collaborators like
John von Neumann did achieve considerable success in this
direction.
But then came Kurt Gödel (1931). He shook the
foundations of mathematics by proving his incompleteness theorem, which
demonstrated that mathematics contains statements that cannot be proved to be
true, even though proving them false would render mathematics inconsistent (cf. Dyson 1997). In other
words, some mathematical statements are true for no reason.
Thus Gödel’s
work went against Hilbert’s belief that there is a theory of everything in mathematics. Hilbert (like Leibniz before him)
had visualized a world in which all
truths could be calculated and proved by a 'universal coding'. This coding
consisted of a few dozen axioms, and the aim was to systematize all of
mathematics by deriving all mathematical truths from these axioms in a
'mechanical' way. Gödel proved that this is not possible. His theorem says that
no formal system encompassing elementary arithmetic can be at the same time
both consistent and complete:
Within any sufficiently powerful and noncontradictory system of language, logic, or arithmetic, it is possible to construct true statements that cannot be proved within the boundaries of the system itself.
Gödel
established a procedure for encoding arithmetic statements as (large) numbers.
Statements in arithmetic had hitherto spoken only about numbers. Gödel
made them self-referential, i.e. able to speak about themselves as well.
With this interpretation of arithmetic theorems, he constructed the now famous Gödel
sentence, which was a precise
arithmetic statement meaning
'This statement is unprovable';
or:
'This statement
cannot be deduced from Hilbert’s formal axiomatic system'.
If this
statement is indeed provable, then it is a false statement. And if it is not
false but true, then it is indeed unprovable, and we have incompleteness in our
formal axiomatic system: We have a true statement that our formal system of
logic and axioms is not able to capture. Some truths are unprovable.
The well-known
Goldbach conjecture illustrates
the point. The conjecture states that every even number greater than 2 is the
sum of two prime numbers. Computer calculations have verified the truth of this
conjecture up to extremely large even numbers. But we can never be sure that
the conjecture will not fail for some still larger even number. It has not been
possible to prove the conjecture by rigorous mathematical reasoning.
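A brute-force verification of this kind is easy to sketch; the following Python snippet (my illustration) checks the conjecture for all even numbers up to a small bound. Such a check, however far it is pushed, verifies only finitely many instances and so can never substitute for a proof.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Verify the conjecture for every even number up to a small bound.
for n in range(4, 10001, 2):
    assert goldbach_pair(n) is not None, f"counterexample: {n}"
print("Goldbach holds for all even numbers up to 10000")
```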
Suppose the Goldbach conjecture is undecidable.
It would then be true without being provable, because there could be no exception to it:
any exception would be a specific even number that could be checked by a finite
computation, and such a counterexample would disprove the conjecture, thereby
deciding it and contradicting its assumed undecidability.
Gödel put in
some very clever thinking when he introduced a procedure (using prime numbers)
for converting all arithmetic statements into numbers. His procedure was able
to deal with pronouns like 'this', and even with self-referencing
pronouns like 'I', something not usually found in mathematical
formulas. Gödel converted the 'This statement is unprovable' entity into a
large numerical statement in number theory; and although this converted entity
looked like a statement in real mathematics, it was indirectly referring to
itself, saying that it is unprovable.
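The flavour of the encoding can be conveyed by a toy version (a sketch only, not Gödel's actual 1931 scheme; the symbol table and helper functions are my own illustrative choices): each symbol is assigned a number, and a formula is encoded as a product of prime powers, from which unique factorization recovers the original symbol string.

```python
from math import prod

# A toy Gödel numbering (illustrative; not Gödel's actual scheme).
# A formula s1 s2 ... sk is encoded as 2**c1 * 3**c2 * 5**c3 * ...,
# where ci is the code of symbol si and the k-th prime carries the
# k-th symbol. Unique prime factorization makes decoding possible.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
DECODE = {v: k for k, v in SYMBOLS.items()}

def primes(k: int) -> list:
    """First k primes, by trial division (fine for toy inputs)."""
    found = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def godel_number(formula: str) -> int:
    codes = [SYMBOLS[ch] for ch in formula]
    return prod(p ** c for p, c in zip(primes(len(codes)), codes))

def decode(g: int) -> str:
    out, k = [], 1
    while g > 1:
        p = primes(k)[-1]          # k-th prime
        e = 0
        while g % p == 0:          # extract the exponent of p
            g //= p
            e += 1
        out.append(DECODE[e])
        k += 1
    return "".join(out)

g = godel_number("S0=S0")          # encodes '1 = 1' in successor notation
print(g, "->", decode(g))          # a single number stands for the formula
```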
Gödel proved
another incompleteness theorem, according to
which:
No consistent formal system (powerful enough to encompass elementary arithmetic) can prove its own consistency.
According to Chaitin (2006), Gödel’s
proof 'does not show that mathematics is incomplete. More precisely, it shows
that individual formal axiomatic mathematical theories fail to prove the true
statement "This statement is unprovable". These theories therefore
cannot be "theories of everything" for mathematics. The key question
left unanswered by Gödel is: Is this an isolated phenomenon, or are there many
important mathematical truths that are unprovable?'
In Chaitin’s algorithmic information theory, some
mathematical facts cannot be compressed into a theory because they are too
complicated (in terms of algorithmic complexity). His work suggests that 'what
Gödel discovered was just the tip of the iceberg'.
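A counting (pigeonhole) argument makes this abundance of incompressible facts plausible; the sketch below (my illustration of a standard argument from algorithmic information theory) simply counts binary strings: there are too few short programs to compress more than a small fraction of all strings.

```python
# Pigeonhole sketch: most strings are algorithmically incompressible.
# There are 2**n binary strings of length n, but only 2**m - 1 binary
# programs of length < m. So strings compressible by at least c bits
# (i.e. having a description of length <= n - c) number at most
# 2**(n - c + 1) - 1, a vanishing fraction of all 2**n strings.

def max_fraction_compressible(n: int, c: int) -> float:
    """Upper bound on the fraction of n-bit strings compressible by >= c bits."""
    return (2 ** (n - c + 1) - 1) / 2 ** n

for c in (1, 10, 20):
    print(f"compressible by >= {c:2d} bits: at most "
          f"{max_fraction_compressible(64, c):.2e}")
```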