[Note by Vinod Wadhawan: I am happy to publish on my blog this
series of posts by Ambar Nag. He is brilliant,
and has a very concise and lucid style of writing.]
CONTENTS
Chapter 1: Correlation vs causality
Chapter 2: Falsifiability
Chapter 3: Occam’s Razor
Chapter 4: Intelligence vs rationality
Chapter 5: Social media
Chapter 6: Survival machines
Chapter 7: Something from nothing
Chapter 8: The myth of creation
Chapter 9: The fine-tuned Universe
Chapter 10: The Intentional Stance
Chapter 11: Non-overlapping magisteria
Chapter 12: Putting it together
Chapter 13: The problem of consciousness
Chapter 14: Artificial intelligence
Chapter 15: The brief history of mind
This post, in a series of six
connected parts, presents a set of thinking tools for the aspiring Rationalist.
It draws from various books, videos, websites and conversations that have
inspired me. These thinking tools can be effectively employed in drawing-room
and coffee-machine debates. They cannot, unfortunately, be used to bring other
people around to your point of view (for reasons that I go into in Chapter 4).
What motivated this post? By
the time I was out of my teens, I had stopped believing in God(s) or Satan,
Good or Evil, Heaven or Hell, Ghosts or Spirits, Eternal Soul or Rebirth,
Reason or Purpose, Morality or Sin. But most folks are convinced that you must
believe in something (as opposed to nothing). Spiritual beliefs tend not to
attract much enquiry — after all, it is obvious what
you believe in if you believe in Krishna or Jesus. A purely materialistic
worldview needs explaining though, ironic as that may sound. This post is an
attempt to articulate some of the ideas that make up such a worldview.
What (or who) is a Rationalist? The dictionary meaning does not convey much, using as it does words like “reason”, which would themselves require explaining. So, while the term Rationalist is used in the title, the meaning of the word (my meaning) should be allowed to emerge by the end of the post. We will revisit this question in Chapter 3.
Though the ideas presented
here touch upon Science, Philosophy and Religion, I would like to declare that
I am not a scientist, not a philosopher and not a spiritual person. So, in some
sense I am not qualified to make strong assertions on any of these topics. But
it also gives me confidence that these concepts can be understood and used by
anyone who is open to them.
Here’s the usual hyperbole — these ideas can change your
life. And here’s a disclaimer — while I may refer to some
notions as “wrong” or “silly”, it is not the aim of this post to offend anyone.
Attacking a person’s beliefs is not, in my mind, tantamount to attacking the person who holds that
belief. People deserve respect, ideas
don’t.
Most of the ideas introduced
here are easy to comprehend at a logical level but some are hard to accept because
they require an “inversion of reasoning”, to use Dan Dennett’s phrase. Concepts
covered in the initial chapters are more basic and likely more familiar to most readers, with later chapters getting into slightly more complex, even strange, ideas.
Finally, you will notice soon
enough that not many of the thoughts presented here are my own. This post is
mostly about faithfully representing (and connecting) existing ideas along with
the occasional fresh thought. “Is any thought really original?”,
we could rhetorically ask. It’s evidently much easier to put together a toolkit
than to create a new tool.
I hope this compilation of
ideas will, as claimed in the subtitle, serve as an easy introduction to
Freethought for the uninitiated. Links and references are provided wherever I
am unable to go deeper into a topic without digressing too much from the theme
of the Chapter. The books that are referenced by title are highly recommended reading; in fact, they are the biggest inspiration for this post.
Chapter 1: Correlation vs causality
Causal claims are easy to
make but usually difficult to establish… and even harder to refute
I said to a friend over
coffee, “My doctor put me on a course of vitamins last week”. She said “I see…
come to think of it your skin is glowing and you’re looking great! I’d like to
get my hands on those vitamins”.
What’s wrong here?
1. My friend didn’t mention that
I was looking great until after I mentioned the vitamins
2. I may also have started
working out at the gym since last week
3. My doctor may have put me on
vitamins because I had a vitamin deficiency
So, can we ever be sure (or
at least confident to some degree) that vitamins will make your skin glow?
Scientists use the term “correlation” to describe a relationship where, when A occurs, B also tends to occur. But it is always meant in a statistical sense; i.e. there should be many recorded instances of A and B occurring together, and usually some instances of one occurring without the other. Note that if A is correlated with B then B is correlated with A or, simply put, A and B are correlated.
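For the programmatically inclined, here is a tiny Python sketch (my own illustration, using simulated data, not something from the referenced books) of correlation in this statistical sense, including its symmetry:

import numpy as np

rng = np.random.default_rng(seed=0)

# Simulated daily records: 1 if the event happened that day, else 0
a = rng.integers(0, 2, size=1000)                # event A
noise = rng.integers(0, 2, size=1000)
b = np.where(rng.random(1000) < 0.7, a, noise)   # B mostly follows A

# Correlation is symmetric: corr(A, B) equals corr(B, A)
print(np.corrcoef(a, b)[0, 1])
print(np.corrcoef(b, a)[0, 1])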
Causality is a stricter
condition. A might cause B if -
1. A and B are correlated
2. When A and B both occur, B
always occurs (soon) after A
3. No third factor is already
known to cause both
4. Some plausible mechanism
leading from A to B can be described
The third condition is easy to illustrate — consider the driver who “observes” that every time they honk at the car in front of them at a traffic light, the car moves. Not because they honked, but because the light turned green, which is also why they honked!
Or, A and B could each follow
a random pattern but an observer could infer a correlation by selectively
noting the “positives” (i.e. instances of co-occurrence) and ignoring the
“negatives” — an instance of cognitive bias
we will examine in Chapter 4. This kind of fake correlation can sometimes trigger compulsive, self-reinforcing behaviour patterns (“superstition”), as demonstrated by the psychologist B.F. Skinner.
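A quick simulation shows how easy it is to “count the hits and forget the misses”. In this sketch (again my own, with purely random data), two completely independent events still co-occur often enough to fool a selective observer:

import numpy as np

rng = np.random.default_rng(seed=1)
a = rng.random(365) < 0.5    # event A on each day of the year, random
b = rng.random(365) < 0.5    # event B, completely independent of A

print("co-occurrences noticed:", int(np.sum(a & b)))          # roughly 91 days
print("true correlation:", round(np.corrcoef(a, b)[0, 1], 2)) # near zero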
The last condition of a
plausible mechanism sounds vague but it is important — why, for instance,
would hurricanes be caused by legalizing
abortion in US states? This brings us to the next
question — why are even the most
(seemingly) absurd causal claims so hard to refute? We will try to answer this
in Chapters 2 and 3.
Meanwhile, how do scientists
attempt to establish causality?
One of the most popular and accepted methods is the Randomized Controlled Trial (RCT).
Suppose you want to check whether a “treatment” A has an “effect” B. You start
by randomly splitting a group of “subjects” (usually people or animals) into
two groups — you can call these the Test
and Control groups. Then, you apply the treatment (say vitamin supplement) to
the Test group but not to the Control group and you do this
over a period of time. You then compare the effect (health indicator of
interest) on the Test group against the Control group.
Using some fairly basic math you can test your results for “statistical significance” — that means checking that the measured effect is very unlikely to be due to chance alone. The
RCT methodology also employs “double blinding” as a safeguard against
subjective bias (see Chapter 4 for various forms of this). For example, in a
clinical trial this means that neither the doctor nor the patient knows whether
a given patient belongs to the test or control group.
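To make this concrete, here is a toy RCT simulation in Python (an illustration I have made up; real trials involve far more care in design and analysis):

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=2)

# 200 subjects with a baseline "health score", randomly split in two
scores = rng.normal(loc=50.0, scale=10.0, size=200)
test, control = scores[:100], scores[100:]

# Apply an assumed treatment effect of +5 to the Test group only,
# then add measurement noise to both groups
test = test + 5.0 + rng.normal(0, 5.0, size=100)
control = control + rng.normal(0, 5.0, size=100)

t_stat, p_value = ttest_ind(test, control)
print("p-value:", round(p_value, 3))
# By convention, a p-value below 0.05 is taken to mean the measured
# difference is unlikely to be due to chance alone.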
It is interesting to note
that in most countries, no drug can be sold without going through clinical
trials but “dietary supplements” (which must be labelled as such) are not
subject to the same rigour. It’s no surprise, then, that supplements are a $50 billion market globally, with little evidence that they do any good.
Economists have a more rigorous notion of causality — Granger causality, where A may be said to cause B if the past values of A help predict B better than B’s own past values alone. And finally, there is serious
doubt as to whether causality represents a fundamental property of the universe
or just a convenient way for us to think about the world.
We’ll go into some of these views in Chapter 10.
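Before moving on, the core idea behind Granger causality is simple enough to sketch in a few lines of Python (a rough illustration of the idea only, not the full statistical test economists use):

import numpy as np

rng = np.random.default_rng(seed=3)
n = 500
a = rng.normal(size=n)
b = np.zeros(n)
for t in range(1, n):
    b[t] = 0.5 * b[t - 1] + 0.8 * a[t - 1] + rng.normal()  # A feeds into B

y = b[1:]
X1 = np.column_stack([np.ones(n - 1), b[:-1]])           # B's own past only
X2 = np.column_stack([np.ones(n - 1), b[:-1], a[:-1]])   # plus A's past

rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
rss2 = np.sum((y - X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]) ** 2)
print("prediction error without A's past:", round(rss1, 1))
print("prediction error with A's past:   ", round(rss2, 1))
# If knowing A's past substantially improves the prediction of B,
# A may be said to "Granger-cause" B.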
Chapter 2: Falsifiability
Not all claims can be put to
the test… especially the ones people want you to believe
Calvin’s best friend Hobbes is a stuffed tiger who turns into a real tiger every now and then, and off they go on their adventures. Of course, Hobbes immediately turns back into a stuffed tiger in the presence of a third person, making Calvin look ridiculous when he exclaims “But Mom, Hobbes did it!”.
When six-year old Calvin
asserts that Hobbes is a real tiger but that is only known to (and knowable by)
him, he is making an unfalsifiable claim. Needless to say, scientists don’t
like unfalsifiable claims (aka “untestable” claims). They are too easy to make
and, by definition, impossible to confirm or falsify.
Unfalsifiability is a reasoning fallacy
wherein a claim cannot possibly be contradicted by observation or experiment.
It is the opposite of falsifiability, a concept originated by the philosopher
Karl Popper. Unfalsifiable claims or statements fall outside the domain of
science. Those who believe them to be true do so on faith (more on this in
Chapter 3).
Bertrand Russell came up with
the analogy of Russell’s Teapot to
illustrate that the burden of proof should lie on a person making unfalsifiable
claims, rather than on others to disprove such claims. Christopher Hitchens
came up with Hitchens’ Razor — “that which can be asserted without evidence can be
dismissed without evidence”.
The point is this — if a claim is unfalsifiable,
whether it is true or false does not matter (it is neither). In fact, it is
outside the realm of objective truth.
So we can now see how legalizing abortion in US states could be claimed to have caused Hurricane Katrina. But then, so could any number of other, arbitrary causes. Like how mini-skirts cause earthquakes.
Unfortunately, physics has a few unfalsifiable theories of its own (like this one and this one), each of which scientists have spent significant effort researching.
So we are saying that
statements or claims must be falsifiable to be useful. What about theories? For
a theory to be useful it must rule out (or assign very low probability to)
certain outcomes while allowing certain others. A theory that says that
“anything is possible” has no power. Or, in the words of Eliezer Yudkowsky — “if you can invent an equally persuasive explanation
for any outcome, you have zero knowledge”.
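Yudkowsky’s point can be made precise with probabilities. A worked toy example (my own numbers): suppose there are ten possible outcomes, and outcome 3 actually happens.

# A "theory" is a probability assignment over the ten outcomes.
# Vacuous theory: "anything is possible", all outcomes equally likely.
p_vacuous = 1 / 10    # likelihood of seeing outcome 3: 0.10

# Specific theory: stakes 90% of its probability on outcome 3.
p_specific = 0.9      # likelihood of seeing outcome 3: 0.90

print(p_vacuous, p_specific)
# The specific theory risked being wrong and gained strong support;
# the vacuous theory risked nothing and gained nothing.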
There are several other
reasoning fallacies that the Rationalist should watch out for —
Circular Reasoning: Providing an explanation of
something by assuming it or using a term in its own definition; e.g. only a
crazy person would kill someone, so anyone who kills must be crazy.
Infinite Regress: An argument that relies on
a proposition whose truth depends on another similar proposition (and so on…);
e.g. Turtles all the way down
Mind Projection Fallacy: Projecting the mind’s
properties onto the external world. Some examples will be useful here, though
we won’t be using this concept till Chapter 10 —
· The fallacy in assuming that colour is an inherent property of objects rather than how our brains interpret different wavelengths of light reflected off objects
· The fallacy in assuming that probabilities are a property of systems (or events) rather than a way for us to represent our ignorance of them
Chapter 3: Occam’s Razor
A good theory explains a lot
by assuming very little… a bad theory does just the opposite
Contrary to what many people
believe, science does not provide proof of anything. The concept of “proof”
applies only in mathematics, as does the notion of absolute truths.
Instead science allows the
accumulation of evidence in favour of a hypothesis, and against competing ones.
The more evidence in support of a hypothesis, the more likely it is to be an
accurate representation of the world. But only a representation—the map can
never completely describe the territory.
Evidence can be based on
direct observation or experiments. Experiments must be replicable,
not performed and recorded just once by one person or team. This is to
eliminate confirmation bias (see Chapter 4). After all, scientists are as human
as any of us and extremely keen to have their theories accepted. Valid evidence
that contradicts an accepted theory must be accounted for — either the theory has to be
modified to explain the new facts, or if that is impossible, discarded in
favour of a new theory that can explain all the facts, old and
new. Sometimes a simpler, less accurate theory can coexist with a complex, more
accurate (or more “general”) theory as is the case with classical mechanics and
special relativity.
So how do scientists decide
which hypotheses to test? Is every hypothesis to be considered equally
promising and tested as such? Here’s where Occam’s Razor comes in. It says
the following — between two competing
theories that explain the same set of facts, the one with fewer assumptions
should be preferred because it is more likely to be correct. Note that Occam’s Razor cannot help us test the accuracy of a hypothesis; only evidence can do that. It is only a heuristic guide to decide which theory to test first or
which theory is preferred a priori, i.e. in the absence of any evidence. And finally, Occam’s
Razor can be a double-edged sword (pun intended) as this website somewhat
ironically demonstrates.
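One way to make “fewer assumptions” quantitative is to penalize a model for each parameter it uses, as the Akaike Information Criterion (AIC) does. Here is a toy Python sketch (my own illustration; the data are simulated):

import numpy as np

rng = np.random.default_rng(seed=4)
x = np.linspace(0, 1, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=30)  # truth: a line plus noise

for degree in (1, 3, 9):                 # competing polynomial "theories"
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1                       # number of fitted parameters
    aic = 2 * k + len(x) * np.log(rss / len(x))
    print(f"degree {degree}: fit error {rss:.4f}, AIC {aic:.1f}")
# Lower AIC is preferred: extra parameters must buy a large enough
# improvement in fit to justify themselves, in the spirit of the Razor.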
Coming back to the question
we posed in the introduction — is there a well-defined line that separates
rational people from people who are not rational? Probably not. People can be
rational to different degrees. Then, is there a rule or test to distinguish
rational beliefs from beliefs that are not rational? I believe there must be; otherwise, the Rationalist would have no leg to stand on. The
rule is this — rational beliefs are those
that are held on the basis of empirical evidence and can be abandoned if the
evidence turns out to be false.
Not all beliefs are held on the basis of evidence; some can be held on faith. But a faith belief
can be unshakable in the face of contradicting evidence. In fact, because of
the unfalsifiable way in which these beliefs are invariably stated, evidence
becomes irrelevant. Belief based on faith can be a slippery slope — if you believe some claims, you
might as well believe all. After all, how do you decide which statements to
believe and which ones to reject? Is it possible to have a consistent basis for
doing so? Faith beliefs may be handed down by authority figures but what
happens when those individuals or institutions become less important (or less credible)
or cease to exist? Finally, what happens when your beliefs are found to be in
conflict with another person’s beliefs? Once evidence is off the table, there
is little to choose between competing beliefs.
There is, of course, an
escape hatch out of this and it doesn’t come from science — one can deny that there are
any “objective truths” that can be discovered through observation. In such a world, all truths
are subjective (or observer-dependent) and knowable only
through introspection. The question of whether introspection can actually
reveal universal truths will be taken up in Chapter 12.
At this point I’d like to
make sure we agree to the rules of the game called “Being a Rationalist” -
1. Causal claims are false in
the absence of a credible mechanism
2. Unfalsifiable statements can
be rejected outright
3. Theories must explain more
than they assume
Having accepted the cookies
policy, we can read on. In the next Section we will examine Cognitive Biases -
the reasons why people simply refuse to “play by the rules”.
Written by Ambar Nag.
ambarnag@gmail.com
(Continued in Part 2)