Chapter 13: The problem of consciousness
Consciousness as an emergent property of the brain
René Descartes was a genius who gave us Cartesian geometry, used to this day. He also gave us the famous words “cogito ergo sum” — I think, therefore I am. Without getting into the details of the argument, what it gives us is a picture of mind as something separate from matter. This gives rise to questions like -
· Can consciousness exist without the body?
· Can consciousness be transferred from one source to another?
· Can consciousness survive death?
We can start with two diametrically opposed (but at this point equally
valid) worldviews — the first one says that consciousness is fundamental and the material
world is emergent from it (“idealism”), while the second describes
consciousness as an emergent property of (living) matter.
Hindu philosophical thought contains very advanced enquiries into
the nature of
consciousness. The Atman is
the individual soul (or consciousness) and the Brahman is the universal
consciousness. One school of thought, Advaita, holds that the two are one and the same, and that it is Maya (illusion) which divides them. But all the Hindu systems emphasize direct experience,
rather than observation, as the way to know the nature of consciousness.
What about the materialist view? At this point in time, neuroscience is
not even close to giving us a credible mechanism by which consciousness could
emerge from matter. But scientists are working on it — Anil Seth gives
a fascinating account of the latest advances. The problem then is whether we
are prepared to accept that
consciousness could have a materialistic explanation that is complete.
Dan Dennett outlines several obstacles to such an acceptance, like the hard problem of consciousness (David Chalmers’s term) — is it possible to completely describe someone’s subjective mental state by objective analysis? A number of studies show that we cannot have authoritative knowledge of the workings of our own minds. Our brain is constantly playing tricks on us, as revealed by a number of illusions that have been systematically designed by researchers. So it would appear that the third-person (objective) account of our mind is more reliable than our first-person (subjective) impressions. This is reinforced by the realization that our brain tracks information on a strictly “need to know” basis. We know that we have a liver, a pancreas, a pair of kidneys, etc. But we know this from third-person observations and not from first-person experience.
The debate on whether consciousness is fundamental or emergent is
unlikely to be resolved anytime soon. So again, we can apply our heuristic
rules from Chapter 8 to decide what kind of theory we like when it
comes to understanding consciousness. Starting with consciousness as a
fundamental property of the Universe (like spacetime or mass-energy) seems like
a “top-down” approach, leading in its most extreme form to panpsychism.
It also does not make any specific, testable predictions that can be validated
through experiments. And finally, the examples from Chapter 7, of phenomena
assumed in the past to be fundamental but which are now better understood as
emergent, give us some inkling that the same may turn out to be true of
consciousness.
The kind of theory we prefer is bottom-up and empirical, i.e. it provides a way to validate or falsify its predictions. We start with a model where the brain is the seat of consciousness and not a conduit for consciousness. We know that our brain has almost 100 billion neurons and that these neurons are densely interconnected. It is fed a constant stream of rich data to process by the sense organs. Drawing an analogy to the ant colony from Chapter
7, we can at least contemplate how billions of neuron interactions could
produce self-awareness even when none of the individual neurons have any. Individual
ants don’t have to possess intelligence (or goals) for their interactions to
produce intelligent (or goal-directed) behaviour. Individual neurons don’t have
to be self-aware for their interactions to produce self-awareness.
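To make the notion of emergence a little more concrete, here is a minimal sketch — an illustration of the general idea, not a model of the brain. In this elementary cellular automaton, every cell follows one trivial local rule, yet the row as a whole develops intricate global patterns that no individual cell encodes.

```python
# A minimal illustration of emergence: each cell follows one trivial local
# rule (Rule 110 of the elementary cellular automata), yet the row as a
# whole produces complex, structured patterns no single cell "knows" about.
WIDTH, STEPS = 64, 20
row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single "live" cell

LIVE = {(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 1, 0), (0, 0, 1)}  # Rule 110

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # each new cell state depends only on the cell and its two neighbours
    row = [
        1 if (row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH]) in LIVE else 0
        for i in range(WIDTH)
    ]
```

Running this prints a growing, irregular lattice of structure — order appearing from a rule that no individual cell is “aware” of.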
The analogy between brains and ant colonies appears in Douglas
Hofstadter’s 1979 classic Gödel,
Escher, Bach. Hofstadter refers to “Strange Loops” as the crux
of consciousness — an interaction wherein the top level of a system is built on lower
levels but is able to influence the bottom level, and thereby itself. It is
related to the concept of recursion, which any computer programmer
would be familiar with — think of two plane mirrors facing each other or a video camera pointed at a screen to
which its output is connected.
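For readers who don’t program, here is a minimal sketch of recursion — a function defined partly in terms of itself, the computational cousin of the facing-mirrors image above. The function and its inputs are invented purely for illustration.

```python
# A minimal illustration of recursion: a function that calls itself,
# like the two facing mirrors above, until it hits a "base case".
def reflect(depth):
    """Print nested 'reflections', then unwind."""
    if depth == 0:          # base case: the regress bottoms out
        print("innermost image")
        return
    print(f"mirror level {depth}")
    reflect(depth - 1)      # the function invokes itself

reflect(3)  # prints mirror levels 3, 2, 1, then "innermost image"
```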
A lot of this is speculation though, and we must acknowledge (once
again) that neuroscience is not likely to provide a precise description of the
mechanism anytime soon. But assuming that a complete and credible materialistic
theory of consciousness will emerge
at some point in the future, the answers to the three questions posed at the beginning of this Chapter would be, according to that theory: No, No and No.
Now let’s move on to the question of why, according to Neo-Darwinism, we evolved consciousness. Why as in “how come”? It is obvious that any complex
organism needs to be able to differentiate its own body parts from its
surroundings. A lobster can’t afford to claw itself. So self-awareness of the body
would be an essential brain function. But what about self-monitoring of the
mind? Why would that be useful or necessary?
The answer may be tied to another uniquely human adaptation — language. Language evolved because of the advantage it
afforded to the individual. Recall from Chapter 6 that natural selection acts
on individuals (actually genes) and not on groups or species. The use of
language is as much for deception as
it is for “true” communication of beliefs and intentions. An individual who
indiscriminately communicates every thought to their fellow humans is unlikely
to survive for long. This makes it not just useful but essential for the brain
to have a self-monitoring ability. My thoughts, memories, beliefs,
expectations, intentions must be tracked and represented separately. This would
explain the feeling of “self” — what is it like to be me? That, in fact, is Nagel’s definition of consciousness.
Does this explanation, because it ties consciousness to language, imply that only human beings are conscious? Yes, but there is a different explanation which does not rest on language as a prerequisite. It starts by
describing the brain as a prediction engine which uses sensory inputs to build
a “model” of the external world (more on this in Chapter 14). In order to make
this model as accurate as possible, the brain must include itself as part
of the model. Instead of assuming an external “perceiver” or “experiencer”
(aka soul)
we may define consciousness as the brain’s high-level representation of itself.
And consciousness, like the brain, may itself be an adaptation, as
we shall argue in the next chapter.
Chapter 14: Artificial intelligence
Is it possible to have a sentient AI?
What is AI? AI is simply the ability of machines to do tasks that were previously assumed to require human intelligence. Self-driving cars are a popular
example.
One of the main approaches to AI is Machine Learning (ML). Being a data scientist, I can at last claim some expertise on this subject :-). Machine learning is a set of algorithms that can learn to
perform a wide range of tasks without being explicitly programmed. Here
“learning” refers to constant improvement by analysing more and more data; i.e.
encountering more “cases”. So, while an elevator control system automates a
task once performed by humans, it does not get better at it by analysing
patterns. In other words, it lacks the ability to learn.
An ML algorithm which is trained to recognize images of cats will
initially need to be fed many images of cats (labelled as “cat”) and other
images (labelled “not cat”). Once it is “trained” it can start to identify cat
pictures accurately. Replace “cat” with “malignant tissue” for a more useful application of ML, namely image classification, which is also used by self-driving cars.
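Here is a minimal sketch of the train-then-predict workflow just described, using scikit-learn. The data is random noise standing in for labelled images — a real classifier would use a convolutional neural network and thousands of genuine pictures — so it only illustrates the shape of the process.

```python
# Minimal sketch of the ML workflow: feed labelled examples to an
# algorithm ("training"), then ask it to classify unseen examples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 64))        # 200 "images", flattened to 64 features
y = rng.integers(0, 2, 200)      # labels: 1 = "cat", 0 = "not cat"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # training
print(model.predict(X_test[:5]))  # predictions for five unseen "images"
```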
One of the most powerful ML algorithms (a class of algorithms, actually) is the neural network, which, as the name suggests, learns in a way loosely inspired by the human brain. Our brains use Bayesian Inference to interpret data coming in from the sense organs (sight, sound, smell...). It has a “prior”
expectation which it updates based on incoming data. What you see (or hear or feel…) is your brain’s best guess based
on both the prior belief and the sense data. Sometimes the prior belief is
so strong that it overrides the
incoming information. Cognitive scientists like to demonstrate this through
visual and auditory illusions.
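A one-line application of Bayes’ rule makes the “prior overriding the data” point concrete. The numbers below are invented purely for illustration: the brain strongly expects to hear “yes”, the ambiguous audio slightly favours “no”, and the prior wins.

```python
# Minimal sketch of Bayesian inference: combine a prior belief with
# noisy sense data to get a posterior belief (all numbers invented).
def posterior(prior, p_data_if_true, p_data_if_false):
    """P(hypothesis | data) via Bayes' rule."""
    p_data = p_data_if_true * prior + p_data_if_false * (1 - prior)
    return p_data_if_true * prior / p_data

# Strong prior that the word is "yes" (0.9); the ambiguous audio is
# actually a bit more likely under "no" (0.6 vs 0.4).
belief = posterior(prior=0.9, p_data_if_true=0.4, p_data_if_false=0.6)
print(f"P('yes' | audio) = {belief:.2f}")  # ~0.86: the prior dominates
```

Even though the sense data pointed the other way, the posterior barely moves — a toy version of the illusions cognitive scientists use to demonstrate the same effect.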
We are now ready to try and answer the question we posed at the end of
Chapter 10 — why can’t an AI be considered a mind separate from a body? Or, can an AI ever attain consciousness?
The analogy of the human brain as a necktop computer is useful in certain respects but misleading in others. The fact that computers are made of silicon while the brain is made
of organic (“wet”) stuff is not important here. But they are fundamentally
different for a different set of reasons —
· The human brain has a bottom-up organization; there is no centralized control. Every neuron is fighting for survival by making itself useful, much as each worker survives in a market economy by finding jobs to do. But an individual transistor in a computer will not find itself “unplugged” if it stays idle for a period of time, so it doesn’t need to actively seek out tasks to do.
· The human brain has evolved in a partly hostile, partly cooperative environment where it has constantly needed to make decisions in life-or-death situations, competing and cooperating with fellow humans (as we argued in Chapter 12). Computers, on the other hand, live in a relatively sterile environment.
· The emergence of consciousness in the human species is because we are survival machines (see Chapter 6), and not despite that fact. In other words, consciousness is not an attribute which makes us something “more than” survival machines; it is something that makes us better survival machines.
If an AI is ever to become a “conscious” mind unattached to a body, it would need to have -
· A bottom-up organization composed of autonomous elements
· A need to survive in a competitive environment with possibilities for cooperation, and a need to practice deception, which requires self-reflexive thought. The tagline for the 2015 sci-fi movie Ex Machina sums this up pretty well — “There is nothing more human than the will to survive”.
Chapter 15: The brief history of mind
A story should have a beginning, a
middle, and an end... but not necessarily in that order
Recall from Chapter 8 that the heliocentric model of our solar system took almost two millennia to be accepted from the time it was first proposed by Aristarchus of Samos in the 3rd century BCE. It took further work by Copernicus and Kepler in the 16th and 17th centuries to construct a mathematical model, and Galileo’s observations through the newly invented telescope to validate it. Was the new theory greeted with enthusiasm and excitement? Of course not! The disgraceful treatment of Galileo by the Catholic Church should continue to remind us that new ideas which threaten to turn our worldview upside-down are likely to be met with strong opposition.
While some of the opposition may be genuine scientific skepticism, a lot of it is simply blind rejection of new thinking that is seen to threaten articles of faith. In the case of the
heliocentric model, there was a lot at stake - one of the pillars of the
anthropocentric worldview (see Chapter 9) was the belief that the Earth enjoys
a central, supreme position in the Universe. It was heresy to suggest
otherwise.
We are right now facing a crisis of
even greater proportions with respect to our worldview, and it is this -
· The question of Origins has had many different answers offered by religious and spiritual traditions over the centuries (Chapter 8 contains a link to a list of “creation myths”). But at a fundamental level they are all parallel narratives which, translated to modern language, go somewhat like this - first there was a mind, then came matter, followed by lifeforms.
· With the latest advances in science we have reached a point where we can say quite confidently that the above sequence is wrong. It was matter that came first, followed by lifeforms emerging from (inanimate) matter and finally, minds emerging from lifeforms.
What do we mean by “minds”? A mind is
something that has a capacity for any of the following - rationality, goals,
beliefs, intentions, motives, reasons and purposes. The regular English
meanings of these words will serve us perfectly well here. What we are
essentially saying is that none of these played any role in shaping our world
until very recently, that is until the appearance of Homo Sapiens. In other
words, the 4.5-billion-year history of the Earth (and the 10 billion year prior
history of the Universe) is best understood in terms of blind, mechanistic,
purposeless processes. Neither the equations of physics nor the algorithmic
sequence described by Evolution requires any assumption of a rational
agent working behind the scenes - one that has goals, purposes, intentions or
beliefs.
But most spiritual belief systems rest
on a contrary set of assumptions -
1. Minds pervade vast regions of space rather than being localised in the place between our ears
2. Minds have existed since the beginning of time rather than having evolved relatively recently
[Image: timeline of the origin and evolution of life on Earth. Source: https://www.pmfias.com/origin-evolution-life-earth-biological-evolution/]
This picture provides a clearer
perspective on what we mean by “relatively recently” - the history of minds
does not account for more than 0.01 billion (or 0.2 percent) of the 5-billion-year
history of our planet.
But so what if the assumption of a primordial mind is not required? Can we not retain it anyway? The general
heuristic of Occam’s Razor would eliminate any redundant assumptions, i.e.
those that do not add explanatory power to our model of reality. But we will go
further and invoke another heuristic, already introduced in Chapter 10 - the
Intentional Stance from Dan Dennett.
Dennett describes three levels of
analysis for modelling (or predicting) the behaviour of things of varying
complexity. The Physical Stance describes a thing (or system) in terms of its
physical components and their interactions, making predictions based only on an
understanding of these. It is, in principle, the most accurate approach and the
one used in Physics and Chemistry. For instance, the behaviour of a stone can
be accurately predicted based on the fact that it is held together by
electrostatic forces and is subject to the gravitational force. But if we
replace the stone with a live bird, the Physical Stance would not continue to
serve us well. Though it is true that a bird, like any other object, is made up
of atoms and molecules and is subject to the laws of physics, we need a higher
level of abstraction to predict that, when released, it will fly up and not fall
down like a stone. This second level of analysis is the Design Stance which
assumes, in this case, that birds are “designed for flying”. Never mind that it
is design without a designer (see Chapter 6). The higher complexity of a
living thing (bird) compared to an inanimate object (stone) warrants the
adoption of the Design Stance. It is what we implicitly use in Biology and
Engineering.
When it comes to predicting the
behaviour of humans, even the Design Stance will generally not suffice. There
isn’t anything in particular that a person is “designed” to do. Of course, the
same goes for birds but most people would agree that a bird’s behaviour
patterns are less complex, and therefore easier to predict, than those of a
person. The third and highest level of abstraction then, is the Intentional
Stance.
Here is how it works, in Dennett’s own words:
first you decide to treat the object whose behaviour is to be predicted as a
rational agent; then you figure out what beliefs that agent ought to have,
given its place in the world and its purpose. Then you figure out what desires
it ought to have, on the same considerations, and finally you predict that this
rational agent will act to further its goals in the light of its beliefs. A
little practical reasoning from the chosen set of beliefs and desires will in
most instances yield a decision about what the agent ought to do; that is what
you predict the agent will do.
Having defined the three levels of
analysis, here’s the main insight: jumping to the next (higher) level involves sacrificing accuracy and reliability in the interest of “zooming out” from irrelevant details.
Consider rivers. A river can be
described using the Physical Stance as water flowing downhill. This approach
affords us a reasonably good understanding of not only its “normal” state but
also flooding or drying up. But skipping to the next level, we would need to
assume that rivers were designed “for” something - perhaps to provide water
which sustains life (see Chapter 9 for more examples of this type of
reasoning). But that wouldn’t explain why it floods or dries up. To explain
that,
we would need to attribute desires, goals and intentions (in short, minds) to
the river. What we end up with is a River Deity whose “actions” we may be able
to influence by appealing to its mind through prayers and rituals. And since
the only minds we know are human minds, we inevitably converge on an
anthropomorphic representation, usually female in the case of rivers.
In Chapter 10 we encountered a similar scenario relating to lightning and thunder, and also formed a hypothesis about why the Intentional Stance would be an evolutionarily “safe” strategy.
It seems that the Intentional Stance is more likely to be applied to things or
systems which are both -
· Either essential for or dangerous to our survival, and
· Beyond our control by physical means
Coming back to the question we set out
to answer, we already have a reasonably accurate description of the processes
that got us here (Big Bang, Relativity, Quantum Mechanics, Evolution…), and
these descriptions are all based on the Physical Stance. In the past we
had neither the methods nor the technology to be able to come up with models
that describe Nature from the bottom up (see Chapter 7). Of course,
there will always be gaps in our knowledge. We agreed in Chapter 3 that every
scientific theory is tentative and susceptible to falsification. But precisely because of this, in terms of agreement with empirical data, it easily outperforms anything we had before. So it doesn’t seem like there is anything to be gained by
switching to what is essentially a more “top-down” approach; i.e. the Design or
Intentional Stance.
The Intentional Stance applied to
Nature has not, over the years, given us a reliable understanding of any aspect
of it. But shedding it would have a domino effect on many “meta-beliefs” of the
kind we went over in Chapter 12 -
·
The
idea that everything has a cause or happens for a reason
·
The
idea that Nature (or the Universe) has goals and purposes
·
The
idea that morality has any influence beyond the domain of human societies
·
The
idea that introspection can reveal “truths” about the outside world
Obviously, a great deal is at stake for
some of us. Let’s look at one of the popular arguments in defence of the
primordial mind - the one which starts with the doctrine of consciousness as a
fundamental property of the Universe (see Chapter 13). It has been used to
construct narratives that are, in essence, creationism disguised in sciency-sounding language.
One of its unwitting allies is Quantum
Mechanics (QM); in particular, the concept of “observer effect” based on the
famous double-slit experiment. The concept of observation (better
understood as “measurement”) in QM is a subtle one – it is any interaction
between the quantum system and its environment which leaves a record in the
environment. The observer can be a device (like a camera or screen) or even
another particle; it does not have to be a “conscious” entity. More on this
age-old debate here and here.
To get science across to the layperson,
scientists must provide interpretations in plain English. This is hard enough
to do in the case of QM, which is highly mathematical and difficult to explain
in terms of anything familiar (in fact it’s downright weird). To make things worse, we have plenty of ideologues waiting to take advantage of the fact that words in English (or any human language) don’t have meanings that are unambiguous and context-free. This allows them to twist the meanings of words to support whatever it is they want to believe, and want others to believe. It seems like a Catch-22 - be misinterpreted or be ignored! Perhaps we need fewer
scientists and more science popularizers.
Written by Ambar Nag.
ambarnag@gmail.com
(Concluded)