
Saturday, 30 June 2012

34. Ockham's Razor


We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, so far as possible, assign the same causes (Isaac Newton).


We say that the planets in our solar system go around the Sun. This is a simple model or theory which we use for calculating, for example, the time of the next solar eclipse.


But we can make an equally valid calculation by working with a model in which our Earth is the hub, and other planets and the Sun go around it (remember Ptolemy's geocentric model?). However, the second calculation would be far more complicated than the first.


A more sophisticated model of reality here is as follows: There is a point called the 'centre of mass' (c.m.) around which all entities in the solar system, including the Sun, revolve. Since the Sun is far more massive than any other entity in the solar system, the c.m. is quite close to the centre of the Sun. Therefore, for most practical purposes, it is adequate to assume that the Sun is at the centre of the solar system.
 
When two or more alternative theories can explain a set of observations about a phenomenon, it makes sense to first choose the simplest of them. The proverbial Ockham's razor shaves away the unnecessary assumptions, and the simplest or most parsimonious theory, the one that makes the smallest number of assumptions, usually survives as our criterion for a good theory.

But it is conceivable that the simplest theory may be wrong or inadequate. The idea of Ockham's razor (it is only an idea, after all) is that one should proceed to simpler theories until simplicity can be traded for greater explanatory power.

Confronted with a multiplicity of candidate theories, we have to bring in likelihood or probability considerations ('Which theory is more likely to be right?'). It turns out that algorithmic information theory (AIT) comes to our help here, and provides a certain degree of legitimacy to the philosophical-looking Ockham-razor approach:

In AIT we define a parameter called algorithmic probability (AP). It is the probability that a random program of a given length, fed into a computer, will produce a desired output, say the first million digits of π. Following Bennett and Chaitin’s pioneering work in the 1970s, let us assume that the random program has been produced by an unintelligent monkey. The AP in this case is the same as the probability that the monkey would type out the required bit string (a sequence of 0s and 1s), i.e. the required computer program; say, a Java program suitable for generating the first million digits of π. The probability that the monkey presses the first key on the keyboard correctly is 1/2, or 0.5. The probability that the first two keys are pressed correctly is (0.5)², or 0.25. And so on. Thus the probability gets smaller and smaller as the number of correctly sequenced bits increases. The longer the program, the less likely it is that the monkey will crank it out correctly. This means that the AP is highest for the shortest programs, and lowest for the longest ones.
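A rough feel for this arithmetic can be had from the tiny Python sketch below. The program lengths are made-up illustrative numbers, not the length of any real program for π; the point is only that the chance of a random bit string matching an n-bit target program is 0.5 raised to the power n, which collapses rapidly as n grows.

```python
# Algorithmic-probability arithmetic from the 'typing monkey' picture:
# each of the n bits of the target program has a 1/2 chance of being
# typed correctly, so the probability of the whole program is 0.5**n.

def algorithmic_probability(n_bits: int) -> float:
    """Chance that a random n-bit string equals the target program."""
    return 0.5 ** n_bits

for n in (1, 2, 10, 100, 1000):
    print(f"program length {n:>4} bits -> probability {algorithmic_probability(n):.3e}")
```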


Now suppose we have a bit string representing a set of data, and we want to understand the mechanism responsible for the creation of that set of data. In other words, we want to discover the law, i.e. the computer program, among the many we could generate randomly, that produced that set of data. According to the above AIT rationalization of Ockham’s philosophy, the shortest such program is the most plausible guess, because it has the highest AP. The simplest explanation is usually (but not always) the right one.

The Ockham-razor idea has two parts: the principle of plurality, and the principle of parsimony (economy, or succinctness). The former says that plurality should not be posited without necessity; the latter, that it is pointless to do with more what can be done with less.

The celebrated scientific method is implicitly based on three axioms:
  • The existence of objective reality.
  • The existence of natural laws.
  • The constancy of natural laws.
In science we assume that theories or models of natural law must be consistent with repeatable experimental observations. This assumption is based on the above axioms, and Ockham's razor is often invoked in scientific debate.
We could still imagine that there is a set of laws that determines events completely for some supernatural being, who could observe the present state of the universe without disturbing it.  However, such models of the universe are not of much interest to us mortals.  It seems better to employ the principle known as Occam's razor and cut out all the features of the theory that cannot be observed (Stephen Hawking in A Brief History of Time).
Albert Einstein is famous for many one-liners, including the following: 'Everything should be made as simple as possible, but not simpler.'

Saturday, 23 June 2012

33. Emergence



The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts (Anderson 1972).
In complexity science (and in philosophy), 'emergence' is a technical term. It signifies a breakdown of the reductionistic approach usually adopted in conventional science.

To give you a feel for what emergence means, I shall explain why even the second law of thermodynamics is actually an emergent law. The second law is something which every macroscopic system must obey, in spite of the fact that the dynamics of the individual microscopic constituents of the system does not require that the law be obeyed. Because of this law, in the macroscopic world in which we live, we take the direction of increasing time as that in which the entropy of an isolated system increases and irreversible processes occur. All macroscopic phenomena have the property of time-asymmetry.

I say that the second law is an emergent law because it is not deducible from the laws of mechanics obeyed by the microscopic particles comprising a macroscopic system: The laws governing the dynamics of the microscopic particles are time-symmetric, at least in classical physics. Thus the time-asymmetry of the dynamics of such macroscopic systems is an emergent property, not present at the microscopic level.

Similarly, causality may be largely an emergent property, but I must think about it some more.


The existence of emergent laws has bothered many philosophers and scientists since time immemorial; in fact, it continues to do so, and many of them are still unable to reconcile themselves to the very idea of emergence. Even Boltzmann, the originator of statistical thermodynamics, tried for a while to reconcile the time-symmetry of Newton’s equations of motion with the time-asymmetry inherent in the second law. Starting from Newton’s equations of motion, Boltzmann did succeed in obtaining time-asymmetry in the nonlinear 'transport equation' he derived; from its solution he constructed the famous 'Boltzmann H function', which can only decrease with time. However, he soon realized that, although he had succeeded in deriving time-asymmetric behaviour for a macroscopic system, he had introduced probability considerations to do so, and this had serious connotations regarding the nature of the uncertainty involved.

He then went to the other extreme, and approached the whole problem from the statistical angle alone. But the expression he derived for what we now call the Boltzmann probability still has a dynamical flavour: it is the probability that a dynamical system will be found in one of a given set of states, and it equals the fraction of the total observation time that the system spends in that set of states.
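As a hedged illustration of this 'fraction of observation time' idea (the two-level system, its energy gap, and the use of Metropolis dynamics are my own toy choices, not anything from Boltzmann's papers), the sketch below checks that the time a simulated system spends in its upper energy level approaches the corresponding Boltzmann probability.

```python
import math
import random

# Toy two-level system with energy gap dE (in units of kT = 1), evolved
# with Metropolis dynamics.  The fraction of time spent in the upper level
# should approach the Boltzmann probability exp(-dE) / (1 + exp(-dE)).
random.seed(0)
dE = 1.0
state = 0                    # 0 = lower level, 1 = upper level
time_in_upper = 0
steps = 200_000

for _ in range(steps):
    proposed = 1 - state                     # attempt to jump to the other level
    delta = dE if proposed == 1 else -dE     # energy change of the attempted jump
    if delta <= 0 or random.random() < math.exp(-delta):
        state = proposed
    time_in_upper += state

observed = time_in_upper / steps
expected = math.exp(-dE) / (1 + math.exp(-dE))
print(f"fraction of time in upper level: {observed:.3f}  (Boltzmann value: {expected:.3f})")
```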

Boltzmann’s work met with stiff resistance and ridicule because his contemporaries just could not accept emergence. The second law is a statistical law (cf. Part 22), and people made a distinction between an EXACT LAW, which is never violated, and a STATISTICAL LAW, which is practically never violated for a system with a large number of constituents. Boltzmann was deeply depressed by the response to his profound work, and this possibly contributed to his suicide.

It has been suggested by some scientists that the time-asymmetric nature of the second law is partly a consequence of our inability to keep individual track of each of the very large number of molecules (typically ~10²³) in a macroscopic system, implying that the origin of its time-asymmetric nature is statistical. But a better rationalization can be achieved by probability-based arguments:

Let us refer to the 'free expansion' figure I drew for Part 22, and which I show here again:



In the left part of this figure, all the molecules of the gas are in the left half, and their positions and velocities are randomly distributed. At the moment the partition is removed, there happens to be a particular set of positions and velocities, which arose from a particular set of initial conditions. It is important to realize that any other set of random positions and velocities at that moment would still give the same irreversible diffusion of the molecules to twice the volume. So this is a highly probable thing to happen.

Now look at the final configuration, in which the gas has, on its own, occupied twice the volume. The motion of each molecule is governed by time-symmetric laws of dynamics. If we could somehow reverse the velocities of all the molecules at one particular moment of time, they would go back to occupying the left half of the chamber shown in the left part of the figure. But for this to happen, the initial (or boundary) conditions for all the ~10²³ molecules would have to be one particular set from among the infinitely many such sets of initial conditions, and the probability for that to be the case is vanishingly close to zero. That is why irreversibility and time-asymmetry arise at the macroscopic level.
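To put a number on 'vanishingly close to zero': if each molecule is equally likely to be found in either half of the chamber, the chance that all N of them are in the left half at the same instant is (1/2) raised to the power N. The small sketch below (the values of N are illustrative) prints that probability as a power of ten, since it underflows ordinary floating-point numbers long before N reaches ~10²³.

```python
import math

# Probability that all N molecules happen to be in the left half of the
# chamber at one instant: (1/2)**N.  Reported as a power of ten because
# the number is far too small to represent directly for large N.

def log10_prob_all_left(n_molecules: float) -> float:
    return -n_molecules * math.log10(2.0)

for n in (10, 100, 1e4, 1e23):
    print(f"N = {n:g}: probability ~ 10^({log10_prob_all_left(n):.3g})")
```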

The second law is an example of what is called WEAK EMERGENCE. Although different parts of a macroscopic system are interconnected, they are not necessarily interdependent. If we select a smaller part of such a macroscopic system, the second law still holds for it. Similarly, the pressure and temperature of a gas are emergent properties, but only weakly emergent properties.

STRONG EMERGENCE, by contrast, is something for a system as a whole. There is an interdependence among the various subparts, and each part is indispensable for the overall emergent behaviour.


Strong emergence is global. Weak emergence may be local.

Is space-time an emergent property of the universe?


Saturday, 16 June 2012

32. Self-Organization in Complex Systems


'Complexity' is a technical term. It is not the same thing as complicatedness. What is a complex system? In general terms, a complex system consists of a number of interacting 'members', 'elements' or 'agents', which have the potential to generate qualitatively new collective behaviour. Manifestations of this new behaviour are the spontaneous creation of new spatial, temporal, or functional structures.

Complex systems can self-organize, which means that globally coherent patterns can emerge in them out of local interactions. Flocking behaviour of birds is an example of this. Simple local rules like 'separation' (avoidance of crowding, or short-range repulsion), 'alignment' (steering towards the average heading of neighbours), and 'cohesion' (steering towards the average position of neighbours, or long-range attraction), result in well-organized flock patterns, even when nobody is in command.
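A minimal sketch of those three local rules is given below, in the spirit of the well-known 'boids' type of model. The neighbourhood radius, rule weights, speed limit and flock size are ad hoc values chosen for illustration, not taken from any particular study; the point is only that purely local rules produce coherent collective motion with nobody in command.

```python
import numpy as np

def flock_step(pos, vel, radius=1.0, dt=0.1,
               w_sep=0.05, w_align=0.05, w_coh=0.01, max_speed=1.0):
    """One update of a toy flock: separation, alignment and cohesion,
    each computed only from neighbours within a local radius."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = pos - pos[i]                       # displacements to all other birds
        dist = np.linalg.norm(d, axis=1)
        nbr = (dist < radius) & (dist > 0)     # local neighbourhood only
        if not nbr.any():
            continue
        separation = -d[nbr][dist[nbr] < 0.3 * radius].sum(axis=0)  # short-range repulsion
        alignment = vel[nbr].mean(axis=0) - vel[i]                  # steer towards average heading
        cohesion = pos[nbr].mean(axis=0) - pos[i]                   # steer towards average position
        new_vel[i] += w_sep * separation + w_align * alignment + w_coh * cohesion
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                  # cap the speed
            new_vel[i] *= max_speed / speed
    return pos + new_vel * dt, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, size=(50, 2))          # 50 birds at random positions
vel = rng.uniform(-0.5, 0.5, size=(50, 2))     # random initial headings
for _ in range(200):
    pos, vel = flock_step(pos, vel)
print("spread of headings after 200 steps:", np.std(vel, axis=0))
```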


Many other such examples of self-organization can be seen in Nature: shoals of fish, swarms of insects, bacterial colonies, the herding behaviour of land animals. Experiments carried out on humans have shown something similar: when 5% of the 'flock' changed direction, the others followed suit.

Per Bak, who made seminal contributions to complexity science, gave the following definition: Complex systems are systems with large spatial and/or temporal variability. In the context of this definition, some counter-examples are a gas and a crystal. Both are epitomes of uniformity or sameness, with hardly any variability; all portions are the same.

We can formally define complexity as something we associate with a complex system.

As explained in Part 30, a characteristic feature of complex systems is the emergence of unpredictable and therefore unexpected properties or behaviour. The emergence of life out of nonlife was one such property.

We humans and our interactions with one another, and with our biosphere, are among the most complex imaginable systems. What is our future going to be like? Although we cannot make definite predictions, even probabilistic statements about the more likely scenarios can have a salutary effect on how we conduct our affairs (e.g. regarding the management of climate change) to achieve high levels of sustainability.

Why do complex systems self-organize? The answer to such questions has to do, as usual, with the second law of thermodynamics for open systems. Imagine a bathtub filled with water. Suppose you suddenly pull out the stopper. An interesting vortex structure soon develops as the water drains out. The vortex is an ordered dynamic structure, which appears to have emerged 'spontaneously' from stagnant water. So this is self-organization. Strictly speaking, there is really nothing spontaneous about it, because there is a driving force, namely gravity. We still call it 'spontaneous' because it happens even though no design work by anybody went into it.

Under the continual action of the driving force (gravity), the system is in a far-from-equilibrium condition, in which, although there is an overall increase of disorder (entropy) as the water is accelerated in an irreversible manner down the drain, there is creation of order locally.


The whirlpool is an energy-dissipating structure. It is also an example of energy-driven organization, popularly known as just self-organization, resulting in 'emergent' or unexpected properties or patterns.

Niele (2005) made a distinction between driving forces and shaping forces in the emergence and evolution of complex dissipative structures. In the whirlpool, the driving force sustaining the energy-dissipating structure results from an energy gradient, whereas the shaping forces come from interactions within the whirlpool and with the surrounding tub etc. For example, the shape of the tub and the shape of the drainage hole influence the shape of the vortex. The shaping forces within the whirlpool are encoded in the structure of water molecules and in the interactions among them (predominantly 'hydrogen-bond' interactions). No water molecule has any embedded information or instructions about how to construct the vortex. Yet ‘strings of synchronized interactions’ among the water molecules do the shaping of the complex vortex structure.

In the context of complexity, the important point is that it is impossible to go backwards from the observed whirlpool structure, and work out in a reductionistic fashion the details of the positions and velocities of all the molecules and the interactions among them that have given rise to the observed complex behaviour. This is generally true of all dissipative systems. And most real-life systems are dissipative systems.

Similarly, except in a broad macroscopic or hydrodynamic sense, the observed complexity cannot be predicted in detail in a constructionistic fashion from the underlying simplicity of the shapes of the molecules and the interactions among them.

The gross features of order and pattern in the whirlpool are on a scale millions of times larger than the features of the interactions causing them. In any case, one cannot perform computations at infinite speed, and a system is said to be computationally irreducible if the simplicity underlying it cannot be worked out or computed in reasonable time. Both reductionism and constructionism stand discounted.

A principle of self-organization was enunciated by the British cybernetician W. Ross Ashby. According to it, an open dynamical system tends to move towards the nearest attractor in phase space. What is the mechanism of self-organization by movement towards the nearest attractor? It is the deterministic or probabilistic ('stochastic') variations that occur in any dynamical system; they enable it to explore different regions of phase space until it reaches an attractor. Entering the attractor stops further variation outside the basin of the attractor, and thus restricts the freedom of the components of the system to behave independently. There is an increase of coherence, or a decrease of local entropy.
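A hedged sketch of Ashby's idea: in the toy system below (the double-well 'potential', step size and noise level are my own illustrative choices), stochastic variation lets the state wander through its one-dimensional phase space until it falls into the basin of the nearest attractor, after which the variation stays confined to that basin.

```python
import random

# Noisy gradient dynamics on the double-well potential V(x) = (x**2 - 1)**2,
# which has two point attractors, at x = -1 and x = +1.  Random 'variation'
# lets the state explore; once inside a basin it stays near that attractor.
random.seed(1)

def dV(x):
    """Slope of the potential V(x) = (x**2 - 1)**2."""
    return 4.0 * x * (x * x - 1.0)

x = 0.05                                   # start near the unstable point between the basins
for step in range(5000):
    x += -0.01 * dV(x) + 0.1 * random.gauss(0.0, 0.05)   # drift downhill + small noise
print(f"final state x = {x:.2f}  (the attractors are at x = -1 and x = +1)")
```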


My book Complexity Science discusses such things in substantial detail.

Saturday, 9 June 2012

31. Biological Evolution


In 1859, Charles Darwin announced one of the greatest ideas ever to occur to a human mind: cumulative evolution by natural selection (Richard Dawkins).

In the previous post (Part 30) I used the terms 'biological evolution' and 'natural selection' without realizing that I had not yet explained them. I do that in this post.

The name most associated with evolution is that of Charles Darwin. The year 2009 marked the bicentenary of his birth, as well as 150 years since the publication of his celebrated book On the Origin of Species by Means of Natural Selection. The basic idea of biological evolution by natural selection is remarkably simple, yet of fundamental importance:

Consider the mother-child relationship. Mothers go through very substantial pain, hardship, risk to health and life, deprivation, sacrifices etc., and yet most of them are happy to bear a child and rear it. Why is that so?


In a population of females, there would be some variation in attitude to motherhood. There are bound to be some who avoid undergoing all the hardships and sacrifices I mentioned above, and who consequently do not get pregnant and bear children. Similarly, there are some females in the population who are happy with the very thought of motherhood, pain and sacrifices notwithstanding. Such females not only contribute progeny to the population; their progeny are also likely to be like them, favourably disposed to the idea of motherhood.

Over many generations, the result would be that the percentage of females not inclined to become mothers decreases; no progeny means no representation of the tendency against motherhood in the population. This is 'natural selection': Nature selects in favour of those females who are happy being mothers, and weeds out those who are not. There is a gradual 'evolution' of this trait in the population, so much so that, in due course, there are hardly any females left who do not wish to bear children.

Such reasoning can be generalized to other forms of biological evolution. Living organisms are thermodynamically open systems, i.e. they are constantly exchanging matter and energy with the environment. There is a fair amount of dynamic equilibrium between a living organism and its surroundings. The organism cannot survive if this equilibrium is disturbed too much, or for too long. The fact that an organism survives implies that, in its present form, it has been able to adapt itself to the environment. If the environment changes slowly enough, organisms can evolve (over a long-enough time period) a new set of capabilities or features which enable them to survive even under the changed conditions. Over long periods of such evolutionary change, creatures may even develop into new species. This was the message of Darwin’s (1859) bold theory of evolution through cumulative natural selection.

A consequence of Darwin's theory was that all living organisms are the descendants of only one or a few simple ancestral forms.


Darwin started with the observation that, given enough time, food, space, and safety from predators and disease etc., the size of the population of any species can increase in each generation. But this indefinite (exponential) increase does not actually occur. In fact, usually only a small minority of the offspring reach maturity to produce the next generation of offspring; the rest die prematurely. Thus, there must be limiting factors in operation. Influenced by Malthusian ideas, Darwin imagined that if, for example, available food is limited, only a fraction of the population can survive and propagate itself. But what decides who will survive and who will not?

Darwin’s answer was that, since not all individuals in a species are exactly alike (i.e. since there is variation in the population), those better suited to cope with the prevailing conditions stand a better chance of survival ('survival of the fittest'). Moreover, the fittest individuals not only have a better chance of survival, they are also more likely to procreate. Thus, attributes conducive to survival and propagation have a better chance of getting ‘naturally selected’ at the expense of less conducive attributes. And the effects of this natural selection accumulate over time, i.e. over several generations. This is the process of cumulative natural selection recognized by Darwin.

Children tend to resemble their parents to a substantial extent. The reason is that the progeny of better-adapted individuals in each generation, which survive and leave behind more offspring than others, acquire more and more of those features which are conducive to good adaptation to the existing or moderately changing environment. A species perfects itself, or adjusts itself, to the environment in which it must survive, through the processes of both cumulative natural selection and inheritance.

Thus there are four basic features of Darwinian evolution (a small illustrative sketch follows this list):
  • Variability and variety in members of a population in the matter of coping with a given environment.
  • Inheritance of this variation by the next generation, with a few random modifications.
  • Differential survival and reproductive success of individual members of this new generation in the given environment.
  • Establishment of a new population more adapted to the environment, possessing new variations for passing on to the next generation.
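A minimal sketch of these four features (the population size, the mutation width and the interpretation of the 'trait' as a reproduction probability are my own illustrative choices): each individual carries a heritable trait that sets its chance of leaving offspring, offspring inherit the trait with a small random modification, and over the generations the population mean drifts towards better-adapted values.

```python
import random

# Toy cumulative natural selection.  Each individual has a heritable trait
# in [0, 1], interpreted as its probability of reproducing successfully.
random.seed(42)
population = [random.random() for _ in range(200)]     # initial variation

def next_generation(pop):
    """Differential reproduction plus inheritance with slight modification."""
    offspring = []
    while len(offspring) < len(pop):
        parent = random.choice(pop)
        if random.random() < parent:                   # fitter individuals reproduce more often
            child = parent + random.gauss(0.0, 0.02)   # inheritance with a random modification
            offspring.append(min(max(child, 0.0), 1.0))
    return offspring

for gen in range(101):
    if gen % 20 == 0:
        print(f"generation {gen:>3}: mean trait = {sum(population) / len(population):.2f}")
    population = next_generation(population)
```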
Darwin changed the way we humans perceive ourselves. And the basic idea of evolution by natural selection has gone far beyond the precincts of biology. Apart from biological Darwinism, we speak of chemical Darwinism, quantum Darwinism, neural Darwinism, and what not. What evolves in any open system of interacting entities is complexity.

Biological entities embody enormous amounts of order and organization. Can Darwinian evolution alone explain it? No. As Stuart Kauffman has emphasized, there is, in fact, an underlying complexity and order on which Darwinian evolution operates. Evolution of biological complexity is determined by two factors: self-organization (to be discussed in the next post), and natural selection. Self-organization or spontaneous ordering can occur in any open dynamical system, Darwinism or no Darwinism. Darwinian natural selection acts on this existing order and hones it further.

Wednesday, 6 June 2012

History Lessons


The teacher said, "Let's begin by reviewing some American History. Who said 'Give me Liberty, or give me Death'?"

She saw a sea of blank faces, except for Little Akio, a bright foreign exchange student from Japan, who had his hand up: "Patrick Henry, 1775," he said.

"Very good! -- Who said, 'Government of the People, by the People, for the People, shall not perish from the Earth'?"

Again, no response except from Little Akio: "Abraham Lincoln, 1863."

"Excellent!" said the teacher continuing, "Let's try one a bit more difficult -- Who said, 'Ask not what your country can do for you, but what you can do for your country'?"

Once again, Akio's was the only hand in the air and he said: "John F. Kennedy, 1961."
 
The teacher snapped at the class, "Class, you should be ashamed of yourselves, Little Akio isn't from this country and he knows more about our history than you do."

She heard a loud whisper: "Screw the Japs."

"Who said that? -- I want to know right now!? she angrily demanded.
Little Akio put his hand up, "General MacArthur, 1945."

At that point, a student in the back said, "I'm gonna puke."

The teacher glares around and asks, "All right! -- Now who said that?"

Again, Little Akio says, "George Bush to the Japanese Prime Minister, 1991."
 
Now furious, another student yells, "Oh yeah? -- Suck this!"

Little Akio jumps out of his chair waving his hand and shouts to the teacher, "Bill Clinton, to Monica Lewinsky, 1997!"

Now with almost mob hysteria someone said, "You little shit! -- If you say anything else -- I'll kill you!"
 
Little Akio frantically yells at the top of his voice, "Michael Jackson to the children testifying against him, 2004."

The teacher fainted. As the class gathered around the teacher on the floor, someone said, "Oh crap, we're screwed!"
 
Little Akio said quietly, "The American people, November 4, 2008."