
Saturday, March 31, 2012

21. The Nature of Information


How did life emerge from non-life? It did so through a long succession of processes and events in which more complex structures evolved from simpler ones. Beginning with this post, I shall take you on a fascinating and easily comprehensible journey, explaining how complexity science answers such questions. This and the next few posts will introduce some of the jargon and basic concepts in modern complexity science.


The second law of thermodynamics for open systems is the primary organizing principle for all natural phenomena (cf. Part 6). The relentless expansion and cooling of our universe has been creating gradients of various types, which tend to get annulled as the blind forces of Nature take the local systems towards old or new equilibria. New patterns and structures get created when new equilibrium structures arise.

Take any living entity; say the human body, or even a single-celled organism. The amount of information needed for describing the structure of a single biological cell is far more than the information needed to describe, say, an atom or a molecule. The technical term one introduces here is 'complexity'. We say that the biological cell has a much higher DEGREE OF COMPLEXITY than an atom or a molecule.

Let us tentatively define the degree of complexity of any object or system as the amount of information needed for describing its structure and function.

Since the degree of complexity has been defined in terms of 'amount of information', we should be clear about the formal meaning of 'information'. The word 'bit' is commonplace in this IT age. It is the short form for 'binary digit', and was brought into wide currency by Claude Shannon in 1948 (Shannon credited John W. Tukey with coining the term). A bit has two states: 0 or 1. Shannon took the bit as the unit of information.


One bit is the quantity of information needed (it is the 'missing' or 'not-yet-available' information) for deciding between two equally likely possibilities (for example, whether the toss of a coin will be 'heads' or 'tails'). And the information content of a system is the minimum number of bits needed for a description of the system.

The term ‘missing information’ is assigned a numerical measure by defining it as the uncertainty in the outcome of an experiment yet to be carried out. The uncertainty may be high either because any one of a large number (Ns) of outcomes may occur, or, what is the same thing, because the probability of any particular outcome is inherently low.

Suppose we have a special coin with heads on both sides. What is the probability that the result of a spin of the coin will be 'heads'? The answer is 100% or 1; i.e., certainty. Thus the carrying out of this experiment gives us zero information. We were certain of the outcome, and we got that outcome; there was no missing information.

Next, we repeat the experiment with a normal, unbiased, two-sided coin. There are two possible outcomes now (Ns = 2). In this case the actual outcome gives us information which we did not have before the experiment was carried out (i.e., we get the missing information).

Suppose we toss two coins instead of one. Now there are four possible outcomes (Ns = 4). Therefore, any particular experiment here gives us even more information than in the two situations above. Thus: Low probability means high missing information, and vice versa.
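The coin examples above can be checked with a few lines of Python. This is a minimal sketch, assuming (as in Shannon's convention) that information is measured in bits, i.e. with logarithms taken to base 2:

```python
import math

def missing_information_bits(num_outcomes: int) -> float:
    """Missing information I (in bits) for num_outcomes equally likely outcomes."""
    return math.log2(num_outcomes)

print(missing_information_bits(1))  # two-headed coin: 0.0 bits (outcome certain)
print(missing_information_bits(2))  # fair coin: 1.0 bit
print(missing_information_bits(4))  # two fair coins: 2.0 bits
```

Note how doubling the number of outcomes adds one bit, rather than doubling the information; this is the additivity property formalized below.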

To assign a numerical measure to information, we would like the following criteria to be met:

1. Since (missing) information (I) depends on Ns, the definition of information should be such that, if we are dealing with a combination of two or more systems, Ns for the composite system should be correctly accounted for in the definition of information. For example, for the case of two dice tossed together or successively, Ns should be 6 x 6 = 36, and not 6 + 6 = 12.

2. Information I for a composite or 'multivariate' system should be a sum (and not, say, a multiplication) of the information for the components comprising the system.

The following relationship meets these two requirements:

Ns ~ base^I.

Let us see how. Suppose system X has Nx states and the outcome of an experiment gives information Ix. Let Ny and Iy be the corresponding quantities for system Y. For the composite system comprising X and Y, we get NxNy ~ base^Ix base^Iy. Since Ns = NxNy, we can write

Ns ~ base^(Ix + Iy).

Taking logarithms of both sides, and writing Ix + Iy = I, we get

log Ns ~ I log(base)

or

I ~ log Ns / log(base).

What kind of logarithm we take (base = 10, 2, or e), and what proportionality constant we select, is a matter of context. All such choices differ only by some scale factor, and units. The important thing is that this approach for the definition of information has given us a correct accounting of the number of states, 'bins', or classes (i.e. by a multiplication (NxNy) of the individual states), and a correct accounting of the individual measures of information (i.e. by addition).

All the cases considered above are equiprobability cases: When the die is thrown, the probability P1 that the face with ‘1’ will show up is 1/6, as are the probabilities P2, P3, .. P6 that '2', '3', .. '6' will show up. For such examples, the constant probability P is simply the reciprocal of the number of possible outcomes, classes, or bins; i.e., Ns:

P = 1 / Ns.

Substituting this in the above relation, we get

I ~ log (1/P) / log (base).

Introducing a suitable proportionality constant c, we can write

I = c log (1/P).

This is close to the SHANNON FORMULA for missing information. Even more pertinently, this equation is similar to the famous Boltzmann equation for entropy:

S = k log W.

Entropy has the same meaning as missing information, or uncertainty. More on this next time.
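The chain of relations above is easy to verify numerically. The sketch below assumes base-2 logarithms, so that the proportionality constant is c = 1/ln 2 and I comes out in bits; it checks that I = c log(1/P) reproduces log Ns / log(base) in the equiprobable case, and that information is additive for two dice:

```python
import math

c = 1.0 / math.log(2)  # this choice of the proportionality constant gives I in bits

def info(P: float) -> float:
    """Missing information I = c * log(1/P) for an outcome of probability P."""
    return c * math.log(1.0 / P)

# Equiprobable case: P = 1/Ns, so I should equal log(Ns)/log(2)
I_one_die = info(1 / 6)     # one die: Ns = 6
I_two_dice = info(1 / 36)   # two dice: Ns = 6 x 6 = 36

print(f"I(one die)  = {I_one_die:.4f} bits")   # log2(6), about 2.585 bits
print(f"I(two dice) = {I_two_dice:.4f} bits")  # log2(36), about 5.17 bits

# Additivity: the composite system's information is the sum of the parts'
assert abs(I_two_dice - 2 * I_one_die) < 1e-9
```

The multiplication of state counts (6 x 6 = 36) thus shows up as an addition of information measures, exactly as required by the two criteria above.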

Saturday, March 24, 2012

20. The Anthropic Principle


Even slightly different values for some of the fundamental constants of Nature would have led to entirely different histories of the cosmos, making our emergence and existence impossible. Why do these parameters have the values they have? According to a ‘weak’ version of the anthropic principle:
The parameters and the laws of physics can be taken as fixed; it is simply that we humans have appeared in the universe to ask such questions at a time when the conditions are just right for our life. 
Life as we know it exists only on planet Earth. Here is a partial list of necessary conditions for its existence:

1. Availability of liquid water is one of the preconditions for our kind of life. Around a typical star like our Sun, there is an optimum zone (popularly called the ‘Goldilocks zone’), neither so hot that water would evaporate, nor so cold that water would freeze, such that planets orbiting in that zone can sustain liquid water. Our Earth is one such planet.

2. This optimum orbital zone should be circular or nearly circular. Once again, our Earth fulfils that requirement. A highly elliptical orbit would take the planet sometimes too close to the Sun, and sometimes too far, during its cycle. That would result in periods when water either evaporates or freezes. Our kind of life needs liquid water all the time.

3. The location of the planet Jupiter in our Solar system is such that it acts like a ‘massive gravitational vacuum cleaner’, intercepting asteroids that would have been otherwise lethal to our survival.

4. Planet Earth has a single relatively large Moon, which serves to stabilize its axis of rotation.

5. Our Sun is not a binary star. Binary stars can have planets, but their orbits can get messed up in all sorts of ways, entailing unstable or varying conditions, inimical for life to survive and evolve.


It is not only that the planet we live on is conducive to our existence; even the universe we live in (with its operative set of laws of physics) is so. The 'cosmological' or 'strong' version of the anthropic principle says that:
Our universe has the fundamental constants and the laws of physics that are compatible with our existence; had they been different (i.e. inimical to our existence), we would not be here, discussing the principle.
The chemical elements needed for life were forged in certain stars, and then flung far into space through supernova explosions (cf. Part 18). This required a certain amount of time. Therefore the universe cannot be younger than the lifetime of the stars. The universe cannot be too old either, because then all the stars would be ‘dead’. Thus, according to the cosmological anthropic principle, life exists only when the universe has the age that we humans have measured it to be, and has the physical constants that we measure them to be.

Rees (1999), in the book Just Six Numbers, listed six fundamental constants which together determine the universe we see. Their values are such that even a slightly different set of these six numbers would have been inimical to our emergence and existence. Consideration of just one of these constants, namely the strength of the strong nuclear interaction (which determines the binding energies of nuclei), is enough to make the point. It can be roughly defined as the fraction of mass which is released as energy when hydrogen nuclei fuse to form helium. Its value is 0.007 (a dimensionless number), which is just right (give or take a small acceptable range) for any known chemistry to exist; and no chemistry means no life:

Our chemistry is based on reactions among the 90-odd elements. Hydrogen is the simplest among them. Many of the other elements in our universe got synthesised by fusion of hydrogen atoms. This nuclear fusion depends on the strength of the strong nuclear interaction, and also on the ability of a system to overcome the intense Coulomb repulsion between the fusing nuclei. Extremely high temperatures are one way of overcoming this Coulomb repulsion. A small star like our Sun has a temperature high enough for the production of only helium from hydrogen. As explained in Part 18, the other elements in the periodic table have been made in the much hotter interiors of stars larger than our Sun. The value 0.007 for the strong interaction determined the upper limit on the mass number of the elements we have here on Earth and elsewhere in our universe. A value of, say, 0.006 would mean that the universe would contain nothing but hydrogen, making impossible any chemistry whatsoever. And if it were too large, say 0.008, all the hydrogen would have disappeared by fusing into heavier elements. No hydrogen would mean no life as we know it; in particular, there would be no water.

Similarly for the other fundamental constants of our universe.


But why? Why does the universe have these values for the fundamental constants, and not some other set of values? A fallout of Hawking’s model for our universe (cf. Part 1 and Part 19) is that even the strong anthropic principle acquires validity, provided it is stated properly and in the context provided by the M-theory. The new statement of the strong version goes something like this:
Out of the various possible universes, our universe just happens to have the fundamental constants and physical laws it has; other universes (which we cannot observe) have different laws of physics and different values for the fundamental constants. Our existence in our universe has been possible because it is compatible with our laws of physics and fundamental constants; other universes may or may not be conducive to life of any kind.
Watch this great video for visualizing the 11 dimensions, and for getting a feel for much else in modern cosmology:


Saturday, March 17, 2012

19. Why Are the Laws of Nature What They Are?


This used to be a very profound question till recently. Developments in physics during the last few decades have now made it rather trivial and trite.

Such questions used to be in the domain of philosophy, and human history can boast of a truly dazzling succession of great philosophers. But the question now is: What is the true worth of an otherwise great philosopher who was/is innocent about the finer points of quantum mechanics?

Modern physics has come up with plausible answers to fundamental questions over which philosophers fretted for centuries. No wonder, Hawking & Mlodinow wrote this in 2010, somewhat facetiously perhaps:
Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.
An answer to the question 'Why are the laws of Nature what they are?' comes from M-theory (cf. Part 14). According to it, there are actually 11 dimensions. We see only four because the rest of them have got 'curled up' so much that they are not visible to us. There are ~10^500 different modes of curling up, meaning that that many different universes are possible. One of them is the universe we inhabit. The apparent laws of a universe depend on how the extra dimensions in that universe got curled up. We say ‘apparent laws’, because the more fundamental laws are those of the M-theory.

Thus, there are multiple universes, or MULTIVERSES, each with its own set of apparent laws. We just happen to be living in a universe with a certain set of laws and a certain set of values for the fundamental constants. If the laws of a universe are not conducive to emergence and evolution of life, living beings cannot possibly exist in that universe, discussing such questions.

In Newtonian physics, the past was visualized as a definite series of events. Not so in quantum physics. No matter how thoroughly and accurately we observe the present, the unobserved past, as also the future, is indeterminate, and exists only as a ‘spectrum of possibilities’. This means that our universe does not have just a single past or history. Since the origin of the universe was a quantum event, Feynman’s sum-over-histories formulation for going from spacetime point A to spacetime point B occupies centre-stage (cf. Part 4). But we have knowledge only about the present state of the universe (point B), and we know nothing about the initial state A. Therefore, as emphasized by Hawking, we can only adopt a ‘top down’ approach to cosmology, wherein every alternative history of the universe exists simultaneously, and the histories relevant to us are those which, when summed up, have a high probability of giving us our present universe (point B).

The picture that emerges is that many universes emerged spontaneously (simultaneously or otherwise). Most of these universes are not relevant to us because their apparent laws are not conducive to our emergence and survival. The M-theory offers ~10^500 possibilities of start-up universes. We have to single out those which correspond to the curling up of exactly those dimensions which we find to be the case for the universe we inhabit. Further, we have to select those histories which reproduce, for example, the observed mass and charge of the electron, and other such observed fundamental parameters.


WHAT IF THE M-THEORY DOES NOT GET DUE VALIDATION? The multiverse idea would still be intact, via the cosmic-inflation theory (cf. Part 17). The inflation episode is an integral part of modern cosmology.


It is now time to recapitulate some points I have made in these 19 posts. First I gave purely classical arguments to explain how our universe could emerge out of 'nothing', without a violation of the principle which says that the total mass/energy is always conserved, and that nothing extra can get created. This classical argument is simple to understand, but is, at best, only a crude statement. The real explanation has to come from the laws of quantum mechanics, because these laws govern all natural phenomena. The vacuum state in quantum field theory is not at all a state of 'nothingness'. It has an energy of its own. Our universe emerged out of vacuum as a quantum fluctuation, without violating the principle of conservation of energy/mass. And the M-theory and the cosmic-inflation theory are powerful explanations for why our universe has the laws it has. Our universe got created in accordance with the laws of physics (or rather because of them), without the help of a Creator.

Euclidean geometry holds true in our universe; i.e., ours is a flat-geometry universe. Which is just as well. As explained in an accessible language in a recent book 'A Universe from Nothing', only a flat-geometry universe can satisfy the requirement that the sum total of positive and negative contributions to the overall energy of the universe add up to zero. The energy-conservation law was not violated when our universe emerged out of 'nothing'. The total energy is still zero.

Watch this video for more:


The next set of posts in this series will explain how life emerged out of non-life, without any help from a Creator or Designer. It was all a matter of evolution of 'complexity' in an expanding and cooling (and therefore gradient-creating) universe. At our terrestrial level, the steady ingress of solar energy into our ecosphere has been the local factor creating gradients or non-equilibrium situations. The natural tendency to seek equilibrium configurations often leads to new patterns and structures. This is how 'complexity' evolves. The emergence of life is one such example of what thermodynamically open systems can achieve. There is nothing divine or mystical about that.

From now on the narrative will become more and more life-centric and anthropocentric. In the next post I shall discuss the Anthropic Principle.

Saturday, March 10, 2012

18. We Are Star Stuff


Matter is much older than life. Billions of years before the sun and earth even formed, atoms were being synthesized in the insides of hot stars and then returned to space when the stars blew themselves up. Newly formed planets were made of this stellar debris, the earth and every living thing are made of star stuff (Carl Sagan).
About ten million years after the Big Bang, enough cooling and expansion had occurred to fill the universe with a mist of particles, containing mostly hydrogen and some helium, as also some types of elementary particles (including neutrinos), some electromagnetic radiation, and perhaps some other, unknown, particles. The universe was just cold, dark, and formless at that stage.

Then, when enough cooling had occurred, some quantum-mechanical primordial fluctuations in the densities of the particles resulted in a clumping of some of the particles, rather like the nucleation that precedes the growth of a crystal from a fluid. The presence of such clumped particles brought the gravitational forces into prominence, resulting in a cascading effect. Portions of the mist began collapsing into large swirling clouds. Over a period of a few hundred million years, huge galaxies, each containing billions of young stars of various sizes, formed and began to shine. The formless darkness of the initial period was gone.

The large superstars among these were strongly bright spheres, the brightness coming from the nuclear fusion of hydrogen and helium in their interiors, made possible by the prevailing extreme temperatures and pressures. This is how many of the heavier elements got formed in the interiors of these large stars.

The emergence of heavier elements by the process of nuclear fusion continued steadily until the element iron (Fe) started forming. The iron nucleus is the most stable of them all, having the largest binding energy per nucleon. [Protons and neutrons inside the nucleus are jointly called nucleons; and 'binding energy' is defined as the amount of energy required to extract a nucleon from inside the nucleus and take it far away from it]. Therefore iron cannot fuse with one or more nucleons and release radiative energy of the nuclear process; such a process would not lower the potential energy and the free energy. Consequently, the presence of iron acts as a 'poison' for the nuclear fusion process. Thus the appearance of iron marked the beginning of the end of the available nuclear fuel, and therefore the end of the life of the star. In due course, the smaller among such stars simply ceased to shine, shrinking into cold and dead entities.

But a very different fate awaited the larger stars. No longer able to sustain their size because of the progressively decreasing processes of nuclear fusion of elements, they began to collapse under their own immense gravitational pull. A rapid change occurred in their interiors. Under the immense squeezing generated by the collapse, the iron-element core imploded. This resulted in a new state of matter as the electrons and the protons in the atoms were squeezed together. The dominant process of interaction was the electroweak interaction: p+ + e- → n0 + νe (i.e., protons and electrons combined to produce neutrons and electron-neutrinos).

Thus, this collapse led to a compression of the star to an extremely dense ball of pure neutron matter, with the neutrino cloud bursting outwards, resulting in an explosion of the outer shell of the star (the 'SUPERNOVA EXPLOSION'). This is how the synthesized elements (up to the atomic number (Z) for iron), residing in the outer shell of the star, were scattered into the universe, accompanied by a brilliant flash of light.


A consequence of such supernova explosions (which still occur from time to time, and illuminate the galaxies with brilliant flashes of light) was the emergence of clouds of dust and gas and the debris containing heavy elements. These clouds encircled the galaxies in spiraling arms. THE INTENSITY OF THE SUPERNOVA EXPLOSIONS AND THE TEMPERATURES INVOLVED WERE SO HIGH THAT ELEMENTS HEAVIER THAN IRON WERE ALSO SYNTHESIZED AND SCATTERED INTO SPACE.

The chemical elements in our bodies have come from that star stuff: In the outer portions of the spirals occurred a condensation of the dust, the clouds and the debris, resulting in the formation of the second generation of (smaller) stars (including our Sun), as also planets, moons, comets, asteroids, etc.


Our solar system was formed when the universe was ~9 billion years old. In the initial period, our Earth underwent several violent upheavals (bombardment by comets and meteors, as also huge earthquakes and volcanic eruptions). By the time the Earth was ~2.5 billion years old, its continents had formed. Life appeared in due course, which further influenced the ecosphere in a major way. In particular, free oxygen (as opposed to oxygen chemically bound with other elements) was liberated as a waste product by the algae that consumed carbon dioxide present in the atmosphere and in the oceans.

Two billion years ago, our Earth was extremely radioactive as well. The heavier-than-iron elements produced in the outer shell of the exploding stars during the supernova explosion were/are radioactive, as their binding energy per nucleon was lower than that of iron: Such elements can increase their binding energy per nucleon (and thus attain a more stable state) by undergoing nuclear fission, either spontaneously or with the assistance of free neutrons. Uranium was among the heaviest elements produced during the last few seconds of the supernova explosion. Thus this element was a part of the Earth right from the beginning.

Watch this video for an interesting visualization of our cosmic and terrestrial history:

Saturday, March 3, 2012

The Secret Life of Flowers


I asked the rose how long
was its life.
The bud heard
and softly smiled.
(Mir Taqi Mir)



Click HERE and enjoy this remarkable video
about the secret life of flowers.


Savour the thought that you have inherited such a beautiful Earth.

This is OUR Earth.
Let us worry about what we are doing to it.

17. The All-Important Cosmic Inflation Interlude


Our universe has been expanding. This means that if we extrapolate backwards in time, there must have been an instant when there was (almost) a point universe. That was the Big Bang moment, followed by continual expansion. But for explaining our observations of the cosmos, something additional had to be postulated, namely a very brief period of exponentially rapid expansion ('inflation') very soon after the Big Bang. After this inflation interlude our universe settled to a far slower expansion rate.

Alan Guth was the originator of the inflation postulate, which solves, among other things, the Horizon Problem (cf. Part 8) and the Flatness Problem (Part 15).


For postulating inflation, Guth was inspired by the analogy of a 'first-order' phase transition. As an example, consider the freezing phenomenon in water. As water is cooled, its free energy changes. There comes a temperature (the freezing point, 0 °C) below which a different phase, namely ice, has a lower free energy than liquid water. Therefore, as demanded by the second law of thermodynamics, a phase transition should occur from liquid water to ice. Yes, but that is not the full story.

Suppose you take extremely pure water, and also ensure that there are no disturbances like vibrations etc. Then you find that you can 'supercool' it; i.e., it continues to be a liquid even below 0 °C. This happens because each phase of H2O is stable in a certain range of temperatures, and, since it is a first-order phase transition, there is no reason that 0 °C be the temperature where one stability range ends and the other begins. There is a small range of temperatures around 0 °C in which both phases are stable, meaning that liquid water and ice can coexist. But below 0 °C ice does have a lower free energy than liquid water. Therefore even the slightest disturbance to supercooled water can make it undergo rapidly the arrested phase transition to ice. When this happens, the trapped excess free energy gets released. We call it 'latent heat'.

Something similar happened to the nascent universe. As it was cooling after the Big Bang, it got trapped in a supercooled or metastable state. On further cooling, a sudden, explosive, phase transition occurred. Here the equivalent of the 'latent heat' released was the positive vacuum energy (of the 'false vacuum' of the metastable state). The volume of the tiny universe increased by a factor of at least 10^78 during the inflation, which occurred during the early part of the electroweak epoch: It started ~10^-36 seconds after time-zero, and ended ~5 x 10^-33 seconds later.


Let us put in some numbers. Even for the entire inflation period, ∆t ~ 10^-33 sec only. The uncertainty principle says that ∆E ∆t ≥ h/(4π). Since h/(4π) ~ 10^-27 erg sec, we get ∆E ~ 10^6 erg, or ~10^9 GeV. Just about any value of ∆E larger than this is also possible as a quantum fluctuation (resulting in the appearance and disappearance of 'virtual particles') if ∆t is appropriately smaller than 10^-33 sec. Normally this would be of no serious significance if the virtual particles could disappear within the time ∆t. But space was doubling in length every 10^-37 seconds during inflation, so THE MOMENTARY INHOMOGENEITIES CREATED BY QUANTUM FLUCTUATIONS GOT YANKED APART RAPIDLY, AND WERE FROZEN IN SPACE WHEN INFLATION ENDED; there was no going back.
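This order-of-magnitude estimate is easy to reproduce. The sketch below uses the standard values for the Planck constant and the GeV-to-erg conversion, with ∆t ~ 10^-33 s taken from the discussion above:

```python
import math

h = 6.626e-27                 # Planck constant, in erg·s
dt = 1e-33                    # duration of the inflation interlude, in seconds
dE = h / (4 * math.pi * dt)   # minimum energy uncertainty: dE >= h/(4*pi*dt)

erg_per_GeV = 1.602e-3        # 1 GeV is about 1.602e-3 erg
print(f"dE ~ {dE:.1e} erg ~ {dE / erg_per_GeV:.1e} GeV")
```

The result is of order 10^6 erg, i.e. of order 10^9 GeV, consistent with the estimate quoted above (to within the factor-of-a-few slack inherent in such order-of-magnitude arguments).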

The 'false vacuum' energy was of essentially the same nature as that coming from Einstein's cosmological constant, or dark energy.

Predictions of this cosmic-inflation model are in conformity with the characteristic density inhomogeneities recorded in 2006 in the CMB map (for a flat-geometry universe at the end of the inflation epoch).


The expansion of space during the inflation interlude was much faster than the speed of light. This is possible. According to the special theory of relativity (cf. Part 10), nothing can travel through space at a speed faster than that of light. But there is no limit on the speed with which space itself can expand.

Let us note two more things here:

(i) There was and is gravitational interaction among the frozen fluctuations, and the gravitational potential makes a negative contribution to the total energy (cf. Part 2), so the total energy content of the universe was and is close to zero.

(ii) 380,000 years after the Big Bang the evolution of the structure frozen at the end of the inflation episode led to what we see in the CMB map today. In due course, this structure evolved into the present configuration of galaxies, clusters of galaxies, life, people.

You can appreciate why the inflation postulate is such an integral part of modern cosmology. There is more to it still: INFLATION ALSO EXPLAINS WHY THERE SHOULD BE A MULTIVERSE (i.e. many universes).

To understand that, let us hark back to the phase transitions in water, but this time let us consider boiling instead of freezing. Above 100 °C, vapour or steam is a more stable phase of H2O than the liquid phase. And it is again a first-order phase transition, meaning that there is a range of temperatures in which liquid water and steam can coexist. Because of thermal fluctuations and other local variations, bubbles of steam start appearing even below 100 °C, and they do so with increasing frequency as the temperature is increased, till the whole system starts boiling.


Something similar happened during cosmic inflation, and this phenomenon is called 'chaotic inflation'. There was a time interval during which inflation occurred, and at any instant during this interval a different universe could inflate and go its separate way, like a bubble of steam in the above picture.


So, not just our universe, but a whole lot of universes, emerged before the end of the inflation era, each with its own value for the cosmological constant and other fundamental constants and laws of physics. There is a multiverse, rather than just one (i.e., our) universe.