
Saturday 25 August 2012

42. The Secondary Chemical Interactions

The covalent, ionic, and metallic interactions I described in Part 41 are called the primary interactions. They result in strong chemical bonds. A number of secondary interactions or bonds among atoms also exist, which are substantially weaker than the primary interactions. Particularly ubiquitous and important among these is the hydrogen bond. 

Take the example of water, H2O or H-O-H. The oxygen atom forms covalent bonds with the two hydrogen atoms. Each such covalent bond (O-H) has two electrons associated with it, one coming from hydrogen and one from oxygen. The electron distribution around the hydrogen nucleus in such a bond is not like that in a symmetrical bond like C-C in the structure of diamond. The oxygen nucleus has a charge number (Z) equal to 8, which is much more than the charge number 1 of H, so it hogs (attracts towards itself) a larger share of the electron charge cloud associated with the covalent bond; we say the oxygen atom is very electronegative. This makes the nucleus of the hydrogen atom somewhat less shielded by its electron than it would be in a free, unbonded atom. For similar reasons, the oxygen nucleus and its charge cloud of electrons are together a little more negative than they would be in an isolated atom of O. The end result is that the O-H bond in a water molecule is like a little dipole. There are two such bonds in the H-O-H molecule, so there are two positive ends and a negative end. Because the molecule is bent (the two O-H bonds make an angle of about 104.5° with each other), the two bond dipoles do not cancel, and the entire water molecule has a net 'dipole moment'.
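To see how the two bond dipoles add up, here is a minimal sketch in Python. It treats each O-H bond as a dipole of ~1.5 debye inclined at half the H-O-H angle to the molecular axis; both numbers are rough textbook values I am assuming, not quantities derived in this post:

```python
import math

# Rough textbook values (assumptions, not derived here):
bond_dipole_debye = 1.5   # dipole moment of one O-H bond, in debye
hoh_angle_deg = 104.5     # H-O-H bond angle in water, in degrees

# Each bond dipole makes half the H-O-H angle with the molecular axis.
# The components perpendicular to the axis cancel; the parallel ones add.
half_angle = math.radians(hoh_angle_deg / 2)
net_dipole = 2 * bond_dipole_debye * math.cos(half_angle)

print(f"Net dipole moment of H2O: {net_dipole:.2f} D")
# Prints ~1.84 D, close to the measured value of about 1.85 D.
```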

The water molecules, being dipoles, tend to orient themselves such that a positive end (a hydrogen end) of one molecule points towards the negative end (the oxygen end) of another molecule. So we speak of hydrogen bonds, denoted in this example by O-H…O.



The most crucial aspect of the hydrogen bond in the evolution of chemical and biological complexity is that it is of intermediate strength, not as strong as the covalent bond, and yet not as weak as the so-called van der Waals interaction (also called the London dispersion interaction):

The van der Waals interaction is very weak, and it is always present between any two atoms. Quantum-mechanical fluctuations in the electronic charge cloud around an atom can result in a transient charge separation, i.e. a fleeting dipole (or higher multipole) moment, and the electric field of this moment induces a moment on any neighbouring atom. The result is a small attraction between the two atoms: the so-called van der Waals attraction, interaction, or bond.



The energy required to break a chemical bond is a measure of its strength. The melting point of a solid is an indicator of the strength of the weakest bonding in it. The covalent bond is the strongest, with a typical bond energy of ~400 kilojoules per mole (kJ/mol). The ionic bond is typically about half as strong as the covalent bond. The metallic bond shows a wide range of strengths, two extreme examples being the weak bonding in mercury (a liquid at room temperature) and the very strong bonding in tungsten (which melts only at ~3400 °C). The strength of a hydrogen bond is typically ~14 kJ/mol. And van der Waals bonding involves energies below ~1 kJ/mol.

The most relevant fact for our purpose here is that the energy involved in hydrogen bonding is typically only about five to ten times larger than the energy of thermal fluctuations (RT ≈ 2.5 kJ/mol at room temperature), but is still much lower than the energy of a typical covalent bond.

At typical temperatures at which biological systems exist, it is difficult for thermal fluctuations to break covalent bonds, but there is a fairly good chance that they can break hydrogen bonds.
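A rough way of quantifying 'fairly good chance' is the Boltzmann factor exp(-E/RT), which estimates the relative probability that a thermal fluctuation supplies an energy E. Here is a minimal sketch, assuming the representative bond energies quoted above:

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 310.0      # body temperature, K

# Representative bond energies in kJ/mol (the rough values quoted above)
bonds = {
    "covalent O-H":     400.0,
    "hydrogen O-H...O":  14.0,
    "van der Waals":      1.0,
}

for name, energy in bonds.items():
    # Boltzmann factor: relative likelihood of a thermal fluctuation
    # of size `energy` at temperature T
    p = math.exp(-energy / (R * T))
    print(f"{name:18s} exp(-E/RT) = {p:.2e}")

# covalent bond: ~4e-68 (essentially never broken thermally);
# hydrogen bond: ~4e-3 (broken and re-formed all the time);
# van der Waals: ~0.7 (barely survives thermal agitation).
```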

We have seen above that water is an aggregate of tiny dipoles. We say that it is a polar material. By contrast, there are a large number of ‘hydrocarbons’ which are nonpolar materials. [A hydrocarbon is a compound made predominantly of hydrogen and carbon atoms.] In contrast to the O-H bond in water, which is a bond with a dipole moment, the C-H bond in a hydrocarbon is largely nonpolar: the two electrons forming the C-H covalent bond are shared almost equally between C and H, because the electronegativities of C and H are nearly equal. Thus a C-H bond hardly results in the creation of a dipole, and therefore it does not readily form a hydrogen bond with a water molecule.
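The near-equality can be seen from the Pauling electronegativity scale. A small sketch, using standard tabulated Pauling values (assumed here, not derived in the post):

```python
# Pauling electronegativities (standard tabulated values)
chi = {"H": 2.20, "C": 2.55, "O": 3.44}

for a, b in (("O", "H"), ("C", "H")):
    diff = abs(chi[a] - chi[b])
    print(f"{a}-{b} electronegativity difference: {diff:.2f}")

# O-H: 1.24 (strongly polar bond);
# C-H: 0.35 (nearly nonpolar bond).
```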

Now suppose we mix a nonpolar fluid with a polar fluid like water. Segregation will occur. The nonpolar molecules will tend to huddle together because they cannot take part in the hydrogen bonding of water. They have a kind of ‘phobia’ for water molecules, and so we speak of the hydrophobic interaction. Since the hydrogen bond is of intermediate strength, the hydrophobic interaction is also of intermediate strength.


There are many types of organic compounds that are predominantly of hydrocarbon (i.e. nonpolar) structure, but have polar functional groups attached to them. Examples of this type are cholesterol, fatty acids, and phospholipids. Such molecules have a nonpolar or hydrophobic end, and a polar or hydrophilic end. When put in water, they self-aggregate such that the hydrophilic ends point towards water, and the hydrophobic ends get tucked away, avoiding interfacing with water. The same hydrophobic effect is the reason why oil does not mix with water.



By contrast, alcohol and water mix so readily that no stirring is needed; both are polar liquids. As the king said: ‘I do not care where the water flows, so long as it does not enter my wine!’

Beautiful high-symmetry self-assemblies like micelles, liposomes, and bilayer sheets may ensue because of the hydrophobic interaction. Art without artist!



The second law of thermodynamics for open systems is the only self-organization principle there is; subject to the constraints of the first law of thermodynamics, of course. Much of the symmetry we see in Nature is a consequence of this law (Wadhawan 2011).


Saturday 18 August 2012

41. From Atoms to Molecules



How did life originate on Earth? We have to take the chemistry and biochemistry route to answer this question. In Part 18 I traced the sequence of events which led to the creation of atoms in our universe. From atoms to molecules was the next stage in the cosmic evolution of complexity. And molecular evolution or chemical evolution preceded the emergence and evolution of life.

The chemical symbol H is used for an atom of hydrogen, which is the first element in the periodic table of elements. It has a nucleus, which is just a proton in this case, and there is an electron orbiting around the nucleus. The electron has a negative charge, exactly equal in magnitude to the positive charge of the proton. Taking this quantity as the unit of charge, we say that an H atom has a charge number 1 (Z = 1). Taking the mass of the proton as the unit of mass, we say that H has a mass number 1 (A = 1). The electron is ~2000 times lighter than the proton.

Element number 2 in the periodic table is helium (chemical symbol He). There are two protons in its nucleus, and two electrons orbiting around the nucleus. There are also two neutrons in the nucleus. Neutrons are so called because they are charge-neutral. The mass of a neutron is only slightly greater than that of a proton. So, for the He atom, Z = 2, and A = 4.

Life on Earth is based on organic chemistry, i.e., the chemistry of the carbon atom, denoted by the symbol C. For this atom, Z = 6, and A = 12.

A molecule of hydrogen is denoted by the symbol H2. It consists of two nuclei of hydrogen, and there are two electrons orbiting around them. Why does hydrogen ‘prefer’ to exist as H2, rather than as H? Because H2 is more stable than two separate H atoms. Why? Consider the two electrons of H2. Quantum mechanics tells us that they have no individuality; they are indistinguishable. Let us consider either of them. Since positive and negative charges attract one another, this electron stays close (but not too close) to the two nuclei. [But for the Heisenberg uncertainty principle of quantum mechanics, the electrons of all the atoms would have gone right into their nuclei, and you and I would not be here, discussing the chemical evolution of complexity!] Naturally, the two positive nuclei of H2 exert a stronger attractive force on this electron than the single nucleus of an isolated H atom can. Thus H2 is more stable (it has a lower internal energy) than two separate H atoms because it is a more strongly bound entity. Hence H atoms form H2 molecules spontaneously, because by doing so the overall free energy gets reduced (the second law of thermodynamics demands that the free energy be as small as possible).




Formation of H2 from two atoms of H is an example of 'spontaneous increase of chemical complexity': an H2 molecule has a higher degree of complexity than an H atom because more information is needed for describing the structure and function of the molecule than of the atom.

What is the nature of the bonding between the two atoms of H2 or H-H? It is described as covalent bonding. Each of the two H atoms contributes its electron to the chemical bond between them, and the two electrons in the bonding region belong to both the nuclei.

Another kind of chemical bonding is the so-called electrovalent bonding (also called ionic bonding). It is the bonding that occurs between oppositely charged ions. Take sodium chloride (NaCl). For the Na atom, Z = 11, and for the Cl atom, Z = 17. The laws of quantum mechanics are such that an atom of Na is more stable if it is surrounded by only 10 electrons, instead of 11. Similarly, Cl is more stable if it has 18 electrons, rather than 17. They can solve the problem together by getting readily ‘ionized’; i.e., an Na atom can become a positively charged ion Na+ by losing an electron (called the valence electron), and a Cl atom can become a negatively charged ion Cl- by gaining an electron.  The two oppositely charged ions can lower the overall potential energy (and therefore the free energy) by coming close to each other, thus forming an ionic bond between them.
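The lowering of potential energy can be estimated with Coulomb's law. A minimal sketch, assuming the gas-phase Na+ to Cl- equilibrium separation of about 2.36 Å (a standard tabulated value) and ignoring the short-range repulsion between the ions:

```python
import math

# Physical constants
e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
N_A = 6.022e23     # Avogadro's number, 1/mol

r = 2.36e-10       # Na+ to Cl- separation in a gas-phase NaCl molecule, m

# Coulomb attraction between charges +e and -e at separation r
U = -e**2 / (4 * math.pi * eps0 * r)   # joules per ion pair

print(f"Coulomb energy per ion pair: {U / e:.1f} eV")
print(f"Per mole of ion pairs:       {U * N_A / 1000:.0f} kJ/mol")
# ~ -6.1 eV per pair, i.e. ~ -590 kJ/mol of ion pairs; the net bond
# energy is somewhat smaller once the short-range repulsion is included.
```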


The third important and generally strong type of bonding is metallic bonding. It occurs in metals like aluminium (Al), copper (Cu), silver (Ag), gold (Au), etc. Take the case of Al. For it, Z = 13. But like the atom of Na considered above, it is more stable if it has just 10 electrons around the nucleus. So Al atoms, when in the close vicinity of many other Al atoms, lose their three valence electrons to a common pool, and these valence electrons become the common property of all the Al ions. A lump of Al metal is held together by this cloud of negatively charged electrons, attracted to, and compensating for, the positive charges on the Al ions.



The covalent, electrovalent, and metallic bonds described above are the so-called primary bonds. They are strong bonds. Diamond, for example, consists entirely of covalently bonded carbon atoms, and is an extremely hard material. In metals also the atoms are quite strongly bonded to one another, as are the atoms in a crystal of sodium chloride in which the electrovalent interaction dominates.

There are a number of other types of bonds or interactions which are substantially weaker than the primary bonds, but are very important for biological systems in particular, and 'soft matter' in general. I shall describe them in the next post.

Saturday 11 August 2012

40. Cosmic Evolution of Information


It is perhaps a sobering thought that we seem so inconsequential in the Universe. It is even more humbling at first – but then wonderfully enlightening – to recognize that evolutionary changes, operating over almost incomprehensible space and nearly inconceivable time, have given birth to everything seen around us. Scientists are now beginning to decipher how all known objects – from atoms to galaxies, from cells to brains, from people to society – are interrelated (Chaisson 2002).
At the moment of the Big Bang, the information content of the universe was zero, assuming that there was only one possible initial state and only one self-consistent set of physical laws. When spacetime began, the information content of the quantum fields was nil, or almost nil. Thus, in the beginning, the effective complexity (cf. Part 38) was zero, or nearly zero. This is consistent with the fact that the universe emerged out of nothing.


As the early universe expanded, it pulled in more and more energy out of the quantum fabric of spacetime. Under continuing expansion, a variety of elementary particles got created, and the energy drawn from the underlying quantum fields got converted into heat. The initial elementary particles were thus very hot and rapidly increasing in number, so the entropy of the universe increased rapidly. And high entropy means that a large amount of information is required to specify the coordinates and momenta of the particles. This is how the degree of complexity of the universe grew in the beginning.



Soon after that, quantum fluctuations, resulting in density fluctuations and a clumping of matter, made gravitational effects more and more important with the passage of time. The present, extremely large, information content of the universe results, in part, from the quantum-mechanical nature of the laws of physics. The language of quantum mechanics is in terms of probabilities, not certainties. This inherent uncertainty in the description of the present universe means that a very large amount of information is needed for the description.

But why does the degree of complexity go on increasing? To answer that, I have to refer to the concept of algorithmic probability (AP) introduced in Part 34 while discussing Ockham’s razor. Ockham’s razor ensures that short and simple programs or 'laws' are the most likely to explain natural phenomena, which in the present context means the explanation of the evolution of complexity in the universe. I explained this by introducing the metaphor of an unintelligent monkey, typing away randomly the digits 1 and 0, each such sequence of binary digits offering a possible 'simple program' for generating an output that may explain a set of observations.
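To make the monkey metaphor concrete: in Solomonoff-Levin algorithmic probability, a specific program that is L bits long gets typed by a random sequence of coin flips with probability 2^-L, so shorter programs (simpler 'laws') dominate. A tiny illustrative sketch:

```python
# Probability that random coin flips produce a specific L-bit program:
# P = 2**(-L). Shorter programs ("simpler laws") are exponentially
# more likely to appear by chance.
for L in (8, 16, 32, 64):
    print(f"program length L = {L:2d} bits  ->  P = 2^-{L} = {2.0**-L:.2e}")
```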

The quantum-mechanical laws of physics are the simple computer programs, and the universe is the computer (cf. Part 23). But what is the equivalent of the monkey, or rather a large number of monkeys, injecting more and more information and complexity into the universe by programming it with a string of random bits? According to Seth Lloyd (2006), ‘quantum fluctuations are the monkeys that program the universe’.

The current thinking is that the universe will continue to expand, and that it is spatially infinite (according to some experts). But the speed of light is not infinite. Therefore, the causally connected part of the universe has a finite size, limited by what has been called the ‘horizon’ (Lloyd 2006). The quantum computation being carried out by the universe (cf. Part 23) is confined to this part. Thus, for all practical purposes, the part of the universe within the horizon is what we can call ‘the universe'. As this universe expands, the size of the causally connected region increases, which in turn means that the number of bits of information within the horizon increases, as does the number of computational operations. Thus the expanding universe is the reason for the continuing increase in the degree of complexity of the universe.
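An order-of-magnitude sketch of this kind of accounting, in the spirit of Lloyd's estimates: the Margolus-Levitin theorem bounds the number of elementary operations per second by 2E/(πħ) for a system of energy E. The mass and age figures below are rough round numbers I am assuming, not values from this post:

```python
import math

hbar = 1.055e-34        # reduced Planck constant, J*s
c = 3.0e8               # speed of light, m/s
mass = 1e53             # rough mass of ordinary matter within the horizon, kg
age = 13.8e9 * 3.156e7  # age of the universe, s

E = mass * c**2         # total energy within the horizon, J

# Margolus-Levitin bound: at most 2E/(pi*hbar) operations per second
ops_per_second = 2 * E / (math.pi * hbar)
total_ops = ops_per_second * age

print(f"Upper bound on operations performed so far: ~10^{math.log10(total_ops):.0f}")
# ~10^121, within an order of magnitude or two of Lloyd's
# published estimate of ~10^120 operations.
```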


The expansion of the universe is a necessary cause (though perhaps not a sufficient cause) for all evolution of complexity, because it creates gradients of various kinds: 'Gradients forever having been enabled by the expanding cosmos, it was and is the resultant flow of energy among innumerable non-equilibrium environments that triggered, and in untold cases still maintains, ordered, complex systems on domains large and small, past and present’ (Chaisson 2002). The ever-present expansion of the universe gives rise to gradients on a variety of spatial and temporal scales. And, ‘it is the contrasting temporal behaviour of various energy densities that has given rise to those environments needed for the emergence of galaxies, stars, planets, and life’ (Chaisson 2002).
In the grand cosmic scenario, there was only physical evolution in the beginning, and it prevailed for a very long time. While the physical evolution still continues, the emergence of life started the phenomenon of biological evolution:
Although it is difficult to say why the universe is so organized, the measured universal expansion since the Big Bang of space continues to provide a “sink” (a place) into which stars as sources can radiate: A progenitive cosmic gradient, the source of the other gradients, is thus formed by cosmic expansion. For the foreseeable future the geometry of the universe’s expansion continues to create possibilities for functionally creative gradient destruction, for example, into space and in the electromagnetic gradients of stars. Once we grasp this organization, however, life appears not as miraculous but rather another cycling system, with a long history, whose existence is explained by its greater efficiency at reducing gradients than the nonliving complex systems it supplemented (Margulis and Sagan 2002).

Saturday 4 August 2012

39. Evolution of Complexity in the Universe


In the last several posts I have introduced many of the basic concepts and jargon of complexity science, among them the inapplicability of reductionism and constructionism (Laplacian certainty) to complex systems.
I shall introduce more concepts as we go along, but let us now start our journey of tracing the evolution of complexity from the Big Bang onwards.



Immediately after the Big Bang the information content of our universe was nil. There was just a single force field or radiation field, with no alternative states, so the missing information was nil (recall the Shannon-information equation I = c log(1/P) from Part 21; when P = 1, we get I = 0).
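Here is that equation as a tiny sketch, taking c = 1 and base-2 logarithms so that information comes out in bits (these are conventional choices I am assuming, not specified in the original equation):

```python
import math

def shannon_information(p, c=1.0):
    """Missing information I = c * log2(1/p) for an outcome of probability p."""
    return c * math.log2(1.0 / p)

print(shannon_information(1.0))     # 0.0 bits: a certain outcome carries no information
print(shannon_information(0.5))     # 1.0 bit:  a fair coin flip
print(shannon_information(1/1024))  # 10.0 bits: rarer outcomes carry more information
```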

Very soon, structure appeared and the information content, or the degree of complexity, started increasing.

Chaisson (2001) identified three eras in the cosmic evolution of complexity.

1. In the beginning there was only radiation, with such a high energy density that there was hardly any structure or information content in the universe; it was just pure energy.

2. As the universe expanded and cooled, a veritable phase transition, or bifurcation in the phase-space trajectory, occurred, resulting in the emergence of matter coexisting with radiation. This marked the start of the second era, in which a high proportion of energy resided in matter, rather than in radiation.

3. The third era was heralded by the onset of 'technologically manipulative beings', namely humans.

An important way of defining the degree of complexity was introduced by Chaisson (2001), and it is different from the information-based definition I have given so far in terms of either the algorithmic information content (AIC) or the effective complexity. He emphasized the importance of a central physical quantity for understanding cosmic evolution, namely FREE-ENERGY RATE DENSITY, or specific free energy rate, denoted by Φ. Chaisson argued that 'energy flow is the principal means whereby all of Nature’s diverse systems naturally generate complexity, some of them evolving to impressive degrees of order characteristic of life and society'.

The flow refers to rates of input and output of free energy. If the input rate is zero, a system sooner or later comes to a state of equilibrium, marking an end to the evolution of complexity. If the output rate is zero, the system cannot export entropy (e.g. as waste heat), with disastrous consequences. Both input and output flow rates have to be nonzero and mutually compatible.

The energy per unit time per unit mass (quantifying the CHAISSON COMPLEXITY Φ) has the units of power per unit mass. Other similar quantities in science are: luminosity-to-mass ratio in astronomy; power density in physics; specific radiation flux in geology; specific metabolic rate in biology; and power-to-mass ratio in engineering.

Chaisson estimated the values of this parameter for a variety of systems. The results are amazing, and important. Here are some typical estimated values:

Galaxies (Milky Way):       0.5
Stars (Sun):                2
Planets (Earth):            75
Plants (biosphere):         900
Animals (human body):       20,000
Brains (human cranium):     150,000
Society (modern culture):   500,000

(All values of Φ in ergs per second per gram.)
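For readers more comfortable with SI units, here is a small sketch that converts Chaisson's numbers (as quoted above) from ergs per second per gram to watts per kilogram; the conversion factor is exact, since 1 erg/s/g = 10⁻⁷ W per 10⁻³ kg = 10⁻⁴ W/kg:

```python
# Chaisson's estimates of free-energy rate density, in erg/s/g (from the text)
phi_cgs = {
    "Galaxies (Milky Way)":     0.5,
    "Stars (Sun)":              2,
    "Planets (Earth)":          75,
    "Plants (biosphere)":       900,
    "Animals (human body)":     20_000,
    "Brains (human cranium)":   150_000,
    "Society (modern culture)": 500_000,
}

ERG_S_G_TO_W_KG = 1e-4   # 1 erg/s/g = 1e-7 W per 1e-3 kg = 1e-4 W/kg

for system, phi in phi_cgs.items():
    print(f"{system:26s} {phi * ERG_S_G_TO_W_KG:10.4f} W/kg")

# Note that per unit mass the human body (~2 W/kg) out-processes
# the Sun (~2e-4 W/kg) by four orders of magnitude.
```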

Thus the degree of complexity of our universe can be seen to be increasing rapidly. And we humans are responsible for much of this increase. When we emerged on the scene (through Darwinian evolution), we brought with us a relatively large brain and the ability to develop spoken and written language. Development of powerful computers followed in due course, as did immense telecommunication networks. Information build-up and flow is the stuff we thrive on.


So far there are no indications of life anywhere else in our universe. Leaving aside creatures with intelligence comparable to or surpassing that of humans, even the most primitive extraterrestrial life has not been found. Therefore, from the vantage point of the increase of complexity, the emergence of humans has turned out to be something of cosmic importance.


The free-energy-rate-density measure of complexity and its evolution is very useful. An alternative description can be given in terms of our original, information-based definition of the degree of complexity, namely the algorithmic information content (AIC). I shall do that in the next post.