
Saturday, 3 May 2014

Science, Scientists, and Scientific Temper in Society

This article brings out the role professional scientists should play in countering the harmful, unscientific trends in society. Some India-specific suggestions are also made for strengthening the fight against superstition and irrationality.

Introduction

It appears that, so far as the average professional scientist is concerned, there is not much correlation between having had a career in science and possessing scientific temper. There are many scientists, even good and successful ones, who lack scientific temper in their day-to-day actions and thinking.

The public expression or exhibition of a lack of scientific temper by an eminent scientist has a far more serious effect on society than that by other intellectuals. Therefore the reasons behind this behaviour of many scientists need to be investigated and discussed. This article does that by engaging such scientists on their own turf. The situation in India is also discussed briefly, and some proposals are made which can go a long way in promoting the cause of scientific temper in our society.

The Scientific Method

Science is about investigating natural phenomena by following the so-called 'scientific method'. Wikipedia describes this method as follows: 'The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on empirical and measurable evidence subject to specific principles of reasoning. The Oxford English Dictionary describes the scientific method as: "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses"'.


The basic scientific approach is as follows. Suppose there is a set of observations about a natural phenomenon which we wish to explain. The scientific method for doing this is the following 8-fold way:

1. A minimum necessary set of axioms. There is an agreed, minimum necessary, set of axioms, which are taken as givens (their validity is either a matter of assumption, or has been established already).

2. Logic. There is an agreed set of rules for logical reasoning.

3. Hypothesis. The logical rules for reasoning, as well as the axioms, are used along with a hypothesis (or model) for describing and interpreting the observations we humans have made about the natural phenomenon under investigation. It is not important how the hypothesis is arrived at, because it is always going to be tested thoroughly and repeatedly. And there can even be more than one competing hypothesis for explaining the same set of observations or material evidence.

4. Agreed meaning of each word. Every word used for making any statement in science should have the same agreed meaning for everybody. This requirement becomes particularly important when concepts like 'consciousness' are discussed or investigated. In the scientific method, a useful trick often employed wittingly or unwittingly is to define concepts in terms of things that are observable or, better still, measurable.

5. Verification by objective and reproducible observation. A hypothesis must be able to explain the observations in a logically consistent way, and it must successfully stand the test of repeated experimental verification. If its success is only partial, we try to modify and improve it, and then check against the observations again. That is how we arrive at the best, i.e. the most successful, hypothesis at a given point of time in our history.

6. Predictive capability of the hypothesis. A validated hypothesis is an example of 'induction', i.e. inference of a general or universal conclusion from a number of singular or individual observations.  Our confidence in its validity grows if it not only explains what is already observed, but also enables us to 'deduce' correctly some predictions about what more can be expected to be observed about the natural phenomenon under investigation. Thus both induction and deduction are parts of the scientific method.

7. Elevation of a hypothesis to the status of a theory. A hypothesis (or a (mutually consistent) set of hypotheses) that has repeatedly stood the test of experiment, and that can successfully predict and explain a whole range of experimental observations, gradually acquires the status of a theory.

8. The falsifiability requirement. During the entire process of: (i) statement of the research problem, (ii) use of logical reasoning, and (iii) drawing of conclusions from the data and the reasoning, the most important constraint imposed by the scientific method is that only falsifiable statements can be made. The term 'falsifiable statement' was introduced by Karl Popper (2005). I explain its meaning with the help of an example.

Consider the following statement S1 (Wudka 1998):

S1: 'The moon is populated by little green men who can read our minds and will hide whenever anyone on Earth looks for them, and will flee sufficiently quickly into deep space whenever a spacecraft comes near'. This statement is so worded that no one can ever observe the postulated green men and demonstrate that the statement is false; so the statement is unfalsifiable (and therefore not permitted in scientific discourse).

Next, consider the following statement:

S2: 'There are no little green men on the moon'. This is a falsifiable statement. All you have to do to prove it false is to show material evidence for the existence of even one green man. Berry (2010) attributes the following famous statement to Einstein: 'Many experiments may prove me right, but it takes only one to prove me wrong'.

Only falsifiable statements are permitted in the scientific method. Therefore S1 is an unscientific statement or theory, and S2 is a scientific statement or theory.
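The asymmetry between confirmation and falsification can be made concrete with a small sketch. The following Python snippet (mine, with hypothetical 'observations'; it is only an illustration of the logic) shows that no number of confirming observations proves a universal claim like S2, while a single counterexample refutes it:

```python
# A sketch (with hypothetical observation data) of the asymmetry between
# confirmation and falsification for a universal claim such as
# S2: 'There are no little green men on the moon'.

def is_falsified(claim_holds_for, observations):
    """A universal claim is falsified as soon as one observation violates it."""
    return any(not claim_holds_for(obs) for obs in observations)

# Hypothetical observations: each entry is one survey of a lunar site.
observations = [
    {"site": "Mare Tranquillitatis", "green_men_found": 0},
    {"site": "Tycho crater",         "green_men_found": 0},
    {"site": "South Pole-Aitken",    "green_men_found": 0},
]

# S2 expressed as a predicate on a single observation.
s2 = lambda obs: obs["green_men_found"] == 0

print(is_falsified(s2, observations))   # False: S2 has survived so far, but is not thereby proved
observations.append({"site": "Oceanus Procellarum", "green_men_found": 1})
print(is_falsified(s2, observations))   # True: one counterexample is enough to refute S2
```

S1, by contrast, cannot be encoded as such a predicate at all: it is worded so that no conceivable observation could ever register a violation, which is exactly what makes it unfalsifiable.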

In work beginning in the 1930s, Popper gave falsifiability renewed emphasis as a criterion for acceptable statements in science. He also pointed out that not all unfalsifiable claims are fallacious; they are just unfalsifiable. As long as proper skepticism is retained and proper evidence is given, even an unfalsifiable claim can be a legitimate part of reasoning (though not of what finally becomes a part of science). We should never assume that we must be right simply because we cannot be proved wrong.





Why did Popper emphasize the falsifiability requirement? It was an effort to tackle what he called 'the problem of induction'. As stated above, the process of doing science involves generalization from individual observations, and this is always fraught with uncertainty. How many observations or measurements should we make so as to be able to generalize correctly? Generally, all we can say is: the larger the number, the better. But there is always the possibility that the next observation (which we did not make) may go against the generalization. So we can only have low or high probabilities, but not certainties, in the induction process. The larger the number of observations which agree with the generalization, the more likely it is that the generalization is valid.

Similarly, the greater the variety of conditions in which the observations and measurements are made, the greater the probability that the inductive generalization is true. The question arises: Which variations in the conditions of observation and measurement are significant and relevant, and which ones are not? This is decided by the theory we believe in for the domain of investigation. If the theory is wrong, we are likely to be led astray, till somebody comes up with a better theory.

Thus, because of 'the problem of induction', strong or weak likelihood, rather than complete certainty, is what the inferred laws of science are all about. Popper emphasized the falsifiability requirement in an effort to minimize the chances of inductivism going wrong. At the centre of the scientific method is the act of making statements based on existing theories. By restricting ourselves strictly to making only falsifiable statements, we are ensuring that even a single observation or measurement that disagrees with the pre-supposed hypothesis or theory is enough to dismiss the generalization, namely the theory, we inferred by the process of induction.

Notice the intellectual humility of the scientist. Scientific spirit means an ever-present willingness to give up even our pet theories and opinions if the evidence demands so. Contrast this with what is said in most of the organized religions. In them, certain statements cannot be questioned, and there are statements or beliefs in them which are unfalsifiable.

Votaries of faith may be quick to point out that the choice of axioms, mentioned in the 8-fold way above, is also a matter of faith. No, it is not. To understand why, let us consider the example of quantum theory.

All natural phenomena are governed by the laws of quantum mechanics. Why the laws of Nature are what they are is something I have discussed elsewhere (Wadhawan 2012a). Another article of mine on the anthropic principle is also relevant in this context (Wadhawan 2012b). The laws of quantum mechanics are highly counter-intuitive for us humans. The quantum theory is based on certain assumed axioms, like any theory is. But the most important thing here is that the quantum theory is the most repeatedly and the most thoroughly tested theory ever. It is the best theory we have at present for understanding the world around us. If anybody does not agree, he/she is most welcome to come up with another theory, with its own set of axioms and logical structure. If the new theory is better supported by experimental evidence than the present quantum theory, science and scientists will have no compunctions whatsoever in abandoning the existing theory, and accepting the new one. This is not faith and reverence; it is, in fact, the negation of all that.


The Nature of Reality

Does the ongoing and cumulative activity of scientists lead to an unraveling of reality? Before answering this question it is important to be clear about the meaning of the word 'reality'.

The term 'reality' used in the question above normally stands for 'absolute reality'. There is often the assumption that all quest for truth really aims at unraveling and understanding absolute reality. But the fact is that there is no such thing as absolute reality. If you do not agree, just try defining it, using words that mean the same thing to everybody. I think you cannot.

As argued convincingly by Hawking and Mlodinow (2010), all that we can have is 'model-dependent reality' (MDR); any wider or deeper notion of reality is a baseless myth, if not worse. I explain.

Does something or somebody exist when we are not viewing it? There are two opposite models for answering this question, the subjective model (idealism) and the objective model (materialism). Which model of 'reality' is correct? Naturally the one that is self-consistent and most successful in terms of its predicted consequences. In my opinion, this is where materialism wins hands down. The materialistic model is that the entity exists even when nobody is observing it. This model is far more successful in explaining 'reality' than the opposite model. And we can do no better than build models of whatever there is to observe, understand, and explain.

Suppose 100 persons are asked to describe an object, including its colour, and all of them say that it is a chair. Further, suppose 98 of them say that it is a red chair, but the other two disagree about the colour seen by the majority. If further investigation shows that these two persons have a colour-blindness problem, the model of reality we humans build is that the object is a red chair.

But suppose it turns out that these two persons are not colour-blind, and no matter what tests we carry out, we are unable to explain why they do not see or describe the chair as red. We then go (tentatively) by the majority view, or consensus. Of course, any model of reality must change in the light of new data and insights. This is the approach we adopt in science for building up our knowledge. We build models and theories of reality, and we accept those which are most successful in explaining what we humans observe collectively.

A scientific model is a good model if:
it is elegant and self-consistent;
it contains no or only a few arbitrary or adjustable parameters;
it explains most or all of the existing observations; and
it makes detailed and falsifiable predictions.

That brings me to M-theory (see Wadhawan 2012c) and the cosmic-inflation model in cosmology (see Wadhawan 2012d). Are they good models of reality? There are eminent scientists who vehemently attack both of them, and have even proposed alternative models. Nothing unusual about that. At the cutting edge of science, the edge is often blunt or nebulous rather than sharp: experts disagree on many issues, and fight it out. But out of this informed debate a consensus emerges gradually, usually when additional ('issue clincher') data become available, or when some genius formulates a great new model. M-theory and the multiverse idea are the most accepted formulations we humans have at present, even though there are many arbitrary-looking parameters and loose ends. In due course these models will either get established or be rejected, but they are the best (i.e. most accepted, even beautiful) models of reality at present.

The cosmic-inflation model ties up so many loose ends in cosmology, and explains so many observations, that some form of it is likely to survive in any scientific version of cosmology. Reality is nothing deeper than the best available scientific model for it. Often a phenomenon or entity is so complex that no sensible model has been formulated yet. In such a case, we have to wait till science makes more progress and there is general agreement among experts.


Ockham's Razor and Information Theory

In the 8-fold way of the scientific method, the axioms play a basic role. The important question is: How many axioms should we have? I shall take up a case study for answering this question.

There is the so-called 'Copenhagen interpretation' (CI) of quantum mechanics, formulated by the great scientist Niels Bohr in 1927, jointly with Heisenberg (another venerated scientist) (see Faye 2008). According to the CI, humans and the equipment they use exist in a classical world which is different from the quantum world. A quantum state is a superposition of two or more states, but when it interfaces with the classical world (at the moment of measurement), there is a 'collapse' of the wave function (randomly) to one of the alternatives, and the other alternative states disappear. It should be noted that the CI was put in 'by hand' as an additional axiom or postulate of quantum mechanics. Was one more axiom justifiable? No.
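For concreteness, the 'superposition plus collapse' picture just described can be written in standard textbook notation (the notation below is mine, a sketch, and is not taken from Faye 2008):

```latex
% Standard textbook statement of the collapse postulate (a sketch only).
% A two-state quantum system in a superposition of the states |0> and |1>:
\[
  |\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle ,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1 .
\]
% On measurement in the {|0>, |1>} basis, the CI asserts a random 'collapse':
\[
  |\psi\rangle \;\longrightarrow\;
  \begin{cases}
    |0\rangle & \text{with probability } |\alpha|^{2},\\
    |1\rangle & \text{with probability } |\beta|^{2},
  \end{cases}
\]
% after which the other component of the superposition has simply disappeared.
```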

The CI has been superseded by better interpretations, some of them without the need for introducing an additional axiom. Among the earliest scientists to challenge the CI was Hugh Everett III, who put forward his 'many worlds' idea as an alternative explanation (see Byrne 2007). A good account of the latest position on this has been given by Hawking and Mlodinow (2010). But the influence of Bohr on quantum mechanics has been so strong and persistent that even today many scientists subscribe to the CI. The fact is that the 'many worlds' theory, or rather its modern version, namely the 'multiple universe' or 'multiverse' theory, has gained ascendancy in science. The introduction of one more axiom in quantum theory by Bohr was unnecessary, and therefore undesirable, if not wrong. Let us see why.

The philosopher William of Ockham advocated the use of the simplest possible explanations for natural phenomena: 'Plurality should not be posited without necessity'. The proverbial Ockham's razor cuts away complicated and long explanations (see Wadhawan 2010). Ockham declared that simple explanations are the most plausible.

But is it just a matter of philosophy? Not really; there is more to it. A rationalization is available now. Leibniz (1675) (cf. Chaitin 2001) was among the earliest known investigators of the question of how many axioms should be chosen in a theory. He argued that a worthwhile theory of anything has to be ‘simpler than’ the data it explains. Otherwise, either the theory is useless, or the data are ‘lawless’. The criterion ‘simpler than’ is best understood in terms of information theory, or rather its more recently developed offshoot, namely algorithmic information theory (AIT) (Chaitin 1987).

Following Chaitin (1987), let us consider an example. Take the set of all positive integers, and ask the question: How many bits of information are needed to specify all these integers? If we simply list them out one by one, the answer grows without bound. But the fact is that this set of data has very little information content. It has a structure which we can exploit to write an algorithm which can generate all the integers, and the number of bits of information needed to write the algorithm is indeed not large. So the 'algorithmic information content' in this example is small.
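The point can be illustrated with a rough Python sketch (mine, and crude: 8 bits per character, nothing like a formal AIT calculation). The explicit listing of the first N positive integers keeps growing with N, while the generating program stays the same small size:

```python
# Rough illustration: the explicit listing of the first N positive integers
# grows without bound, but the program that generates them is a short,
# fixed string of characters.

generator_source = "def integers(n):\n    return list(range(1, n + 1))\n"

def explicit_listing_size_bits(n):
    """Bits needed to write out 1, 2, ..., n literally as decimal text (8 bits/char)."""
    listing = ",".join(str(i) for i in range(1, n + 1))
    return 8 * len(listing)

for n in (10, 1000, 1000000):
    print(n, explicit_listing_size_bits(n), 8 * len(generator_source))
# The second column keeps growing with n; the third column (the 'algorithm') does not.
```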

One can generalize and say that, in terms of computer algorithms, the best theory is that which requires the smallest computer program for calculating (and hence explaining) the observations. The more compact the theory is, the smaller is the length of this computer program. Chaitin’s work has shown that Ockham's razor is not just a matter of philosophy, but has deep algorithmic-information underpinnings. If there are competing descriptions or theories of reality, the more compact one has a higher probability of being correct. Let us see why.

In AIT, an important concept is that of algorithmic probability (AP). It is the probability that a random program of a given length fed into a computer will give correctly a desired output, say the first million digits of π. Following Bennett and Chaitin's pioneering work done in the 1970s (see Chaitin 1987), let us assume that the random program has been produced by an unintelligent monkey. The AP in this case is the same as the probability that the monkey would type out the same bit string, i.e. the same computer program as, say, a Java program suitable for generating the first million digits of π. The probability that the monkey presses the first key on the keyboard correctly is 0.5. The probability that the first two keys would be pressed correctly is (0.5)^2, or 0.25. And so on. Thus the probability gets smaller and smaller very rapidly as the required number of correctly sequenced bits increases. The longer the program, the less likely it is that the monkey will crank it out correctly. We can generalize and say that the AP is the highest for the shortest programs or the most compact theories. The best theory is likely to have the smallest number of axioms.
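The arithmetic behind this is easy to display (the program lengths below are hypothetical, chosen only to show the scaling): the probability that n random coin-flip bits come out exactly right is (0.5)^n, which collapses very quickly as n grows.

```python
# Probability that n random bits reproduce one specific n-bit program: (0.5)**n.
# The lengths are hypothetical; the point is only how fast the probability falls.
for n_bits in (1, 2, 10, 100, 1000):
    print(n_bits, 0.5 ** n_bits)
# 1     0.5
# 2     0.25
# 10    0.0009765625
# 100   about 7.9e-31
# 1000  about 9.3e-302
```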

In the present context, suppose we have a bit-string representing a set of data, and we want to understand the mechanism responsible for the creation of that set of data. In other words, we want to discover the computer program (or the best theory), among the many we could generate randomly, which is responsible for that set of data. The information-theoretic validation of Ockham's philosophy comes from the fact that the shortest such program is the most plausible guess, because it has the highest AP.

The Ockham-razor idea has two parts: The principle of plurality, and the principle of parsimony, economy or succinctness. The former says that plurality should not be posited without necessity. And the latter says that it is pointless to do with more what can be done with less.

It is conceivable that the simplest theory is inadequate in certain aspects. The idea of Ockham's razor is that one should proceed to simpler and simpler theories until simplicity can be traded for greater explanatory power.


The God Hypothesis

Apart from axioms, another key component of the 8-fold way of the scientific method is the hypothesis put forward for explaining any natural phenomenon. Implicit in this application of the conventional scientific method is the validity of the causality principle: Every effect has a cause which precedes it, and this cause is the effect of another cause, and so on.

A fundamental question all of us ask is: What is the cause for the existence of the universe we live in? Suppose we put forward the hypothesis that our universe was created by God. Naturally, the next question is: What is the cause, of which God is the effect? In other words, who or what created God? The stock answer generally is: The cause-effect chain cannot go on indefinitely and we must stop somewhere, so we stop at the God hypothesis and say that God is the 'uncaused cause'.

Does that really help? If we are willing to accept that there can be an uncaused cause, we may as well say that the universe is an uncaused cause. So the God hypothesis is an unnecessary (or unwarranted) hypothesis. Ockham's razor cuts it off.

Many other arguments have been given which show that the God hypothesis is unwarranted (see Stenger 2008; Paulos 2008). This hypothesis explains away everything, and we end up learning nothing. It is almost like having a theory in which everything is axiomatically true and nothing needs to be proved or disproved.

Answering the Difficult Questions We Ask about Ourselves and about Our Universe

God or no God, some fundamental questions must be answered. Here are just three of them:

(i) How can our universe emerge out of 'nothing' without violating the principle of conservation of energy/mass?
(ii) How can life emerge out of nonlife?
(iii) How can intelligence emerge from non-intelligent beginnings?

I find that it is still not widely known that science has progressed so dramatically during the last few decades that it now has credible answers to these questions, as also to many other such 'difficult' questions.

The recent books by Hawking & Mlodinow (2010) and Krauss (2012) explain in a fairly accessible language how our universe emerged out of 'nothing'. The vacuum state in quantum field theory is not at all a state of 'nothingness'. It has a 'virtual' energy of its own. Our universe emerged out of vacuum as a quantum fluctuation, without violating the principle of conservation of energy/mass. The M-theory and the cosmic-inflation theory are powerful explanations for why our universe has the laws it has (cf. Wadhawan 2012e). Our universe got created without the help of a Creator. It has been found that Euclidean geometry holds true in our universe; i.e., ours is a flat-geometry universe. As explained by Krauss (2012), a flat-geometry universe can satisfy the requirement that the sum total of positive and negative contributions to the overall energy of the universe add up to zero. [The gravitational force is an attractive force, so it makes a negative contribution to the total energy of the universe. This matches (cancels out) the positive-energy term coming from all the matter and energy we see around us, so the total energy was and is zero.]
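A back-of-the-envelope version of this cancellation argument, of the kind used in popular accounts, can be written as follows (the factors are heuristic and order-of-magnitude; the sketch is mine, not a formula from Krauss 2012):

```latex
% Heuristic bookkeeping for the 'zero total energy' argument (a sketch only;
% the numerical factors are order-of-magnitude, not taken from Krauss 2012).
\[
  E_{\text{total}}
  \;=\;
  \underbrace{M c^{2}}_{\text{positive: matter and radiation}}
  \;+\;
  \underbrace{\left(-\,\frac{G M^{2}}{R}\right)}_{\text{negative: gravitational energy}}
  \;\approx\; 0 .
\]
% In a spatially flat universe the positive and negative contributions balance,
% so the universe can appear as a quantum fluctuation of the vacuum without
% violating the conservation of energy/mass.
```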

The question about the emergence of life out of nonlife is, in fact, the easiest of the three questions posed above. It is answered by a somewhat new branch of science called complexity science (Gell-Mann 1994; Wadhawan 2010). Real-life situations are usually so complex that it is not enough to have knowledge of the 'complete set of fundamental natural laws' for explaining them. It is often found necessary to formulate additional (empirical) laws as 'effective theories' (Hawking & Mlodinow 2010). An example is the gravitational force experienced by a macroscopic object on the surface of the Earth. The gravitational interaction is present between any two atoms, but we cannot formulate and solve the equations governing the gravitational interaction between every atom in the macroscopic object and every atom in the Earth. Instead, an effective theory is formulated in terms of the mass of the object and a few other numbers, like the value of the acceleration due to gravity (g) at the surface of the Earth. Similarly, in chemistry we cannot hope to formulate and solve the totality of equations describing the interactions among all the positive and negative charges in a system. Instead, an effective theory involving concepts like valence deals with how chemical reactions occur.
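The gravity example can be made explicit with the standard textbook relation (quoted here as an illustration; it is not spelled out in the book cited):

```latex
% The 'effective theory' for gravity near the Earth's surface: all the
% microscopic atom-by-atom detail is summarized in two numbers, the Earth's
% mass M and radius R, through the standard relation
\[
  F \;=\; m\,g ,
  \qquad
  g \;=\; \frac{G\,M_{\oplus}}{R_{\oplus}^{2}} \;\approx\; 9.8~\mathrm{m\,s^{-2}} ,
\]
% where m is the mass of the object and G is Newton's gravitational constant.
```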

This approach continues as we go up the ladder of increasing complexity. Details at one hierarchical level of complexity are 'summarized' or 'integrated over' to generate some effective parameters which are used for describing the details of the next higher level: from particle physics to macroscopic physics and chemistry; from chemistry to biology; and so on. An effective theory is essentially a framework we create for modelling certain observed phenomena, without describing in detail all the underlying processes.

Complexity science has thrown up some additional key concepts. One of them is that of 'emergence' (cf. Wadhawan 2012f). As the 'degree of complexity' of a system increases, sometimes new, unexpected, properties can emerge. An example is that of the second law of thermodynamics. Each molecule of a gas in a box obeys Newtonian dynamics, and its equations of motion have time-reversal symmetry. And yet, for the macroscopic system as a whole, the law of increasing entropy emerges, which implies a unidirectional flow of time: The entropy increases only in the direction of increasing time. The natural world abounds in such examples of emergence.
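A toy simulation makes this emergence visible (the model and parameters below are mine, a sketch only): every particle follows a simple, time-symmetric rule of hopping at random between the left and right halves of a box, yet the coarse-grained entropy of the whole gas, started entirely in the left half, climbs to its maximum and stays there.

```python
import math
import random

# Toy model (a sketch): N particles hop at random between the left and right
# halves of a box. Each individual move is reversible, yet the coarse-grained
# entropy of the gas, started entirely on the left, rises to its maximum.

random.seed(0)
N = 1000
in_left = [True] * N                      # start with every particle on the left

def entropy_bits(n_left, n_total):
    """Shannon entropy (in bits) of the left/right occupation fractions."""
    p = n_left / n_total
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for step in range(5001):
    if step % 1000 == 0:
        n_left = sum(in_left)
        print(f"step {step:5d}  in left: {n_left:4d}  entropy: {entropy_bits(n_left, N):.3f} bits")
    i = random.randrange(N)               # pick a particle and let it hop to the other half
    in_left[i] = not in_left[i]

# The entropy starts at 0 and settles near 1 bit (about half the particles on
# each side), even though no individual rule singles out a direction of time.
```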

Another important feature of complexity science is that it compels us to take a fresh look at the causality principle. Consider a beehive. It is a complex system. It has 'swarm intelligence' (Wadhawan 2011a). No one is in command, not even the queen bee. Each bee follows some very simple 'local rules', and interacts with other bees in the hive. The final effect here is the emergent property of swarm intelligence: The beehive functions as a whole as a superorganism, with intelligence far in excess of that of any individual bee. What is the cause of this intelligence? Not the action of any one bee. The intelligence comes from the (ever-changing) interaction patterns among the bees.

In fact, the beehive is the archetypal example of a system in which it is meaningless to talk about causes and effects, or actions and reactions. Instead it is interactions, through and through. And it is not an isolated example. Complex systems are generally like that.

But the causality idea is well-entrenched in the human psyche. There is no need to abandon it altogether, of course. In fact, much of our conventional science is based on it. Logical reasoning in conventional science is one big chain of cause-effect-cause-effect-cause- . . . . interpretations. But conventional science is often quite unfit for tackling complexity-related, highly nonlinear, problems. Radically new thinking is needed for researching those systems for which any simplifying assumption can destroy the very essence of the system being investigated, or when it is impossible to model the system in terms of a manageable number of differential equations. We should be prepared to think in terms of interactions and correlations when necessary, rather than actions and reactions all the time. Such an approach helps us better understand the properties of complex systems, and keeps us away from philosophical absurdities like 'downward causality'.

Appearance of life out of nonlife is no big deal; it is just one more example of spontaneous emergence of order out of disorder in a thermodynamically open system, namely the cosmos in general and our ecosphere in particular (Wadhawan 2011b). Atoms, simple molecules, and then biomolecules evolved through the slow processes of chemical evolution. In due course, self-replicating molecules emerged, followed by the gradual appearance of prokaryotes and eukaryotes. At some stage in this era of chemical evolution of complexity, the era of biological (Darwinian) evolution also started, which is still operative and will remain so always.



 
Living systems are an example of an important class of complex systems, called complex adaptive systems (CASs) (see Wadhawan 2012g). These are systems that not only evolve like any other dynamical system, but also learn by making use of the information they have acquired. This learning requires, among other things, the evolution of an ability to distinguish between the random and the regular. CASs can undergo processes like biological evolution (or biological-like evolution). They do not just operate in an environment created for them initially, but have the capability to change the environment. For example, species, ant colonies, corporations, and industries evolve to improve their chances of survival in a changing environment. Similarly, the marketplace adapts to factors like immigration, technological developments, prices, the availability of raw materials, and changes in tastes and lifestyles. Some more examples of CASs are: a baby learning to walk; a strain of bacteria evolving resistance to an antibiotic; a beehive or ant colony adjusting to the decimation of a part of it; and so on.
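The 'bacteria evolving resistance' example lends itself to a minimal sketch of the adapt-to-a-changing-environment loop that is typical of a CAS (the model and parameters below are hypothetical, written only as an illustration): a population of bit-string 'genomes' adapts by selection and mutation, and when the environment changes abruptly mid-run, it re-adapts.

```python
import random

# Minimal sketch of a complex adaptive system (hypothetical parameters):
# bit-string 'genomes' adapt, by selection and mutation, to an environment;
# when the environment changes abruptly mid-run, the population re-adapts.

random.seed(1)
GENOME_LEN, POP_SIZE, MUTATION_RATE = 20, 60, 0.02

def fitness(genome, environment):
    """Number of positions where the genome matches the current environment."""
    return sum(g == e for g, e in zip(genome, environment))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
environment = [0] * GENOME_LEN                     # initial environment

for generation in range(1, 101):
    if generation == 50:
        environment = [1] * GENOME_LEN             # abrupt environmental change
    # Selection: keep the fitter half, refill with mutated copies of survivors.
    population.sort(key=lambda g: fitness(g, environment), reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    if generation in (25, 50, 75, 100):
        print(f"generation {generation:3d}  best fitness {fitness(population[0], environment)}/{GENOME_LEN}")

# Fitness climbs, crashes when the environment flips at generation 50,
# and then climbs again: adaptation to a changing environment.
```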

That leads us to an answer to the question of how intelligence has emerged out of nonintelligence. It is due to the emergence of swarm intelligence, plus the feature of adaptation to changing situations, typical of any CAS. A recent book by Kurzweil (2012) has a daring title: How to Create a Mind. Within the present century itself, we humans will have created artificial minds far superior to our own. Such is the power of the scientific method we have invented.


Scientists

All professional scientists are exposed to the logical rigour and discipline of the scientific method outlined above. One may think that this makes them far more rational in their thinking than the average non-scientist. This is not the case, in general. As the cynic said, 'science is what scientists do'. And scientists have their own share of prejudices, conditioning, and hidden agendas. Why is this so?


 

This question needs to be examined from several vantage points. I consider just a few here.

The present level of acceptance of Darwinian evolution in American society is not too bad, but way back in 1994 this is how Dennett (1995) described it: 'A recent Gallup poll (June 1993) discovered that 47 percent of adult Americans believe that Homo sapiens is a species created by God less than ten thousand years ago'. He went on to make the point that the person most directly responsible for this misconception in the public mind was the eminent palaeontologist (and much else) Stephen Jay Gould, a scientist who did so much to make important corrections to classical Darwinism and neo-Darwinism.

Gould was a scientist of great standing, but deep inside he just could not reconcile himself to the idea that life can come into existence without the hand of a benign Creator. As Dennett wrote in 1995: 'Gould's ultimate target is Darwin's dangerous idea itself; he is opposed to the very idea that evolution is, in the end, just an algorithmic process'. This was not just an expression of opinion by Dennett. He gave elaborate reasons and evidence to prove his assertion.

Incidentally, Dennett's (1995) book, Darwin's Dangerous Idea, is the greatest book I have read on Darwinism. Reading it was an uplifting experience (I almost said 'spiritual experience' (!), except that I do not have a proper idea of what 'spiritual' really means).

As I outlined above, modern cosmology, high-energy physics, and complexity science have credible answers to the creation questions. Complexity science, as we know it today, did not exist before the 1990s, and it is remarkable that Dennett (1995), a philosopher, had such an innate understanding of the crux of what complexity science is all about.

The need of the hour is to take complexity science to people, particularly all those scientists who have been exposed only to conventional, reductionistic, science so far. However, lack of adequate understanding of complexity science is not the only reason why many scientists are unwilling to let go of the God idea. There is an emotional need as well, similar to that of a child, namely the need for a sense of security. The God concept fulfills that. But desirability and emotional needs cannot be a substitute for the realities of cold, honest, logic.


The Question of Morality and Ethics

A stock argument of organized religions is that a belief in God is necessary for ensuring the prevalence of morality and ethics in society. A corollary of this is that a 'Godless' person is unlikely or less likely to be moral and ethical. There is no evidence for this presumption.

A belief in God also generally implies a belief in the existence of certain 'supernatural' phenomena. Brights International (http://www.the-brights.net/action/activities/organized/arenas/1/readings.html) is an organization that promotes 'naturalism', as opposed to 'supernaturalism'. Its research project 'Reality about Human Morality' has been running for several years, the overall thrust being to investigate the presumption that ethical systems and morals are imparted to humankind by some form of divine being or power. The present research findings of the project are summarized in the following carefully worded Statements 
(http://www.the-brights.net/action/activities/organized/arenas/1/area_b/studies.html):

Statement A: Morality is an evolved repertoire of cognitive and emotional mechanisms with distinct biological underpinnings, as modified by experience acquired throughout the human lifespan.

Statement B: Morality is not the exclusive domain of Homo sapiens; there is significant cross-species evidence in the scientific literature that animals exhibit 'pre-morality' or basic moral behaviours (i.e. those patterns of behaviour that parallel central elements of human moral behaviour).

Statement C: Morality is a 'human universal' (i.e. exists across all cultures worldwide), a part of human nature acquired during evolution.

Statement D: Young children and infants demonstrate some aspects of moral cognition and behaviour (which precede specific learning experiences and worldview development).

Each Statement is supported by extensive references to scientific studies.


Spirituality and 'Inner' Life

I have come across many scientists who say: 'I do not subscribe to any religion, but I am a spiritual person'. What exactly is spirituality? Here are a couple of definitions:

'The term "spirituality" lacks a definitive definition, although social scientists have defined spirituality as the search for "the sacred," where "the sacred" is broadly defined as that which is set apart from the ordinary and worthy of veneration. The use of the term "spirituality" has changed throughout the ages. In modern times, spirituality is often separated from Abrahamic religions, and connotes a blend of humanistic psychology with mystical and esoteric traditions and eastern religions aimed at personal well-being and personal development. The notion of "spiritual experience" plays an important role in modern spirituality, but has a relatively recent origin' (Wikipedia).

'Spirituality means something different to everyone. For some, it's about participating in organized religion: going to church, synagogue, a mosque, etc. For others, it's more personal: Some people get in touch with their spiritual side through private prayer, yoga, meditation, quiet reflection, or even long walks. Research shows that even skeptics can't stifle the sense that there is something greater than the concrete world we see. As the brain processes sensory experiences, we naturally look for patterns, and then seek out meaning in those patterns. And the phenomenon known as "cognitive dissonance" shows that once we believe in something, we will try to explain away anything that conflicts with it. Humans can't help but ask big questions  -  the instinct seems wired in our minds' 
(http://www.psychologytoday.com/basics/spirituality).

Shorn of the superfluous and logically untenable God concept (or the 'some higher power' concept), spirituality is mainly about the so-perceived 'enhancement' of the so-called 'inner life'. Each person has his inner life, pertaining to what his mind perceives, or imagines, or aspires for, but so what? I think it is no different from idle reverie. My inner life is different from yours, and all that really matters is the outer-life expression or manifestation of the 'inner life', and this outer-life manifestation is a natural phenomenon like any other, amenable to scrutiny by science.


Our brain is a physical organ, subject to the laws of physics. And our mind is what our brain does. I subscribe to the view that there is nothing wrong or unscientific about any efforts to make one's thinking more productive and innovative and original by meditation etc.; and there is nothing mystical about that. It is perfectly fine for a person to do meditation if that helps him achieve better mental health, and greater intuitive capabilities or originality.

One of the most innovative minds I know of is Ray Kurzweil (2012). Here is what he does for getting new, problem-solving ideas: 'Relaxing professional taboos turns out to be useful for creative problem solving. I use a mental technique each night in which I think about a particular problem before I go to sleep. This triggers sequences of thoughts that will continue into my dreams. Once I am dreaming, I can think  -  dream  -  about solutions to the problem without the burden of the professional restraints I carry during the day. I can then access these dream thoughts in the morning while in an in-between state of dreaming and being awake, sometimes referred to as "lucid dreaming"'. Fine. And very impressive.

The mind-body relationship is a subject of great importance. There are so many unexplored examples of what the mind can make the body do or endure. Scientific researchers should be duly skeptical on one hand, and open-minded on the other, when it comes to accepting or rejecting outlandish-looking claims. Reproducible verification has to be the final arbiter, always.


Scientific Temper in Society

Scientific temper is all about applying the scientific method, not only when doing science in the laboratory, but in everything we do anywhere. Scientists can play a major role here by striving to be role models of rationality for society.

But even if all the scientists did this conscientiously, there would still be a major hurdle in the way of promoting scientific temper in society. Natural phenomena are governed by the highly counter-intuitive laws of quantum mechanics, and we cannot expect everybody to master quantum field theory for appreciating how, for example, our universe arose out of 'nothing', i.e. without the intervention of a God or a Creator. Similarly, it is not easy to explain complexity science to one and all. But such problems can be tackled by proper parenting and education of children, as I explain next.


Good Parenting

Minds of young children are strongly influenced by what they learn from their parents (and teachers). Parents should aim at creating conditions in the family in which the child can grow to become an independent thinker. Every child has a right to get exposure to all streams of thought before making a choice.

If a child learns to have full confidence in science and the scientific method, he will not waste energy and time fighting what science has to say. Instead, he will take even the counter-intuitive quantum mechanics for granted, all the time fully assured of the fact that there is nothing dogmatic about the concepts and theories of science, and that even the most cherished ideas can be abandoned if the new evidence so demands.


 
What right do the parents have to impose their views on a child? The child should be able to make a choice after learning about the various streams of thought. Needless to say, parents often believe that a religious upbringing will instil moral values in the child. But the fact is that there is overwhelming evidence that there is no correlation between religion (or irreligion) and morality.

How moral is it to stifle the intellectual growth of a child in the name of religion? If you teach your child that something is true simply because the 'holy book' says so, you are destroying something very valuable, namely the urge to go on questioning things till a rational and sensible answer has been obtained.

Imagine a situation in which a child has imbibed the spirit of the scientific method, and has blossomed into a rational, mature person who realizes that there is no God up there to intervene and help us in case we mess up our affairs on this planet, and that Mother Earth is our collective responsibility, for which we should cooperate with one another rather than waste our energy and resources in mindless conflicts in the name of religion.

Imagine a world in which human beings, after they grow up from childhood, are no longer children in their emotional make-up, but are mature, responsible, and mentally strong persons, who hold nobody but themselves accountable for all their actions. They do not need a father figure (God) to whom to go crying for help like a child does. They are noble and moral because it feels good to be so, rather than because they believe that God would punish them if they are not good.


Education of Children

Children learn not only from parents, but also from their school teachers. It is imperative that teachers should be role models of scientific temper. That calls for a very strict process of selection of teachers. And that, in turn, can happen only if even the selectors of teachers are selected carefully.

School teaching is a vitally important activity. Conditions have to be created so that the finest available brains are attracted to this profession. Why is it that a university teacher has a higher prestige and salary than a school teacher? We have to set our priorities right.


A major component of the scientific method is the insistence on strictly logical reasoning. A fun activity for school children can be learning about logical fallacies and how to spot them (see, e.g., Gula 2007). Here is an example of the so-called ad hominem (circumstantial) logical fallacy (cf. Bennett 2012):

Person 1 is claiming Y.
Person 1 has a vested interest in Y being true.
Therefore Y is false.

Another common example of a logical fallacy is the so-called argument from prestige:

C. V. Raman was a great, prestigious, scientist.
He asserted that there is a God.
Therefore God exists.

The logical fallacy here is that for every C. V. Raman who was a believer, one can point out a Stephen Hawking who is not a believer. Opinions of a few scientists or others cannot prove or disprove any argument.

Familiarity with, and caution against, the huge repertoire of logical fallacies can fire the imagination of children, and can make them instinctively look for any lack of logic, not only in the reasoning of others, but also in their own. A society in which even children are adept at pointing out logical fallacies in whatever they hear or read would hardly need any additional measures for spreading the culture of scientific temper. Needless to say, scientific temper and felicity with logic must be supplemented with a humanistic outlook, as also a deep concern and love for Mother Earth.








The Need to Prevent Misuse of Freedom of Speech

In India a peculiar situation prevails at present. An enormous amount of superstitious and other irrational sermonizing is occurring on television. This has a disastrous effect on young impressionable minds, and there is hardly any legal remedy available for tackling it.

We as a nation are very fond of saying that truth prevails ultimately (satyamev jayate). But very often, by the time truth prevails, a lot of irreversible damage has already occurred. In any case, in real-life situations, truth is seldom relevant; what really matters generally is the perception of truth by the various interacting individuals. It hardly requires any intelligence to have faith in something, whereas understanding scientific facts can often be a daunting task for the public at large. Therefore it is necessary to curtail the propagation of superstition carried out in the name of freedom of speech and freedom of religion.


Under the Indian Constitution, promotion of scientific temper is a duty (a fundamental duty), whereas the freedom to carry out (and even promote) religious practices is a matter of right (fundamental right). This is an unequal fight between what is logical and rational and what may be illogical and irrational. We should amend the Constitution so that irreligion (which is the absence or antithesis of religion), backed by the scientific method, gets the same status and rights as the organized religions. If this is done, citizens would have the right to legally and successfully object to any public propaganda or sermons that make it difficult for them to promote scientific temper in society. Religious practices should be largely confined to the privacy of one's home, and should under no circumstances trample upon the rights of others who want to give their children the freedom to grow as freethinkers.


References

Bennett B (2012) Logically Fallacious: The Ultimate Collection of over 300 Logical Fallacies, eBookIt.com, Boston. ISBN-10: 1456607529; ISBN-13: 978-1456607524.

Berry E (2010) The Scientific Method, http://climateclash.com/2-the-scientific-method/.

Byrne P (2007) The Many Worlds of Hugh Everett, Scientific American, December issue, p. 98.

Chaitin G (1987) Algorithmic Information Theory, Cambridge University Press, Cambridge.

Chaitin G (2001) Exploring Randomness, Springer, Berlin.

Dennett D (1995) Darwin's Dangerous Idea: Evolution and the Meanings of Life, Penguin Books, London.

Faye J (2008) Copenhagen Interpretation of Quantum Mechanics, http://plato.stanford.edu/entries/qm-copenhagen.

Gell-Mann M (1994) The Quark and the Jaguar: Adventures in the Simple and the Complex, W. H. Freeman, New York.

Gula R J (2007) Nonsense: Red Herrings, Straw Men and Sacred Cows: How We Abuse Logic in Our Everyday Language, Axios Press, New York.

Hawking S and Mlodinow L (2010) The Grand Design: New Answers to the Ultimate Questions of Life, Bantam Press, London.

Krauss L M (2012) A Universe from Nothing: Why There is Something rather than Nothing, Free Press, New York.

Kurzweil R (2012) How to Create a Mind: The Secret of Human Thought Revealed, Penguin Books, New York.

Nehru J (1946) The Discovery of India, Penguin Books India, Delhi (the 2004 edition).

Paulos J A (2008) Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up, Hill & Wang, New York.

Popper K (2005) The Logic of Scientific Discovery, Routledge / Taylor & Francis e-Library edition, London and New York.

Stenger V J (2008) God: The Failed Hypothesis. How Science Shows That God Does Not Exist, Prometheus edition, London.

Wadhawan V K (2010) Complexity Science: Tackling the Difficult Questions We Ask about Ourselves and about Our Universe, LAP Lambert Academic Publishing, Saarbrucken.

Wadhawan V K (2011a) Swarm Intelligence,

Wadhawan V K (2011b) The Ultimate Causes of Cosmic Order and Structure,

Wadhawan V K (2012a) Why are the Laws of Nature what They are,

Wadhawan V K (2012b) The Anthropic Principle,

Wadhawan V K (2012c) Supersymmetry, String Theories, M-theory,

Wadhawan V K (2012d) The All-Important Cosmic Inflation Interlude,

Wadhawan V K (2012e) Why Are the Laws of Nature What They Are?


Wadhawan V K (2012f) Emergence,

Wadhawan V K (2012g) Complex Adaptive Systems,

Wudka J (1998) What is the 'scientific method'?

Reference added on 08 April 2015 (on critical thinking): http://skepdic.com/ticriticalthinking.html
