
Saturday, 30 March 2013

73. Computational Intelligence



Principles of Darwinian evolution have been exploited to great advantage by carrying them over to evolution inside a computer. A further advance is that this ‘artificial’ evolution has been applied even to the real-life environments of the machines in which the evolution takes place; ‘intelligent’ or ‘smart’ robots is the generic term for such machines (Wadhawan 2007).

As discussed in Part 52, Lamarckian evolution does not have a respectable place in modern biology. The reason is the central dogma of molecular biology, according to which information can flow only from DNA to RNA to proteins, and not in the opposite direction. But this restriction is unnecessary when evolution is occurring inside a computer. In fact, so far as artificial evolution is concerned, Lamarckian evolution can, in certain situations, be advantageous over Darwinian evolution.

Before discussing intelligent robots, let us first get familiar with what has come to be known as the field of computational intelligence (CI).

The underlying approach of conventional computation is to work through a precise algorithm operating on accurate data. But there are innumerable complex systems that cannot be adequately tackled through such an approach. One should be able to work with partially accurate, insufficient, or time-varying data, and this requires suitably adaptable software. One would like computational systems that are fault-tolerant and computationally intelligent, adjusting their software intelligently and handling imperfect or ‘fuzzy’ data the way we humans do (Nolfi and Floreano 2000; Zomaya 2006). The subject of computational intelligence caters to this requirement.


CI consolidated as a subject in the early 1990s. Zadeh (1965, 1975) had introduced the notion of linguistic variables for making reasoning and computing more human-like. By computing with words, rather than numbers, one could deal with approximate reasoning. With increasing emphasis on the use of 'biomimetics' in computational science, Zadeh’s fuzzy-logic (FL) approach was clubbed with 'artificial neural networks' (ANNs), 'genetic algorithms' (GAs), 'evolutionary' or 'genetic programming' (EP or GP), and 'artificial life' (AL) to define the field of CI.

FL, ANNs, GAs, GP, and AL constitute the five hard-core components of the subject of CI, although there are also a number of other peripheral disciplines (Konar 2005; Krishnamurthy and Krishnamurthy 2006; cf. Zomaya 2006).



(Figure adapted from Konar 2005. Also see Wadhawan 2007)

I shall give you a feel for each of these topics, beginning with FL in this post.

Fuzzy-logic systems

Conventional computation is based on precise logic, whereas humans are able to process information that is not always very precise or complete. Humans are able to employ ‘fuzzy logic’ in their thinking and analysis. Fuzzy logic has developed as a mathematical discipline for devising computational strategies that can deal with imprecise knowledge. The imprecise nature of the information available may result from our limited capability to resolve detail, or because the data are partial, noisy, vague, or incomplete.

FL involves inference and intuition, just like the logic used by humans in certain situations. One removes the restriction that propositions can only be either true or false; instead, they are allowed to be true or false to different degrees. For example: if A is (HEAVY 0.8) and B is (HEAVY 0.6), then A is ‘MORE HEAVY’ than B. This is in sharp contrast to classical binary logic, in which A and B may be either members of the same class (both HEAVY, or both NOT-HEAVY), or of different classes.


For dealing with such human-like logic, the important notion of linguistic variables was introduced by Zadeh (1965, 1975). This made it possible to compute with words, rather than numbers, enabling approximate reasoning. Fuzzy-rule-based systems (FRBSs) were introduced for dealing with uncertain and vaguely defined problems. One deals with IF-THEN rules, the antecedents and consequents of which consist of fuzzy-logic statements. Representation of knowledge is enhanced by the use of linguistic variables and their linguistic values, which are defined by context-dependent fuzzy sets. These sets are specified by gradual membership functions.
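To make the idea of gradual membership concrete, here is a minimal Python sketch of a membership function for the linguistic variable 'temperature' and two of its linguistic values. The triangular shape and the breakpoints are illustrative assumptions on my part, not something prescribed by fuzzy-set theory:

def triangular(x, low, peak, high):
    # Degree of membership rises linearly from 0 at 'low' to 1 at 'peak',
    # then falls linearly back to 0 at 'high'.
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

# Illustrative linguistic values for 'temperature' (in degrees Celsius).
def normal(t):
    return triangular(t, 15.0, 22.0, 29.0)

def hot(t):
    return triangular(t, 25.0, 35.0, 45.0)

print(normal(27.0), hot(27.0))   # 27 deg C is partly 'normal' and partly 'hot'

A temperature of 27 degrees thus belongs to the fuzzy set 'normal' to a degree of about 0.29 and to 'hot' to a degree of 0.2, rather than falling crisply into one class or the other.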

FL rules are normally of the form:

IF variable IS property THEN action

Here is how a temperature controller using a fan would be programmed:

IF temperature IS very cold THEN stop fan
IF temperature IS cold THEN turn down fan
IF temperature IS normal THEN maintain level
IF temperature IS hot THEN speed up fan

There is no 'ELSE' option because the temperature might be 'cold', 'normal' and 'hot' at the same time to different degrees.
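Here, purely as an illustration, is a minimal Python sketch of how such a rule base might be evaluated. The membership breakpoints and the fan actions are my own assumptions; the point is only that every rule fires simultaneously, each to the degree that its antecedent is true:

def tri(x, low, peak, high):
    # Triangular membership function (see the sketch above).
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (peak - low) if x <= peak else (high - x) / (high - peak)

def memberships(t):
    # Degrees to which the temperature t (deg C) is each linguistic value.
    return {
        'very cold': tri(t, -15.0, -5.0, 8.0),
        'cold':      tri(t, 2.0, 12.0, 19.0),
        'normal':    tri(t, 16.0, 22.0, 28.0),
        'hot':       tri(t, 25.0, 35.0, 60.0),
    }

# Consequent of each rule, expressed as a change in fan speed (illustrative).
actions = {'very cold': -1.0, 'cold': -0.5, 'normal': 0.0, 'hot': +0.5}

def fan_speed_change(t):
    m = memberships(t)
    total = sum(m.values())
    if total == 0.0:
        return 0.0
    # Weighted average of the consequents: every rule contributes to the
    # degree that its antecedent holds, so no ELSE branch is needed.
    return sum(m[value] * actions[value] for value in m) / total

print(fan_speed_change(26.5))   # 26.5 deg C is partly 'normal' and partly 'hot'

At 26.5 degrees, for example, the 'normal' and 'hot' rules both fire to non-zero degrees, and the output is a blend of 'maintain level' and 'speed up fan'.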

The concept of 'expert systems' is based on an analogy with human experts, and such systems usually have a large FL component. An expert system is a computer program that holds and processes the information and expertise gained by humans in one or more domains. A fuzzy expert system uses a collection of fuzzy membership functions and rules, instead of Boolean logic. A typical rule for reasoning about available fuzzy data looks like this:

IF x IS low AND y IS high THEN z IS medium
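In a typical fuzzy-inference scheme, such a rule fires to a degree determined by its antecedents; taking the minimum of the two membership degrees is one common convention for the fuzzy AND. The numbers below are invented purely for illustration:

# Degrees to which x belongs to 'low' and y belongs to 'high' (illustrative).
low_x  = 0.7
high_y = 0.4

# Fuzzy AND via min(): the rule 'IF x IS low AND y IS high THEN z IS medium'
# fires with strength 0.4, and the consequent 'z is medium' is asserted to
# that degree, to be combined with other rules before defuzzification.
rule_strength = min(low_x, high_y)
print(rule_strength)   # 0.4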

Zadeh also introduced the f.g-generalization. An f-generalization fuzzifies any theory, technique, method or problem by replacing the corresponding crisp set by a fuzzy set. A g-generalization does the opposite; it granulates a set by partitioning its variables, functions and relations into granules or information clusters. The f.g-generalization is a combination of these two: one ungroups an information system into components by some rules, and regroups them into clusters or granules by another set of rules. This can result in new types of information subsystems.
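As a rough, hand-made illustration of these two operations (my own toy example, not Zadeh's formalism): an f-generalization might replace the crisp set 'age >= 60' by the fuzzy set 'old', while a g-generalization might partition the age axis into granules such as 'young', 'middle-aged' and 'old':

def is_old_crisp(age):
    # Crisp set: a person either is or is not 'old'.
    return 1.0 if age >= 60 else 0.0

def is_old_fuzzy(age):
    # f-generalization: membership in 'old' grows gradually between 50 and 70
    # (the breakpoints are illustrative).
    return min(1.0, max(0.0, (age - 50.0) / 20.0))

# g-generalization: the age variable is granulated into information clusters.
granules = {'young': (0, 30), 'middle-aged': (30, 60), 'old': (60, 130)}

def granule_of(age):
    return next(name for name, (lo, hi) in granules.items() if lo <= age < hi)

print(is_old_crisp(55), is_old_fuzzy(55), granule_of(55))   # 0.0 0.25 middle-aged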


Saturday, 23 March 2013

72. Wolfram's 'New Kind of Science'


Wolfram's book A New Kind of Science (NKS) appeared in 2002. The Principle of Computational Equivalence (PCE) (cf. Part 71), enunciated in this book, is a major component of Wolfram's NKS approach to understanding natural phenomena. He dares to go where no scientist would venture readily, namely attacking research problems of immense complexity. One of the ways he does this is by constructing his computational universe, which is a huge repertoire of 'patterns' generated by running all conceivable cellular automata, and then 'mining' this universe for possible solutions to the problem at hand.
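For readers who have not met cellular automata before, here is a minimal Python sketch of one of the simple programs that populate this computational universe: elementary rule 30, in which each cell's next state depends only on its own state and those of its two neighbours. The grid width, the number of steps, and the wrap-around boundary are arbitrary choices made for this illustration:

def step(cells, rule=30):
    # Each cell reads its (left, self, right) neighbourhood as a 3-bit number;
    # the corresponding binary digit of the rule number gives its next state.
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

width, steps = 63, 20            # arbitrary, illustrative sizes
row = [0] * width
row[width // 2] = 1              # start from a single 'on' cell

for _ in range(steps):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)

Even a rule this simple produces a pattern of remarkable complexity, which is the kind of behaviour the computational universe is 'mined' for.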
 

'There are typically three broad categories of NKS work: pure NKS, applied NKS, and the NKS way of thinking. . . Pure NKS is about studying the computational universe as basic science for its own sake —investigating simple programs like cellular automata, seeing what they do, and gradually abstracting general principles. Applied NKS is about taking what one finds in the computational universe, and using it as raw material to create models, technology and other things. And the NKS way of thinking is about taking ideas and principles from NKS — like computational irreducibility or the Principle of Computational Equivalence — and using them as a conceptual framework for thinking about things' (Wolfram 2012a).


The above figure gives a breakdown of the various subjects in which NKS has been applied. The impact of Wolfram's book has been truly wide-ranging, with applied NKS emerging as the largest group of applications. I quote Wolfram (2012a) again: 'Let’s start with the largest group: applied NKS. And among these, a striking feature is the development of models for a dizzying array of systems and phenomena. In traditional science, new models are fairly rare. But in just a decade of applied NKS academic literature, there are already hundreds of new models: Hair patterns in mice. Shapes of human molars. Collective butterfly motion. Evolution of soil thicknesses. Interactions of trading strategies. Clustering of red blood cells in capillaries. Patterns of worm appendages. Shapes of galaxies. Effects of fires on ecosystems. Structure of stromatolites. Patterns of leaf stomata operation. Spatial spread of influenza in hospitals. Pedestrian traffic flow. Skin cancer development. Size distributions of companies. Microscopic origins of friction. And many, many more.'

The figure below gives a glimpse of the impact of NKS on art.


While there are many enthusiasts, there are also many critics of NKS (Jim Giles, for example, reviewed the book in Nature in 2002). Wolfram (2012b) has recently reviewed the various responses to his work. I find the attitude of several conventional scientists very intriguing, even disappointing. There are any number of extremely complex problems challenging us for a solution. The traditional approach in science has been to model the system under investigation in terms of a few differential equations, and to solve them under suitable 'boundary conditions'. We feel elated if our model embodies the 'essential physics' of the problem, and even makes some predictions. And we feel absolutely thrilled if the predictions also turn out to be true. But the wicked thing about most real-life complex systems is that any simplifying assumption made in modelling them can kill the very essence of the problem.

You can do two things when faced with such a situation. Either stay away from working on such research problems, or do what NKS suggests. Staying away is not a very good idea. For how long can you go on working only on simple or simplifiable research problems? Complexity requires a radically new approach to how science has to be done. NKS is one such approach.

Critics of NKS tend to snigger at what has been achieved by it. I would take them seriously if they had some better alternatives to offer. They have none.

A criticism levelled against Wolfram's NKS is that his CA lack the predictive power of theories developed around conventional, i.e. calculus-based, mathematics. Complex systems are unpredictable, except possibly that one can sometimes explain/predict the level of complexity in terms of the previous lower level of complexity. In any case, is this criticism really valid? Suppose you have succeeded in identifying some archived simple program from Wolfram's computational universe as providing a reasonably good match with the complexity 'pattern' observed in Nature. Such a simple program is clearly giving you a very good hint about the basic interactions involved. You can even create 'predictions' by tinkering with the simple program and generating the modified patterns, and checking them against experiment. If such a prediction gets confirmed reasonably well, you are on the right track so far as gaining an insight into the basics of the complex phenomenon is concerned. What more can you ask for? Getting on the right track is half the battle won. Just build on that great start, by any means.

Nevertheless, I quote from Wolfram (2012b):

'Another theme in some reviews is that the ideas in the book “do not lead to testable predictions”. Of course, just as with an area like pure mathematics, the abstract study of the computational universe that forms the core of the book is not something which in and of itself would be expected to have testable predictions. Rather, it is when the methods derived from this are applied to systems in nature and elsewhere that predictions can be made. And indeed there are quite a few of these in the book (for example about repeatability of apparent randomness) — and many more have emerged and successfully been tested in work that’s been done since the book appeared.

'Interestingly enough, the book actually also makes abstract predictions — particularly based on the Principle of Computational Equivalence. And one very important such prediction — that a particular simple Turing machine would be computation universal — was verified in 2007.'

Kurzweil (2005) remarked that even the most complex CA discussed by Wolfram do not have the evolution feature that is so crucial to the question of complexity. This may be because the CA discussed by Wolfram are not open systems: there is no influx of energy, negative entropy, or information into the CA running simple programs. NKS should be extended to overcome this deficiency. In fact, as we shall see in a later post, this is what Langton (1989) did to some extent in his pioneering work on adaptive computation.

An interesting comment on the efficacy, or otherwise, of NKS as a theory of the evolution of the universe is that of Lloyd (2006):

'The idea of using cellular automata as a basis for the theory of the universe is an appealing one. The problem with this argument is that classical computers are bad at reproducing quantum features, such as entanglement. Moreover, as has been noted, it would take a classical computer the size of the whole universe just to simulate a very tiny quantum-mechanical piece of it. It is thus hard to see how the universe could be a classical computer such as a cellular automaton. If it is, then the vast majority of its computational apparatus is inaccessible to observation'.

The debate goes on.

What about the future of NKS? Wolfram (2012c) gushes with optimism and expectation. And the tribe of NKS enthusiasts continues to grow.
