
Saturday 23 March 2013

72. Wolfram's 'New Kind of Science'


Wolfram's book A New Kind of Science (NKS) appeared in 2002. The Principle of Computational Equivalence (PCE) (cf. Part 71), enunciated in this book, is a major component of Wolfram's NKS approach to understanding natural phenomena. He dares to go where no scientist would venture readily, namely attacking research problems of immense complexity. One of the ways he does this is by constructing his computational universe, which is a huge repertoire of 'patterns' generated by running all conceivable cellular automata, and then 'mining' this universe for possible solutions to the problem at hand.
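To make this concrete, here is a minimal sketch (in Python, with names of my own choosing) of the simplest corner of that computational universe: the 256 'elementary' one-dimensional, two-state, nearest-neighbour cellular automata. Only the rule numbering follows Wolfram's standard encoding; everything else is illustrative.

```python
# A minimal sketch of the simplest corner of Wolfram's computational
# universe: the 256 elementary (1-D, two-state, nearest-neighbour)
# cellular automata. Rule numbers follow Wolfram's standard encoding.

def step(cells, rule):
    """One update of an elementary CA; rule is an integer 0-255."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=63, steps=30):
    """Evolve from a single live cell and print the space-time pattern."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run(30)    # rule 30: apparently random behaviour
run(110)   # rule 110: complex structures; proved computation universal
```

Rule 30 produces an apparently random pattern from a single live cell, while rule 110 produces complex localized structures and has been proved computation universal; both are favourite exhibits of NKS.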

'There are typically three broad categories of NKS work: pure NKS, applied NKS, and the NKS way of thinking. . . Pure NKS is about studying the computational universe as basic science for its own sake — investigating simple programs like cellular automata, seeing what they do, and gradually abstracting general principles. Applied NKS is about taking what one finds in the computational universe, and using it as raw material to create models, technology and other things. And the NKS way of thinking is about taking ideas and principles from NKS — like computational irreducibility or the Principle of Computational Equivalence — and using them as a conceptual framework for thinking about things' (Wolfram 2012a).

[Figure: breakdown of the subject areas in which NKS has been applied.]

The above figure gives a breakdown of the various subjects in which NKS has been applied. The impact of Wolfram's book has been truly wide-ranging, with applied NKS emerging as the largest group of applications. I quote Wolfram (2012a) again: 'Let’s start with the largest group: applied NKS. And among these, a striking feature is the development of models for a dizzying array of systems and phenomena. In traditional science, new models are fairly rare. But in just a decade of applied NKS academic literature, there are already hundreds of new models: Hair patterns in mice. Shapes of human molars. Collective butterfly motion. Evolution of soil thicknesses. Interactions of trading strategies. Clustering of red blood cells in capillaries. Patterns of worm appendages. Shapes of galaxies. Effects of fires on ecosystems. Structure of stromatolites. Patterns of leaf stomata operation. Spatial spread of influenza in hospitals. Pedestrian traffic flow. Skin cancer development. Size distributions of companies. Microscopic origins of friction. And many, many more.'

The figure below gives a glimpse of the impact of NKS on art.

[Figure: examples of NKS-influenced art.]

While there are many enthusiasts, there are also many critics of NKS (Jim Giles published a review of the NKS book in Nature in 2002). Wolfram (2012b) has recently reviewed the various responses to his work. I find the attitude of several conventional scientists very intriguing, even disappointing. There are any number of extremely complex problems challenging us for a solution. The traditional approach in science has been to model the system under investigation in terms of a few differential equations, and to solve them under suitable 'boundary conditions'. We feel elated if our model embodies the 'essential physics' of the problem, and even makes some predictions. And we feel absolutely thrilled if the predictions turn out to be true. But the wicked thing about most real-life complex systems is that any simplifying assumption made in modelling them can kill the very essence of the problem.

You can do one of two things when faced with such a situation: either stay away from such research problems, or do what NKS suggests. Staying away is not a very good idea. For how long can you go on working only on simple or simplifiable research problems? Complexity requires a radically new approach to how science is done. NKS is one such approach.

Critics of NKS tend to snigger at what has been achieved by it. I would take them seriously if they had some better alternatives to offer. They have none.

A criticism levelled against Wolfram's NKS is that his cellular automata (CA) lack the predictive power of theories built on conventional (i.e. calculus-based) mathematics. Complex systems are unpredictable, except that one can sometimes explain or predict a level of complexity in terms of the previous, lower level of complexity. In any case, is this criticism really valid? Suppose you have succeeded in identifying an archived simple program from Wolfram's computational universe that provides a reasonably good match with the complexity 'pattern' observed in Nature. Such a simple program is clearly giving you a very good hint about the basic interactions involved. You can even generate 'predictions' by tinkering with the simple program, generating the modified patterns, and checking them against experiment (a sketch of this idea follows below). If such a prediction is confirmed reasonably well, you are on the right track towards gaining an insight into the basics of the complex phenomenon. What more can you ask for? Getting on the right track is half the battle won. Just build on that great start.
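As an illustration of this tinkering, here is a hedged sketch that reuses step() from the earlier example: it flips one bit at a time in a rule's table and measures how much the generated pattern changes. In real applied-NKS work the comparison would of course be against observed data; the function names and the crude distance measure are mine.

```python
# Illustrative only: perturb an elementary CA rule and compare patterns.
# Reuses step() from the earlier sketch; a stand-in for checking
# modified patterns against experimental data.

def evolve(rule, width=63, steps=30):
    """Return the space-time pattern as a list of row tuples."""
    cells = [0] * width
    cells[width // 2] = 1
    history = []
    for _ in range(steps):
        history.append(tuple(cells))
        cells = step(cells, rule)
    return history

def pattern_distance(rule_a, rule_b):
    """Fraction of cells differing between the two space-time patterns."""
    a, b = evolve(rule_a), evolve(rule_b)
    diff = sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return diff / (len(a) * len(a[0]))

base = 110
for bit in range(8):               # flip each rule-table entry in turn
    print(base ^ (1 << bit), pattern_distance(base, base ^ (1 << bit)))
```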

On this point, I quote from Wolfram (2012b):

'Another theme in some reviews is that the ideas in the book “do not lead to testable predictions”. Of course, just as with an area like pure mathematics, the abstract study of the computational universe that forms the core of the book is not something which in and of itself would be expected to have testable predictions. Rather, it is when the methods derived from this are applied to systems in nature and elsewhere that predictions can be made. And indeed there are quite a few of these in the book (for example about repeatability of apparent randomness) — and many more have emerged and successfully been tested in work that’s been done since the book appeared.

'Interestingly enough, the book actually also makes abstract predictions — particularly based on the Principle of Computational Equivalence. And one very important such prediction — that a particular simple Turing machine would be computation universal — was verified in 2007.'

Kurzweil (2005) remarked that even the most complex CA discussed by Wolfram lack the evolution feature so crucial to the question of complexity. This may be because the CA discussed by Wolfram are not open systems: there is no influx of energy, negative entropy, or information into the CA running simple programs. NKS should be extended to overcome this deficiency. In fact, as we shall see in a later post, this is what Langton (1989) did to some extent in his pioneering work on adaptive computation.
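Langton quantified questions of this kind with his lambda parameter: the fraction of rule-table entries that map to a non-quiescent state. For an elementary CA whose quiescent state is 0, lambda reduces to the fraction of 1s among the rule's eight outputs, as the following small sketch (function name mine) computes.

```python
# Langton's lambda for an elementary CA: the fraction of the eight
# rule-table outputs that are non-quiescent (i.e. equal to 1).

def langton_lambda(rule):
    return bin(rule & 0xFF).count("1") / 8

for rule in (0, 30, 110, 255):
    print(rule, langton_lambda(rule))
# Langton found complex, life-like behaviour clustering at intermediate
# lambda values, 'at the edge of chaos' between order and randomness.
```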

An interesting comment on the efficacy, or otherwise, of NKS as a theory of the evolution of the universe comes from Lloyd (2006):

'The idea of using cellular automata as a basis for the theory of the universe is an appealing one. The problem with this argument is that classical computers are bad at reproducing quantum features, such as entanglement. Moreover, as has been noted, it would take a classical computer the size of the whole universe just to simulate a very tiny quantum-mechanical piece of it. It is thus hard to see how the universe could be a classical computer such as a cellular automaton. If it is, then the vast majority of its computational apparatus is inaccessible to observation'.
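Lloyd's point can be made quantitative with a back-of-envelope calculation: a direct state-vector simulation of n qubits must store 2^n complex amplitudes. The round numbers below are my own illustration.

```python
# Back-of-envelope cost of simulating n qubits on a classical machine:
# the state vector holds 2**n complex amplitudes (~16 bytes each at
# double precision).

for n in (10, 30, 50, 300):
    print(f"{n} qubits: {2**n * 16:.3e} bytes")
# 50 qubits already needs ~1.8e16 bytes (18 petabytes); 300 qubits would
# need more amplitudes (2**300 ~ 2e90) than there are atoms in the
# observable universe (~1e80).
```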

The debate goes on.

What about the future of NKS? Wolfram (2012c) gushes with optimism and expectation. And the tribe of NKS enthusiasts continues to grow.

Want to attend a free virtual conference about the latest in NKS? Please click here:

Here is a recent lecture by Wolfram about our computational future:
