
Saturday, 25 May 2013

81. The Edge-of-Chaos Existence of Complex Adaptive Systems



Evolution thrives in systems with a bottom-up organization, which gives rise to flexibility. But at the same time, evolution has to channel the bottom-up approach in a way that doesn’t destroy the organization. There has to be a hierarchy of control – with information flowing from the bottom up as well as from the top down. The dynamics of complexity at the edge of chaos seems to be ideal for this kind of behaviour (Doyne Farmer).

I introduced complex adaptive systems (CASs) in Part 38. Then in Part 80 I described Langton's work on them, in particular his introduction of the phrase 'edge of chaos' for the boundary in phase space that separates ordered behaviour from chaotic behaviour, with complex behaviour sandwiched between these two extremes. In 2-dimensional phase space we need a line to separate the two regimes. In 3-dimensional phase space we need a plane or a membrane (a 2-dimensional object) for effecting this separation. In n-dimensional phase space we need an (n-1)-dimensional hyper-membrane for separating ordered behaviour from chaotic behaviour. The phrase 'edge of chaos' should be understood in this spirit.


We saw in Part 68 that even a simple cellular automaton like the Game of Life is a universal computing device. The Game of Life is independent of the particular computer used for running it, and exists in the Neumann universe, just as other Class 4 CA do. Such automata are capable of information processing, data storage, etc. They are a mixture of coherence and chaos: they have enough stability to store information, and enough fluidity to transmit signals over arbitrarily large distances in the Neumann universe.
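To make the Game-of-Life picture concrete, here is a minimal sketch of one synchronous update step, written in Python with numpy (my choice of language and library, not anything from the original discussion). The glider used as the starting pattern is one of the coherent structures that can store and carry information across the grid.

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid.

    grid: 2-D numpy array of 0s (dead) and 1s (alive).
    """
    # Count the eight neighbours of every cell, with wrap-around edges.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth: a dead cell with exactly 3 live neighbours comes alive.
    # Survival: a live cell with 2 or 3 live neighbours stays alive.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# Example: a 'glider', a coherent structure that propagates indefinitely,
# illustrating the mix of stability and signal transmission mentioned above.
grid = np.zeros((20, 20), dtype=int)
grid[1:4, 1:4] = np.array([[0, 1, 0],
                           [0, 0, 1],
                           [1, 1, 1]])
for _ in range(4):
    grid = life_step(grid)
```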

There are many analogies, not only with computation but also with life, economies, and social systems. After all, each of these can be viewed as a series of computations. Life, for example, is nothing if it cannot process information, and it strikes the right balance between behaviour that is too static and behaviour that is excessively chaotic or noisy.


The occurrence of complex, phase-transition-like behaviour in the edge-of-chaos domain is common in many branches of human knowledge. Kauffman (1969), for example, recognized it in genetic regulatory networks (cf. Part 49). In his work on such networks in the 1960s, he discovered that if the connections were too sparse, the network would just settle down to a ‘dead’ configuration and stay there. If the connections were too dense, there was a churning around in total chaos. Only for an optimum density of connections did stable-state cycles arise.
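A toy illustration of Kauffman's observation, written as a random Boolean network sketch (the parameter values and the cycle-detection shortcut are my own simplifications, not Kauffman's actual setup): each of N nodes is driven by K randomly chosen inputs through a random Boolean rule, and we look for the state cycle the network settles into.

```python
import random

def random_boolean_network(n_nodes, k, steps=100, seed=0):
    """Simulate a random Boolean network: n_nodes nodes, each driven
    by k randomly chosen inputs through a random Boolean update rule.
    Returns the length of the state cycle reached, or None if no cycle
    was detected within 'steps' updates."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
    # Each node gets a random truth table over its 2**k input combinations.
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n_nodes)]
    state = tuple(rng.randint(0, 1) for _ in range(n_nodes))

    seen = {}                          # state -> time first visited
    for t in range(steps):
        if state in seen:
            return t - seen[state]     # length of the attractor cycle
        seen[state] = t
        state = tuple(
            tables[i][int(''.join(str(state[j]) for j in inputs[i]), 2)]
            for i in range(n_nodes)
        )
    return None

# Sparse wiring (K = 2) tends to give short, stable state cycles;
# dense wiring (K = 10) usually shows no cycle within the same horizon.
print(random_boolean_network(20, 2), random_boolean_network(20, 10))
```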

Similarly, in the mid-1980s, Farmer, Packard and Kauffman (1986) found in their autocatalytic-set model (cf. Part 46) that when parameters such as the supply of ‘food’ molecules and the catalytic strength of the chemical reactions were chosen arbitrarily, nothing much happened. Only for an optimum range of these parameters did a ‘phase transition’ to autocatalytic behaviour set in quickly.

Many more examples can be given (Farmer et al. 1986): coevolutionary systems, economies, social systems, and so on. A right balance of defence and combat in the coevolution of two antagonistic species ensures the survival and propagation of both. Similarly, the health of economies and social systems can be ensured only by a right mix of feedbacks and regulation on the one hand, and plenty of flexibility and scope for creativity, innovation, and response to new conditions on the other. The dynamics of complexity around the edge of chaos is ideally suited for evolution that does not destroy self-organization.

Based on the work of Kauffman, Langton, and many others, we can compile a number of analogies or correspondences from a variety of disciplines. The table below does just that. There is a gradation from (a) excessive order to (b) complex creativity and progress to (c) total anarchy and failure. In each case there is some kind of a ‘phase transition’ to the complexity regime as a function of some control parameter (e.g. temperature in the case of second-order phase transitions in crystals).

Domain                            (a) Order             (b) Complexity               (c) Chaos
Cellular automata classes         Classes 1 & 2         Class 4                      Class 3
Dynamical systems                 order                 complexity regime            chaos
Matter                            solid                 phase-transition regime      fluid
Computation                       halting               undecidability regime        nonhalting
Life                              too static            life / intelligence          too noisy
Genetic networks                  no activity           stable-state cycles          chaotic activity
Autocatalytic sets of reactions   no autocatalysis      autocatalysis                no autocatalysis
Coevolution                       no coevolution        coevolution                  no coevolution
Economies, social systems         no creativity         health and happiness         anarchy

How do complex systems move towards the edge-of-chaos regime, and then manage to stay there? For CASs, Holland (1975, 1995, 1998) provided an answer in terms of Darwinian evolution. Complex systems that are capable of sophisticated behaviour have an evolutionary advantage over systems that are too static and conservative, or too turbulent. Therefore both of the latter categories would tend to evolve towards the middle course of complexity, if they are to survive at all. And once they are in that regime, any deviations or fluctuations would tend to be reversed by evolutionary factors.

The next question is: what do complex systems do when they are at the edge of chaos? Holland’s (1998) assertion about the occurrence of ‘perpetual novelty’ in a CAS essentially amounts to saying that the system moves around in the edge-of-chaos membrane in phase space. But that is not all. The moving around actually takes the system to states of higher and higher sophistication of structure and complexity. Learning and evolution not only take a CAS towards the edge-of-chaos membrane, they also make it move within this membrane towards states of higher complexity (Farmer et al. 1986; cf. https://en.wikipedia.org/wiki/Self-organization). The ultimate reason for this, of course, is that the universe is ever expanding, and there is therefore a perpetual input of free energy or negative entropy into it.

Farmer gave the example of the autocatalytic-set model (which he proposed along with Packard and Kauffman) to illustrate the point. When certain chemicals can collectively catalyze the formation of one another, their concentrations increase spontaneously by a large factor, far above the equilibrium values. This implies that the set of chemicals as a whole emerges as a new ‘individual’ in a far-from-equilibrium configuration. Such sets of chemicals can maintain and propagate themselves, even though there is no genetic code involved. In a set of experiments, Farmer and colleagues tested the autocatalytic model further by occasionally allowing novel chemical reactions. Mostly such reactions caused the autocatalytic set to crash or fall apart, but each crash made way for a further evolutionary leap: new reaction pathways were triggered, and some variations got amplified and stabilized. Of course, the stability lasted only till the next crash. Thus a succession of autocatalytic metabolisms emerged. Apparently, each level of emergence through evolution and adaptation sets the stage for the next level of emergence and organization.
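The flavour of this can be captured with a deliberately tiny sketch: two species that catalyze each other's formation from a constant food supply (a two-variable caricature of my own, not the Farmer-Packard-Kauffman polymer chemistry). Below a threshold catalytic strength the concentrations stay near the uncatalysed equilibrium; above it they run far beyond it, which is the sense in which the set 'emerges' as a far-from-equilibrium individual.

```python
def cross_catalytic(k_cat, food=1.0, k0=0.01, decay=1.0, dt=0.01, steps=5000):
    """Toy cross-catalytic pair: A catalyses the formation of B from food,
    and B catalyses the formation of A. Returns the final concentrations."""
    a = b = 0.0
    for _ in range(steps):
        da = (k0 + k_cat * b) * food - decay * a   # spontaneous + catalysed production, minus decay
        db = (k0 + k_cat * a) * food - decay * b
        a += da * dt
        b += db * dt
    return a, b

# Uncatalysed equilibrium is k0*food/decay = 0.01. Weak catalysis stays close
# to it; catalysis above the threshold (k_cat*food/decay > 1 in this toy model)
# drives the concentrations far above the equilibrium value.
print(cross_catalytic(k_cat=0.5))   # stays small
print(cross_catalytic(k_cat=1.5))   # grows far above equilibrium
```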




Saturday, 18 May 2013

80. Adaptive Computation



I described Neumann’s self-reproducing cellular automata (CA) in Part 69. Langton (1989) extended the CA approach to his work on artificial life (AL) by introducing evolution into the Neumann universe.

In the self-reproducing CA created by Langton, a set of rules (the so-called GTYPE) specified how each cell interacted with its neighbours, and the overall pattern that resulted was called the PTYPE. The local rules could evolve with time, rather than remaining fixed. His pioneering work was a fine example of evolutionary computation or adaptive computation.

 

Langton also correlated his work on AL with Wolfram’s four classes of CA (cf. Part 69). We saw in Part 35 on chaos how the dynamics of the equations Lorenz used for modelling weather phenomena changes drastically with the values assigned to their three adjustable or 'control' parameters. In particular, the dynamics is chaotic only for a certain range of values of the control parameters, and nonchaotic or not fully chaotic for other values. Small values of the control parameters give nonchaotic behaviour, similar to the dynamics described by Wolfram’s Class 1 and Class 2 CA. And sufficiently large values of the control parameters result in totally chaotic dynamics, which corresponds to Class 3 CA.
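For readers who want to see this parameter dependence directly, here is a rough sketch of the Lorenz equations in the standard (σ, ρ, β) form, integrated with a crude Euler step (the step size and initial condition are arbitrary choices of mine): a small value of ρ drives the trajectory to a fixed point, while the classic value ρ = 28 gives chaotic wandering.

```python
import numpy as np

def lorenz_trajectory(rho, sigma=10.0, beta=8.0/3.0, dt=0.005, steps=20000):
    """Integrate the Lorenz system with a crude Euler scheme and return
    the trajectory; rho is the control parameter varied in the text."""
    xyz = np.array([1.0, 1.0, 1.0])
    traj = np.empty((steps, 3))
    for i in range(steps):
        x, y, z = xyz
        dxyz = np.array([sigma * (y - x),
                         x * (rho - z) - y,
                         x * y - beta * z])
        xyz = xyz + dxyz * dt
        traj[i] = xyz
    return traj

# rho = 10: the trajectory spirals into a fixed point (ordered behaviour).
# rho = 28: the trajectory wanders on the strange attractor (chaos).
print(lorenz_trajectory(10)[-1])   # settles near a fixed point
print(lorenz_trajectory(28)[-5:])  # keeps moving irregularly
```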

Langton investigated the introduction of a single, analogous parameter into the rules controlling CA behaviour, to test the above analogy more clearly, and particularly to investigate the connection between Class 4 CA on the one hand and partially chaotic systems on the other. After a number of trials, he came upon a parameter (λ) for the CA rules that corresponds to the control parameter in an equation governing chaotic behaviour.

This λ was defined as the probability that any cell in the CA will be ‘alive’ after the next time step. In Part 68 I chose the colours black and white for distinguishing the two states of a CA cell, and we can now identify these with ‘alive’ and ‘dead’, just as in the Game of Life. For example, if λ = 0 in the rule governing the evolution of a particular set of CA, all cells would be white, or dead, after one time step. The same would be true if λ = 1. In fact, the CA dynamics is symmetrical about the value λ = 0.5, except that the colours black and white get interchanged.

In his computer experiments, Langton found that, as expected, λ = 0 corresponds to Wolfram's Class 1 rules. The same was true for very small nonzero values of λ.

As this control parameter was increased gradually, Class 2 features started appearing at some stage, with characteristic oscillating behaviour. With increasing values of the control parameter, the oscillating pattern took longer and longer to settle down.

Taking λ = 0.5 resulted in totally chaotic behaviour, typical of the Wolfram Class 3.

Langton found that Class 4 CA were clustered around the critical value λ ≈ 0.273.

Thus, as the control parameter increases from zero onwards, we see a transition from ‘order’ to ‘complexity’ to ‘chaos’.
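A minimal sketch of how such a λ can be built into a CA rule table (not Langton's original code; he worked with automata having more than two states, so the binary, radius-2 choice here is arbitrary): λ is taken as the fraction of rule-table entries that map a neighbourhood to the 'alive' state, with the all-dead neighbourhood kept quiescent. Sweeping λ from near 0 towards 0.5 and inspecting the space-time patterns reproduces, qualitatively, the order-to-complexity-to-chaos progression described above.

```python
import random

def random_rule_table(lam, radius=2, seed=0):
    """Rule table for a binary 1-D CA with the given neighbourhood radius.
    Roughly a fraction lam of the 2**(2*radius+1) entries map to 'alive' (1);
    the all-dead neighbourhood is kept quiescent."""
    rng = random.Random(seed)
    size = 2 ** (2 * radius + 1)
    n_alive = int(round(lam * size))
    table = [1] * n_alive + [0] * (size - n_alive)
    rng.shuffle(table)
    if table[0] == 1 and 0 in table:   # keep the quiescent neighbourhood quiescent
        j = table.index(0)
        table[0], table[j] = 0, 1
    return table

def run_ca(table, width=200, steps=200, radius=2, seed=1):
    """Run the CA from a random initial row and return the space-time history."""
    rng = random.Random(seed)
    row = [rng.randint(0, 1) for _ in range(width)]
    history = [row]
    for _ in range(steps):
        row = [
            table[int(''.join(str(row[(i + d) % width])
                              for d in range(-radius, radius + 1)), 2)]
            for i in range(width)
        ]
        history.append(row)
    return history

# lam near 0 tends to freeze or die out (Class 1/2), lam near 0.5 looks random
# (Class 3); intermediate values are where long-lived localized structures appear.
for lam in (0.05, 0.25, 0.5):
    hist = run_ca(random_rule_table(lam))
    print(lam, sum(hist[-1]))   # crude summary: live cells in the last row
```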


In Part 6 I discussed how the variation of a control parameter like temperature can cause phase transitions to occur in a system such as water (from steam to liquid water to ice, on cooling). Langton realized that his control parameter λ plays a similar role in determining the dynamics of CA. It is like temperature. At low temperatures a material (such as H2O) is solid, in a crystalline state, which is an ordered state. At high temperatures we have a fluid state (liquid or vapour), which signifies chaos or disorder. Langton drew the analogy with such phase transitions for describing the Class 4 behaviour in CA which sets in for values of λ around 0.273.

The solid-fluid transition in water is actually a first-order or discontinuous phase transition. But a crystal can also undergo one or more solid-to-solid phase transitions. Such transitions may be of first order or second order. As Langton pointed out, a second-order phase transition is actually better for drawing the analogy with Class 4 CA.

A second-order phase transition is characterized by the occurrence of 'premonitory phenomena' (cf. Wadhawan 2000). That is, even before the phase-transition temperature Tc is reached, the material exhibits 'critical phenomena', and regions of the new phase start appearing and disappearing in the old phase. This is unlike a first-order phase transition, which occurs sharply (e.g. at the melting point of ice), and there are generally no critical fluctuations or premonitory phenomena (even though there is a range of temperatures in which the parent phase and the daughter phase coexist).
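The distinction can be made concrete with a standard physics toy model, the 2-D Ising ferromagnet simulated with the Metropolis algorithm (my choice of illustration; it is not discussed in the post). Its order-disorder transition at Tc ≈ 2.27 (in units with J = kB = 1) is second order, and near Tc patches of both phases appear and disappear, which is the kind of 'premonitory' critical behaviour described above.

```python
import math
import random

def ising_magnetization(T, L=32, sweeps=200, seed=0):
    """Metropolis simulation of a 2-D Ising model at temperature T
    (units with J = k_B = 1), starting from the fully ordered state.
    Returns the mean absolute magnetization per spin at the end."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb                  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]
    return abs(sum(sum(row) for row in spins)) / (L * L)

# Well below Tc (~2.27) the lattice stays ordered; well above it the
# magnetization drops towards zero; near Tc large patches of both phases
# flicker in and out -- the critical fluctuations mentioned above.
for T in (1.5, 2.27, 3.5):
    print(T, round(ising_magnetization(T), 3))
```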

Langton’s control parameter λ for CA gives Class 4 behaviour not only at the value 0.273, but over a certain range of values around that number. And even for a specific value of the control parameter in this range, coherent structures may persist indefinitely in time, or extend over an arbitrarily large number of cells of the cellular automaton.

Langton gave this phase boundary in the Neumann universe the name edge of chaos.

We should remember, however, that the edge is not a sharp one. It is more like a thin or thick membrane in phase space, with chaotic behaviour on one side, and ordered behaviour on the other. Complex behaviour is at its most creative within the membrane. There is a gradation from chaos to complexity to order across the membrane.

More on this next time.


Saturday, 11 May 2013

79. John Holland's Bucket-Brigade Algorithm in His Model for Complex Adaptive Systems


In Holland’s model for adaptation and learning in complex adaptive systems (CASs), which I introduced in Part 78, the bulletin-board messages must compete for domination and survival. Moreover, since in the real-life game of survival certain players form coalitions or alliances for mutual benefit, Holland added this feature to his model by assigning to each classifier a certain plausibility or probability factor. The classifiers with high plausibility stood a higher chance of being selected for bulletin-board display in a given cycle; the rest were likely to be ignored or eliminated. The Hebbian reinforcement and feedback mechanism (cf. Part 74) was used for determining the plausibility factor of a classifier, and the plausibility was determined by performance. If the selection of a classifier resulted in better performance, as indicated by positive feedback from the environment, its plausibility factor was enhanced. If the performance was poor, the plausibility factor was diminished, just like the weakening of synaptic strength in the Hebbian model of the brain.


A classifier (cf. Part 78) interacts with many others, and is influenced by them. Therefore, if it does well, the credit is not entirely its own; a chain of classifier actions leads to the successful action of the final classifier. To take account of this, Holland deviated from the general Hebbian approach and instead drew inspiration from how a market economy functions. The market is driven by the powerful profit motive. Let us see what this marketplace metaphor is.

In a market, there is buying and selling. If a product (say a raw material, or a service) is good, it fetches a good price from the buyer. The act of buying depletes the funds of the buyer, but the buying is done with the expectation that, by adding value to the product purchased, the buyer can sell it in the market at a good price, and thus not only recover the cost of the original product, but also make a profit. There is a chain of buyers and sellers at various levels, leading to the manufacture of the end product, ready for buying or consumption by the consumer, and the profit motive operates at each level of transaction. Thus, for a successful end product, although the payment of price is made by the consumer only to the final seller, all the players in the game get rewarded. And if an end product fails, nobody buys it and the loss (or absence of profit) percolates down the line to all the persons or firms involved.


Holland used similar ideas in his computer simulation of adaptation, to evolve the plausibility factors of the classifiers. The messages competing to be posted on the bulletin board are like the goods and services for sale, and the classifiers generating those messages are like the firms that produce the goods and services. The classifiers not only sell, they also buy, because they scan all the messages on the board, and get activated if the corresponding IF condition is satisfied by a displayed message. When a classifier buys, it is like a firm making a bid, and the highest bidder wins. And how does the classifier ‘pay’ for the purchase? By transferring some of its plausibility strength to the supplying classifier. And since the classifiers are a highly interconnected system, the profits and losses get dispersed, just as in a market economy. Since the environment (likened to the consumer of the end product) provides Hebbian reinforcement for good performance by feedback, the ‘understanding’ evolves at all levels that if you make a good intermediate product you get rewarded with a profit, and if your product is not good you stand to lose. Eventually, the plausibility factor of each classifier evolves to a value matching its true worth to the environment, and this happens without any top-down instructions.

Holland called this part of his modelled CAS the ‘bucket brigade’ algorithm: it passes the reward from each classifier back to the previous classifier in the chain. The idea is similar to how synapses get strengthened in the Hebbian model of the brain. Something similar also happens when the synaptic weights of an artificial neural network (ANN) are reinforced during training to achieve the desired performance via gradual learning.
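A highly simplified sketch of the bucket-brigade idea (the three-classifier chain and all numbers are invented for illustration; this is not Holland's actual implementation): each classifier in an activation chain pays a fraction of its strength as a 'bid' to the classifier that supplied the message it responded to, and the environment rewards only the classifier that acted last. Over repeated episodes the reward seeps back along the chain.

```python
class Classifier:
    def __init__(self, name, strength=10.0):
        self.name = name
        self.strength = strength

def run_episode(chain, reward, bid_fraction=0.1):
    """One pass of bucket-brigade credit assignment along an activation chain.

    chain: classifiers in the order they fired (earliest first).
    Each classifier pays a bid to its predecessor (its 'supplier');
    the last one receives the external reward from the environment."""
    for earlier, later in zip(chain, chain[1:]):
        bid = bid_fraction * later.strength
        later.strength -= bid       # the buyer pays...
        earlier.strength += bid     # ...and the supplier collects
    chain[-1].strength += reward    # environment rewards the final action

# Hypothetical three-classifier chain; repeated reward at the end of the
# chain gradually strengthens the earlier 'stage-setting' classifiers too.
a, b, c = Classifier("A"), Classifier("B"), Classifier("C")
for _ in range(50):
    run_episode([a, b, c], reward=5.0)
print(a.strength, b.strength, c.strength)
```

After enough episodes all three strengths have grown, with the earliest classifier benefiting last: the percolation of profit described above, without any top-down bookkeeping.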


Exploitation vs. exploration

The bucket-brigade algorithm still lacked one aspect of learning and evolution: novelty. It strengthened the classifiers that the system already possessed, but could not explore new possibilities; it had the exploitation feature, but lacked the exploration feature. This was easy to handle: all that Holland had to do was bring in his genetic algorithms (GAs). Classifiers evolved via the GA approach, and the crossover feature of GAs ensured that new possibilities were explored; once in a while, fitter classifiers arose which could replace weaker ones.
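A sketch of what that exploration step might look like (the condition strings and rates are made up; the '#' wildcard notation is the one described in Part 78, below): new classifier conditions are produced by one-point crossover of two existing conditions, followed by occasional mutation.

```python
import random

rng = random.Random(0)
ALPHABET = "01#"     # '#' is the wildcard used in classifier conditions

def crossover(parent1, parent2):
    """One-point crossover of two classifier condition strings."""
    point = rng.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:]

def mutate(condition, rate=0.05):
    """Replace each position with a random symbol with a small probability."""
    return ''.join(rng.choice(ALPHABET) if rng.random() < rate else ch
                   for ch in condition)

# Two hypothetical parent conditions; offspring explore new rule space
# while inheriting most of each parent's structure.
p1, p2 = "1##0011####10", "0#110#1##0#01"
child = mutate(crossover(p1, p2))
print(child)
```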

To summarize the previous post and this one, the rule-based system, along with the bucket-brigade approach and the GA, not only learned from experience, but was also innovative and creative. The bottom-up formalism eliminated the need for a deductive approach wherein rules have to be imposed for deciding what is consistent and desirable, and what is not. Instead, processes of inference, learning, and discovery emerged by induction through the bottom-up route. This was probably the first successful model of emergence in a CAS.

This formalism holds the promise of a theory of cognition.


A number of successful computer simulations have been reported. The one that impressed Holland the most in those days was the work of his student Goldberg (1989), whose Ph.D. thesis demonstrated how Holland’s adaptive-systems formalism could be used successfully for controlling a simulated gas pipeline, an extremely complex problem.

Other examples are available in the literature.

Saturday, 4 May 2013

78. Modelling of Adaptation and Learning in Complex Adaptive Systems



I discussed complex adaptive systems (CASs) in Part 38. John Holland, whose genetic-algorithm (GA) formalism I described in Part 75, realized that a GA by itself was not an adaptive agent. An actual adaptive agent plays games with its environment, which amounts to prediction (or thinking ahead) and feedback (Waldrop 1992).


The ability to think ahead requires the emergence and constant revision of a model of the environment. Of course, this thinking ahead or prediction is not a prerogative of the brain alone. It occurs in all CASs all the time (Holland 1995, 1998): they all evolve models that enable them to anticipate the near future.


That a brain is not required for this is illustrated by the many bacteria that have special enzymes enabling them to swim in directions of increasing glucose concentration. Effectively, these enzymes seem to model a world in which chemicals diffuse outwards from their source. There is also the implicit prediction that if you swim towards regions of higher concentration, then more of something nutritious may be obtained. This ability has evolved through Darwinian natural selection: individuals with a slight tendency towards this behaviour had an evolutionary advantage over those lacking it, and over time the ability became stronger and stronger through natural selection and inheritance.
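A toy sketch of such an 'implicit model' (nothing here resembles real bacterial biochemistry; the concentration field and step sizes are invented): the agent keeps moving in its current direction as long as the concentration improves, and picks a new random direction when it does not. That single rule is enough to climb the gradient, i.e. to behave as if it predicts that food lies in the direction of improvement.

```python
import random

def glucose(x, y):
    """A hypothetical concentration field that peaks at the origin."""
    return 1.0 / (1.0 + x * x + y * y)

def chemotaxis(steps=500, seed=0):
    """Run-and-tumble style gradient climbing on the field above."""
    rng = random.Random(seed)
    directions = [(-0.1, 0), (0.1, 0), (0, -0.1), (0, 0.1)]
    x, y = 5.0, 5.0
    dx, dy = rng.choice(directions)
    last = glucose(x, y)
    for _ in range(steps):
        x, y = x + dx, y + dy
        now = glucose(x, y)
        if now < last:                      # things got worse: 'tumble'
            dx, dy = rng.choice(directions)
        last = now
    return x, y, last

print(chemotaxis())   # typically ends up close to the peak at the origin
```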


How do such models arise, even when there is no 'conscious' thinking involved? Holland’s answer was: Through feedback from the environment. Holland drew inspiration from Hebb’s (1949) neural-network model I described in Part 74. The neural network learns not only through sensory inputs, but also through internal feedbacks. Such feedbacks are essential for the emergence of the resonating cell assemblies in the neural network.

A second ingredient Holland put into his simulated adaptive agent was the IF-THEN rules used so extensively in expert systems. This enhanced the computational efficiency of the artificial adaptive agent.

Holland argued that an IF-THEN rule is, in fact, equivalent to one of Hebb’s cell assemblies. And there is a large amount of overlap among different cell assemblies. Typically a cell assembly involves ~1000 to 10000 neurons, and each neuron has ~1000 to 10000 synaptic connections to other neurons. Thus, activating one cell assembly is like posting a message on something like an ‘internal bulletin board’, and this message is ‘seen’ by most of the other cell assemblies overlapping with the initially activated cell assembly. Those of the latter assemblies that overlap sufficiently with the initial assembly would take actions of their own, and post their own messages on the bulletin board. And this process occurs again and again.

What is more, each of the IF-THEN rules constantly scans the bulletin board to check if any of the messages matches the IF part of the rule. If it does, then the THEN part becomes operative, and this can generate a further chain of reactions from other rules and cell assemblies, each posting a new message on the internal bulletin board.

In addition to the role of cell assemblies and IF-THEN rules, some of the messages on the bulletin board come directly from sensory input data from the environment. Similarly, some of the messages can activate actuators, or emit chemicals, making the system affect the environment. Thus Holland’s digital model of the adaptive system was able to get feedback from the environment, as well as from the agents constituting the network; it also influenced the environment by some of its outputs.

Having done all this, the third innovation introduced by Holland was to ensure that even the language used for the rules and for the messages on the metaphoric internal bulletin board was not in terms of any human concepts or terminology. For this he introduced certain rules called ‘classifiers’:

GAs offer robust procedures that can exploit massively parallel architectures and, applied to classifier systems, they provide a new route toward an understanding of intelligence and adaptation.
                                                                                       John Holland

The rules and messages in Holland’s model for adaptation were just bit-strings, without any imposed interpretation of what a bit string may mean in human terms. For example, a message may be 1000011100, rather like a digital chromosome in his GAs. And an IF-THEN rule may be something like this: If there is a message 1##0011####10 on the board, then post the message 1000111001 on the board (here # denotes that the bit may be either 0 or 1). Thus this abstract representation of IF conditions classified different messages according to specific patterns of bits; hence the name ‘classifiers’ for the rules.
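A minimal sketch of the matching-and-posting cycle (the second rule and the sensor message are made up for illustration): a condition with '#' wildcards matches a message if every non-# position agrees, and every matching classifier posts its THEN-message for the next cycle.

```python
def matches(condition, message):
    """A condition matches a message if every non-# position agrees."""
    return len(condition) == len(message) and all(
        c == '#' or c == m for c, m in zip(condition, message))

def step(board, classifiers):
    """One bulletin-board cycle: every classifier whose IF-condition matches
    some posted message posts its THEN-message for the next cycle."""
    posted = set()
    for condition, action in classifiers:
        if any(matches(condition, msg) for msg in board):
            posted.add(action)
    return posted

# The rule quoted above plus a hypothetical follow-on rule it can trigger.
classifiers = [
    ("1##0011####10", "1000111001"),
    ("10001#1001",    "0000000001"),
]
board = {"1000011000010"}          # e.g. a message coming from a sensor
for _ in range(3):
    board = step(board, classifiers)
    print(board)
# Without fresh sensor messages the chain of postings eventually dies out.
```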

In this classifier system, the meaning of an abstract message is not something defined by the programmer. Instead, it emerges from the way the message causes one classifier rule (or a sensor input) to trigger another message on the board. Apparently, this is how concepts and mental models emerge in the brain in the form of self-supporting clusters of classifiers which self-organize into stable and self-consistent patterns.

There are thousands or tens of thousands of mutually interacting IF-THEN rules, cell assemblies, and classifiers. This may lead to conflicts or inconsistencies regarding action to be taken, or regarding the state a neuron can be in. Instead of introducing a conflict-resolution control from the outside (as in a typical top-down approach), Holland decided that even this should emerge from within. The control must be learned, emerging from the bottom upwards. After all, this is how things happen in real-life systems.

He achieved this by introducing some more innovations into his model. I shall describe these in the next post.