I discussed complex adaptive systems (CASs) in Part 38. John Holland, whose genetic-algorithm (GA) formalism I described in Part 75, realized that a GA by itself was not an adaptive agent. An actual adaptive agent plays games with its environment, and playing such games amounts to prediction (or thinking ahead) and feedback (Waldrop 1992).
The ability to think ahead requires the emergence and constant revision of a model of the environment. Of course, such thinking ahead or prediction is not the prerogative of the brain alone. It occurs in all CASs all the time (Holland 1995, 1998): they all evolve models that enable them to anticipate the near future.
That a brain is not required for this is illustrated by the many bacteria that possess special enzymes enabling them to swim in directions of increasing glucose concentration. Effectively, these enzymes model a world in which chemicals diffuse outwards from their source, with the implicit prediction that swimming towards regions of higher concentration will yield more of something nutritious. This ability evolved through Darwinian natural selection: individuals with even a slight tendency towards such behaviour had an advantage over those lacking it, and over generations the ability was strengthened through natural selection and inheritance.
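As a toy illustration (and emphatically not a model of real bacterial biochemistry), here is a minimal Python sketch of this implicit gradient-climbing strategy: the agent tries a random step and keeps it only if the local glucose concentration increases. The glucose field and the step rule are my own assumptions.

```python
import random

def glucose(x, y, source=(0.0, 0.0)):
    """Illustrative glucose field: concentration falls off with
    distance from a point source (an assumption, not a diffusion model)."""
    d2 = (x - source[0])**2 + (y - source[1])**2
    return 1.0 / (1.0 + d2)

def step(x, y, size=0.1):
    """One trial move: pick a random direction and keep the step
    only if the concentration there is higher."""
    dx = random.uniform(-size, size)
    dy = random.uniform(-size, size)
    if glucose(x + dx, y + dy) > glucose(x, y):
        return x + dx, y + dy   # 'swim on': things are improving
    return x, y                 # otherwise stay and try a new direction

x, y = 5.0, 5.0
for _ in range(10000):
    x, y = step(x, y)
print(x, y)   # the agent ends up near the source at (0, 0)
```

No single step 'knows' where the source is; the implicit model of diffusing chemicals is embodied entirely in the keep-the-step-if-better rule.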
How do such models arise, even when there is no 'conscious' thinking involved? Holland's answer was: through feedback from the environment. Here he drew inspiration from Hebb's (1949) neural-network model, which I described in Part 74. The neural network learns not only through sensory inputs, but also through internal feedback. Such feedback is essential for the emergence of the resonating cell assemblies in the neural network.
A second ingredient Holland put into his simulated adaptive agent was the
IF-THEN rules used so extensively in expert systems. This enhanced the computational efficiency of the artificial adaptive
agent.
Holland argued that an IF-THEN rule is, in fact, equivalent to one of Hebb's cell assemblies. And there is a large amount of overlap among different cell assemblies: a typical assembly involves ~1000 to 10,000 neurons, and each neuron has ~1000 to 10,000 synaptic connections to other neurons. Thus, activating one cell assembly is like posting a message on something like an 'internal bulletin board', and this message is 'seen' by most of the other cell assemblies overlapping with the initially activated one. Those that overlap sufficiently take actions of their own and post their own messages on the bulletin board, and this process occurs again and again.
What is more, each of the IF-THEN rules constantly scans the bulletin board to check whether any of the messages matches the IF part of the rule. If one does, the THEN part becomes operative, and this can generate a further chain of reactions from other rules and cell assemblies, each posting a new message on the internal bulletin board.
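To show the mechanics of this scan-and-post cycle, here is a minimal Python sketch. The rules and messages are human-readable tags of my own invention, purely for illustration; Holland's actual messages were uninterpreted bit strings, as described below.

```python
# Minimal bulletin-board cycle: every rule scans the board, and each
# rule whose IF part matches a posted message posts its THEN part
# in the next cycle (illustrative names, not Holland's).

rules = [
    ("hungry", "seek-food"),   # IF 'hungry' is on the board THEN post 'seek-food'
    ("seek-food", "move"),     # chains off the previous rule's message
]

board = {"hungry"}             # initial message, e.g. from a sensor
for cycle in range(3):
    print(cycle, sorted(board))
    # matching rules post their THEN parts onto the next board
    board = {then for (cond, then) in rules if cond in board}
```

Each firing posts a message that may in turn trigger other rules, giving exactly the chain of reactions described above.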
In addition to those posted by cell assemblies and IF-THEN rules, some of the messages on the bulletin board come directly from sensory input data from the environment. Similarly, some of the messages can activate actuators, or trigger the emission of chemicals, making the system act on the environment. Thus Holland's digital model of the adaptive system was able to get feedback from the environment, as well as from the agents constituting the network; it also influenced the environment through some of its outputs.
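Extending the sketch above with an environment in the loop (again with names and numbers that are purely my own assumptions): a detector encodes a sensor reading as a message, and an effector message changes the environment itself.

```python
# Detectors translate the environment into board messages; effectors
# translate certain messages back into actions on the environment.

def detect(environment):
    """Detector: encode a (made-up) sensor reading as a message."""
    return "food-near" if environment["food_distance"] < 1.0 else "no-food"

def act(message, environment):
    """Effector: certain messages act back on the environment."""
    if message == "move":
        environment["food_distance"] -= 0.5   # the agent closes in

environment = {"food_distance": 2.0}
rules = [("no-food", "move"), ("food-near", "eat")]

for cycle in range(4):
    board = {detect(environment)}                               # sensory message
    board |= {then for (cond, then) in rules if cond in board}  # rule firings
    for msg in board:
        act(msg, environment)                                   # effector output
    print(cycle, sorted(board), environment)
```

The loop closes: the environment feeds the board, and the board feeds back into the environment.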
Having done all this, the third innovation Holland introduced was to ensure that even the language used for the rules and for the messages on the metaphoric internal bulletin board was not couched in any human concepts or terminology. For this he introduced certain rules called 'classifiers':
"GAs offer robust procedures that can exploit massively parallel architectures and, applied to classifier systems, they provide a new route toward an understanding of intelligence and adaptation."
– John Holland
The rules and messages in Holland's model for adaptation were just bit strings, without any imposed interpretation of what a bit string may mean in human terms. For example, a message may be 1000011100, rather like a digital chromosome in his GAs. And an IF-THEN rule may be something like this: if there is a message matching 1##0011### on the board, then post the message 1000111001 on the board (here # is a 'don't care' symbol: that bit may be either 0 or 1; conditions have the same fixed length as messages). Thus this abstract representation of IF conditions classified different messages according to specific patterns of bits; hence the name 'classifiers' for the rules.
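Here is a minimal Python sketch of such a classifier match, using the example message and rule just given; the function and the loop are my own, and everything else a real classifier system carries (rule strengths, competition, and so on) is omitted.

```python
def matches(condition, message):
    """True if the message fits the condition bit by bit,
    where '#' in the condition accepts either 0 or 1."""
    return len(condition) == len(message) and all(
        c in ("#", m) for c, m in zip(condition, message)
    )

# The rule from the text: IF a message matching 1##0011### is on
# the board, THEN post 1000111001.
board = {"1000011100"}
if any(matches("1##0011###", msg) for msg in board):
    board.add("1000111001")
print(sorted(board))   # both messages are now on the board
```

Note that the condition 1##0011### indeed matches the message 1000011100 at every non-# position, so the rule fires and posts its new message.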
In this classifier system, the meaning of an abstract message is not something defined by the programmer. Instead, it emerges from the way the message causes one classifier rule (or a sensor input) to trigger another message on the board. Holland suggested that this is also how concepts and mental models emerge in the brain: as self-supporting clusters of classifiers which self-organize into stable and self-consistent patterns.
With thousands or tens of thousands of mutually interacting IF-THEN rules, cell assemblies, and classifiers, conflicts or inconsistencies can arise regarding the action to be taken, or regarding the state a neuron can be in. Instead of introducing conflict-resolution control from the outside (as in a typical top-down approach), Holland decided that even this should emerge from within: the control must be learned, emerging from the bottom upwards. After all, this is how things happen in real-life systems.
He achieved this by introducing some more innovations into his model. I shall
describe these in the next post.