Saturday, 29 June 2013

86. Smart Structures: Blurring the Distinction between the Living and the Nonliving



Robots, discussed in Parts 84 and 85, are a subset of 'smart structures'. Smart bridges, smart surfaces, smart wings of aircraft, and smart cars are some other examples of smart structures.

In all probability there is no clear distinction between life and nonlife. This will become more and more apparent as we humans make progress in the field of smart structures (Wadhawan 2007, Smart Structures: Blurring the Distinction Between the Living and the Nonliving).


Smart or 'adaptronic' structures are defined as structures with an ability to respond adaptively, in a pre-designed, useful, and efficient manner, to changing environmental conditions, including any changes in their own condition. The response is adaptive in the sense that two or more stimuli or inputs may be received, and yet a single response function is produced, as per design. The structure is designed to ensure that it gives optimal performance under a variety of environmental conditions.


Any smart structure, biological or artificial, typically has a host structure or ‘body’, which has an interface with a source of energy. The body houses or supports one or more sensors (e.g. ‘nerves’) and one or more actuators (e.g. ‘muscles’). The sensors and the actuators interface with a control centre or ‘brain’. Since both sensors and actuators interface with the control centre, they also interface with each other, albeit indirectly. In a good smart structure there is extensive and continuous feedback and communication of information among the various subunits.


The basic action plan of a smart structure is essentially as follows. The data input by the sensors is analysed by the control centre. If the course of action is clear, the control centre signals the actuators to take the action. If the course of action is not clear, the control centre directs the sensors to collect additional data. This goes on cyclically till the course of action becomes clear, and the action is then taken. The action taken depends on the overall purpose or objective.
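This sense-analyse-act cycle can be expressed as a minimal sketch in Python (all names here are illustrative, not drawn from any particular robotics library):

```python
# A minimal sketch of the sense-analyse-act cycle of a smart structure.
# `sensors` have a read() method, `actuators` an apply() method, and
# `decide` is the control centre: it returns an action, or None if the
# course of action is not yet clear. All names are hypothetical.

def control_loop(sensors, actuators, decide, max_cycles=100):
    data = [s.read() for s in sensors]            # sensors report in
    for _ in range(max_cycles):
        action = decide(data)                     # control centre analyses
        if action is not None:                    # course of action is clear
            for a in actuators:
                a.apply(action)                   # actuators take the action
            return action
        data += [s.read() for s in sensors]       # else collect more data
    raise RuntimeError("no clear course of action emerged")
```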

To consider the example of a human (obviously a smart structure): if a person puts his or her hand on a hot surface, the tactile sensors send a signal to the brain, which then immediately directs the muscles to pull the hand away from the hot surface. The purpose of this smart structure is to survive and propagate.

Since the smartest structures around us are those designed by Nature through aeons of trial and error and evolution, it makes sense for us to emulate Nature when we want to design artificial smart structures. In fact, an alternative definition of a smart structure can be given as follows:

Smart structures are those which possess characteristics close to, and, if possible, exceeding, those found in biological structures.

Sensors

Sensing involves measurement followed by information-processing. The sensor may be a system by itself, or it may be a subsystem in a larger system (e.g. a complete smart system). In artificial smart structures, optical fibres constitute the most versatile sensors. A variety of ferroic materials also serve as sensor materials. Some notable examples are piezoelectric materials like quartz and PZT, and relaxor ferroelectrics like PMN-PT.

The concept of 'integrated sensors' has been gaining ground for quite some time. Typically, a microsensor is integrated with signal-processing circuits in a single package. This package not only transduces the sensor inputs into electrical signals, but may also have other signal-processing and decision-making capabilities. Integrated sensors offer several advantages: a better signal-to-noise ratio, improved characteristics, and built-in signal conditioning and formatting.
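The idea can be illustrated with a small sketch (purely hypothetical names and numbers): transduction, conditioning, and formatting all happen within one package.

```python
# Sketch of an 'integrated sensor': raw transduction plus on-package
# signal conditioning and formatting. All details are illustrative.

class IntegratedSensor:
    def __init__(self, transduce, n_samples=16):
        self.transduce = transduce    # raw physical-to-electrical transduction
        self.n_samples = n_samples

    def read(self):
        # Conditioning: average several raw samples to improve the
        # signal-to-noise ratio.
        raw = [self.transduce() for _ in range(self.n_samples)]
        conditioned = sum(raw) / len(raw)
        # Formatting: emit a structured record, not a bare voltage.
        return {"value": conditioned, "units": "V", "samples": self.n_samples}
```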

Actuators

An actuator creates controllable mechanical motion from other forms of energy. Materials used for sensors and actuators in smart structures fall into three categories: ferroic materials; nanostructured materials; and soft materials.

Microactuators are the current rage. At present it is usually necessary that they be compatible with the materials and processing technologies used in silicon microelectronics. The two main types of microactuators are ‘mechanisms’ and ‘deformable microstructures’. The former provide displacement through rigid-body motion. The latter do this by mechanical deformation or straining.

Control systems

The use of computers is necessary for developing sophisticated control systems for smart structures (e.g. robots) which can learn and take decisions. There are various approaches to computational intelligence and evolutionary robotics. The evolution of distributed intelligence should be emphasized in this context; I shall discuss this in the next post.

Microelectromechanical systems (MEMS)

MEMS involve a high degree of miniaturization and integration of the sensors, actuators, and control systems that comprise the smart structure or system. Miniaturization and integration have many advantages: lower cost; higher reliability; higher speed; and capability for a higher degree of complexity and sophistication. Some authors equate smart structures with MEMS.


Modern applications of MEMS include those in biomedical engineering, wireless communications, and data storage. Some of the more integrated and complex applications are in microfluidics, aerospace, and biomedical devices.

Silicon tops the list of materials used for MEMS because of its favourable mechanical and electrical properties, and also because of the already entrenched IC-chip technology. More recently, there have been advances in the technology of using multifunctional polymers for fabricating 3-dimensional MEMS (Varadan, Jiang and Varadan 2001). Organic-materials-based MEMS are also conceivable now, after the invention of the organic thin-film transistor. In the overall smart systems involving MEMS, there is also use for ceramics, metals, and alloys, as well as a number of ferroic or multiferroic materials.


I shall discuss robots of the future in a separate post. Suffice it to say here that, as Kurzweil predicts, we are approaching a 'technological singularity', beyond which robots will overtake us in all abilities, and technological progress will be so rapid as to outstrip our present ability to comprehend it. We shall transform ourselves and augment our minds and bodies with genetic alterations, MEMS, NEMS (nanoelectromechanical systems), and true machine intelligence. That would mark a complete blurring of the distinction between the 'living' and the 'nonliving'.






Saturday, 22 June 2013

85. Evolutionary Robotics


Evolutionary robotics is a technique for the automatic creation of autonomous robots (Nolfi and Floreano 2000). It works by analogy with the Darwinian principle of selective reproduction of the fittest.


I described genetic algorithms (GAs) in Part 75. Typically, a population of artificial chromosomes or digital organisms (strings of commands) is created in the computer controlling the robot. Each such chromosome encodes the control system (or even the morphology) of a robot. The various possible robots (actual, or just simulations) then interact with the environment, and their performance is measured against a set of specified criteria (the ‘fitness functions’). The fittest robots are given a higher chance of being considered for the creation of next-generation robots (specified by a new set of chromosomes drawn from the most successful chromosomes of the previous generation). Processes such as mutation and crossover are also introduced, by analogy with biological evolution. The whole process of decoding the instructions on the chromosomes and implementing them is repeated, and the fitness functions for each member of the new robot population are computed again. This is repeated for many generations, till a robot configuration having a pre-determined set of performance parameters is obtained.
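The generational loop just described can be sketched in a few lines of Python. This is a bare-bones illustration, not the method of any particular system: the chromosome is a bit string, and the user-supplied fitness function is assumed to evaluate the robot controller it encodes.

```python
import random

# Bare-bones sketch of the evolutionary-robotics loop described above.
# A chromosome is a bit string encoding a (hypothetical) robot controller;
# `fitness` must evaluate that controller against the environment.

def evolve(fitness, n_bits=32, pop_size=50, generations=100,
           mutation_rate=0.01, target=0.95):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in pop]
        if max(scores) >= target:          # pre-determined performance reached
            break

        def pick():                        # tournament selection: fitter
            contenders = random.sample(range(pop_size), 3)   # chromosomes
            return max(contenders, key=lambda i: scores[i])  # are favoured

        next_pop = []
        for _ in range(pop_size):
            a, b = pop[pick()], pop[pick()]
            cut = random.randrange(1, n_bits)          # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mutation_rate)  # rare bit-flip
                     for g in child]                        # mutations
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```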


The goal here is to evolve robots with creative problem-solving capabilities. This means that they must learn from real-life experiences, so that, as time passes, they get better and better at problem-solving. The analogy with how a small child learns and improves as it grows is an apt one. The child comes into the world equipped with certain sensors and actuators (eyes, ears, hands, etc.), as well as a brain. Through a continuous process of experimentation (unsupervised learning), learning from parents and teachers (supervised learning), and learning from the hard knocks of life, including rewards for certain kinds of action (reinforcement learning), the child’s brain performs evolutionary computation. This field of research involves a good deal of simulation and generalization work. Apart from GAs, other techniques of computational intelligence are also employed.


In evolutionary robotics, the key question to answer is: Under what conditions is artificial evolution likely to select individual robots which develop new competencies for adapting to their artificial environment? Nolfi and Floreano (2000) discuss the answer in terms of three main issues:
(i) generating incremental evolution through interspecific and intraspecific competition;
(ii) leaving the system free to decide how to extract supervision from the environment; and
(iii) bringing the genotype-to-phenotype mapping into the evolutionary process itself.

Incremental evolution

In biological evolution, natural selection is governed by the ability to reproduce. But there are many examples of improvements (sophisticated competencies) over and above what is essential for sheer survival and propagation of a species. Emergence of the ability to fly is one example. A more complex example is the emergence of speech and language in humans. Obviously, such competencies, which bestow a survival and propagation advantage on a species, do arise from time to time.

As discussed in Part 78, Holland used GAs in his model for ensuring an element of perpetual novelty in the adapting population. He also incorporated, among other things, the notion of classifier systems. Classifier systems were also introduced by Dorigo and Colombetti (1994, 1997), who used the term robot shaping, borrowing it from experimental psychology, wherein animals are trained to elicit pre-specified responses.

A simple approach to evolving a particular ability or competency in a robot is to attempt selection through a suitable fitness function. This works well for simple tasks. For a complex task, the requisite fitness function turns out to be such that all individuals fail the test, and evolution grinds to a halt. This is referred to as the bootstrap problem (Nolfi and Floreano 2000).

Incremental evolution is a way out of this impasse. Define a new task, and the corresponding fitness function or selection criterion, that is only a slight variation of what the robot is already able to do; then do this repeatedly. The sophisticated competency can thus be acquired in a large number of incremental stages (Dorigo and Colombetti 1994, 1997). Such an approach has the potential drawback that the designer may end up introducing excessive constraints.
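In code, incremental evolution amounts to little more than chaining stages together, each stage seeding the next. The sketch below is illustrative; the stage names in the comment are invented examples.

```python
# Sketch of incremental evolution: a sequence of tasks, each a slight
# extension of the last, so that the population never faces the all-fail
# fitness landscape of the bootstrap problem. Names are illustrative.

def evolve_incrementally(stages, evolve_one_stage, population):
    """`stages` is a list of fitness functions, easiest first; the
    population that passes one stage seeds the next, harder stage."""
    for fitness in stages:
        population = evolve_one_stage(population, fitness)
    return population

# For a mobile robot, the stages might be, e.g.:
#   [move_forward_fitness, avoid_walls_fitness, reach_target_fitness]
```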

A completely self-organized evolution of new competencies, as happens in biological evolution, has to invoke phenomena like the 'coevolution of species' and the attendant 'arms races' within a species or between species (I shall discuss these in future posts). Coevolution can lead to progressively more complex behaviour among competing robots. For example, predator and prey both improve in their struggle for survival and propagation. In such a scenario, even though the selection criterion may not change, the task of adaptation becomes progressively more complex.
                                                                                
Extracting supervision from the environment through lifetime learning

A human supervisor can help a robot evolve new competencies. But it is more desirable if the robot can autonomously extract that high-level supervision from the environment itself. A large amount of information is received by the robot through its sensors, and it adapts to the environment through feedback mechanisms. As the environment changes, new strategies are evolved for dealing with the change. The evolving robot also gets feedback on the consequences of its motor actions. Thus, the robot can effectively extract supervision from the environment through a lifetime of adaptation and learning about what actions produce what consequences.
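One concrete way of extracting such supervision (a minimal sketch, not drawn from any specific system) is for the robot to learn a forward model that predicts the sensory consequences of its own motor actions; the prediction error then serves as a free training signal supplied by the environment itself.

```python
# Minimal sketch of self-supervised lifetime learning: a linear forward
# model predicts the next sensor reading from the current reading and the
# motor command. The prediction error is the 'supervision' extracted from
# the environment. Scalar readings and all details are illustrative.

class ForwardModel:
    def __init__(self, lr=0.01):
        self.w_sense, self.w_motor, self.bias = 0.0, 0.0, 0.0
        self.lr = lr

    def predict(self, sensor, motor):
        return self.w_sense * sensor + self.w_motor * motor + self.bias

    def update(self, sensor, motor, next_sensor):
        error = next_sensor - self.predict(sensor, motor)  # free training signal
        self.w_sense += self.lr * error * sensor
        self.w_motor += self.lr * error * motor
        self.bias += self.lr * error
        return error
```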

One makes a distinction between ontogenetic adaptation and phylogenetic adaptation. The former refers to the adaptation resulting from a lifetime of learning, in which the individual is tested almost continuously for fitness for a task. The latter involves only one test of fitness, namely when it comes to procreating the next generation of candidate robots. Not much progress has been made yet regarding the exploitation of ontogenetic adaptation within evolutionary robotics.

Development and evolution of evolvability

In a GA the chromosomes code the information which must be decoded for evaluating the fitness of an individual. Thus there is a mapping of information from the genotype to the phenotype, and it is the phenotype which is used for evaluating the fitness. Genotypes of individuals with fitter phenotypes are given a larger probability of selection for procreating the next generation. The problem is that a one-to-one genotype-to-phenotype mapping is too simplistic a procedure. In reality, i.e. in real biological evolution, a phenotype may be influenced by more than one gene. The way genetic variation maps onto phenotypic variation in evolutionary biology is a highly nontrivial problem, and is commonly referred to as the representation problem. The term evolvability is used for the ability of random variations to sometimes result in improvements. As emphasized by Wagner and Altenberg (1996), evolvability is essential for adaptability. And incorporating evolvability into evolutionary robotics amounts to tackling the representation problem. Efforts have indeed been made to incorporate more complex types of mapping than a simple one-to-one mapping of one gene to one robotic characteristic (Nolfi and Floreano 2000).
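The contrast between the two kinds of mapping can be made concrete with a toy sketch (the weights and gene-to-trait assignments below are invented for illustration):

```python
# Toy contrast between a one-to-one genotype-to-phenotype mapping and a
# polygenic one, in which each trait is influenced by several genes and
# one gene may influence several traits (pleiotropy).

def one_to_one(genotype):
    # Gene i directly sets trait i -- the simplistic case criticised above.
    return list(genotype)

def polygenic(genotype, influence):
    # influence[t] lists (gene_index, weight) pairs contributing to trait t.
    return [sum(weight * genotype[gene] for gene, weight in genes)
            for genes in influence]

# e.g. trait 0 depends on genes 0 and 2, trait 1 on genes 1 and 2:
# phenotype = polygenic([1, 0, 1],
#                       [[(0, 1.0), (2, 0.5)], [(1, 1.0), (2, -0.5)]])
```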




Saturday, 15 June 2013

84. Behaviour-Based Robotics



The future of mankind is going to be affected in a very serious way by developments in robotics. Let us make friends with robots and try to understand them.

There are two main types of robots: industrial robots, and autonomous robots. Industrial robots do useful work in a structured or pre-determined environment. They do repetitive jobs like fabricating cars, stitching shirts, or making computer chips, all according to a set of instructions programmed into them.

Autonomous or smart robots, by contrast, are expected to work in an unstructured environment. They move around in an environment that has not been specifically engineered for them, and do useful and ‘intelligent’ work. They have to interact with a dynamically changing and complex world, with the help of sensors and actuators and a brain centre.


There have been several distinct or parallel approaches to the development of machine intelligence (Nolfi and Floreano 2000). The classical artificial-intelligence (AI) approach attempted to imitate some aspects of rational thought. Cybernetics, on the other hand, tended to adopt the human-nervous-system approach more directly. And evolutionary or adaptive robotics embodies a convergence of the two approaches.



Thus the main routes to the development of autonomous robots are:

  • behaviour-based robotics;
  • robot learning;
  • artificial-life simulations (in conjunction with physical devices comprising the robot); and
  • evolutionary robotics.
I have already discussed artificial life in Part 77. Let us focus on behaviour-based robotics here.


In the traditional AI approach to robotics, the computational work for robot control is decomposed into a chain of information-processing modules, proceeding from overall sensing to overall final action. By contrast, in behaviour-based robotics (Brooks; Arkin), the designer provides the robot with a set of simple basic behaviours. A parallel is drawn from how coherent intelligence (‘swarm intelligence’) emerges in a beehive or an ant colony from a set of very simple behaviours. In such a vivisystem, each agent is a simple device interacting with the world with sensors, actuators, and a very simple brain.


In Brooks’ ‘subsumption architecture’, the decomposition of the robot-control process is done in terms of behaviour-generating modules, each of which connects sensing to action directly. Like an individual bee in a beehive, each behaviour-generating module directly generates some part of the behaviour of the robot. The tight (proximal) coupling of sensing to action produces an intelligent network of simple computational elements that are broad rather than deep in perception and action.


There are two further concepts in this approach: ‘situatedness’, and ‘embodiment’. Situatedness means the incorporation of the fact that the robot is situated in the real world, which directly influences its sensing, actuation, and learning processes. Embodiment means that the robot is not some abstraction inside a computer, but has a body which must respond dynamically to the signals impinging on it, using immediate feedback. This makes evolution of intelligence in a robot more realistic than the artificial evolution carried out entirely inside a computer.

In Brooks' (1986) approach, the desired behaviour is broken down into a set of simpler behaviours (‘layers’), and the solution (namely the control system) is built up incrementally, layer by layer. Simple basic behaviours are mastered first, and behaviours of higher levels of sophistication are added gradually, layer by layer. Although basic behaviours are implemented in individual subparts or layers, a coordination mechanism is incorporated in the control system, which determines the relative strength of each behaviour in any particular situation.


Coordination may involve both competition and cooperation. In a competitive scenario, only one behaviour determines the motor output of the robot. Cooperation means that a weighted sum of many behaviours determines the robot response.
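Both coordination schemes can be captured in a short sketch (behaviour names and the default output are invented for illustration). In the competitive scheme, the layers are consulted in priority order and the first applicable behaviour wins, as in the subsumption architecture; in the cooperative scheme, the motor output is a weighted sum.

```python
# Sketch of behaviour coordination in behaviour-based robotics. A behaviour
# maps the current sensory situation directly to a proposed motor command,
# or returns None when it is not applicable. Details are illustrative.

def competitive(behaviours, sensors):
    """Subsumption-style: the highest-priority applicable behaviour wins."""
    for behaviour in behaviours:           # ordered highest priority first
        command = behaviour(sensors)
        if command is not None:
            return command
    return 0.0                             # default: no motor output

def cooperative(weighted_behaviours, sensors):
    """A weighted sum of the applicable behaviours sets the motor output."""
    total, weight_sum = 0.0, 0.0
    for weight, behaviour in weighted_behaviours:
        command = behaviour(sensors)
        if command is not None:
            total += weight * command
            weight_sum += weight
    return total / weight_sum if weight_sum else 0.0
```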

In spite of the progress made in behaviour-based robotics, the fact remains that autonomous mobile robots are difficult to design. The reason is that their behaviour is an emergent property (Nolfi and Floreano 2000). By their very nature, emergent phenomena in a complex system (in this case the robot interacting with its surroundings) are practically impossible to predict, even if we have all the information about the sensor inputs to the robot and the consequences of all the motor outputs. The major drawback of behaviour-based robotics is that the trial-and-error process for improving performance is judged and controlled by an outsider, namely the designer. It is not a fully self-organizing and evolutionary approach for the growth of robotic intelligence. And it is not easy for the designer to do a good job of breaking down the global behaviour of a robot into a set of simple basic behaviours. One reason for this difficulty is that an optimal solution of the problem depends on who is describing the behaviour: the designer or the robot? The description can be distal or proximal (Nolfi and Floreano 2000).

The proximal description of the behaviour of the robot is from the vantage point of its sensorimotor system: it describes how the robot reacts to different sensory situations.

The distal description is from the point of view of the designer or the observer. In it, the results of a sequence of sensorimotor loops may be described in terms of high-level words like ‘approach’ or ‘discriminate’. Such a description of behaviour results not only from the sensorimotor mapping, but also from the environment. It thus incorporates the dynamical interaction between the robot and the environment, and that leads to some difficult problems. The environment affects the robot, and the robot affects the environment, which in turn affects the robot in a modified way, and so on. This interactive loop makes it difficult for the designer to break up the global behaviour of the robot into a set of elementary or basic behaviours that are simple from the vantage point of the proximal description. Because of the emergent nature of behaviour, it is difficult to predict what behaviour will result from a given control system. Conversely, it is also difficult to predict what pattern of control configurations will result in a desired behaviour.

As we shall see in the next post, this problem is overcome in evolutionary robotics by treating the robot and the environment as a single system, in which the designer has no role to play. After all, this is how all complexity has evolved in Nature.