Saturday 15 June 2013

84. Behaviour-Based Robotics



The future of mankind is going to be affected in a very serious way by developments in robotics. Let us make friends with robots and try to understand them.

There are two main types of robots: industrial robots and autonomous robots. Industrial robots do useful work in a structured or pre-determined environment. They do repetitive jobs like fabricating cars, stitching shirts, or making computer chips, all according to a set of instructions programmed into them.

Autonomous or smart robots, by contrast, are expected to work in an unstructured environment. They move around in an environment that has not been specifically engineered for them, and do useful and ‘intelligent’ work. They have to interact with a dynamically changing and complex world, with the help of sensors, actuators, and a brain centre.


There have been several distinct or parallel approaches to the development of machine intelligence (Nolfi and Floreano 2000). The classical artificial-intelligence (AI) approach attempted to imitate some aspects of rational thought. Cybernetics, on the other hand, tended to adopt the human-nervous-system approach more directly. And evolutionary or adaptive robotics embodies a convergence of the two approaches.



Thus the main routes to the development of autonomous robots are:

  • behaviour-based robotics;
  • robot learning;
  • artificial-life simulations (in conjunction with physical devices comprising the robot); and
  • evolutionary robotics.
I have already discussed artificial life in Part 77. Let us focus on behaviour-based robotics here.


In the traditional AI approach to robotics, the computational work for robot control is decomposed into a chain of information-processing modules, proceeding from overall sensing to overall final action. By contrast, in behaviour-based robotics (Brooks; Arkin), the designer provides the robot with a set of simple basic behaviours. A parallel is drawn with the way coherent intelligence (‘swarm intelligence’) emerges in a beehive or an ant colony from a set of very simple behaviours. In such a vivisystem, each agent is a simple device that interacts with the world through sensors and actuators, guided by a very simple brain.
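
To make the contrast concrete, here is a minimal, purely illustrative sketch of the classical serial decomposition. The stage names, sensor fields, and thresholds are my own placeholders, not taken from any actual system; each stage is a trivial stub standing in for what would normally be a substantial module.

```python
# A caricature of the classical decomposition: a serial chain of
# information-processing stages from overall sensing to overall action.
# All stages are trivial stubs; names and thresholds are illustrative only.

def perceive(raw):
    return {"obstacle_near": raw["front_distance"] < 0.3}   # feature extraction

def update_model(percepts):
    return percepts                                          # stand-in for a world model

def make_plan(world):
    return "turn" if world["obstacle_near"] else "go"        # deliberative step

def execute(plan):
    return ({"turn": 1.0, "forward": 0.0} if plan == "turn"
            else {"turn": 0.0, "forward": 0.5})              # motor command

def classical_control_step(raw_sensors):
    # one long chain: sensing -> modelling -> planning -> action
    return execute(make_plan(update_model(perceive(raw_sensors))))
```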


In Brooks’ ‘subsumption architecture’, the decomposition of the robot-control process is done in terms of behaviour-generating modules, each of which connects sensing to action directly. Like an individual bee in a beehive, each behaviour-generating module directly generates some part of the behaviour of the robot. This tight (proximal) coupling of sensing to action produces an intelligent network of simple computational elements that are broad rather than deep in perception and action.
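
As a counterpart to the serial pipeline above, here is an equally minimal sketch of two behaviour-generating modules, each coupling sensing directly to action with no shared world model. The field names, motor commands, and thresholds are again illustrative assumptions, and these plain functions are only a stand-in for Brooks’ actual modules, which are built from networks of finite-state machines.

```python
import random

# Two behaviour-generating modules, each mapping sensor readings directly to a
# motor command. Field names and thresholds are illustrative only.

def avoid_obstacle(sensors):
    """Active only when something is too close in front."""
    if sensors["front_distance"] < 0.3:              # metres, illustrative threshold
        return {"turn": 1.0, "forward": 0.0}         # turn away, stop advancing
    return None                                      # otherwise stay silent

def wander(sensors):
    """Always active: drift forward with a small random change of heading."""
    return {"turn": random.uniform(-0.2, 0.2), "forward": 0.5}
```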


There are two further concepts in this approach: ‘situatedness’, and ‘embodiment’. Situatedness means the incorporation of the fact that the robot is situated in the real world, which directly influences its sensing, actuation, and learning processes. Embodiment means that the robot is not some abstraction inside a computer, but has a body which must respond dynamically to the signals impinging on it, using immediate feedback. This makes evolution of intelligence in a robot more realistic than the artificial evolution carried out entirely inside a computer.

In Brooks' (1986) approach, the desired behaviour is broken down into a set of simpler behaviours (‘layers’), and the solution (namely the control system) is built up incrementally: simple basic behaviours are mastered first, and behaviours of higher levels of sophistication are added gradually, layer by layer. Since the basic behaviours are implemented in separate subparts or layers, a coordination mechanism is incorporated in the control system to determine the relative strength of each behaviour in any particular situation.
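
A rough sketch of how such a layered controller might be put together, reusing the two illustrative behaviour modules above: behaviours are listed in order of increasing priority, and a simple priority rule stands in for the coordination mechanism. Brooks' actual scheme wires the layers together with suppression and inhibition links, so this is a deliberate simplification.

```python
# Layer-by-layer composition with a simple priority-based coordinator:
# the last active behaviour in the list overrides the ones before it.
# Adding a new layer of behaviour just means appending to this list.

behaviours = [wander, avoid_obstacle]        # later entries take priority

def control_step(sensors):
    command = None
    for behaviour in behaviours:             # evaluate in order of increasing priority
        output = behaviour(sensors)
        if output is not None:               # an active behaviour overrides earlier ones
            command = output
    return command

# Example: with an obstacle 0.2 m ahead, avoid_obstacle overrides wander.
print(control_step({"front_distance": 0.2}))   # -> {'turn': 1.0, 'forward': 0.0}
```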


Coordination may involve both competition and cooperation. In a competitive scenario, only one behaviour determines the motor output of the robot at any given time. Cooperation means that a weighted sum of many behaviours determines the robot's response.
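
The priority rule in the previous sketch is competitive: one behaviour wins outright. A cooperative coordinator instead blends the commands of all currently active behaviours; the weights below are arbitrary and purely illustrative.

```python
# A cooperative coordinator: the motor command is a weighted sum of the
# proposals of all currently active behaviours. Weights are illustrative.

weights = {"avoid_obstacle": 0.7, "wander": 0.3}

def cooperative_step(sensors):
    blended = {"turn": 0.0, "forward": 0.0}
    total = 0.0
    for behaviour in (avoid_obstacle, wander):
        proposal = behaviour(sensors)
        if proposal is None:                     # behaviour not active right now
            continue
        w = weights[behaviour.__name__]
        total += w
        for key in blended:
            blended[key] += w * proposal[key]
    if total > 0.0:
        blended = {k: v / total for k, v in blended.items()}   # renormalize
    return blended
```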

In spite of the progress made in behaviour-based robotics, the fact remains that autonomous mobile robots are difficult to design. The reason is that their behaviour is an emergent property (Nolfi and Floreano 2000). By their very nature, emergent phenomena in a complex system (in this case the robot interacting with its surroundings) are practically impossible to predict, even if we have all the information about the sensor inputs to the robot and the consequences of all the motor outputs.

The major drawback of behaviour-based robotics is that the trial-and-error process for improving performance is judged and controlled by an outsider, namely the designer. It is not a fully self-organizing and evolutionary approach to the growth of robotic intelligence. Moreover, it is not easy for the designer to do a good job of breaking down the global behaviour of a robot into a set of simple basic behaviours. One reason for this difficulty is that an optimal decomposition depends on who is describing the behaviour, the designer or the robot: the description can be distal or proximal (Nolfi and Floreano 2000).

A proximal description of the robot's behaviour is one given from the vantage point of its sensorimotor system: it specifies how the robot reacts to different sensory situations.

A distal description, by contrast, is given from the point of view of the designer or observer. In it, the results of a sequence of sensorimotor loops may be described in high-level terms like ‘approach’ or ‘discriminate’. Such a description of behaviour is the result not only of the sensorimotor mapping, but also of the environment in which the robot operates. It thus incorporates the dynamical interaction between the robot and the environment, and that leads to some difficult problems. The environment affects the robot, and the robot affects the environment, which in turn affects the robot in a modified way, and so on. This interactive loop makes it difficult for the designer to break up the global behaviour of the robot into a set of elementary or basic behaviours that are simple from the vantage point of the proximal description. Because of the emergent nature of behaviour, it is difficult to predict what behaviour will result from a given control system. Conversely, it is also difficult to predict what pattern of control configurations will produce a desired behaviour.

As we shall see in the next post, this problem is overcome in evolutionary robotics by treating the robot and the environment as a single system, in which the designer has no role to play. After all, this is how all complexity has evolved in Nature.
