  One of the most striking properties of survival-machine behaviour is its apparent purposiveness. By this I do not just mean that it seems to be well calculated to help the animal's genes to survive, although of course it is. I am talking about a closer analogy to human purposeful behaviour. When we watch an animal 'searching' for food, or for a mate, or for a lost child, we can hardly help imputing to it some of the subjective feelings we ourselves experience when we search. These may include 'desire' for some object, a 'mental picture' of the desired object, an 'aim' or 'end in view'. Each one of us knows, from the evidence of our own introspection, that, at least in one modern survival machine, this purposiveness has evolved the property we call 'consciousness'. I am not philosopher enough to discuss what this means, but fortunately it does not matter for our present purposes because it is easy to talk about machines that behave as if motivated by a purpose, and to leave open the question whether they actually are conscious. These machines are basically very simple, and the principles of unconscious purposive behaviour are among the commonplaces of engineering science. The classic example is the Watt steam governor.

  The fundamental principle involved is called negative feedback, of which there are various different forms. In general what happens is this. The 'purpose machine', the machine or thing that behaves as if it had a conscious purpose, is equipped with some kind of measuring device which measures the discrepancy between the current state of things, and the 'desired' state. It is built in such a way that the larger this discrepancy is, the harder the machine works. In this way the machine will automatically tend to reduce the discrepancy (this is why it is called negative feedback), and it may actually come to rest if the 'desired' state is reached. The Watt governor consists of a pair of balls which are whirled round by a steam engine. Each ball is on the end of a hinged arm. The faster the balls fly round, the more does centrifugal force push the arms towards a horizontal position, this tendency being resisted by gravity. The arms are connected to the steam valve feeding the engine, in such a way that the steam tends to be shut off when the arms approach the horizontal position. So, if the engine goes too fast, some of its steam will be shut off, and it will tend to slow down. If it slows down too much, more steam will automatically be fed to it by the valve, and it will speed up again. Such purpose machines often oscillate due to over-shooting and time-lags, and it is part of the engineer's art to build in supplementary devices to reduce the oscillations.
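
  A minimal sketch of such a negative-feedback loop, written here in Python with toy constants of my own choosing (it models no real governor), shows the principle: the bigger the discrepancy, the bigger the correction, and the correction always pushes the other way.

    # Toy negative-feedback loop (illustrative constants, not a real engine):
    # the correction applied each step is proportional to the discrepancy
    # between the current speed and the 'desired' speed, and it opposes it.

    DESIRED_SPEED = 100.0   # the 'goal' state the machine tends to return to
    GAIN = 0.1              # how hard the machine works per unit of discrepancy

    def step(speed, valve):
        discrepancy = DESIRED_SPEED - speed
        valve = valve + GAIN * discrepancy    # open or close the 'steam valve'
        speed = 0.8 * speed + valve           # crude engine-plus-friction model
        return speed, valve

    speed, valve = 20.0, 0.0
    for _ in range(200):
        speed, valve = step(speed, valve)
    print(round(speed, 1))   # ends at the 'desired' speed, after some over-shooting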

  The 'desired' state of the Watt governor is a particular speed of rotation. Obviously it does not consciously desire it. The 'goal' of a machine is simply defined as that state to which it tends to return. Modern purpose machines use extensions of basic principles like negative feedback to achieve much more complex 'lifelike' behaviour. Guided missiles, for example, appear to search actively for their target, and when they have it in range they seem to pursue it, taking account of its evasive twists and turns, and sometimes even 'predicting' or 'anticipating' them. The details of how this is done are not worth going into. They involve negative feedback of various kinds, 'feed-forward', and other principles well understood by engineers and now known to be extensively involved in the working of living bodies. Nothing remotely approaching consciousness needs to be postulated, even though a layman, watching its apparently deliberate and purposeful behaviour, finds it hard to believe that the missile is not under the direct control of a human pilot.

  It is a common misconception that because a machine such as a guided missile was originally designed and built by conscious man, then it must be truly under the immediate control of conscious man. Another variant of this fallacy is 'computers do not really play chess, because they can only do what a human operator tells them'. It is important that we understand why this is fallacious, because it affects our understanding of the sense in which genes can be said to 'control' behaviour. Computer chess is quite a good example for making the point, so I will discuss it briefly.

  Computers do not yet play chess as well as human grand masters, but they have reached the standard of a good amateur. More strictly, one should say programs have reached the standard of a good amateur, for a chess-playing program is not fussy which physical computer it uses to act out its skills. Now, what is the role of the human programmer? First, he is definitely not manipulating the computer from moment to moment, like a puppeteer pulling strings. That would be just cheating. He writes the program, puts it in the computer, and then the computer is on its own: there is no further human intervention, except for the opponent typing in his moves. Does the programmer perhaps anticipate all possible chess positions, and provide the computer with a long list of good moves, one for each possible contingency? Most certainly not, because the number of possible positions in chess is so great that the world would come to an end before the list had been completed. For the same reason, the computer cannot possibly be programmed to try out 'in its head' all possible moves, and all possible follow-ups, until it finds a winning strategy. There are more possible games of chess than there are atoms in the galaxy. So much for the trivial non-solutions to the problem of programming a computer to play chess. It is in fact an exceedingly difficult problem, and it is hardly surprising that the best programs have still not achieved grand master status.
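
  A back-of-envelope calculation makes the scale vivid. The figures below are assumptions chosen only for illustration: roughly thirty legal moves per position, games of about eighty half-moves, and a commonly quoted order of magnitude for the number of atoms in our galaxy.

    import math

    branching = 30               # assumed average number of legal moves per position
    half_moves = 80              # assumed length of a typical game (40 moves per side)
    atoms_in_galaxy = 10 ** 68   # rough, commonly quoted order of magnitude

    games = branching ** half_moves
    print(f"move sequences ~ 10^{int(math.log10(games))}")   # about 10^118
    print(games > atoms_in_galaxy)                           # True, by a vast margin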

  The programmer's actual role is rather more like that of a father teaching his son to play chess. He tells the computer the basic moves of the game, not separately for every possible starting position, but in terms of more economically expressed rules. He does not literally say, in plain English, 'bishops move in a diagonal', but he does say something mathematically equivalent, such as, though more briefly: 'New coordinates of bishop are obtained from old coordinates, by adding the same constant, though not necessarily with the same sign, to both old x coordinate and old y coordinate.' Then he might program in some 'advice', written in the same sort of mathematical or logical language, but amounting in human terms to hints such as 'don't leave your king unguarded', or useful tricks such as 'forking' with the knight. The details are intriguing, but they would take us too far afield. The important point is this. When it is actually playing, the computer is on its own, and can expect no help from its master. All the programmer can do is to set the computer up beforehand in the best way possible, with a proper balance between lists of specific knowledge, and hints about strategies and techniques.
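
  For concreteness, here is one way the quoted rule for bishop moves might look as a program; the coordinate scheme and names are illustrative assumptions, and the sketch ignores the other pieces on the board.

    def bishop_moves(x, y, board_size=8):
        """Destinations for a bishop on (x, y), ignoring other pieces: add the
        same constant to both coordinates, with either sign for each."""
        moves = []
        for c in range(1, board_size):
            for sign_x, sign_y in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                nx, ny = x + sign_x * c, y + sign_y * c
                if 0 <= nx < board_size and 0 <= ny < board_size:
                    moves.append((nx, ny))
        return moves

    print(bishop_moves(2, 0))   # a bishop on an otherwise empty board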

  The genes too control the behaviour of their survival machines, not directly with their fingers on puppet strings, but indirectly like the computer programmer. All they can do is to set it up beforehand; then the survival machine is on its own, and the genes can only sit passively inside. Why are they so passive? Why don't they grab the reins and take charge from moment to moment? The answer is that they cannot because of time-lag problems. This is best shown by another analogy, taken from science fiction. A for Andromeda by Fred Hoyle and John Elliot is an exciting story, and, like all good science fiction, it has some interesting scientific points lying behind it. Strangely, the book seems to lack explicit mention of the most important of these underlying points. It is left to the reader's imagination. I hope the authors will not mind if I spell it out here.

  There is a civilization 200 light-years away, in the constellation of Andromeda. They want to spread their culture to distant worlds. How best to do it? Direct travel is out of the question. The speed of light imposes a theoretical upper limit to the rate at which you can get from one place to another in the universe, and mechanical considerations impose a much lower limit in practice. Besides, there may not be all that many worlds worth going to, and how do you know which direction to go in? Radio is a better way of communicating with the rest of the universe, since, if you have enough power to broadcast your signals in all directions rather than beam them in one direction, you can reach a very large number of worlds (the number increasing as the square of the distance the signal travels). Radio waves travel at the speed of light, which means the signal takes 200 years to reach earth from Andromeda. The trouble with this sort of distance is that you can never hold a conversation. Even if you discount the fact that each successive message from earth would be transmitted by people separated from each other by twelve generations, it would be just plain wasteful to attempt to converse over such distances.

  This problem will soon arise in earnest for us: it takes about four minutes for radio waves to travel between earth and Mars. There can be no doubt that spacemen will have to get out of the habit of conversing in short alternating sentences, and will have to use long soliloquies or monologues, more like letters than conversations. As another example, Roger Payne has pointed out that the acoustics of the sea have certain peculiar properties, which mean that the exceedingly loud 'song' of some whales could theoretically be heard all the way round the world, provided the whales swim at a certain depth. It is not known whether they actually do communicate with each other over very great distances, but if they do they must be in much the same predicament as an astronaut on Mars. The speed of sound in water is such that it would take nearly two hours for the song to travel across the Atlantic Ocean and for a reply to return. I suggest this as an explanation for the fact that some whales deliver a continuous soliloquy, without repeating themselves, for a full eight minutes. They then go back to the beginning of the song and repeat it all over again, many times over, each complete cycle lasting about eight minutes.
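
  The figures are easy to check with rough numbers: an assumed 1,500 metres per second for sound in sea water, an Atlantic crossing of very roughly 5,000 kilometres, and a typical (not minimum) Earth-to-Mars separation of some 75 million kilometres.

    SOUND_IN_SEA_WATER = 1500.0    # metres per second, approximate
    SPEED_OF_LIGHT = 3.0e8         # metres per second
    ATLANTIC_CROSSING = 5.0e6      # metres, very roughly
    EARTH_TO_MARS = 7.5e10         # metres, a typical separation

    whale_round_trip = 2 * ATLANTIC_CROSSING / SOUND_IN_SEA_WATER
    mars_one_way = EARTH_TO_MARS / SPEED_OF_LIGHT

    print(round(whale_round_trip / 3600, 1), "hours")   # close to two hours
    print(round(mars_one_way / 60, 1), "minutes")       # about four minutes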

  The Andromedans of the story did the same thing. Since there was no point in waiting for a reply, they assembled everything they wanted to say into one huge unbroken message, and then they broadcast it out into space, over and over again, with a cycle time of several months. Their message was very different from that of the whales, however. It consisted of coded instructions for the building and programming of a giant computer. Of course the instructions were in no human language, but almost any code can be broken by a skilled cryptographer, especially if the designers of the code intended it to be easily broken. Picked up by the Jodrell Bank radio telescope, the message was eventually decoded, the computer built, and the program run. The results were nearly disastrous for mankind, for the intentions of the Andromedans were not universally altruistic, and the computer was well on the way to dictatorship over the world before the hero eventually finished it off with an axe.

  From our point of view, the interesting question is in what sense the Andromedans could be said to be manipulating events on Earth. They had no direct control over what the computer did from moment to moment; indeed they had no possible way of even knowing the computer had been built, since the information would have taken 200 years to get back to them. The decisions and actions of the computer were entirely its own. It could not even refer back to its masters for general policy instructions. All its instructions had to be built in beforehand, because of the inviolable 200-year barrier. In principle, it must have been programmed very much like a chess-playing computer, but with greater flexibility and capacity for absorbing local information. This was because the program had to be designed to work not just on earth, but on any world possessing an advanced technology, any of a set of worlds whose detailed conditions the Andromedans had no way of knowing.

  Just as the Andromedans had to have a computer on earth to take day-to-day decisions for them, our genes have to build a brain. But the genes are not only the Andromedans who sent the coded instructions; they are also the instructions themselves. The reason why they cannot manipulate our puppet strings directly is the same: time-lags. Genes work by controlling protein synthesis. This is a powerful way of manipulating the world, but it is slow. It takes months of patiently pulling protein strings to build an embryo. The whole point about behaviour, on the other hand, is that it is fast. It works on a time-scale not of months but of seconds and fractions of seconds. Something happens in the world, an owl flashes overhead, a rustle in the long grass betrays prey, and in milliseconds nervous systems crackle into action, muscles leap, and someone's life is saved, or lost. Genes don't have reaction-times like that. Like the Andromedans, the genes can only do their best in advance by building a fast executive computer for themselves, and programming it in advance with rules and 'advice' to cope with as many eventualities as they can 'anticipate'. But life, like the game of chess, offers too many different possible eventualities for all of them to be anticipated. Like the chess programmer, the genes have to 'instruct' their survival machines not in specifics, but in the general strategies and tricks of the living trade.

  As J. Z. Young has pointed out, the genes have to perform a task analogous to prediction. When an embryo survival machine is being built, the dangers and problems of its life lie in the future. Who can say what carnivores crouch waiting for it behind what bushes, or what fleet-footed prey will dart and zig-zag across its path? No human prophet, nor any gene. But some general predictions can be made. Polar bear genes can safely predict that the future of their unborn survival machine is going to be a cold one. They do not think of it as a prophecy, they do not think at all: they just build in a thick coat of hair, because that is what they have always done before in previous bodies, and that is why they still exist in the gene pool. They also predict that the ground is going to be snowy, and their prediction takes the form of making the coat of hair white and therefore camouflaged. If the climate of the Arctic changed so rapidly that the baby bear found itself born into a tropical desert, the predictions of the genes would be wrong, and they would pay the penalty. The young bear would die, and they inside it.

  Prediction in a complex world is a chancy business. Every decision that a survival machine takes is a gamble, and it is the business of genes to program brains in advance so that on average they take decisions that pay off. The currency used in the casino of evolution is survival, strictly gene survival, but for many purposes individual survival is a reasonable approximation. If you go down to the water-hole to drink, you increase your risk of being eaten by predators who make their living lurking for prey by water-holes. If you do not go down to the water-hole you will eventually die of thirst. There are risks whichever way you turn, and you must take the decision that maximizes the long-term survival chances of your genes. Perhaps the best policy is to postpone drinking until you are very thirsty, then go and have one good long drink to last you a long time. That way you reduce the number of separate visits to the water-hole, but you have to spend a long time with your head down when you finally do drink. Alternatively the best gamble might be to drink little and often, snatching quick gulps of water while running past the water-hole. Which is the best gambling strategy depends on all sorts of complex things, not least the hunting habit of the predators, which itself is evolved to be maximally efficient from their point of view. Some form of weighing up of the odds has to be done. But of course we do not have to think of the animals as making the calculations consciously. All we have to believe is that those individuals whose genes build brains in such a way that they tend to gamble correctly are as a direct result more likely to survive, and therefore to propagate those same genes.
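
  A toy comparison of two such drinking strategies, with invented probabilities and pay-offs standing in for the real complexities, shows the kind of weighing-up that has to be done.

    # Invented numbers: a risk of being eaten per visit, and a benefit per visit.
    def expected_payoff(visits, risk_per_visit, benefit_per_visit):
        survival = (1 - risk_per_visit) ** visits    # chance of surviving them all
        return survival * visits * benefit_per_visit

    rare_long_drinks = expected_payoff(visits=1, risk_per_visit=0.10, benefit_per_visit=7)
    frequent_sips = expected_payoff(visits=7, risk_per_visit=0.02, benefit_per_visit=1)

    print(round(rare_long_drinks, 2), round(frequent_sips, 2))
    # Which gamble wins depends entirely on the numbers assumed, that is, on the
    # predators' habits and the costs of thirst.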

  We can carry the metaphor of gambling a little further. A gambler must think of three main quantities, stake, odds, and prize. If the prize is very large, a gambler is prepared to risk a big stake. A gambler who risks his all on a single throw stands to gain a great deal. He also stands to lose a great deal, but on average high-stake gamblers are no better and no worse off than other players who play for low winnings with low stakes. An analogous comparison is that between speculative and safe investors on the stock market. In some ways the stock market is a better analogy than a casino, because casinos are deliberately rigged in the bank's favour (which means, strictly, that high-stake players will on average end up poorer than low-stake players; and low-stake players poorer than those who do not gamble at all. But this is for a reason not germane to our discussion). Ignoring this, both high-stake play and low-stake play seem reasonable. Are there animal gamblers who play for high stakes, and others with a more conservative game? In Chapter 9 we shall see that it is often possible to picture males as high-stake high-risk gamblers, and females as safe investors, especially in polygamous species in which males compete for females. Naturalists who read this book may be able to think of species that can be described as high-stake high-risk players, and other species that play a more conservative game. I now return to the more general theme of how genes make 'predictions' about the future.

  One way for genes to solve the problem of making predictions in rather unpredictable environments is to build in a capacity for learning. Here the program may take the form of the following instructions to the survival machine: 'Here is a list of things defined as rewarding: sweet taste in the mouth, orgasm, mild temperature, smiling child. And here is a list of nasty things: various sorts of pain, nausea, empty stomach, screaming child. If you should happen to do something that is followed by one of the nasty things, don't do it again, but on the other hand repeat anything that is followed by one of the nice things.' The advantage of this sort of programming is that it greatly cuts down the number of detailed rules that have to be built into the original program; and it is also capable of coping with changes in the environment that could not have been predicted in detail. On the other hand, certain predictions have to be made still. In our example the genes are predicting that sweet taste in the mouth, and orgasm, are going to be 'good' in the sense that eating sugar and copulating are likely to be beneficial to gene survival. The possibilities of saccharine and masturbation are not anticipated according to this example; nor are the dangers of over-eating sugar in our environment where it exists in unnatural plenty.
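
  A minimal sketch of this kind of program, with invented actions and outcomes, illustrates the principle: built-in lists of 'nice' and 'nasty' things, plus a simple rule for repeating the one and avoiding the other.

    import random

    REWARDING = {"sweet taste", "mild temperature", "smiling child"}
    PUNISHING = {"pain", "nausea", "empty stomach", "screaming child"}

    # A hypothetical world: which outcome tends to follow which action.
    OUTCOMES = {"eat fruit": "sweet taste", "eat toadstool": "nausea",
                "bask in sun": "mild temperature", "poke wasp nest": "pain"}

    tendency = {action: 1.0 for action in OUTCOMES}   # initial inclinations

    for trial in range(200):
        actions, weights = zip(*tendency.items())
        action = random.choices(actions, weights=weights)[0]
        outcome = OUTCOMES[action]
        if outcome in REWARDING:
            tendency[action] *= 1.1   # repeat what was followed by nice things
        elif outcome in PUNISHING:
            tendency[action] *= 0.9   # avoid what was followed by nasty things

    print(max(tendency, key=tendency.get))   # ends up favouring a rewarded action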