From the amoeba to Einstein, the growth of knowledge is always the same: we try to solve our problems, and to obtain, by a process of elimination, something approaching adequacy in our tentative solutions.
In the provocative essay "Universal Darwinism" (referred to at the end of chapter 2), Richard Dawkins maintains that if life were to be found elsewhere in the universe, we would have very good reasons to suspect that it had evolved as it did on Earth, that is, by natural selection. Let us recall that Dawkins's conclusion is based on the argument that the process of cumulative blind variation and selection is the only currently available scientific explanation that is in principle capable of explaining the emergence of the adapted complexity required for life.
But Dawkins's argument is not limited to adaptive biological evolution. We saw in the preceding chapters how selectionist explanations for puzzles of fit have themselves evolved in many other disciplines that are also concerned with understanding various forms of knowledge growth involving the spontaneous emergence of fit between two or more interacting systems. Scottish philosopher and psychologist Alexander Bain may have been the first to apply a Darwinian perspective to human thought (see chapter 9), but it was not until 100 years after the publication of Darwin's Origin that Donald T. Campbell envisioned a comprehensive, all-embracing role for selection theory. His selectionism emphasizes the psychological, scientific, and cultural growth of knowledge, but it is the recent selectionist discoveries in immunology and neurophysiology, together with applications of selection theory to the engineering of molecules and computer software, that have finally attracted the attention of the scientific community to the importance and potentially universal applicability of Darwin's selectionist insight.
Indeed, under certain conditions, the evolution of fit by way of cumulative blind variation and selection appears inevitable. Consider a population of self-replicating entities that vary in ways relevant to their reproductive success, and that inhabit an environment of limited space and resources that does not undergo large fluctuations from one generation of entities to the next. If these entities produce quite (but not always perfectly) accurate copies of themselves, after a few generations the winnowing effect of selection will be noticed as the population inexorably shifts toward a preponderance of new entities that better fit their environment. This, in a nutshell, is what the process of cumulative blind variation and selection is all about.
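To make this inevitability concrete, here is a minimal sketch in Python (the entities, the fitness measure, and every parameter are purely illustrative inventions, not anything from the text): a population of imperfect self-replicators competing for limited space shifts, generation by generation, toward better-fitting entities.

```python
import random

def evolve(generations=50, capacity=100, mutation_sd=0.05, seed=0):
    """Minimal sketch of cumulative blind variation and selection.

    Each entity is just a number representing how well it fits its
    environment. Copies are accurate but not perfect (Gaussian noise),
    and limited 'space' (the capacity) forces selection each generation.
    All names and parameters here are illustrative assumptions.
    """
    rng = random.Random(seed)
    population = [rng.random() for _ in range(capacity)]
    for _ in range(generations):
        # Blind variation: each entity produces a slightly imperfect copy.
        offspring = [max(0.0, p + rng.gauss(0, mutation_sd)) for p in population]
        # Selection: limited space keeps only the best-fitting entities.
        population = sorted(population + offspring, reverse=True)[:capacity]
    return sum(population) / len(population)
```

Nothing in the sketch foresees where fitness lies; the steady rise in the population's average fit comes entirely from blind variation winnowed by limited space.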
However, as we have now seen, these entities do not have to be restricted to living organisms and the genes they contain. They can be molecules, antibodies, neural synapses, behaviors, scientific theories, technological products, cultural beliefs, words, or computer programs. And selection does not have to be restricted to the natural and purposeless selection of Mother Nature, but may involve purposeful humans selecting plants that grow bigger tomatoes, cows that give more milk, scientific theories that provide better predictions, automobile engines that yield greater efficiency, or molecules that provide more powerful drugs. The robustness of the selection process was dramatically demonstrated by the findings of artificial life researchers such as Thomas Ray (see chapter 13), who were amazed at just how easy it is to get adaptive evolution happening on their computers. As long as the basic conditions of some (but not too much) variability, accurate (but not too accurate) replication, and a fairly stable environment prevail, the mechanistic, unforesighted evolution of fit appears inescapable.
The selectionist explanation for the emergence of adapted complexity and new knowledge may also appear inevitable on logical grounds. As Campbell put it, "increasing knowledge or adaptation of necessity involves exploring the unknown, going beyond existing knowledge and adaptive recipes. This of necessity involves unknowing, non-preadapted fumbling in the dark." Of course to this we must add the selection and retention of those occasional results of this fumbling that are found by hindsight to provide a better solution to the problem at hand.
When stated this way, particularly using the phrase "of necessity," a selection theory epistemology may be inevitable by tautology. That is, it appears to be true simply by definition (as is the statement "a bachelor is an unmarried male") with no possibility of being disproved and consequently replaced with a better theory of knowledge growth. A selectionist epistemology may be described tautologically; however, this does not mean either that it is false or that it cannot be stated in a more falsifiable way. For example, Campbell also made the bold claim that
A blind-variation-and-selective-retention process is fundamental to all inductive achievements, to all genuine increases in knowledge, to all increases in the fit of system to environment.
And it was observed that
This statement is clearly not analytic. One can easily imagine possible worlds in which genuine increases in knowledge are generated in some other way, by prayer for instance, or through the cultivation of omniscient meditative states. But we need not be so exotic in the search for alternative models of knowledge generation. In fact the received wisdom of Anglo-Saxon philosophy describes a world in which knowledge is generated otherwise. In that world knowledge is generated through direct passive-absorptive associations of ideas (passive induction). BVSR [blind variation and selective retention] as an analog for all knowledge generation was introduced specifically against such views, a point made most clearly by Popper.
Thus, it should be possible to test the bold selectionist hypothesis of Campbell and discover--if they exist--nonselectionist processes underlying the emergence of adapted complexity and knowledge growth. This may not be easy in disciplines such as the history of science and cognitive psychology, but the new selectionist explanations for antibody production and brain development are based on empirical research findings, not on empty tautologies.
Campbell would thus have us believe that all knowledge, all problem solving, all skills, all adaptive physiological and neural changes, all useful cultural beliefs and practices, and all scientific and cultural progress have at their roots cumulative blind variation and selection--phylogenetic (among organisms), ontogenetic (within organisms), or both--of the same general type proposed by Darwin to account for the evolution of species. The findings, theories, and rationale for selectionist explanations of puzzles of fit reviewed in this book suggest that such a universal selection theory should now be taken seriously.
Such a theory has definite elegance as well as undeniable audacity. One supporter of Campbellian universal selection theory remarked that it
. . . points the way toward a unified theory of knowledge generation based upon mechanical processes. The natural process by which variation and selective survival designed the species may be seen at the root of all adaptive design and knowledge as a subspecies of adaptive design. The vastness of the conception, uniting knowledge and adaptation, biology and epistemology, and artificial and natural intelligence is what continues to tempt Campbell and others to insist on the blind variation and selective retention approach to knowledge. In all instances of fit to system it is (to paraphrase Campbell) hindsighted selection and not foresighted variation that is the key to adaptive advance.
But perhaps Campbell goes too far. Is it really the case that only selectionist processes can create the puzzles of fit of adapted complexity?
As we saw in chapters 7 and 13, Pavlovian conditioning (possibly) and backpropagation neural networks (more clearly) provide examples of how adapted complexity may be achieved by instructionist processes. But we also noted that such processes are quite brittle in their operation since they cannot adapt to large, unpredictable changes in the context in which they work. So although Pavlovian conditioning may allow an animal to adapt behaviorally by in effect anticipating certain events, it cannot provide the animal with useful novel behaviors. And although a backpropagation neural network can adapt to fit the requirements of some very complex tasks, we have seen how it may instead find itself trapped at a seriously suboptimal location on its fitness landscape from which it cannot escape without randomly changing the starting weights of the neurodes (or making other structural modifications such as a change in the number of middle-level neurodes) and starting all over again, that is, resorting to blind variation and selection.
There are also mechanical examples of the achievement of fit by instruction. For example, it is possible to insert a brass key blank into a lock and use the markings left by the lock's interior to fashion a key that fits and operates the lock. Here the inside of the lock acts as a template, transmitting information onto the key blank that is then used to shape the key to fit the tumblers of the lock. The fact that a new automobile engine runs more smoothly and efficiently after an initial break-in period is another example of mechanical fit (involving the fit of pistons to cylinders and valve rods to camshafts) resulting from an instructionist process whereby a component's environment operates directly on it to cause it to fit better, not by selection of previously generated variations.
Let us consider two more examples of adapted complexity that appear to have roots in instructionist mechanisms. On my computer I have a communications program called Mosaic. With Mosaic up and running, my computer becomes a marvelously well-adapted instrument for finding and displaying information from the Internet in the form of text, images, and sounds. Your computer cannot do likewise without this or similar software. But if I take a diskette onto which the program has been copied and insert it into your computer, I can provide processing instructions (in the form of software) that enable your computer to do what mine can do (the version of Mosaic I use is available free, so this is legal). So in effect your computer has achieved new adapted complexity with no variation and selection on its (or our) part.
It is also possible to imagine something similar being done in the future with human brains. We saw in chapter 5 that all we know and can do appears to depend on patterns of interconnections among neurons. So if I could somehow determine how your brain is connected up and reproduce that same pattern in my brain, I would know what you know and be able to do what you can do. If you could speak both Chinese and English and I only English, I would become bilingual as well. (I would hope a way would be found so that only those connections underlying a particular desired competence would be reproduced in my brain, so I wouldn't have to give up the knowledge, skills, and personality I already possess.)
In both cases, computers and brains, increases in adapted complexity would result from an instructionist process. But if we look closer, we will see that we have simply described a process of transmission of information from one system to another, not the actual development of the knowledge. Once a complex computer program has been developed through a painstaking process of cumulative trial-and-bug elimination (as any programmer must admit), it can be transferred to and run on any compatible computer. But no new adapted complexity is generated in this process. And, of course, human knowledge has never been, and may never be, transmitted by having one brain directly instruct the synapses of another.
So we find ourselves considering selection once again. In its simplest form, selection can involve choosing among already provided alternatives with no way to modify further the given alternatives or create new ones. This is like approaching a locked room with 10 keys and not knowing which one will allow you to enter. If one key does not work, you can only choose another. And if none of them fits, you are out of luck. This simplest (and least powerful) selection process can be referred to as nonconstructive or nongenerative selection, since no new variations are generated and one is therefore limited to the variations already on hand. Ehrlich's original side-chain theory of antibody production (see chapter 4) was a nongenerative selection theory. So, too, are Fodor's view of human hypothesis testing (with all hypotheses to be tested, and consequently selected or rejected, being innately provided) and Piattelli-Palmarini's and Gazzaniga's applications of selectionism to cognition, as discussed at the end of the previous chapter.
Much more powerful is constructive or generative selection, which involves the creation of new variations, whether they be organisms, antibodies, patterns of neural connections, behaviors, thoughts, concepts, or computer algorithms. By recombining elemental building blocks, the resulting variations have new, unpredictable properties that are not contained in any of the individual building blocks of which they are composed. In effect, variations with novel properties emerge from the recombination of the building blocks.
Complex three-dimensional configurations of proteins (their tertiary structures) emerge from sequences of amino acids, which in turn emerge from DNA sequences encoded in genes. And from proteins emerge organelles, from organelles emerge cells, from cells emerge organs, from organs emerge organ systems (such as the circulatory and nervous systems), from organ systems emerge organisms, and from organisms emerge societies and (for humans) social institutions. And from the simple yet marvelously coordinated activity of billions of individual neurons emerge human behavior, knowledge, and consciousness itself.
The emergence of new properties and complex systems is not restricted to the living world. In chemistry we see how combining sodium, an alkali metal that reacts violently with water, with chlorine, a deadly gas, produces sodium chloride, or ordinary table salt, that neither reacts violently with water nor is toxic. In computers, sequences of binary digits give rise to computer programs. The basic components of control systems, such as the cruise control device you may have in your car, are made up of rather simple input-output devices. Connecting them together in a special way (see chapter 8) gives the resulting control system the lifelike "willpower" to maintain a desired goal despite unpredictable disturbances.
This phenomenon of emergence is clearly very powerful stuff, and it provides the complexity without which life surely could not exist. But since the detailed properties of emergent variations--whether they be molecules, organelles, cells, organs, organ systems, organisms, societies, or social institutions--cannot initially be predicted from knowledge of their component building blocks, the only way that newly emerged entities can lead to adapted complexity is through a process of blind variation and hindsighted selection.
But we can make selection more powerful still. Not being content with a single-step selection process, we can instead take the best of the variations, vary them, and then select the best of the new generation, repeating the process over and over again. This, of course, is constructive cumulative selection (figure 16.1). This process of selecting and fine-tuning the occasional accidentally useful emergent system turns out to be so powerful that we should not be surprised that the adaptive processes of biological evolution, antibody production, learning, culture, and science all employ it, and that its power is now being explicitly exploited in the design of organisms, drugs, and computer software by one of evolution's most complex and adaptive creations--the human species.
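The power of repeating the cycle can be shown with a short sketch in the spirit of Dawkins's well-known "weasel" demonstration (the target phrase, brood size, and mutation rate below are illustrative choices, not taken from the text): single-step blind variation would almost never hit a 28-character phrase, yet cumulative selection of the best of each brood finds it in a modest number of generations.

```python
import random

def cumulative_select(target="METHINKS IT IS LIKE A WEASEL",
                      offspring=100, rate=0.05, seed=1):
    """Constructive cumulative selection: breed many blind variants of
    the current best guess, keep the fittest, and repeat. All parameter
    values here are illustrative assumptions."""
    rng = random.Random(seed)
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    parent = "".join(rng.choice(alphabet) for _ in target)
    generations = 0
    while parent != target:
        # Blind variation: each letter may mutate with small probability.
        brood = ["".join(rng.choice(alphabet) if rng.random() < rate else c
                         for c in parent) for _ in range(offspring)]
        # Hindsighted selection: keep the best of the new generation.
        parent = max(brood + [parent], key=fitness)
        generations += 1
    return generations
```

Because the best guess so far is always retained, knowledge accumulates: each generation starts from the fittest variant yet found rather than from scratch.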
So Campbell does go too far in his radical selectionism if he insists that all mechanisms leading to fit must be selectionist in their current operation. But he appears to be right on the mark if by new knowledge he means emergent knowledge. And, of course, any instructionist mechanism capable of generating new adapted complexity is itself an emergent adaptive system that must owe its own origin to the prior emergence and consequent selection of candidate instructionist systems.
Another useful perspective on selection can be had by considering the process of cumulative blind variation and selection as a search-and-construction procedure for finding solutions to novel problems.
Figure 16.2 provides representations of four different types of problem spaces or fitness landscapes. All four graphs represent functions with the value of y (height on the vertical dimension) determined by the value of x (lateral position on the horizontal dimension). Solving each problem requires finding the value of x that yields the highest (or at least close to the highest) value of y, starting out with no knowledge of the relationship between the two values, that is, no prior knowledge of the function as represented in the relevant graph.
In the first landscape, we have essentially the problem of finding a needle in a haystack, since only a single value (or at most a very limited range of values) of x provides a nonzero value for y. An example of such a problem would be that of a thief trying to figure out the personal identification number (PIN) of a stolen bank card, where only one four-digit combination will work. Since this problem has but one solution, the best the thief can do is try each and every one of the 10,000 possible four-digit numbers between 0000 and 9999. The search could be considerably shortened if the result of each trial somehow provided a clue as to what number should be tried next. Unfortunately, the all-or-none nature of this problem means that the thief has no way to use the results of prior trials to zero in on the target, other than making sure not to try past failures a second time. No gradual accumulation of knowledge is therefore possible, since each trial is no more or less likely than the previous one to provide the solution. This is a form of noncumulative selection: the only way a solution can be found is through a series of blind guesses and hindsighted selection. Nonetheless, if the thief has enough time and patience, he should eventually succeed (although perhaps not before the card is reported stolen and canceled). For problem spaces that are very large and multidimensional, however, there might not be enough time left in the universe to find a solution this way. The essentially noncumulative character of the problem makes the search at best tedious and at worst impossibly long.
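The thief's predicament takes only a few lines to express (the "secret" PIN below is, of course, a made-up stand-in): since no trial informs the next, every candidate must simply be tried in turn.

```python
def crack_pin(secret="4831"):
    """Noncumulative search: nothing is learned from failed trials
    except not to repeat them, so candidate PINs are tried one by one.
    The 'secret' value is purely an illustration."""
    for trial in range(10000):
        guess = f"{trial:04d}"        # candidate four-digit PIN
        if guess == secret:
            return trial + 1          # number of attempts needed
    return None
```

On average the thief needs about 5,000 attempts; in the worst case, all 10,000. No cleverness can reduce this, because success on one guess says nothing about the next.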
In the second panel we notice a broad peak to the solution, so that trials in the neighborhood of the peak provide better results than those farther away. An example of such a problem would be finding the amount of fertilizer that maximizes a certain crop yield, where too little fertilizer leads to diminished yields, as does too much. Now it is not necessary to try all possible quantities of fertilizer, since if it is found that more fertilizer is better, one will continue to try still more until the yield begins to drop again. Not much blind variation is involved in this problem, except perhaps for the guess of the initial quantity. Such single-peak problems can often be solved quite readily using what are called hill-climbing techniques or algebraic methods based on differential calculus, that is, solving for the x value whose associated y value has a slope of zero if the function relating y to x is known. Indeed, we discussed in chapter 13 and in the previous section that backpropagation neural networks achieve their fit by a type of instructionist hill-climbing technique. A variation and selection procedure can also work quite well, and much more quickly than in the previous problem, as long as new guesses about the value of x take advantage of the knowledge of the partial successes of previous trials.
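Hill climbing on such a single-peaked landscape is equally easy to sketch (the "yield curve" below is a hypothetical stand-in peaking at an arbitrarily chosen value): keep moving in whichever direction improves the result, and refine the step when neither direction does.

```python
def hill_climb(f, x=0.0, step=1.0, tol=1e-6):
    """Simple hill climbing on a single-peaked function: move in
    whichever direction improves the result, shrinking the step when
    neither direction helps. A sketch, not a production optimizer."""
    while step > tol:
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            step /= 2          # near the peak: refine the search
    return x

# A hypothetical single-peak 'yield vs. fertilizer' curve peaking at x = 30.
yield_curve = lambda x: -(x - 30.0) ** 2
```

With only one peak, every partial success points toward the solution, so almost no blind variation is needed beyond the starting guess.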
Things become both more interesting and difficult when we consider a two-peak problem space as shown in the third panel. Now we have a local maximum (the lower peak), in addition to the global maximum representing the overall best solution, so we run the risk of getting stuck on the local maximum and never finding a better (or the best) solution. An example of such a problem would be finding the optimum dosage of a drug to treat hypertension, where a low dosage has some effect in lowering blood pressure, a high dosage has the greatest effect, and intermediate dosages have little effect. The only way to escape the trap of a local peak is to try some new, random values to see if they lead to a higher peak. Since it is not known where that peak may be located, one can only take a blind jump off the local peak and hope that it leads to a better solution. The cumulative-variation-and-selection approach to this problem would be to start with a wide-ranging population of values of x and allow them to recombine, reproduce, and mutate so that it is unlikely that the global peak would be missed.
But even this two-peak example is much simpler than many if not all of the real-world problems that we and other species encounter. The fourth panel represents a very complex problem space with many jagged peaks and valleys. On first consideration, finding the value of x that provides the best solution might seem as difficult as the first problem considered above. Regardless of the complexity of the landscape, constructive and cumulative blind variation and selection may be successful in finding a solution to this and similar problems if knowledge concerning the best solution of previous trials is applied to construct new variations. This is what is done in the biological world with sexual reproduction, and it is also used in genetic algorithms and genetic programming as described in chapter 13. By taking values that on previous trials provided the best solutions and using them to breed new values, patterns that may be too complex for the human eye to perceive can be exploited and a solution reached. By repeatedly breeding (with mutation or sexual recombination to ensure a continued source of new variations) from the best solutions found so far, knowledge obtained from past trials is preserved, and at the same time new blind variations are introduced in each generation that continue to grope for still better solutions.
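The breed-from-the-best strategy can be sketched as a toy genetic algorithm (the rugged landscape, the averaging form of "recombination," and all parameters below are illustrative assumptions, not taken from chapter 13): the best values found so far are retained, recombined, and blindly mutated each generation.

```python
import math
import random

def rugged(x):
    """A hypothetical many-peaked fitness landscape; its global
    maximum lies at x = 0, surrounded by lower local peaks."""
    return math.cos(3 * x) - 0.1 * x * x

def genetic_search(pop_size=60, generations=60, sd=0.5, seed=2):
    """Constructive cumulative selection on a rugged landscape: breed
    new values from the best found so far, with blind mutation keeping
    the search from freezing on a local peak."""
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Hindsighted selection: keep the top quarter as parents.
        parents = sorted(population, key=rugged, reverse=True)[:pop_size // 4]
        # 'Breed' by averaging two parents (recombination), then mutate blindly.
        population = parents + [
            (rng.choice(parents) + rng.choice(parents)) / 2 + rng.gauss(0, sd)
            for _ in range(pop_size - len(parents))
        ]
    return max(population, key=rugged)
```

Because the best values are carried over each generation, knowledge from past trials is never lost, while the Gaussian mutations keep probing blindly for still higher peaks.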
Representing problems as fitness landscapes can be useful for understanding how a selectionist procedure finds solutions. It can be somewhat misleading, however, giving the impression that the solutions to be searched all exist before the search takes place--in other words, that nonconstructive selection is involved. Instead, these solutions exist only as potentialities, and each one must first be constructed before it can be evaluated. This is obvious in biological evolution, where the offspring of a pair of sexually reproducing organisms, no matter how many, represent only a minute fraction of the possible genetic recombinations of the parents' genomes. And when to this are added various possible mutations and other errors such as extra or missing chromosomes, the possibilities that can be constructed and evaluated by selection become practically limitless. The importance of the construction and consequent emergence of new forms to test and select led Henry Plotkin, an important advocate of universal selection theory, to use the phrase "generate-test-regenerate" to describe what is referred to here as constructive cumulative selection.
Despite the very powerful generation-and-search procedures that constructive cumulative selection provides, it does have its limits. Notably, it cannot be guaranteed to find the optimum solution for every problem. But it has now been well demonstrated that constructive cumulative selection can find, if not the best, at least useful solutions to a broad range of problems that are orders of magnitude more complex than the puny one-dimensional examples considered here--problems such as the evolution of living organisms in complex and hostile environments, and the search for better scientific theories to account for puzzling physical and biological phenomena. Constructive cumulative blind variation and selection can provide just the right blend of conservative old knowledge and risky innovation to push back the frontiers of knowledge. These two features as they relate to adaptive biological evolution (although they are relevant to all selectionist achievements of adapted complexity) are well characterized by Plotkin:
One is that it takes the logical form of induction, generalizing into the future what worked in the past. That is, the successful variants are fed back into the gene pool where they will be available for sampling by future organisms. This is the conservative, pragmatic part of the heuristic. The other is the generation of novel variants by chance processes. This is the radical, inventive component of the heuristic. It is nature's way of injecting new variants into the system in order, possibly, to make up for the deficiencies that may occur if what worked in the past no longer does so because the world has changed. When John Odling-Smee and I first wrote about this in the 1970s, we noted: "In effect the g-t-r [generate-test-regenerate] heuristic `gambles' that the future will be the same as the past. At the same time it hedges its bets with aleatoric (chance) jumps, just in case it is not."
Although it may be far from perfect, no other general-purpose construct-and-search procedure has yet shown itself to be as capable across such a broad range of problems, and no other is able to explain the remarkable achievements of fit we continually encounter in both natural and human-made environments.
Universal selection theory draws heavily on biological evolution for its inspiration. Although the evolution of living forms is only one instance of a selectionist process resulting in adapted complexity, it provides the foundation and inspiration for all other selectionist theories of the emergence of fit. Biology has also lived longer with selectionist thinking than any other discipline. For these reasons, recent developments in biology that cast doubts on the fundamental role of natural selection in the emergence of the adapted complexity of living organisms are of considerable interest to those attempting to extend the selectionist perspective to other fields. If cumulative blind variation and selection is found to be lacking as an explanation for the emergence of design in organic evolution, an extension of selectionist principles to other achievements of adapted complexity would be suspect. We will therefore now confront five such would-be challengers to natural selection: punctuated equilibrium, directed mutation, exaptation, symbiosis, and self-organization.
According to classic Darwinian selection, biological evolution proceeds through the accumulation of very small changes over long periods of time. Gradual change is essential, since it is the only way that blind variation is likely to come up with improvements for selection. Although it is always possible that a large genetic change (or macromutation) may result in a fitter organism--for example, transforming an organism completely insensitive to light into one with a functioning eye in a single generation--the laws of probability make it almost certain that large random changes will be less adaptive rather than more. As Dawkins explains:
To "tame" chance means to break down the very improbable into less improbable small components arranged in series. No matter how improbable it is that an X could have arisen from Y in a single step, it is always possible to conceive of a series of infinitesimal graded intermediates between them. However improbable a large-scale change may be, smaller changes are less improbable. And provided we postulate a sufficiently large series of sufficiently finely graded intermediates, we shall be able to derive anything from anything else, without astronomical improbabilities.
The problem is, however, that the fossil record does not provide clear evidence for the gradual change of even one species into another. Darwin recognized the incompleteness of the fossil record, but believed that it was only a matter of time before these intermediate "missing links" would be found to provide hard evidence for the gradual emergence of new species over time. That these fossil gaps remain despite many new fossil finds has been taken by some as an indication that Darwin's emphasis on the gradualism of evolution was mistaken, and that evolution proceeds not by slow, gradual changes but rather by large and dramatic jumps, or saltations.
True saltationists are not easy to find among modern evolutionary biologists, since it is generally recognized that large, blind macromutations from parent to offspring are almost certain to be maladaptive. But a well-known antigradualist perspective is present today in the theory of "punctuated equilibrium," developed by Gould and Eldredge.
These researchers theorize that instead of continuous gradual change over time, the evolution of a species is marked by long periods of no or little change (stasis) interrupted occasionally by short periods of relatively rapid evolutionary change (punctuations). This may be a somewhat different picture of evolution from the one originally conceived by Darwin, but it is not inconsistent with the gradualism that is an essential part of natural selection. Although punctuated equilibrium describes relatively rapid change, this change still takes place over very long time periods, on the order of many thousands of years or more.
What is characteristic of punctuated equilibrium, then, is not the belief in adaptive macromutations arising in a single generation, but rather the long periods of stasis. But these periods need not be considered mysterious since they may simply be an indication that the species was already well adapted to its environment, and that the environment was not undergoing any rapid changes that would have created new selection pressures requiring new adaptations. So actually nothing in the theory of punctuated equilibrium is in any way fundamentally inconsistent with Darwin's conception of evolution.
No such compatibility holds, however, for another view of evolution that has attracted considerable interest and led to much recent controversy. In 1988 John Cairns, a well-respected molecular biologist and cancer researcher, published with two associates a paper in the prestigious British journal Nature that threatened to undermine the basic tenets of Darwinian evolution.
Cairns and his colleagues claimed to have found evidence that E. coli was able somehow to direct its mutations to achieve adaptive changes when placed in a new, challenging environment. This research involved placing bacteria that could metabolize only glucose in an environment where only a foreign sugar (lactose) was available. Here the stressed bacteria continued to duplicate and, as would be expected, some of the descendants contained mutations that permitted them to metabolize the new sugar. This in itself is not surprising, since the genetic change necessary to transform an E. coli from a glucose- to a lactose-eating bacterium is quite small, and in a large colony it would be expected that at least some of the naturally occurring mutants would have stumbled on it by sheer blind chance. But these scientists reached the highly unorthodox conclusion that instead of arising randomly, the adaptive mutations were somehow produced by the bacteria at a much higher frequency than other, nonadaptive mutations. In other words, they believed that their studies provided evidence that "bacteria can choose which mutations they should produce," which would "provide a mechanism for the inheritance of acquired characteristics."
As would be expected, these statements immediately elicited both considerable interest and controversy, since they challenged the central dogma of biology, namely, that changes in the environment cannot direct (instruct) changes in the genome. Some researchers rejected this conclusion out of hand, but others were impressed enough to attempt to find possible mechanisms by which the environment could somehow instruct the genome to produce just the right mutations to allow the bacterium to digest the new sugar. Cairns himself proposed that environmental changes could effect changes in proteins that could consequently instruct the DNA to make certain adaptive changes in the genes, in flagrant violation of the central dogma.
However, it may well be that this and other explanations for directed or "instructed" mutation are not necessary after all. Australian microbiologist Donald MacPhee and his colleagues provided evidence that, when placed in a medium of lactose, the mutations produced by glucose-metabolizing E. coli are indeed produced blindly. What seems to happen under the stressed condition of a glucose-poor environment is not a specific increase in the rate of adaptive mutations, but rather a general increase in the overall mutation rate due to inhibition of the mechanism that usually checks and repairs the genetic errors that arise during the normal functioning of the bacterium. So while mutations continue to be produced blindly, the higher rate of genetic change allows the bacteria to stumble on the adaptive genetic change more quickly than they would if left in their normal glucose-rich environment.
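MacPhee's interpretation can be illustrated with a toy simulation. The model below is a sketch under invented assumptions (the genome size, mutation rates, and number of trials are hypothetical, not taken from the actual experiments): every locus is equally likely to mutate, so nothing directs the change, yet a blanket increase in the blind mutation rate still shortens the wait for the one adaptive mutation.

```python
import random

random.seed(42)

def generations_until_adaptive(mutation_rate, target_locus=7, genome_len=20):
    """Count generations until a blind point mutation happens to hit
    the one locus that enables lactose metabolism. Every locus is
    equally likely to mutate -- nothing 'directs' the change."""
    generation = 0
    while True:
        generation += 1
        for locus in range(genome_len):
            # mutations strike all loci indiscriminately; only the
            # hit on the target locus is adaptive
            if random.random() < mutation_rate and locus == target_locus:
                return generation

def mean_wait(rate, trials=200):
    return sum(generations_until_adaptive(rate) for _ in range(trials)) / trials

normal = mean_wait(0.001)   # repair machinery intact
stressed = mean_wait(0.01)  # repair inhibited: tenfold overall mutation rate
print(f"mean wait, normal: ~{normal:.0f} generations; stressed: ~{stressed:.0f}")
```

Raising the overall rate tenfold cuts the expected wait roughly tenfold, even though adaptive and nonadaptive mutations remain equally blind throughout.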
But let us continue to imagine for a moment that a bacterium was able to change just those genes regulating metabolism in just the right way to allow for the digestion of a foreign sugar. If this were the case, it would be yet another example of a puzzle of fit, demonstrating that the bacterium had somehow acquired the ability to sense a new sugar in its environment and alter its genome to digest it. But then we would be led to ponder how this adapted complexity could have originated in the first place. Cumulative blind variation and selection would remain the prime candidate to explain the source of this remarkable ability that somehow permitted the bacterium to instruct its genome to make the required changes to digest the strange new food being served.
Although no convincing evidence exists that adaptive changes in genes can be directed by the environment in a Lamarckian manner, the findings of Cairns and MacPhee and their respective colleagues are important. If organisms are able to increase their mutation rate in the presence of new environmental stresses but keep mutations in check when these stresses are absent, it would enable organisms to exert a certain degree of control over evolution that is absent from the classic neo-Darwinian perspective. Instead of producing mutations at a constant rate regardless of environmental conditions, organisms may produce more mutations and therefore more varied offspring just when such innovative variation is necessary to keep the species extant.
This view ascribes to the evolutionary process decidedly more "intelligence" than does the neo-Darwinian perspective. It nonetheless preserves the required blindness of genetic variations. What is altered is only the rate of production of these variations. This sensitivity of mutation rate to environmental stress could simply be the result of a stress-related breakdown of genetic repair mechanisms. Or it could be the result of a more sophisticated active mechanism that itself had evolved by natural selection, since individuals that by chance produced more genetic variability under difficult environmental conditions would have been more likely to leave better adapted progeny than those insensitive to environmental stress.
The work of Cairns and MacPhee concerned the metabolism of different types of food. It is not difficult to imagine how other types of biological functions could also be involved, such as thermoregulation. For example, as temperatures dropped at the onset of an ice age, mammals would undergo stress as did Cairns's bacteria when placed in an environment where no useful food was available. This would lead to an increase in the mutation rate during reproduction, resulting in a second generation of animals with greater variation in the length and texture of their coats. Those particular descendants having, by chance, longer and thus warmer coats would suffer less from the cold environment, resulting in lower mutation rates and consequently less variation in the coats of their third generation, extra-hairy offspring. But those second-generation animals with short coats would maintain a higher rate of mutation, so that at least some of their offspring would likely have warmer coats than their parents.
This hypothesis has some interesting consequences. As in the ice-age example, by varying the mutation rate, a species would adapt more quickly to changing environmental conditions. It is also of interest to realize that such stress-dependent mutation rates would result in occasional short periods of relatively rapid (although still gradual) evolutionary change separated by longer periods of little or no change during periods of environmental stability. And this is exactly what Gould, Eldredge, and their associates refer to as punctuated equilibrium.
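The ice-age scenario can be sketched as a toy model (all parameters here are invented for illustration, not drawn from any real population): a population whose mutation variance scales with stress shows rapid change while ill-adapted and near-stasis once it fits its environment, the punctuated pattern described above.

```python
import random

random.seed(0)

def simulate(generations=60, pop=200):
    """Toy model of coat length evolving toward a new optimum.
    Mutation size depends on stress (distance from the optimum):
    ill-adapted parents produce more variable offspring. Variations
    remain blind -- offspring are as likely to be shorter-coated as
    longer-coated; only after-the-fact selection supplies direction."""
    optimum = 10.0                    # coat length favored after climate cools
    coats = [1.0] * pop               # population starts short-coated
    history = []
    for _ in range(generations):
        mean = sum(coats) / pop
        stress = abs(optimum - mean)  # poorly adapted -> high stress
        mut_sd = 0.05 + 0.1 * stress  # stress raises mutational variation
        offspring = [c + random.gauss(0, mut_sd) for c in coats]
        # hindsighted selection: keep the half closest to the optimum
        offspring.sort(key=lambda c: abs(c - optimum))
        coats = offspring[:pop // 2] * 2
        history.append(sum(coats) / pop)
    return history

h = simulate()
early_change = abs(h[10] - h[0])
late_change = abs(h[-1] - h[-11])
print(f"change over first 10 generations: {early_change:.2f}; last 10: {late_change:.2f}")
```

Running the sketch shows a burst of change in the early, stressed generations followed by long stasis once the population sits near the optimum and its mutation rate falls back down.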
Another challenge to natural selection is posed by those advocating exaptation as a major mechanism of biological evolution. Although we briefly discussed exaptation in chapter 11, some additional remarks are appropriate here, as this perspective has received considerable attention as a potential rival to the selectionist account of adaptive evolution.
It will be recalled from chapters 5 and 11 that exaptation refers to the emergence of some feature of an organism that fits a current function, but did not originally evolve for this use. We considered how bird and insect wings, now used for flight, appear to have evolved originally to aid in cooling and heating. Darwin's own example was how the lungs of terrestrial animals evolved from the swim bladder of fishes. Exaptation also refers to the emergence of a current adaptedly complex characteristic that arose for no functional reason, probably as a nonselected correlate of some other adapted characteristic, but turns out later to be useful for some function.
The concept of exaptation is particularly valuable in understanding how the necessary gradualism of natural selection can account for the evolution of complex adaptations that would appear to be functionally useless in their incipient forms. Let us consider again the bird's wing. If wings evolved gradually over a long period of time, the first protowings would have been nothing but stubby protuberances from the backs of protobirds. Since they would have been ill suited for flight, the question arises as to why and how they would have begun to evolve in the first place.
Exaptation, by disentangling current use from the original selection pressures, makes this understandable. The protowings may not have been of much use for flight, but could have easily aided in eliminating excess body heat when stretched out in a shady breeze, and increasing warmth when extended to catch the rays of the morning sun. Selection for larger and larger wings for more effective thermoregulation would have also laid the groundwork for flying, although the first aeronautically useful wings probably mainly provided some protection from falling from trees or other high places. Once past a certain size, the switch in the primary role of wings from thermal equipment to flying equipment would have been made, and further refinements would have increased their fitness for flying.
But although some see exaptation as a challenge to natural selection and as a part of a "new theory of evolution," it actually provides no competition at all as an explanation of adapted complexity. For wings to have evolved as useful instruments for flight, they first had to evolve as useful instruments of thermoregulation. The only explanation for how this could have happened, aside from the astronomical improbability of a single lucky mutation, is by natural selection. In addition, further refinements were undoubtedly necessary to fashion wings for efficient flight. Once again, the gradual and cumulative selection of blind variations in wing shapes is the only process currently understood that can account for the evolution of such design. Therefore,
to identify a feature as an exaptation does not mean that feature "is not the product of natural selection"--only that it did not always serve its present function. Indeed, most human behaviors serving biological functions that enhance fitness should probably be regarded as exaptations, modified through natural selection for subsequent specialization. The manual dexterity enabling specialized human hand movements now used in food gathering, eating, caretaking, tool use, and communicational expressions derives from the ability of prosimian ancestors to grasp branches. Human emotional expressions evolved from ritualized facial and vocal displays in other ancestral species, display derived from behaviors that originally had no signaling function. . . . The use of stereotypical vocal patterns in human mothers' speech to infants is also undoubtedly an exaptation, with evolutionary origins in ancestral non-human primate vocalizations used for very different purposes. The claim that such human behaviors are exaptations rather than adaptations is a claim about origins, which does not require rejection of an adaptationist explanation for the current fit between these behaviors and the biological functions they serve.
So whereas exaptation permits new, unanticipated twists and turns in the evolution of a species as new roles and consequently new selection pressures are found for old features, it relies on natural selection for all initial adaptations and consequent refinements of these adaptations for new functions. Exaptation may make it easier for natural selection to do its stuff, but it is most certainly not an alternative explanation for the adapted complexity of naturally occurring design.
Other individuals interested in accounting for the products of evolution but who are not particularly enamored of Darwinian natural selection emphasize the role of cooperation among organisms. One particularly outspoken advocate of this view is American biologist Lynn Margulis. She has offered strong criticism of the Darwinian selectionist account of evolution, stating, "It's totally wrong. It's wrong like infectious medicine was wrong before Pasteur. It's wrong like phrenology is wrong. Every major tenet of it is wrong."
Margulis does not doubt that natural selection occurs, but objects both to the gradualness of evolution and its emphasis on competition. In 1965 she proposed that eukaryotes, the nucleated cells that make up all organisms except bacteria, originated from the fusion of two different types of bacteria. Although her proposal was initially greeted with considerable skepticism, the theory of the symbiotic evolution of eukaryotes is now widely accepted among biologists.
But if symbiosis did occur, how does this make Darwinian theory "totally wrong"? To be sure, Darwin knew nothing of the genetic basis of life and so he could hardly have imagined that new organisms could emerge from the incorporation of the genetic material of one genome into the genome of another. Also, the cooperation between different species that characterizes evolutionary symbiosis does contrast with his very competitive view of evolution. But it must be kept in mind that the basic core of Darwinian theory is its reliance on hindsighted selection from a population of blindly constructed variations, and in this respect Margulis's symbiotic theory is Darwinian at its core. The variations available, however, are no longer limited to genetic mutations and recombinations within the gene pool of one species, but include genetic combinations across species. And these initial interspecific (between-species) genetic transfers must have occurred blindly, with probably almost all of them harmful to one or both organisms (as occurs when the genetic material of an influenza virus invades human cells) or of no consequence. So by pure chance it must have happened one lucky day that the accidental pooling of the genetic material or organelles of two types of unicellular organisms provided a new cell with survival and reproductive advantages over both of its constituents. But if one is to suggest that anything other than chance was involved in forming such symbiotic relationships, then one will be in the unenviable position of having to explain the origin of the knowledge required to circumvent chance.
Indeed, if such symbiotic evolutionary processes have in fact taken place, it would make Darwinian selectionism a more powerful, rather than a less powerful, explanation for adaptive evolution. Evolution would no longer be limited to fiddling with the genes of separate species by mutation and sexual recombination. Instead, a whole new world of possibilities becomes available in the mixing and matching of genes, organelles, and cells across species. A species would not have to evolve photosynthesis for itself if it could incorporate the photosynthetic know-how of another species, which is now believed to have occurred. But there still is no way that a primitive one-celled organism could possibly "know" which other type of cell or organism it should incorporate or borrow from to improve its chance of survival and reproduction. It could only await the blind forces of nature to bring a promising partner within reach, or await the invasion of an adventuresome virus bearing genetic material from a previously infected cell. Although the potential benefits of certain symbiotic unions would be great, it would not be possible to avoid the very same blind variation and selection that characterizes the natural selection that Margulis attacks so vigorously--at least not without the guidance of an intelligent and beneficent matchmaker working behind the scenes.
But there may well be something more to adaptive evolution than natural selection after all. The second law of thermodynamics is the well-known law of increasing entropy, which states that in an isolated system (that is, a system that can neither gain nor lose energy or matter) we can expect order to decrease, energy to become less available, and a stable (and lifeless) equilibrium to be reached. But in an open system able to draw on sources of outside energy, the situation can be dramatically different. The evolution of life itself is the most striking example of a naturally occurring increase in complexity. But inanimate objects and systems can also demonstrate naturally emerging complexity in certain situations. Anyone who has marveled at the intricate symmetrical beauty of a snowflake, observed the coordinated ballet of grains of rice in a simmering pot of water, or encountered the organized fury of a tornado has noticed that complexity can also arise spontaneously in the inanimate world. And this spontaneous emergence of complexity, or self-organization as it is now usually called, has recently attracted the attention of a wide range of scientists, from physicists and biologists to cognitive scientists and economists.
That organized complexity can emerge spontaneously in inanimate systems may have far-reaching implications for understanding the origin of life and its continuing evolution. One of the major difficulties in coming up with a convincing nonmiraculous account is explaining how inanimate matter could have organized itself into the very first self-replicating life forms. The degree of complexity required for this first step has seemed to many biologists to be just too unlikely to be due to the random forces of nature. Darwin himself was reluctant to advance a nonmiraculous argument, and in the last paragraph of later editions of the Origin refers to the power of life "having been originally breathed by the Creator into a few forms or into one."
So if the blind laws of nature operating on inanimate entities could with high probability lead to the emergence of complex, self-organized molecules and networks of molecules, then the origin of life itself, as well as its continued evolution, becomes somewhat less of a mystery. This is the message that American biochemist and biophysicist Stuart Kauffman has been delivering for the past dozen or so years, and provides in detail in his influential book, The Origins of Order.
It is now recognized that the laws of physics acting on nonliving entities can lead to spontaneous complexity, but nothing in these laws can guarantee adapted complexity of the type seen in living organisms, that is, the ubiquitous biological puzzles of fit. Of all the complex systems and structures that may self-organize due to the forces of nature, there can be no assurance that all or any of them will be of use for the survival and reproduction of living organisms. Selection, therefore, must choose among these various complex systems the ones with characteristics better suited to survival and reproduction, and eliminate others. As Kauffman remarked, "evolution is not just `chance caught on the wing.' It is not just a tinkering of the ad hoc, of bricolage, or contraption. It is emergent order honored and honed by selection."
The study of self-organizing systems is among the newest and most ambitious scientific ventures of the late twentieth century, and its discoveries may ultimately have a major impact on evolutionary theory and our understanding of the emergence of life itself. But from our present viewpoint it is difficult to see how self-organization could ever replace, as opposed to complement, natural selection. It may help to jump-start natural selection by blindly offering up a variety of already complex systems from which to choose. But it is only after-the-fact selection that can eliminate the non-viable complex systems and retain the viable ones.
To return one final time to the major theme of this book, selection theory has become an important part of many different disciplines either to explain puzzles of fit, as in the immune system, or to create them, as in genetic programming and directed molecular evolution. Where previous providential and instructionist theories of adaptive change and knowledge growth have been found to be inadequate, selection theory provides a truly naturalistic and nonmiraculous account of puzzles of fit, whether this fit occurred with or without the assistance of humankind. The evolution of theories in many different disciplines from providential through instructionist to selectionist is a provocative suggestion of the superiority of selectionism. And this recent movement in so many different fields of inquiry constitutes what may be considered a second Darwinian revolution.
In a number of respects this revolution is unlike the first. The first made a dramatic début on 24 November 1859 with Darwin's The Origin of Species selling out its entire first edition of 1250 copies the first day it was offered for sale. It involved a single discipline, biology, and a single question, the evolution of species. And Darwin himself (if not all of his fellow Darwinians) was insistent that processes other than natural selection--such as the instructionist Lamarckian processes of use and disuse--were involved in the emergence of adapted organic structures and behaviors.
In contrast, this second Darwinian revolution cannot be limited to any one significant event. Its roots lie even deeper than the theory of organic evolution. It involves parallel and simultaneous developments in many different disciplines, in much the same way that different species evolve simultaneously in conformance with the demands of their particular environments. The second revolution also appears to be moving in an exclusively selectionist direction in accounting for the emergence of truly novel knowledge. And although selectionism currently rules in evolutionary biology and immunology, it remains very much a minority viewpoint in the other fields surveyed in the preceding chapters.
But if the history of science has anything at all to tell us, it is that our current theories eventually become less than satisfactory. Theories that seem to be well founded and clear improvements over previous ones are eventually seen as inadequate and are replaced by newer, more encompassing perspectives in the way that Newtonian physics gave way to Einstein's relativity and Bohr's quantum mechanics. This continued development in science depends on the relentless criticism of both currently accepted and newly proposed theories in the form of continuing efforts to discover how they are inadequate and how they can be improved.
From this historical perspective, it would appear highly unlikely for selectionism to be the final explanation for the emergence of all puzzles of fit. Although it has already endured for a remarkably long period of time in evolutionary biology, we would normally expect it eventually to be surpassed. Newer and better theories might then explain everything in a given field that selection theory can, plus other phenomena that it cannot, even if at this moment it is exceedingly difficult to imagine what such a replacement would look like. That such a theory would have to be selected among competing ones (including selection theory itself) poses somewhat of a paradox to those who want to overturn selectionist accounts of knowledge growth.
But since universal selection theory is itself a blind variation, although one that has so far resisted the arguments that have been fatal to many providential and instructionist accounts of adapted complexity, I must welcome the criticism and selection pressure that this book and its thesis will undoubtedly provoke. Such scrutiny is absolutely necessary to lead us to understandings that may go beyond selectionism, if indeed that is possible. I do hope, however, that such criticism will not be based on faulty understandings of universal selection theory and the claims being made for it, and consequently result in a return to providential and instructionist theories that have been shown to be inadequate to the job of accounting for the emergence of adapted complexity.
In the meantime a strong case can be made that universal selection theory provides the best explanation for both naturally and artificially produced puzzles of fit. It relies on patient, iterative cycles of blind variation and selection that over the course of time can result in biological adaptations and new species, functional human cultures, technological breakthroughs, and scientific revolutions. And what is perhaps most appealing from the naturalist perspective of modern science, it provides this explanation without miracles--except for the illusory miracle of how such an inherently blind, stupid, wasteful, and sluggish process can be found at the very foundation of life, its marvelous design, and all the subsequent knowledge that life in its human form has generated.
Popper (1979, p. 261).
Campbell (1974b, p. 147).
This observation of the apparent tautological nature of biological evolution as survival of the fittest has been made many times. For some responses, see Maynard Smith (1969), Stebbins (1977), and Alexander (1980).
Campbell (1974a, p. 421).
For philosophers, an analytic statement is one that, like a tautology, is true by definition and necessity such as "a triangle is a closed figure having three straight sides," and a synthetic statement is true or false according to some condition of the world that it describes such as "all birds have feathers."
Gamble (1983, p. 359).
Gamble (1983, p. 362).
Mosaic, for Macintosh, MS-DOS Windows, and X-Windows environments can be obtained from the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign through FTP from ncsa.uiuc.edu.
If antibody production depended solely on the recombination of the variable B lymphocyte genes followed by selection, this would be single-step generative selection. But the continued hypermutation of antibodies and continued selection make antibody production a more powerful cumulative generative selection process (see chapter 4).
If there already is some knowledge concerning the solution, this simply means that some of the values of x need not be tested or that other values should be tested first. But this does not change in any substantial way the nature of the problem, since a range of x values still must be investigated.
See Goldberg (1989, chapters 1 & 2) for a highly readable account of how cumulative blind variation and selective retention can discover building blocks to find solutions in complex problem spaces.
Plotkin (1994, p. 84).
Plotkin (1994, p. 139).
Dawkins (1986, pp. 317-318).
This, however, has not dissuaded some creationists from embracing saltationism, since if adaptive macromutations were the rule in evolution, this, they argue, could only be the result of a providential creator.
See, for example, Eldredge (1985), and Gould (1980, chapter 17).
See Dawkins (1986, chapter 9) for additional discussion about how the theory of punctuated equilibrium is entirely consistent with the Darwinian view of evolution.
Cairns, Overbaugh, & Miller (1988).
Cairns, Overbaugh, & Miller (1988, p. 145).
The view that evolution could involve a type of control process by which greater variability is produced in response to environmental stress was to my knowledge first proposed by Powers (1989b, pp. 124-127), whose seminal work on applying control systems theory to understanding the behavior of living organisms was introduced in chapter 8.
However, Gould (1993, pp. 109-120) believes that Darwin got this sequence backward.
Piattelli-Palmarini (1989, p. 6).
Fernald (1992, p. 395).
Margulis (quoted in Kelly, 1994, p. 365).
Kauffman (1993, p. 644).
This point was brought to my attention by Henry J. Perkinson.