Without Miracles

14 The Artificial Selection of Organisms and Molecules


It is wonderful what the principle of selection by man, that is the picking out of individuals with any desired quality, and breeding from them, and again picking out, can do. Even breeders have been astounded at their own results. . . . Man, by his power of accumulating variations, adapts living beings to his wants--may be said to make the wool of one sheep good for carpets, of another for cloth, &c.

--Charles Darwin[1]

Ever since the appearance of the first life forms on our planet, organisms have influenced each other in their evolution and resulting adaptations. They compete for sunlight, food, shelter, and mates, with the most successful passing down the accumulated knowledge of their genomes to the next generation. Species involved in parasite-host or predator-prey relationships may evolve sophisticated offensive and defensive equipment and behaviors in a continuing evolutionary arms race. Other species have come to depend on each other for survival as symbiotic relationships evolved, such as when flowering plants offer sweet nectar to insects in return for the insects' dissemination of the plants' pollen.

Although relative latecomers to this scene, humans, driven by their continuing and increasing need for food, shelter, clothing, fuel, beauty, and amusement, have had a particularly dramatic effect on the evolution of some species. In this chapter we will see how that influence predates by many thousands of years our knowledge of evolution, and how our current understanding of natural selection, coupled with advances in biotechnology, has provided us with unprecedented power to create new and useful organisms and molecules.

The Human Selection of Plants and Animals

For most of its existence, our species lived much like other mammals, roaming its habitat and obtaining food by hunting, scavenging, and gathering edible leaves, nuts, seeds, fruits, and tubers. But this nomadic way of life began to change as humans discovered the advantages of taking more direct control over the growth and breeding of plants and animals. The first domestication of food plants probably took place between 10,000 and 13,000 B.C. in Southeast Asia, where crops such as rice and beans were planted and harvested. Two other sites where agriculture appears to have developed independently were the Fertile Crescent, which includes parts of present-day Iraq, Iran, Turkey, Lebanon, Israel, and Jordan, and south central Mexico. In the Fertile Crescent animals such as the horse, donkey, camel, and sheep were also domesticated.[2]

This change in occupation from hunting and gathering to cultivating food crops and raising livestock had important consequences for human cultural evolution. In contrast to moving from one temporary camp to another, agriculture permitted the establishment of permanent villages and ultimately cities, city-states, and empires. Increasing agricultural productivity freed a significant portion of the population from the demands of growing food and allowed them to take up scientific, technological, religious, and artistic endeavors. But in addition to these sweeping and comparatively rapid cultural changes, another slower one was taking place--we were beginning to direct the evolution of increasing numbers of plant and animal species.

No understanding of evolution or genetics was necessary to begin this transformation of useful organisms; neither was a conscious and purposeful selection of plants and animals. That is because the practice of agriculture itself necessarily imposes new environmental conditions on plants and animals. Crops are planted at certain times of the year using specific methods of cultivation. As in natural selection, plants that by chance are better suited to these human-made conditions grow well and produce more seeds for the next planting. For example, most wild plants produce seeds that fall to the ground and are dispersed at maturity, but a few hold on to their seeds, facilitating their gathering. By selecting and then sowing more seeds from the latter, a selection pressure was created toward the evolution of plants with so-called non-shattering seeds that were easier to harvest.

But although such early artificial selection may well have been accomplished unintentionally, even these prehistoric peoples no doubt noticed that, among living plants and animals, like begets like. Horses give birth to horses, not ducks. From the seeds of a fig tree more fig trees sprout, not palm trees. And human couples produce children who bear an obvious resemblance to their parents. So as these people chose the seeds for the next season's crop from the largest and most productive stems of wheat, barley, corn, or rice, and allowed the largest, strongest, best-tasting, or gentlest horses, cattle, or camels to mate, they were effectively exploiting the mechanisms of cumulative blind variation and selection to produce plants and livestock that were better and better adapted to human requirements.

Indeed, it was the observation that domesticated plants and animals changed under the selection pressures imposed by humans that led Charles Darwin to his theory of natural selection. The first chapter of the Origin is entitled "Variation Under Domestication," and here Darwin presents many examples of how domesticated plants and animals changed over time to become better adapted to human needs. He explains these changes as "man's power of accumulative selection: nature gives successive variations; man adds them up in certain directions useful to him. In this sense he may be said to make for himself useful breeds."[3] When he realized that such selection pressures also exist in nature without meddling by human agents, his theory was born.

The theory made explicit the principles of evolution that agriculturists the world over had been unwittingly using for millennia. For reasons then quite obscure, nature serves up variations, and only certain ones are selected by humans for breeding the next generation. These ancient farmers had no way to control the amount or direction of these naturally occurring variations, but by eliminating the undesirable and breeding from the desirable, they could control just about any observable or measurable characteristic of a plant or animal.

The amazing success of early plant breeders was demonstrated by Native Americans, who over the course of 4,000 years transformed a stingy grass into one of the world's most productive food crops. When Europeans first set foot on the shores of the New World,

Adapted maize [corn] cultivars extended from the southern part of South America to the north shore of the St. Lawrence River; from sea level to elevations of 3,355 m (11,000 ft.). Types included flint, flour, pod, and popcorn as well as red, blue, black, yellow, white, and variegated kernels. There is no doubt about the competence in plant breeding of the American Indians, as it took the development of F1 hybrid maize of modern genetics to exceed the performance capability of Indian maize. It is understandable that European settlers grew to respect the maize crop and its developers, for maize is said to be the greatest gift from the Indians.[4]

But although Native Americans and other early agriculturists were successful in breeding plants and animals suited to their conditions and needs, it was not until the insights of Darwin into evolution and those of Gregor Mendel concerning the genetic basis of inheritance that plant and animal breeding could begin to be put on a firm scientific foundation. This foundation permitted rapid advances in breeding techniques beginning in the 1900s.

Since plant and animal breeding is a form of adaptive biological evolution, it depends on the three components of variation, selection, and reproduction. Advances therefore involve one or more of these components. By increasing variation, the probability of finding a variant with desired characteristics is increased. Breeders do this with techniques such as raising large numbers of plants or animals, cross-breeding different varieties to produce new hybrids, and using irradiation and chemicals to increase the occurrence of mutations.

Other techniques that improve the accuracy and efficiency of selection include procedures involving physical and chemical comparisons coupled with statistical and screening methods to eliminate unsuitable specimens quickly. For example, to develop a new variety of wheat that is resistant to high levels of salinity, high levels of salt are applied to the soil. This will kill almost all of the affected wheat plants, but almost certainly a few plants will survive. These survivors can then be used to breed a line of salt-tolerant plants. Screening (the breeder's term for selecting certain individuals from a large population of specimens) is now possible in the test tube where it is referred to as in vitro selection. Individual cells from plants may be subjected to certain chemical toxins. Those that are able to grow despite the presence of a toxin are mutants that are naturally resistant. Using this method, corn with 10 to 100 times more resistance to certain herbicides has been developed.[5]
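
The selective logic of such a screen is simple enough to illustrate with a toy simulation. In the sketch below (written in Python; the population size, the frequency of tolerant mutants, and the tolerance scale are all invented for illustration and correspond to no real experiment), a harsh dose of the selective agent eliminates nearly everything, and the handful of naturally tolerant variants that remain become the parents of the next generation.

    import random

    random.seed(1)

    POP_SIZE = 100_000      # cells or seedlings in the starting population (invented)
    LETHAL_LEVEL = 0.9      # the applied salt or toxin level, on an arbitrary 0-1 tolerance scale

    def random_tolerance():
        """Most individuals tolerate little salt; a rare mutant happens to tolerate a great deal."""
        if random.random() < 0.0005:            # roughly 1 in 2,000 is a tolerant mutant
            return random.uniform(0.85, 1.0)
        return random.uniform(0.0, 0.5)

    population = [random_tolerance() for _ in range(POP_SIZE)]

    # The screen: apply the selective agent and keep only what survives it.
    survivors = [t for t in population if t > LETHAL_LEVEL]

    print(f"{len(survivors)} of {POP_SIZE} individuals survive the screen")
    # These few survivors, not the typical individual, are used to breed the tolerant line.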

Advances in molecular biology have also had an impact on breeding. Once a particular gene has been identified that is associated with a desired characteristic, breeders no longer have to wait until the plant or animal matures to make their selection. They can do it by examining the genes (or inserted gene markers) of the cells of immature plants and animals, thus cutting in half the time normally required to determine whether a newly bred plant or animal has the desired genetic characteristic. With this technique, genes that cause corn to produce kernels with high oil content have been identified, and this corn is now being cross-bred with other varieties having other desirable characteristics such as high yield or standability. Genetic screening allows the rapid selection of only those plants that retain the gene for high oil content.

Other methods have been developed to enhance reproduction. The realization that each cell of a plant or animal contains all the genes necessary for the production of a new, identical organism led to attempts to reproduce organisms using single cells. This asexual propagation of normally sexually reproducing plants and animals is called cloning, and it permits useful varieties to be preserved with little or no genetic variation over many generations. Where sexual reproduction cannot be avoided (at this writing, no mammal clones have yet been produced), artificial insemination permits a bull with desirable genes to impregnate many more cows than he could ever inseminate using traditional bovine love-making techniques. And the in vitro fertilization of eggs from a desired cow with the sperm of a desired bull and their transplantation into surrogate mother cows for gestation and birth allow a single particularly desirable couple to produce a large herd of animals, all of them siblings.

Genetic Engineering

These twentieth-century advances that permit more control over the variation, selection, and reproduction of living organisms greatly facilitated the breeding of domesticated plants and animals. However, none of these methods makes it possible to produce the desired variations directly. Variation can be increased; more accurate and quicker selection methods can be employed; and reproduction of desired varieties can be increased. But the breeder is still limited to selecting among those chance variations that the organisms themselves provide.

To gain more direct control over desired characteristics, breeders had to await developments of the second half of this century in the relatively new field of molecular biology. After Watson and Crick's discovery of the structure of the DNA molecule in 1953 and the subsequent breaking of the genetic code by which sequences of nucleotides orchestrate the construction of proteins, it was only a matter of time before scientists developed methods to reach deep into the center of living cells and manipulate their genes.

Many different techniques are now employed in genetic engineering, which is also known as gene splicing and recombinant DNA technology.[6] They all involve manipulating an organism's genes. One technique involves introducing a gene from a plant or animal cell into a bacterium. Many bacteria have tiny rings of DNA, known as plasmids, that are more accessible and easier to manipulate than the more tightly packed genes in the chromosomes. When a gene that produces a desired protein is removed from a plant or animal cell and spliced into a bacterium's plasmid, the gene will continue to produce its designated protein product in its new bacterial setting. Provided with a nutrient-rich environment, the bacterium quickly grows and divides, producing more cells that also contain the foreign gene and that therefore also produce the desired protein, which can then be harvested from these bacterial chemical factories. With this technique, bacteria have been genetically engineered to produce insulin, human growth hormone, and the anticancer drug interferon, among thousands of other important substances.

But what about introducing foreign genes into the chromosomes of plants and animals to alter their characteristics and to create brand new organisms, such as the fire-breathing chimera of Greek mythology, which combined the head of a lion, the body of a goat, and the tail of a snake? Unfortunately, manipulating chromosomal DNA is a more difficult affair. For certain plants, the aid of Agrobacterium tumefaciens is enlisted. This soil bacterium has been doing genetic engineering of its own for millions of years by inserting genes from its plasmids into the chromosomes of plant cells. The foreign DNA causes the plant cells to form tumors that produce unusual compounds (opines) that serve as food for the bacteria. Thus, this bacterium is a naturally occurring genetic engineer that has evolved the ability to alter plant cells to provide food for itself. This ability has been exploited by human genetic engineers who can now splice desired genes (for example, one that provides resistance to a pathogenic virus) into Agrobacterium's plasmid, which inserts it into a plant cell's chromosomal DNA. The targeted plant cell is then coaxed into developing into a complete fertile plant that will pass on the engineered DNA and accompanying viral resistance to its progeny.

For manipulating the genes of animals and those plants for which this technique will not work, such as the major food crops of wheat, rice, and corn, a messenger virus is used, or the desired gene is directly injected into the nucleus of a plant cell or into a one-cell animal embryo. In the case of animals, the genetically altered embryos are implanted into the uteri of surrogate mothers, and the resulting transgenic animals can be used to breed more progeny by traditional methods, with each offspring containing the altered gene. The first transgenic animal was produced in this manner in 1981 when a rabbit gene was inserted into a mouse embryo.

Although genetic engineering is a very young field, it has already had some impact on food production and promises to have much more in the future. About half the cheese produced in the United States uses an enzyme created by bacteria containing a cow gene. Similarly, recombinant bovine somatotropin (rBST) is produced by bacteria, isolated, and injected into dairy cows, increasing milk production by up to 20%. The Flavr Savr tomato, genetically engineered to stay fresh longer before spoiling, made its appearance in American supermarkets in May 1994. And extensive research is under way by firms such as Calgene, Monsanto, and DuPont to genetically engineer crops resistant to herbicides, harmful insects, viruses, bacteria, and fungi.

Whereas genetic engineering now provides means for directly manipulating genes, these techniques have not eliminated the use of cumulative trial-and-error research from the design of new and valuable organisms. First of all, to know which gene to insert into an organism, the gene's function must be known. One way to determine the function of a particular gene is to expose a large number of organisms to a mutagen, select those that differ in some interesting way from normal organisms, and then establish through DNA sequencing which gene was mutated. If, for example, a mutated bacterium is unable to replicate its DNA, and the genes that were changed by mutation can be determined, it will be known that these genes must in some way be involved in DNA replication. This method allows no control over what mutations will result (thus they remain blind variations); organisms must instead be selected after the fact on the basis of particular characteristics of interest. The technique can be effective for large populations of single-cell organisms whose genomes are relatively small (E. coli contains about 3000 genes in its genome), but it is less so for larger organisms with larger genomes (the common fruit fly has 20,000 genes and the mouse about 200,000). For these organisms, researchers traditionally relied on the study of naturally occurring mutations, but a new technique allows them to manipulate the genes of their choice.
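
The logic of this mutate-screen-sequence approach can be caricatured in a few lines of code. The toy model below (Python; the genome size, the set of "replication genes," and the number of treated bacteria are all invented) knocks out one randomly chosen gene per bacterium, screens for the phenotype of interest, and then "sequences" the abnormal individuals to see which genes were hit.

    import random

    random.seed(2)

    GENES = [f"gene{i:04d}" for i in range(1, 3001)]          # a toy genome of 3,000 genes
    REPLICATION_GENES = {"gene0017", "gene0204", "gene0981"}  # hidden ground truth in this toy model

    implicated = set()
    for _ in range(20_000):                                   # mutagenize many bacteria
        knocked_out = random.choice(GENES)                    # a blind, uncontrolled mutation
        cannot_replicate_dna = knocked_out in REPLICATION_GENES   # the observable phenotype
        if cannot_replicate_dna:
            implicated.add(knocked_out)                       # identified afterward by DNA sequencing

    print("genes implicated in DNA replication:", sorted(implicated))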

Currently applied to mice, with which we share more than 90 percent of our genes, targeted gene replacement (also called homologous recombination), developed by Italian-born Mario Capecchi at the University of Utah School of Medicine, makes it possible to create mice with a mutation in any gene of interest.[7] But this relatively precise genetic control is still largely a hit-and-miss process, since it is successful in only a very small percentage of treated cells. It is much more likely that the introduced gene will either fail to be incorporated into the cell's chromosomes or be incorporated at the wrong location. By incorporating two additional marker genes into the foreign gene, one that provides resistance to a certain drug and the other that provides sensitivity to another drug, it is possible to screen the treated cells using the two drugs and easily find the one cell in a million whose genome has been altered in the desired way. Only these cells are selected and injected into a mouse embryo, eventually leading to transgenic mice with the precise desired mutation. The mice are then examined for physical or behavioral abnormalities that provide clues to the function of the altered gene. So once again we see that a selectionist screening procedure is necessary to separate the successfully engineered cells from all the others. Similar screening methods employing antibody or enzyme detection are applied in many of the genetic engineering techniques described earlier.[8]
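
How two extra marker genes make it possible to pick out the one correctly targeted cell in a million can also be shown with a toy calculation. In the sketch below (Python; the three outcome frequencies are invented for illustration and are not taken from Capecchi's published protocol), demanding resistance to the first drug removes the cells that took up no DNA at all, and demanding survival on the second drug removes the cells in which the DNA landed at the wrong location.

    # A toy population of five million treated cells with invented outcome frequencies:
    #   5 cells correctly targeted, 50,000 with the gene inserted at a random location,
    #   and the rest with no insertion at all.
    cells = ["targeted"] * 5 + ["random_insertion"] * 50_000 + ["no_insertion"] * 4_949_995

    # Positive selection: the first drug kills every cell lacking the resistance marker,
    # i.e., every cell that never incorporated the foreign DNA.
    after_first_drug = [c for c in cells if c != "no_insertion"]

    # Negative selection: the second drug kills every cell still carrying the sensitivity
    # marker, i.e., every cell with a random (non-targeted) insertion.
    after_second_drug = [c for c in after_first_drug if c == "targeted"]

    print(len(cells), "treated ->", len(after_first_drug), "after drug 1 ->",
          len(after_second_drug), "correctly targeted")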

But even the identification of the function of individual genes does not make it easy to create new organisms with desired characteristics, for the simple reason that many of the most important traits are controlled by combinations of hundreds or perhaps even thousands of genes. It is therefore necessary to test various combinations of genes to determine which combination provides the desired characteristic, such as early maturity, high oil content, height, or drought resistance, while maintaining other beneficial traits. It is for this reason that, despite the rapid progress in genetic engineering, much more research must be done before plants can be grown that combine the advantages of both corn and soybeans, or animals created that both give milk like cows and grow wool like sheep.

The Evolution of Drugs and Methods of Drug Design

Despite its remarkable achievements, biological evolution does not fashion perfectly adapted organisms, as the continued presence of a plethora of human illnesses constantly reminds us. Indeed, we now understand infectious diseases as Darwinian competition between us and various pathogenic parasites, bacteria, and viruses. Many of these organisms have evolved remarkably effective methods for infecting human hosts, and they have certain important advantages over us, such as much larger population sizes, prolific reproduction, and high rates of mutation. But although vastly outnumbered, we can and do fight back against our microbial adversaries using our intelligence and the knowledge and technology that it has generated.

The discovery, refinement, and administration of drugs is our most important weapon in this continuing battle. Drugs have been used since ancient times, with the Greeks and Romans prescribing opium to relieve pain, the Egyptians taking castor oil for worms, the Chinese consuming liver to treat anemia, and twelfth-century Arabs ingesting sponges (which have a high iodine content) to treat goiter. But since these early users of drugs had virtually no understanding of chemistry and physiology, these and other agents could have been developed only by the crudest trial-and-error methods. Perhaps healers noted that an individual with a certain malady recovered after ingesting a certain food or substance, and tried it on another person suffering from the same illness. Or they may have observed animals eating certain plants when they appeared to be suffering from ill health. Such methods probably led to the discovery of certain useful drugs such as quinine to treat malaria and the aspirin-like substance in the bark of willow trees to treat pain and fever. However, the lack of a systematic approach to drug research, coupled with ignorance of the structure and functioning of the body, inevitably resulted in many ineffective and harmful drug treatments.

The Beginning of Scientific Drug Development

Important advances in drug development began about 1800, although they too often resulted from serendipity rather than systematic scientific research. In the 1790s English physician Edward Jenner heard that dairymaids who contracted the relatively mild cowpox disease seldom suffered from the much more serious and often fatal smallpox. Consequently, he suspected that cowpox somehow provided immunity against smallpox, and developed a smallpox vaccine from the pus of cowpox sores. In the 1870s French chemist Louis Pasteur observed that chickens inadvertently injected with weakened cholera bacteria developed immunity to this disease; he later developed vaccines against anthrax and rabies. And in 1928, Scottish bacteriologist Sir Alexander Fleming observed that a mold that accidentally contaminated a culture of Staphylococcus appeared to stop the bacterium's growth, and thereby discovered penicillin.[9]

The development of these and other modern drugs was greatly facilitated by knowledge of the causes of disease. It was only through his awareness that infectious diseases were caused by microorganisms that Fleming was able to recognize the importance of the antibiotic produced by the penicillium mold. More recent discoveries of the key roles played by enzymes and various receptors in cellular activity have permitted important advances in the development of a wide range of new drugs.

Finding Drugs by Large-Scale Random Screening

Drugs exert their effect by providing molecules that are able to fit and attach themselves to other molecules in the body, not unlike the way that an antibody fits an antigen, as discussed in chapter 4. For example, angiotensin-converting enzyme (ACE) acts as a catalyst to convert angiotensin I to angiotensin II, the latter being a vasoconstrictor, which reduces the diameter of blood vessels, thereby increasing blood pressure. The drug lisinopril has a distinctive molecular shape that tightly fits and effectively plugs up the active site of ACE, thereby inhibiting its activity in converting angiotensin I to the vasoconstricting angiotensin II. The net effect of this drug is therefore to reduce levels of angiotensin II, resulting in lowered blood pressure in hypertensive patients.

From the molecular perspective, finding an effective drug can be likened to finding a key for a lock. In what is now referred to as the classic method of modern drug design, as many as tens of thousands of synthetic and natural substances may be randomly tested in groups of 50 to 100 as potential keys to a desired lock such as ACE. If it is found that one of these substances binds with the target site, further screening is done to isolate the particular keylike compound, which is referred to as the lead compound or molecule.

Once a lead compound is found, much more work remains to be done in the form of additional modifications and more tests. Variations of the lead molecule are then synthesized and tested. Some of these variations may be less effective than the original molecule, and others may be much more potent. An example of the latter is etorphine, a variant of morphine that is 1000 times more potent than morphine itself. But this is only the beginning. The ability of the molecule and its variations to reach the target site (its bioavailability) must be tested using living cells and tissue cultures. Then the toxicity of the drug and its side effects must be determined in trials with animals and humans. The object of all this modification and testing is to find the compound that is most effective while having the least toxicity and fewest unwanted side effects. Modern drug development involves many sophisticated techniques for zeroing in on such compounds,[10] and the entire process is clearly a rather lengthy procedure of cumulative variation and selection. Because of the large number of compounds to be screened, followed by the required rigorous testing on cells, tissues, animals, and finally humans, it may take more than 10 years of intensive research effort and many millions of dollars to develop a new drug and bring it to your medicine cabinet. This goes some way toward explaining the high price of many drugs.
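
Stripped of the chemistry, the classic method is a two-stage search: a blind screen of whatever compounds happen to be on hand, followed by rounds of variation and retesting around the best hit. The sketch below (Python) reduces each compound to a single made-up "binding score" standing in for an assay result; the library size, the number of rounds, and all scores are invented for illustration.

    import random

    random.seed(4)

    # Stage 1: randomly screen a large library of existing natural and synthetic compounds.
    # Each compound carries a single invented "binding score" (higher binds the target better).
    library = [{"name": f"compound-{i}", "score": random.gauss(0.0, 1.0)} for i in range(10_000)]
    lead = max(library, key=lambda c: c["score"])
    print("lead compound:", lead["name"], "score:", round(lead["score"], 2))

    # Stage 2: chemists synthesize variations of the lead and screen again, round after round,
    # keeping whichever variant binds best so far (cumulative variation and selection).
    best = lead
    for round_number in range(1, 6):
        variants = [{"name": f"{best['name']}-v{round_number}.{j}",
                     "score": best["score"] + random.gauss(0.0, 0.3)}
                    for j in range(50)]
        challenger = max(variants, key=lambda c: c["score"])
        if challenger["score"] > best["score"]:
            best = challenger
        print(f"round {round_number}: best score so far {best['score']:.2f}")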

Nonetheless, the initial in vitro chemical screening of large numbers of molecules is much more efficient (and safer) than having first to test all these compounds on animals or humans, particularly since pharmaceutical firms employ technology that largely automates the random screening process. Indeed, large-scale molecular screening can be seen as a type of vicarious blind variation and selection in which many types of molecules are tested, and only those that are found to have an affinity for the target enzyme or receptor are retained for additional testing and development.

But this method of molecular selection has an important limitation that becomes obvious when compared with the cumulative variation and selection of biological evolution (as well as the evolutionary computing techniques considered in the preceding chapter). Namely, only natural and synthetic compounds that are already on hand can be initially screened, as there is no way to create spontaneously new variations of successful molecules in the way that mutations and sexual genetic recombination provide new variations in biological evolution. In other words, the desired drug molecule must already be provided by the researchers, and no step in the selection process can automatically fine-tune those molecules that show some fit to the target molecule. Any subsequent fine tuning must be carried out painstakingly by chemists who attempt to determine the reasons for the selected molecule's success, and apply their knowledge of chemistry to improve it further through repeated rounds of variation and selective screening.

Structure-Based Rational Drug Design

It should also be noted that this classic method of drug development does not require initial knowledge of the actual structure of the interacting molecules. But since modern technologies involving X-ray crystallography and nuclear magnetic resonance spectroscopy can provide detailed information about the atomic structure of many molecules, this information is now being used in what is called rational drug design. If the classic approach to drug design can be likened to finding a key among tens of thousands of keys that will fit a particular lock, structure-based rational drug design is analogous to having information about the shape of the tumblers inside the lock. Since such knowledge would facilitate the making of a working key, information on the structure of a target site for a drug can facilitate the finding of an effective drug that will bind to the target.

But even with detailed structural information, designing an effective drug is not as straightforward as one might expect. A drug molecule cannot simply be ground into shape in the way that a key can be made from a blank. Instead it must be assembled from constituent atoms that will themselves fit together only in certain combinations and arrangements. To complicate matters further, a molecule's configuration may change dramatically when brought into proximity with another molecule, and atomic charges (which are not easily modeled) can affect the binding of one molecule to another. So whereas knowledge of the target's structure usefully constrains the number of candidate drug molecules, a considerable amount of trial and error is required to find a good fit. This search is now facilitated by three-dimensional computer displays of the target molecule that allow researchers to design molecules atom by atom to fit the simulated target site, in much the same way that wind tunnels and other computer-simulated environments facilitate the design of new products as noted previously. But since computer models are not perfect, promising compounds still have to be tried in a test tube and then on cells, tissues, live animals, and finally humans in clinical trials. Each stage in this long process represents yet another selective filter, with only those molecules that pass every test finally finding their way onto pharmacists' shelves. The cumulative variation and selection involved in this process is apparent in the description provided by three pioneers of structure-based drug design of their attempts to come up with a compound that would enhance the effect of certain anticancer and antiviral agents and be helpful in treating autoimmune disorders:

This iterative strategy--including repeated modeling, synthesis and structural analyses--led us to a handful of highly potent compounds that tested well in whole cells and in animals. Had a compound encountered difficulty in the cellular or animal tests (such as trouble passing through cell membranes), we would have revisited the computer to correct the deficiency. Then we would have cycled a modified drug through the circuit again.[11]
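
The shape of that circuit can be written out as a loop. The skeleton below (Python) is only a schematic of the workflow the authors describe: each function is a stub that stands in for months of modeling, synthesis, or laboratory testing, and the pass/fail probabilities are invented so that the sketch can run at all.

    import random

    random.seed(5)

    def refine_on_computer(design):
        """Stand-in for modeling a candidate against the simulated target site."""
        return {"design": design, "predicted_fit": random.random()}

    def passes_cell_tests(compound):
        """Stand-in for whole-cell tests (e.g., can the compound cross cell membranes?)."""
        return random.random() < 0.4

    def passes_animal_tests(compound):
        """Stand-in for animal testing of potency and toxicity."""
        return random.random() < 0.4

    design, approved = "initial design", None
    for cycle in range(1, 21):
        compound = refine_on_computer(design)
        if passes_cell_tests(compound) and passes_animal_tests(compound):
            approved = compound
            print(f"cycle {cycle}: a compound passed both the cellular and the animal tests")
            break
        # A failure sends the team back to the computer to correct the deficiency,
        # and the modified design is cycled through the circuit again.
        design = f"revised design {cycle}"

    print("outcome:", approved["design"] if approved else "no compound survived every test")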

Recognizing both classic and structure-based drug design as types of evolutionary processes involving cumulative variation and selection helps us to understand the potential advantages of the latter approach over the former. In the classic method, thousands of compounds are first randomly tested, and the success of one compound provides little if any information on the potential usefulness of others still to be screened. Variation and selection occur to be sure, but there is no cumulative variation and selection in the initial screening procedure, so that every one of thousands of compounds must be tested. In the rational, structure-based approach, information concerning the structure of the target site is used to constrain the possible candidates. So instead of screening thousands of compounds, the researchers quoted above had to prepare only about 60 substances to find a promising drug, thereby saving considerable time, effort, and money. In addition, using the computer as a type of virtual test tube for testing drugs allows a vicarious means of variation and selection that can be more efficient and cost-effective than conducting all initial screenings chemically.

Directed Molecular Evolution: Selection in a Test Tube

But is it really necessary to know the structure of the target site to produce drugs efficiently? Biological evolution has come up with stunning achievements of design without any such knowledge at all. Of course, evolution has had over three and a half billion years. But if molecules themselves could somehow be bred using very large populations and rapid iterations of cumulative blind variation and selection, it should then be possible to direct the evolution of useful molecules for drugs and other purposes in much the same way that breeders traditionally directed the evolution of domesticated plants and animals.

The first demonstration that an evolutionary approach to molecular design was indeed possible was provided in the early 1960s by Sol Spiegelman of the University of Illinois at Urbana-Champaign.[12] Using a technique that causes strands of RNA (the molecular messenger of the genetic information archived in DNA) to replicate with a rather high error rate, Spiegelman began to breed a huge number of variations of a particular RNA molecule. For his selection criterion, he chose a rather simple one--speed of replication. Since he provided progressively shorter periods of time for replication, molecules that were the quickest at making copies of themselves were more likely to be selected as parents for the next generation of molecules. After 74 generations of this "serial transfer" experiment, Spiegelman had bred an RNA molecule that was 83% different from the original ancestor molecule and replicated itself 15 times faster. The artificial evolution of molecules in a test tube had been achieved.
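
Spiegelman's serial transfer procedure maps almost directly onto a simple evolutionary loop, and the sketch below (Python) caricatures it. Each molecule is reduced to a single invented number, its replication speed; faster replicators leave more copies in each tube, a fixed-size sample seeds the next tube, and copying errors keep supplying fresh variation, so the average speed drifts steadily upward. None of the numbers correspond to Spiegelman's actual RNA chemistry.

    import random

    random.seed(6)

    TUBE_SIZE = 1_000      # molecules transferred into each fresh tube (invented)
    TRANSFERS = 74         # Spiegelman carried out 74 serial transfers

    # Each molecule is summarized by one number: how many copies it makes per incubation period.
    population = [1.0] * TUBE_SIZE

    for transfer in range(1, TRANSFERS + 1):
        # Replication with errors: each molecule leaves offspring in proportion to its speed,
        # and every copy's speed is slightly perturbed (the "rather high error rate").
        offspring = []
        for speed in population:
            for _ in range(max(1, round(speed))):
                offspring.append(max(0.1, speed + random.gauss(0.0, 0.05)))
        # Serial transfer: only a sample of the brew seeds the next tube. Fast replicators are
        # over-represented among the offspring, so they dominate that sample.
        population = random.sample(offspring, min(TUBE_SIZE, len(offspring)))

    average = sum(population) / len(population)
    print(f"average replication speed after {TRANSFERS} transfers: {average:.2f}")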

Variations of this technique, referred to as directed molecular evolution, are beginning to be used to design molecules for drugs and other uses. Instead of selecting self-replicating molecules for their speed of replication, a type of molecular obstacle course is set up that involves binding to a target molecule. A varied population of many millions of molecules is passed through a filtration column that contains the target molecule. Since this initial population is a random collection of molecules, only a very small percentage are likely to bind to the target molecule. But since the population is extremely large and varied, even an exceedingly small percentage of "hits" is virtually guaranteed to produce at least some molecules with an affinity for the target, and the vast majority that do not stick are simply washed down the drain (a watery version of Darwin's hammer). Only the relatively few molecules having enough affinity for the target molecule to adhere to it are retained, allowed to replicate (with a certain error rate to ensure additional variation), and passed through the filtration column once again--the process being repeated until a molecule that binds very tightly to the target is found. With this new, purely selectionist technique, a drug was developed that binds to the protein thrombin and consequently inhibits the formation of blood clots in patients who must be connected to heart-lung machines for surgery, or who must undergo blood dialysis because of kidney disease. Initial research in directed molecular evolution was restricted to RNA and DNA molecules, but work is currently under way to enable the same "irrational" technique of cumulative blind variation and selection to be applied to the evolutionary design of other types of molecules.
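
The obstacle course just described is essentially the same loop with a binding criterion in place of a speed criterion. In the sketch below (Python), each molecule is a short string of bases, its "affinity" is simply the number of positions at which it matches an invented target motif, the top one percent of binders play the part of the molecules that stick to the column, and error-prone amplification rebuilds the pool for the next pass. The motif, the scoring rule, the pool size, and the number of rounds are all invented; real pools contain trillions of molecules, not ten thousand.

    import random

    random.seed(7)

    BASES = "ACGU"
    TARGET_MOTIF = "GAUUACAGAUUACA"   # an invented stand-in for the target's binding surface
    POOL_SIZE = 10_000                # real experiments use populations of 10^13 to 10^15

    def affinity(molecule):
        """Toy affinity: the number of positions at which the molecule matches the motif."""
        return sum(1 for a, b in zip(molecule, TARGET_MOTIF) if a == b)

    def amplify_with_errors(molecule, error_rate=0.05):
        """Error-prone replication: each base has a small chance of being miscopied."""
        return "".join(random.choice(BASES) if random.random() < error_rate else base
                       for base in molecule)

    # The starting pool is a random collection of molecules; almost none bind well.
    pool = ["".join(random.choice(BASES) for _ in TARGET_MOTIF) for _ in range(POOL_SIZE)]

    for round_number in range(1, 11):
        # The filtration column: keep only the best-binding one percent; the rest are washed away.
        cutoff = sorted((affinity(m) for m in pool), reverse=True)[POOL_SIZE // 100]
        bound = [m for m in pool if affinity(m) >= cutoff]
        # Replicate the retained molecules (with errors) to rebuild the pool for the next pass.
        pool = [amplify_with_errors(random.choice(bound)) for _ in range(POOL_SIZE)]
        best = max(pool, key=affinity)
        print(f"round {round_number}: best affinity {affinity(best)} of {len(TARGET_MOTIF)}")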

Although directed molecular evolution is still in its infancy, it has generated considerable excitement and activity in the biotechnology industry. One has only to consider the parallel with plant and animal breeding to understand why. As discussed earlier in this chapter, traditional selective breeding practices resulted in dramatic improvements in food crops and domesticated animals (at least from the perspective of human consumers). At best, such efforts may involve a population of thousands of plants that require months to mature and reproduce. In addition, screening (selection) may involve considerable time and effort, as when applying chemicals such as pesticides or salt, and having to wait and select the plants that are the least affected for the next round of breeding.

In contrast, breeding molecules using directed molecular evolution typically involves populations of 10^13 to 10^15 (ten million million to one thousand million million) molecules, each of which may take only an hour to reproduce. And selection can be as simple as passing the population of molecules through a filtration column. Because of its promise, several companies, such as Gilead, Ixsys, Nexagen, Osiris, Selectide, and Darwin Molecule, are devoting their entire research programs to directed molecular evolution, and Genentech, the grandfather of biotechnology firms, is also exploring this approach.[13] Researchers at Affymax even developed a test tube version of group sex in which E. coli genes from up to 10 bacteria are chopped up, randomly recombined, and reinserted into the bacteria, greatly increasing the odds that several favorable mutations will find their way into a few of the bacteria.[14]
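
The recombination trick itself is easy to caricature. The sketch below (Python; the "gene" strings, their length, and the fragment size are invented) chops several parent sequences into aligned fragments and reassembles an offspring from fragments drawn at random from different parents, which is the sense in which favorable variations scattered among many individuals can come together in one genome. The real procedure relies on fragments annealing and being copied, not on the tidy fixed-length splicing used here.

    import random

    random.seed(8)

    GENE_LENGTH = 20
    FRAGMENT = 4       # fragment length used for the toy shuffle

    # Ten parent "genes"; in the procedure described above they come from up to ten bacteria.
    parents = ["".join(random.choice("ACGT") for _ in range(GENE_LENGTH)) for _ in range(10)]

    def shuffle_genes(parents, fragment=FRAGMENT):
        """Rebuild one gene by taking each successive fragment from a randomly chosen parent."""
        pieces = []
        for start in range(0, GENE_LENGTH, fragment):
            donor = random.choice(parents)            # each segment may come from a different parent
            pieces.append(donor[start:start + fragment])
        return "".join(pieces)

    for parent in parents[:3]:
        print("parent:   ", parent)
    for _ in range(3):
        print("offspring:", shuffle_genes(parents))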

It therefore seems quite likely that before long a new generation of powerful drugs and other substances will be available that evolved in the laboratory under the selective guidance of human researchers. But in contrast to the rational design of molecules, the researcher will (at least initially) have no real understanding of why a particular molecule is so successful at doing what it does, in the same way that traditional plant and animal breeders know nothing of the genes underlying the desirable characteristics that they select. It is of course likely that successfully evolved molecules will be analyzed and perhaps even improved by rational methods of design. But the potential power of cumulative blind variation and selection using large, heterogeneous populations of molecules is such that rational fine-tuning may not be necessary. Obviously impressed, Nobel prize-winning biochemist Manfred Eigen calls directed molecular evolution "the future of biotechnology."[15]

Three different technological revolutions have now taken place. In the nineteenth century the industrial revolution involved the exploitation of huge amounts of energy for manufacturing, agriculture, and transportation. The information revolution of the mid-twentieth century provided telecommunications and computer hardware and software to generate, analyze, manipulate, and transmit vast amounts of information. The most recent revolution, that involving biotechnology, now provides the means to manipulate the very core of living cells and direct the course of biological evolution for many species from bacteria to humans. It has also given us the ability to develop new drugs, vaccines, and gene therapies to fight disease and improve the length and quality of human life.

It was argued in chapter 10 that all technological development is dependent on cumulative blind variation and selection. But what is particularly intriguing about recent developments in the information and biotechnology domains is that these fields are now making explicit use of artificial evolution. The sequence of events leading to this development is noteworthy: evolution of the human brain by cumulative natural selection; the use of this brain's evolved capacity for thought (itself a form of vicarious cumulative variation and selection) to fashion an understanding of the nature and power of natural selection; and the application of this knowledge to solve problems by exploiting forms of artificial selection to direct evolution in agricultural plots, barns, computers, and test tubes.

The potential of these techniques for improving the human condition is immense and includes more productive crops and livestock, controlling and eliminating human disease, slowing the aging process, and engineering microbes able to transform the products of industrial pollution into harmless and even useful substances. But there is a potential darker side, such as crops that are able to produce their own pesticides that ultimately poison the birds and other animals that feed on them, the mutation of new supercrops into superweeds that are able to drive out native plants, and the release of novel pathogenic microbes into the environment against which humans and animals have no natural defenses. In addition, highly charged moral issues are raised when one begins to tinker with the human genome.

This third technological revolution brings with it the unprecedented ability to direct evolution itself. The human species has already had a tremendous impact on many life forms and physical features of the earth. Our harnessing of the very evolutionary process that created us will no doubt have a much greater impact.[16] Whether it will be positive or negative for the long-term survival of our and other species is, of course, the big question that only time and evolution itself can answer.

[1]From Darwin's 1858 letter to American biologist Asa Gray. Reprinted in Bajema (1983, pp. 191-192).

[2]Much of the information presented in this section on plant breeding was obtained from Stoskopf (1993).

[3]Darwin (1859/1966, p. 30).

[4]Stoskopf (1993, p. 5).

[5]See Stoskopf (1993, p. 439).

[6]See Murrell & Roberts (1989) for an introduction to genetic engineering from which much of the following information on the subject was obtained. The special report on medicine and health published by the Wall Street Journal on May 20, 1994, also provides a useful collection of articles on the methods, products, promises, and problems of genetic engineering.

[7]See Capecchi (1994).

[8]See Salmond (1989, pp. 56-61).

[9]In his delightful book on accidental scientific discoveries, Roberts (1989) provides many other examples of serendipitous drug discoveries.

[10]Silverman (1992, especially chapter 2) describes many of these techniques for modifying lead compounds in the attempt to find better drugs.

[11]Bugg, Carson, & Montgomery (1993, p. 94).

[12]Much of the information presented here on Spiegelman and directed molecular evolution is taken from Joyce (1992).

[13]Kelly (1994, p. 301).

[14]Flam (1994).

[15]Quoted in Kelly (1994, p. 302).

[16]See Kelly (1994) for an interesting account of the increasing use and influence of artificial forms of evolution.