Without Miracles

7 The Adaptive Modification of Behavior


The environment made its first great contribution during the evolution of the species, but it exerts a different kind of effect during the lifetime of the individual, and the combination of the two effects is the behavior we observe at any given time. Any available information about either contribution helps in the prediction and control of human behavior and in its interpretation in daily life. To the extent that either can be changed, behavior can be changed.[1]

In certain respects operant reinforcement resembles the natural selection of evolutionary theory. Just as genetic characteristics which arise as mutations are selected or discarded by their consequences, so novel forms of behavior are selected or discarded through reinforcement.[2]

--B. F. Skinner

In addition to the inherited instinctive behaviors demonstrated by animals as discussed in chapter 3, we cannot fail also to notice behaviors that are modified to fit the circumstances of each animal during its own lifetime. Biological evolution can account for the emergence of adapted instincts through the natural selection of organisms having useful behaviors. It is simply too slow, however, to generate new behaviors adapted to rapidly occurring changes in the environment. To keep pace with these environmental changes, organisms must be able to learn or acquire new behaviors during their lifetimes.[3]

The ability to modify behavior adaptively is most impressive among the more complex animals such as birds and mammals. In temperate forests these creatures must search for new sources of food as the seasons change and learn to avoid enemies and physical dangers. We have all seen how dogs, cats, and birds learn new behaviors that allow them to adapt better to the artificial world of their owners. The ability of humans to modify their behavior to acquire new job-related and leisure abilities, from computer programming and speaking foreign languages to bicycle riding and piano playing, is striking.

One of the major tasks that the field of psychology has set for itself is to discover the ways in which an animal's experiences lead to the acquisition of new behaviors. Countless worms, snails, rats, pigeons, monkeys, humans, and other animals have been subjected to a wide variety of experimental treatments to help us understand under what conditions and how this adaptive modification takes place. And although relatively little consensus exists today in the field of psychology concerning the mechanisms of learning, especially for the most intelligent animals such as dolphins, apes, and humans, it will be informative to consider a short history of psychological research and theory that have attempted to analyze learning into its basic components.

Pavlovian Conditioning

The first scientific attempts to study changes in behavior began during the 1890s at the Institute of Experimental Medicine in St. Petersburg. There, Ivan Pavlov (1849-1936) was director of what was at the time the world's best-equipped physiology laboratory, with facilities to support a large number of dogs. Working with dogs into which stomach tubes had been inserted to collect gastric juices, Pavlov and his assistants observed that the animals would secrete gastric juices not only when food was placed in their mouths but also at the mere sight of food and even at the sight of anyone who regularly fed them. This was followed by the observation by Stefan Wolfsohn, a student in Pavlov's laboratory, that a dog that had repeatedly had sand placed in its mouth (causing salivation to remove the sand) began to salivate at the mere sight of sand. Anton Snarsky, another of Pavlov's students, then demonstrated that a dog could learn to salivate in response to completely arbitrary stimuli. One of Snarsky's experiments involved coloring an acid black and allowing the dog to see it before introducing it into the dog's mouth. After a few such repetitions, the dog would salivate profusely at the sight of any black liquid in a jar.

In this way research began on the type of learning that is still referred to today as Pavlovian conditioning.[4] It is said to have occurred when a neutral stimulus, for example a sound or a light that at first elicits no strong behavioral response, is paired with an unconditional stimulus that reliably produces a specific response, that is, the unconditional response. In Snarsky's experiment, the black acid placed into the mouth would be considered the unconditional stimulus and the secretion of saliva the unconditional response. The original unconditional response to the unconditional stimulus was not considered to be the result of any previous learning experiences (hence the term unconditional), but due instead to an inherited, prewired reflex arc connecting the perception of the particular stimulus to a specific behavioral response. When a neutral stimulus, such as the sound of a bell, was repeatedly presented immediately before an unconditional stimulus, such as meat powder placed in the mouth, Pavlov's dogs soon learned to produce the response (in this case salivation) at the presentation of the previously neutral stimulus alone. In this way, the dogs would learn to salivate at the sound of a bell if the sound had regularly preceded the placing of food in their mouths.
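
The logic of this procedure can be made concrete with a small simulation. The Python sketch below is only an illustration: the function name, the learning-rate and threshold constants, and the rule that associative strength grows toward a maximum with each pairing are assumptions introduced here for clarity, not anything Pavlov himself proposed. It simply shows how repeated bell-food pairings, with no new behavior ever being invented, can bring a previously neutral stimulus past the point where it elicits the old response on its own.

# A minimal sketch of Pavlovian pairing treated as associative strengthening.
# All names and numbers below are illustrative assumptions, not Pavlov's model.

LEARNING_RATE = 0.3   # how much each bell-food pairing strengthens the association
THRESHOLD = 0.5       # strength at which the bell alone is taken to elicit salivation

def associative_strength(pairings):
    """Return the bell-salivation association after repeated pairings."""
    strength = 0.0
    for _ in range(pairings):
        # Each trial the association grows a fixed fraction of the way toward 1.0.
        strength += LEARNING_RATE * (1.0 - strength)
    return strength

for n in (0, 1, 3, 6, 10):
    s = associative_strength(n)
    outcome = "salivates to bell alone" if s >= THRESHOLD else "no conditional response yet"
    print(f"{n:2d} pairings: strength {s:.2f} -> {outcome}")

The negatively accelerated growth curve is just one plausible way to mimic the shape of a conditioning curve; the essential point of the sketch is that the environment supplies the pairing and the response is already there, which is why such learning is later described in this chapter as instruction rather than selection.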

It is interesting to note that although Snarsky attempted to explain this change in behavior by appealing to the dog's higher mental processes involving feelings, expectations, and thoughts, this was resisted by his professor.[5] Pavlov wished to remain "in the role of a pure physiologist, that is, an objective observer and experimenter."[6] This led him to reject any such mentalistic interpretations, preferring to consider the observed change in behavior as the result of the modification of a simple reflex. After Pavlov received a Nobel prize for his work on the physiology of the digestive system in 1904 (the first Russian and the first physiologist to be so honored), he shifted his attention away from digestion and focused his research efforts almost exclusively on the learning phenomenon discovered in his laboratory.

While Pavlov restricted his research to dogs, American psychologist John B. Watson (1878-1958) applied Pavlov's theory of conditioning to understanding the emotional development of human infants. Watson's observations and experiments led him to believe that during the first month of life babies showed only three emotions--fear, rage, and love--and that these emotions could be elicited only by specific unconditional stimuli, such as a loud sound to evoke a fear response. To demonstrate how an initially neutral stimulus could elicit emotional reactions, he performed what remains one of the best-known psychological experiments with an 11-month-old boy referred to ever since as Little Albert.[7]

Little Albert was presented with a number of live animals and showed no fear as he reached out to touch them. The conditioning procedure then began as a white rat was presented to him. Albert reached out to touch the rat, but as he did so, Watson produced a very loud sound by striking a steel bar behind Albert's head. This pairing of animal and sound was repeated once more. One week later when brought back to Watson's laboratory, Albert was more cautious toward the rat. After five more pairings of the rat with the loud sound, Albert would cry and attempt to move away when he saw the rat. Watson reports that Albert also showed some transfer of his fear to other furry objects, such as a rabbit, dog, and fur coat.

Watson used results such as these to argue that our emotions are largely habits acquired as a result of various experiences, and believed that his findings had important implications for psychological therapy. As he explained, "if we do possess, as is usually supposed, many hundreds of emotions, all of which are instinctively grounded, we might very well despair of attempting to regulate or control them and to eradicate wrong ones. But according to the view I have advanced it is due to environmental causes, that is, habit formation, that so many objects come to call out emotional reactions."[8]

Pavlovian conditioning and habit formation had a great impact on psychology, and continue to influence the practice of clinical psychology in treating individuals suffering from various psychological disorders. But since the theory deals only with the bonding of new stimuli to old responses, it cannot account for the development of new behaviors. For this a different theory of learning was required.

Operant Conditioning

It is a noteworthy coincidence that the same year (1898) in which Wolfsohn submitted his dissertation in St. Petersburg on the Pavlovian conditioning of the dog's salivary response, Edward Thorndike (1874-1949) deposited his dissertation at New York City's Columbia University on learning in cats and dogs. Like Pavlov, Thorndike was convinced by his experimental studies of animal behavior that all learning depended on establishing connections between environmental stimuli and specific behaviors. However, the learning task he investigated was very different from that studied in Pavlov's laboratory.

Thorndike was interested in the ability of animals to learn and remember new behaviors. To this end, he constructed a number of "puzzle boxes" into which he would place a hungry dog or cat. The animal could open the door of the puzzle box only by performing some special action such as turning a catch or pulling on a loop of string. Since a dish containing a small amount of food was placed in the animal's view just outside the box, the famished animal was quite eager to escape to obtain a morsel.

Thorndike found that a dog or cat made what appeared to be many random movements when first placed in a puzzle box, but would eventually stumble across the behavior that would allow it to escape. When placed repeatedly in the same box, the animal would generally take less and less time to escape until it was able to perform the specific action required to open the door with no hesitation. Thorndike was also surprised to discover that contrary to previous reports of animal learning, his dogs and cats were not able to learn by observing the successful actions of other dogs or cats, nor were they able to learn from being guided passively by Thorndike through the motions that would free them.

Based on this and other animal research, Thorndike boldly concluded that all learning in all animals (including humans) followed certain laws, the most important being his law of effect:

Of several responses to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to occur.[9]

He saw his animals acting with no knowledge whatsoever of the consequences of their actions, the sole purpose of the reward being to stamp in the connection between their perception of the situation and a behavioral response. So like both Pavlov and Watson, he came to the conclusion that the formation of connections between stimuli and responses was responsible for learning. But whereas Pavlovian conditioning deals with the formation of connections between new stimuli and old responses, Thorndike's animals demonstrated the gradual "wearing smooth of a path in the brain of connections between old stimuli and new responses."[10]

Thorndike was the first psychologist to propose that all new learned behavior results from the combination of random responses and reinforcement, but it was his fellow American B. F. Skinner (1904-1990) who did the most to popularize this type of learning. Skinner called it "operant conditioning" since it dealt with how animals could learn new ways of operating on their environment. In addition to his extensive, detailed research on animal learning, particularly with rats and pigeons, he wrote a number of popular books about behaviorism and its applications for solving social and educational problems.[11] For these reasons, he remains among the best-known psychologists of all time. His name is most firmly connected to the theory of radical behaviorism, a theoretical perspective that ignores the role of internal mental states, purposes, and thought processes in behavior, and instead sees all changes in learned behavior as resulting from contingencies of environmental reinforcement.

Limitations of Conditioning Theory

The combination of the conditioning of Pavlov and Watson with the operant conditioning of Thorndike and Skinner might appear to go a long way toward accounting for the adaptive changes in behavior occurring during the lifetime of an organism as a result of experience. Since the description of Watson's research with Little Albert might leave the impression that Pavlovian conditioning can lead only to maladapted behaviors, it should be pointed out how such learning can be adaptive. To return to Pavlov's setting, learning to salivate at the sight of food, or at the sound of a bell signaling its arrival, readies the mouth with the moisture and enzymes necessary for digestion. For a more striking example, consider the flight reaction of most wild animals such as deer to a loud, sudden sound. If this sound is repeatedly preceded by the sight of men holding rifles, the survival value of fleeing at the mere sight of hunters becomes obvious. Pavlovian conditioning can therefore be understood as a type of stimulus substitution, or the attaching of old meanings, such as danger, to new experiences, such as hunters. Previously the loud sound of a gunshot elicited an automatic fleeing response; now the mere sight of a hunter will do the same. Thus the development of conditional responses to previously neutral stimuli that regularly precede an unconditional stimulus would allow an animal to anticipate and thereby react more quickly to avoid danger, locate food, and win mates. One might invoke Pavlovian conditioning to explain how previously meaningless stimuli could come to acquire meaning for an individual, even in the case of learning a language or learning to read.

As already noted, however, Pavlovian conditioning cannot account for the emergence of new patterns of behavior, and it is here that the theory of operant conditioning is relevant. Operant conditioning can be seen as a way for animals to find and retain creative solutions to problems, as did Thorndike's dogs and cats in learning how to open the door of their puzzle boxes to escape, and Skinner's rats and pigeons as they discovered how to obtain food by pressing a lever, pecking at a key, or walking in a figure-eight pattern. The situations of these animals were contrived and controlled, but it is not hard to imagine natural settings in which such learning would be very valuable, as when an animal discovers a new source of food or finds a new location in which to find shelter. In natural settings, monkeys have learned to wash sand from their food, and birds have learned to get their breakfast by sipping from milk bottles left on doorsteps. Our own species would appear to be the most adept at this type of learning, as we constantly find new and creative ways of feeding, clothing, sheltering, and entertaining ourselves, and providing for our families.

Both Pavlovian and operant conditioning theories of learning gained great popularity during the first half of the twentieth century, particularly in the United States. Seeing behavior as responses to stimuli, and explaining learning as the formation of new stimulus-response connections, made up the core of a behaviorist movement that attempted to make psychology scientific by focusing on publicly observable stimuli and responses. The behaviorist approach reacted against and contrasted sharply with that of the so-called structuralists, who saw psychology as the study of consciousness and used research methods that relied on the subjective verbal reports of subjects. Behaviorists such as Watson, Thorndike, and Skinner began their research with animals and eventually extended their theories to include all human actions. In so doing, they intentionally disregarded any role that thought and other higher mental processes might have in the adaptive change of animal or human behavior.

This neglect of the role of cognitive processes in human learning led to a number of serious difficulties in the application of these theories to human behavior. In 1974 cognitive psychologist William Brewer published a review of a large number of studies designed to determine whether the change in behavior demonstrated by adult humans in conditioning experiments could be explained by unconscious, automatic stimulus-response connections or whether higher mental processes were necessary.[12]

For example, in several of the studies reviewed, a Pavlovian conditioning procedure was employed that paired an initially neutral light or sound stimulus with an electric shock. This pairing, as predicted, resulted in a response[13] to the light or sound (now a conditional stimulus) that was like the original unconditional response to the shock. However, the conditional response to the previously neutral stimulus often disappeared as soon as subjects were informed that the shock would no longer be administered, and how quickly it disappeared depended on whether a particular subject actually believed this information.

Other studies reviewed by Brewer showed that immediate Pavlovian conditioning often occurred when adults were informed of the purpose of the experiment, and that it did not occur when subjects were prevented from discovering the relationship between the conditional and unconditional stimuli. These and a large number of other studies led Brewer to conclude that "all the results of the traditional conditioning literature are due to the operation of higher mental processes, as assumed in cognitive theory, and that there is not and never has been any convincing evidence for unconscious, automatic mechanisms in the conditioning of adult human beings."[14]

Skinner's theory of operant conditioning has also been criticized by cognitive scientists. Perhaps the most important assessment was provided by American linguist Noam Chomsky who in 1959 reviewed Skinner's attempt to explain language behavior using operant conditioning theory.[15] Chomsky's review will be considered in chapters 9 and 11.

Let us now examine in a bit more detail how both Pavlovian and operant theories, although formulated to account for different types of learning, are both stimulus-response views of learning. By this is meant that a particular stimulus causes activity in some sensory system that is connected by the central nervous system (spinal cord and brain) to motor neurons, and causes a reaction in some muscles that results in an observable response. The view of learning described by Pavlovian conditioning can be seen as the development of new connections between new stimuli and old responses. In other words, if the organism is innately wired so that stimulus A (gunshot) is connected to response Z (fleeing), the pairing of a new, neutral stimulus B (hunters) with A (gunshot) will cause a new connection to form between stimulus B (hunters) and response Z (fleeing). Since the pairing of the unconditional and conditional stimuli (sight of hunters and sound of gunfire) is provided by the environment, and since no trial and error or selection of responses is apparent in such learning, Pavlovian conditioning seems to be a form of instruction by the environment. As British psychologist Henry Plotkin remarked:

When respondent [Pavlovian] behavior enters into a learning relationship it is explained by a process of "instruction." That is, some stimulus or stimulus configuration becomes associated with a reinforcing stimulus and comes to elicit (in some way cause) a response similar to that previously elicited by the reinforcing stimulus.[16]

The apparent instructionist nature of an animal's ability to make new, useful Pavlovian connections between stimuli and responses suggests that not all processes resulting in adapted complexity have to be selectionist in their operation. But although ostensibly instructionist in its basic operation, Pavlovian conditioning may nonetheless have certain important selectionist aspects. First, the ability to learn in this way must itself have been the product of biological evolution, and hence has its roots ultimately in selection, that is, the natural selection of organisms who could learn adaptively to associate new stimuli with old behaviors. Second, as we will see in chapter 9, the perception of stimuli, without which no learning could take place, may be best explained as a selectionist process. Third, it does not appear unreasonable to suspect that the synaptic changes in the nervous system that underlie Pavlovian conditioning may depend on a blind variation and selection of neurons, as discussed in chapter 5. And it should be kept in mind that the instructionist nature of Pavlovian conditioning imposes severe limits on what can be learned, in contrast to the more creative process of operant conditioning. To continue Plotkin's quotation:

What is learned [in Pavlovian conditioning] is an absolutely determined association between stimuli and reflexive responses--the learning does not, cannot, go beyond these explicit events, and the temporal parameters that relate them. This is what I mean by "instruction."[17]

In marked contrast to Pavlovian conditioning, operant conditioning involves a stimulus that initially does not elicit any particular response. Instead, the organism responds with a series of varied, random behaviors; or to use Skinner's term, the organism "emits" behaviors. Eventually, one of these creatively fashioned behaviors leads to a reward for the animal (for example, food, water, or reduced discomfort), and as a consequence, the behavior is more likely to occur in the same or similar situation. Thus stimulus A (the sounding of a tone after the rat is placed in a Skinner box) originally does not evoke any particular response, but rather the organism emits behaviors X (sniffing the floor), Y (scratching itself), and Z (pushing the lever) after A has sounded. If behavior Z (pushing the lever) is followed by a reward (the appearance of a food pellet), a connection between stimulus A (sound) and response Z (pushing the lever) will first be established and then strengthened by additional reinforcement so that response Z will be likely to occur in the future when stimulus A is again encountered. As described by Skinner, if response Z in the presence of stimulus A results in a reinforcing stimulus, then response Z will come under the control of stimulus A. The operant conditioning of Thorndike and Skinner can therefore be considered a stimulus-response theory in that stimuli in the environment come to control the responses of the organism. In contrast to Pavlovian conditioning, the environment now serves only to select the appropriate behavior that must first be emitted by the animal. When an appropriate response is made, the environment will provide reinforcement such as food, warmth, or a mate, which will then increase the probability of this same response occurring the next time similar circumstances (stimuli) are encountered. Through repeated, cumulative rounds of behavioral variation and selection, the organism may then refine this behavior so that it obtains the environmental reward more effectively or efficiently.
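
Read as an algorithm, this account can be caricatured in a few lines of code. The Python sketch below is a toy model, not Skinner's own formalism: the behavior names, the initial weights, the reward rule, and the size of the reinforcement increment are all assumptions made for illustration. A simulated rat emits one behavior at a time, chosen at random in proportion to its current weight; only lever pressing is followed by a food pellet, and each such reinforcement raises the weight of the behavior that was just emitted.

import random

# A toy model of operant conditioning: behaviors are emitted at random, and
# reinforcement raises the future probability of whichever behavior was emitted.
# Behavior names, weights, and the reward rule are illustrative assumptions.

random.seed(1)
weights = {"sniff floor": 1.0, "scratch": 1.0, "press lever": 1.0}

def emit(weights):
    """Emit one behavior at random, in proportion to its current weight."""
    behaviors = list(weights)
    return random.choices(behaviors, [weights[b] for b in behaviors])[0]

for trial in range(200):
    behavior = emit(weights)
    if behavior == "press lever":      # only this response produces a food pellet
        weights[behavior] += 0.5       # reinforcement strengthens the emitted behavior

total = sum(weights.values())
for b, w in weights.items():
    print(f"{b:12s} emitted with probability {w / total:.2f}")

Nothing in the loop specifies in advance which behavior is wanted; reinforcement only raises the future probability of whatever the organism happened to emit.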

So in contrast to the seemingly instructive role of the environment in Pavlovian conditioning, operant conditioning is clearly a selectionist theory of learning, since the environment does not initially determine the adapted response, but rather selects it, by contingencies of reinforcement, from the many varied responses tried out by the organism. Thorndike, seeing the new discovery of neurons and their interconnections as additional evidence consistent with his connection-based view of learning, wrote of a very Darwinian-sounding "struggle for existence among neurone connections."[18]

Thorndike made only passing references to a selectionist view of learning. But Skinner put considerable effort into promoting operant conditioning as accomplishing over the lifetime of the individual animal what biological evolution accomplishes over the long history of a species.[19] Natural selection accounts for the existence of remarkably complex and adapted forms of life without recourse to the intentions or purposes of a providential designer. Thus Skinner saw the selection of behavior by its consequences through operant conditioning as an explanation for the development of remarkably complex and adapted forms of behavior over the lifetime of the individual animal without recourse to the purposes or intentions of the animal. Indeed, it is apparent that toward the latter part of his long and productive career, Skinner's principal objective was to do for psychology what Darwin had done for biology:

Compared with the experimental analysis of behavior, developmental psychology stands in the position of evolutionary theory before Darwin. By the early nineteenth century it was well known that species had undergone progressive changes toward more adaptive forms. They were developing or maturing, and improved adaptation to the environment suggested a kind of purpose. The question was not whether evolutionary changes occurred but why. Both Lamarck and Buffon appealed to the purpose supposedly shown by the individual in adapting to his environment--a purpose somehow transmitted to the species. It remained for Darwin to discover the selective action of the environment, as it remains for us to supplement developmentalism in behavioral science with an analysis of the selective action of the environment.[20]

Skinner discounts here the Lamarckian view of biological evolution, and at first appears to do the same for a Lamarckian view of learning. However, he curiously abandons Darwin and flirts with Lamarck in his discussion of the learning of human culture:

Cultural evolution is Lamarckian in the sense that acquired practices are transmitted. To use a well-worn example, the giraffe does not stretch its neck to reach food which is otherwise out of reach and then pass on a longer neck to its offspring; instead, those giraffes in whom mutation has produced longer necks are more likely to reach available food and transmit the mutation. But a change of culture which develops a practice permitting it to use otherwise inaccessible sources of food can transmit that practice not only to new members but to contemporaries or surviving members of an earlier generation.[21]

This Lamarckian interpretation of cultural learning appears fundamentally inconsistent with Skinner's belief that learning always results from certain spontaneously emitted behaviors being selected by contingencies of reinforcement. From the perspective of operant conditioning, cultural practices cannot be simply transmitted from one person to another, although it may certainly appear that such transmission occurs when we see children adopt the linguistic and cultural practices of their social environment provided by parents and peers. Just as the pattern and color of the tree bark appear to instruct the pattern and color of the back of the well-camouflaged tree toad, it also appears as if behaviors can be transmitted from one generation or individual to another. But in Skinner's theory, no such instructionist transmission of behavior ever takes place. Certainly, the selection process involved in learning differs in important ways from the natural selection of biological evolution. As Skinner stated above, cultural practices can spread quickly throughout a community in a way that biological adaptations cannot. But this difference is not due to a Lamarckian transmission of behavior, but is rather a consequence of learning involving selective, psychological processes operating within organisms on a short time scale, and not an evolutionary selection process operating among organisms on a much longer time scale. (We will return to the problem of accounting for the adapted and adaptive nature of culture and cultural change in chapters 10 and 15.)

Stimulus-response theories of behavioral change appeared to hold great promise during the first half of this century as objective, scientific explanations of the adapted nature of behavior and the adaptive nature of learning. But as we now approach the century's end, these theories are much less popular, particularly as applied to human behavior. Part of the reason for their decline has to do with their disregard of cognitive processes in learning coupled with the continuing cognitive revolution in psychology that began in the 1970s. We will see in chapter 9 that much of the adaptive modification of behavior in humans (and in the more intelligent mammals such as apes) results not from the cumulative variation and selection by the environment of overt responses, as Skinner insisted, but rather from the cumulative variation and selection by the animal of mental representations that serve as substitutes or proxies for overt actions. In addition, by viewing all behavior as determined by the environment, and failing to take into account the purposeful, goal-directed aspect of behavior, the principles espoused by behaviorists such as Skinner have been found to be inadequate in explaining human behavior and unreliable for modifying it.[22] Skinner made an important contribution in emphasizing the cumulative variation and selection involved in learning new behaviors. But we will see in the next chapter that he was off the mark concerning both what is selected and what does the selecting as animals continually adapt their behavior to conditions imposed by an unpredictably changing and often uncooperative, even quite hostile, environment.

[1]Skinner (1974, p. 17).

[2]Skinner (1953, p. 430).

[3]See Plotkin (1994, pp. 144-152) for a discussion of how learning is necessary to solve the "uncertain futures problem" that biological evolution alone cannot solve.

[4]Pavlovian conditioning is also referred to by psychologists as classical or respondent conditioning.

[5]See Boakes (1984, p. 121).

[6]Pavlov (quoted in Boakes, 1984, p. 121).

[7]Watson & Rayner (1920).

[8]Watson (1917; quoted in Boakes, 1984, p. 220).

[9]Thorndike (1911; quoted in Boakes, 1984, p. 74).

[10]Thorndike (quoted in Boakes, 1984, p. 73).

[11]Skinner (1948, 1971, 1974).

[12]Brewer (1974).

[13]The response studied was the galvanic skin response, a change in electrical resistance between two points on a person's skin.

[14]Brewer (1974, p. 27). Brewer also questioned the belief that noncognitive, automatic, unconscious processes are involved in what appears to be the conditioning of children and nonhuman mammals.

[15]Skinner (1957).

[16]Plotkin (1987, p. 144).

[17]Plotkin (1987, p. 144).

[18]Thorndike (1911; quoted in Boakes, 1984, p. 75).

[19]See in particular Skinner (1966).

[20]Skinner (1974, p. 68).

[21]Skinner (1971, pp. 130-131).

[22]See Kohn (1993) for a review of how the application of the principles of operant conditioning as espoused by Skinner and other behaviorists has repeatedly failed in home, school, and work settings.