The Risky Promises and Promising Risks of New Information Technologies for Education

NICHOLAS CONSTANTINE BURBULES

University of Illinois, Urbana/Champaign

THOMAS A. CALLISTER, JR.

Whitman College, Washington

Presented at the Education/Technology conference, Penn State University, Fall 1997

The theme of this conference is "Asking the right questions," and we want to argue in this essay that contemporary discussions about technology in education are often asking the wrong questions - or, to be a bit more precise, are asking important questions in the wrong ways, in unhelpful ways. We want to trace out some of the wrong ways in which technology issues are typically framed, explain why we think they are unhelpful, and propose a different way of thinking about technology issues, especially in the context of education.

Versions of the Technocratic Dream
The first way in which technology issues are often misframed can be called the "computer as panacea" perspective: new technologies carry inherent possibilities that can revolutionize education as we know it. If we simply unleash this potential, many educational problems will be solved. Computers can help alleviate overcrowded classrooms; computers can ease the burden of overworked teachers; or computers can make teachers unnecessary altogether. Such views are promoted enthusiastically by those who have a commercial stake in encouraging the sale and use of their hardware or software. The education market is so large that if even a few states or districts can be persuaded that a particular information technology will take care of their difficulties, millions of dollars can be made on the deal. But because so many problems of education in this society are the result of inadequate resources or the misallocation of resources, funneling more of the finite amount of funding available into one area of spending might actually exacerbate these problems, not remedy them.

Furthermore, the proclamation of panaceas is not simply a marketing ploy; it is a mantra long familiar to the educational scene. The history of education in the United States can be traced from technical innovation to innovation, from pedagogical gimmick to gimmick, from reform to reform, all in the search for the One Best Way of teaching, for the next new thing that will help educators cope with the fundamentally imperfect and indeterminate nature of the teaching process itself. Rather than acknowledge the inherent difficulty and imperfectability of the teaching-learning endeavor, rather than accept a sloppy pluralism that admits that different approaches work in different situations - and that no approach works perfectly all the time - educational theorists and policymakers seize upon one fashion after another and then try to find new arguments, or new mandates, that will promote widespread acceptance and conformity under the latest Revolution. The Information Technology Revolution is just the latest in this long line of utopian dreams, and there will always be a ready audience in education for such overpromising.

To be fair, many computer producers and advocates have actually been at the forefront of efforts to limit exaggerated claims for new information technologies; those most familiar with these machines know best what they are and are not capable of. Ironically, it is often educational leaders who have raised the fevered sense of urgency that everything has to change, right now, before schools fall behind some perceived "wave" of technological innovation.

One consequence of the search for panaceas is that when the Revolution does not come to pass, when the imperfections of each new thing become all too apparent, there is typically an equally exaggerated rejection of the reform, not because it is of no use but because it falls short of the hyperbole marshaled in its favor. As a result, educational change lurches from one new thing to another, with the shortest of memories about similar (or even identical) reforms tried in the past, failing to learn from experience and less able to integrate the partial benefits of multiple approaches, multiple technologies, into a pragmatic orientation that seeks workable approaches to different problems as they arise.

We are already seeing some of this backlash toward computers and related information technologies. Schools that spent millions of dollars on equipment and software in the first heady rush, determined not to fall behind in some perceived race with what other schools were doing, now find that much of this equipment sits unused and already obsolete. Schools that are rushing now to build fast network connections are finding that this raises unexpected new difficulties, as students actually take advantage of the access provided, but for purposes that authorities find troubling or inappropriate. The panacea approach reinforces a certain naiveté in educators, and in the public that evaluates education, by suggesting that spending money to acquire new technical resources solves more problems than it creates; it obscures the fact that the potential of information technologies increases the need for imagination, careful planning, and coping on the fly with unexpected new challenges.

The second type of technocratic dream, much more subtle and seductive than the first, is the "computer as tool" perspective. Advocates of this view rightly excoriate the "panacea" perspective, and argue that it expects far too much of new information technologies, which are, as they say, merely tools that can be used for good or bad purposes. Tools carry within them no guarantee of success or failure, of benefit or harm - it is all a matter of how wisely people use them.

Unfortunately, this technocratic dream simply errs in the opposite direction from the first. Where the panacea perspective places too much faith in the technology itself, the tool perspective places too much faith in people's abilities to exercise foresight and restraint in how new technologies are put to use; it ignores the possibilities of unintended consequences or the ways in which technologies bring with them inherent limits to how and for what purposes they can be used. A computer is not just an electronic typewriter; the World Wide Web is not just an on-line encyclopedia. Any tool changes the user, especially, in this instance, in the way in which tools shape the conception of the purposes to which they can be put. As the old joke goes, if you give a kid a hammer they'll see everything as needing hammering.

A slightly more sophisticated variant on this perspective is the "computer as non-neutral tool" perspective. Yes, advocates say, every technology carries within it certain tendencies of how it is likely to be used and shapes the conception of purposes to which it can be put. Users should be reflective and critical, therefore, about the unexpected consequences of using these technologies, and should be prepared for the possibility that the benefits gained from the technology's usefulness may be tempered by unforeseen problems and difficulties created by its use (pollution caused by automobiles, for example).

This third version of the technocratic dream is probably where most thoughtful observers are today in regard to new information technologies. It is a sensible, level-headed approach. It understands balancing costs and benefits, tradeoffs, the mix of good and bad that comes from attempts at major reform. It understands the language of unintended consequences and accepts the imperfections of human rationality. It does not see technology as a panacea, nor does it imagine that technology is just a tool. Yet, we want to argue, it is still a variant of the technocratic dream. We will provide three arguments for why this is so.

Beyond the Technocratic Dream
First, the technocratic mindset maintains a clear distinction between the conception of a tool and the aims it serves. The "computer as non-neutral tool" perspective represents a transitional step away from this, stressing that people do not simply use new tools to pursue old purposes more efficiently or effectively. New tools cause people to imagine new purposes that they had not even considered before. But the problem goes even further than this: It is not simply a matter of an unproblematic relation of means to ends (even new ends). People's conception of what constitutes "success" is changed in light of the means used to pursue it. The technocratic mindset takes the relation of means and ends itself as given. A crude version (crude but still widely held in the field of education) simply defines problems as matters of relative efficiency or effectiveness in this relation. A less crude version sees changing purposes, even multiple or conflicting purposes, but still sees the relation of means to ends as given. Thinking beyond technocracy means seeing the means/ends relation itself as an artifact of a particular cultural and historical formation. A more dialectical perspective would regard the interpenetration of people's conceptions of means and ends, each continually refigured in light of the other.1 It would regard new information technologies, for example, not simply as means for doing what people used to do, better and faster, and not even simply as innovations that now allow people to do things they had never imagined before, but as artifacts that reshape people's perceptions of themselves as agents, their relations to one another, their perceptions of time and speed, their expectations of predictability, and so forth - all dimensions of changing people's ways of thinking about means and ends, purposes and efficacy. The point here is to see the relation of means to ends not as a given, but as itself a particular way of thinking, one subject to criticism and change like any other. The pursuit of "success," defined as the effective and efficient attainment of specific goals, needs to be situated in the context of a less linear conception of actions and outcomes, intentions and effects.

A second aspect of moving beyond the technocratic mindset is to rethink the calculus of costs and benefits as a way of evaluating change. Once again, there are relatively crude and relatively subtle versions of cost/benefit analysis. Crude versions regard such decisions as basically a matter of drawing two columns and listing considerations pro and con; perhaps these individual factors need to be given weightings as to their importance in relation to one another. But then you simply add up each column and determine the result. A more subtle formulation of this mode of thinking would acknowledge that there are unintended consequences, to which values cannot be ascribed because they cannot be anticipated; it would acknowledge multiple considerations that may be difficult to isolate from one another or evaluate separately. Hence it might acknowledge that cost/benefit assessments are a matter of imperfect approximations, not a formal calculus.
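
To make the crude version of this calculus concrete, the sketch below (in Python) shows what it amounts to in practice. The factors, weights, and decision rule are entirely hypothetical, invented here only to illustrate the form of reasoning being described, not to represent any actual evaluation.

    # A minimal sketch of the "crude" cost/benefit calculus described above:
    # list the considerations in two columns, weight each by its supposed
    # importance, add up the columns, and read off a verdict.
    # All factors and weights are hypothetical, for illustration only.

    benefits = {"wider access to information": 0.8, "relief for overworked teachers": 0.5}
    costs = {"hardware and software expense": 0.9, "staff training time": 0.4}

    def column_total(column):
        # Sum the weighted considerations in one column.
        return sum(column.values())

    net = column_total(benefits) - column_total(costs)
    print("benefits:", column_total(benefits))
    print("costs:   ", column_total(costs))
    print("verdict: ", "adopt" if net > 0 else "reject")

    # Whatever cannot be listed, weighted, or foreseen simply drops out of
    # the calculation - which is precisely the limitation discussed above.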

But it is, again, a significant step beyond this mode of thinking to regard the entire "cost/benefit" framework as artificial and simplistic. It is a matter of seeing decisions as more than a matter of tradeoffs or pros and cons. It would stress the value-laden character of even the most rudimentary identification of pros and cons: pros and cons for whom, within what time frame, relative to what other goals or values? In addition, it would stress the hubris that often underlies attempts to foresee discrete effects of complex social decisions. It is not only the problem of unintended consequences, not only the problem of multiple, conflicting consequences. It is the problem of a web of contingencies, caught up in complex relations of interdeterminacy; it is the obstinacy of circumstance, refusing to give people what they want without also giving them what they do not want. We want to emphasize that nowhere is this clearer than in the case of new information technologies, which are continually confronting us with the inseparability of consequences, the desirable and the undesirable - and we will discuss a specific example of this phenomenon in a moment. But the final step beyond technocratic thinking and the cost/benefit mindset is perhaps the most challenging of all.

The assessment of means and ends, the weighing of costs and benefits, also assumes that people can distinguish and evaluate the "good" and "bad" aspects of different aims and consequences. The inseparability and interdependence of many consequences should begin to shake the faith that such determinations can be so readily made. But, again, it is more than this: the very same effects can be regarded as "good" or "bad," depending on other considerations, or when evaluated by different people, or when judged within alternative time frames. For example, the widespread use of antibiotics to eliminate infectious bacteria has, clearly, saved many millions of lives. This is a good thing. But it is also hastening the development of more and more virulent strains of bacteria, some of which are now resistant to all antibiotics. That is a very bad thing. Note that this is not a simple matter of intended and unintended consequences: the very same decisions that give rise to one set of effects give rise to the others. Nor is this a simple matter of weighing competing "short-term" benefits against potential "long-term" costs - for one thing, the "long-term" costs of such policies could amount to incalculable harm. The post-technocratic mode of thinking we are proposing here would stress the limits to human foresight and planning; the interdependency of multiple consequences; and the problematic attempt to sort out "good" from "bad" outcomes. Instead we want to stress the inseparability of good and bad in all complex human circumstances and the error of imagining that we can readily evaluate such matters individually and discretely. As Michel Foucault said, "I am not saying that everything is bad; I am saying that everything is dangerous." We must always keep in mind that new technologies are inherently dangerous, and not fool ourselves into imagining that we are their masters.

A Post-Technocratic View of Information Technologies
We mean these observations as comments on technological innovation and reform generally; but they apply to the field of new information technologies (IT) especially. Why? Because IT has shown itself to be particularly susceptible to overpromising and hyperbole, especially but not only in its purported impact upon educational change. Yet if our arguments about multiple effects, the indeterminacy and inseparability of consequences, and the difficulty of isolating "good" and "bad" outcomes carry any weight generally, they apply with special force to IT.

First, the field of IT is changing at an extremely rapid pace, a pace that appears only to be accelerating. These areas of innovation feed back on themselves in some unique ways: the increasing capacities of machines, programming languages, and other software hasten the development of still further innovations. The very vision of capabilities is continually re-invented, as new possibilities that were not imagined previously suddenly come within the reach of development, then soon within the scope of the taken-for-granted. This field of development is also socially, technologically, and commercially self-generating. For example, as operating systems and software become easier to use, and as more people then use them, this creates both a broader talent base and a widened scope of incentive to imagine and create new products. The problem field of information technology is, in a way, fundamentally about itself; in other words, the field of information technology is uniquely self-reflexive in the way in which new developments make possible more and more developments. This self-reflexive character makes it especially susceptible to defining its problems and goals hermetically, as technical objectives of value in and of themselves, apart from clear consequences for human society generally.

Second, and related to this point, because the object of information technologies is information, and the production, organization, and dissemination of information, there is a sense in which it is also continually re-inventing the perceptions of its use and purpose. All new technologies, as we discussed earlier, change people's understandings of what they can do, what they want to do, what they think they need to do. And when those technologies refer to the very raw material with which people imagine, plan, and evaluate change - that is, information - there arises an especially strong likelihood that what falls outside of the readily available raw material will fall outside the decision itself. Hence, as argued previously, a particular relation of means to ends needs to be situated in a larger constellation of what is known and what is not known; multiplied in this instance by a critical reflection on what the medium of information about what is known and not known can and cannot tell us.

Third, the various considerations about IT we have been discussing here press an even more radical conclusion about the indeterminacy of effects. In this instance, we would argue, the future lines of development are literally inconceivable - not only because of the rapidity and complexity of change in this field, not only because of the reflexive nature of innovation, but because new developments in IT are uniquely also new developments in our imaginings of capabilities and goals. Conventional descriptions of the magnitude of these changes (the computer as the new Gutenberg printing press, and so on) are merely analogies. What made the printing press a momentous innovation was not only that it created a mechanism for a new kind of textual delivery. It was that by doing so it fundamentally changed the conditions for its own accessibility and uses. It created a mechanism for a new kind of production, organization, and dissemination of information, and as such it created possibilities that were not, and could not have been, imagined previously. That is the scale of change represented by new information technologies, and it should buttress our sense of humility to realize that we cannot know all of the changes it portends, and that what we consider today "good" or "bad" prospects will certainly appear to others who have passed through those changes in a very different light. But we are not those others; or, at least, not yet.

For all of these reasons, we believe, reflections upon new information technologies must proceed with a profound modesty and caution. They are, literally, dangerous. Yet they are dangerous precisely because they hold such tremendous potential - a potential that goes beyond our capacities to imagine it fully. Hence we need to go beyond the simplistic categories in which much current assessment of IT has proceeded (especially, but not only, in the field of education). Douglas Kellner refers to the polarities of "technophobic" and "technophilic" perspectives.2 Jane Kenway, similarly, describes "utopian" and "dystopian" alternatives.3 Along with these commentators, and others, we want to press the need to go beyond such easy dichotomies, dichotomies that rely fundamentally on the illusion that we can easily separate and imagine "good" and "bad" effects in this field. We can't.

A great deal of rhetorical ink has been spilled excoriating the "fraudulent" promises of new information technologies: books with titles like Silicon Snake Oil or Data Smog have gained a wide readership and have fundamentally shaped the perception of IT among many groups, especially those with relatively little direct experience with these new technologies themselves. The fact that such accounts serve a popular taste for reports of scandal and fraud partly explains their appeal. Part of the explanation also lies in people's anxieties about changes they do not entirely understand. And, we have argued, a healthy measure of skepticism and caution is more than justified in the context of IT.

But we use the phrase "risky promises and promising risks" in our title because we believe that a more modulated position is necessary. For one thing, these changes are upon us and have a particular momentum of their own; one way or another, these are issues society will need to struggle with. Furthermore, we persist in believing that there are multiple potentials in these technologies, and it is yet to be determined what forms they will take and the purposes to which they will be put. Adopting a Luddite position and yielding these decisions to others merely guarantees that the skeptics will be ignored and the enthusiasts given free rein. In our title we suggest the tone or feel of what a post-technocratic stance might mean: not just weighing "risks" and "promises" against each other (another species of cost/benefit thinking), but seeing their fundamental inseparability. The dangers and possibilities of IT are not opposed to one another - they are aspects of one and the same capacities. We cannot simplistically choose one over the other.

In the final section of this essay, we want to discuss a major issue that has arisen in current debates around IT, and to show how the post-technocratic analysis we are proposing applies to thinking more carefully about the complex relations of cause and effect, of anticipated and unintended outcomes, of the difficulty of distinguishing "good" and "bad" effects where such matters are concerned.

A Case Study: The Dilemmas of Censorship
Even before the advent of new information technologies, such as computers, censorship in schools was on the rise. More and more groups, with a variety of political agendas, have been challenging standard curricula, textbooks, library materials, and so forth. These moves toward censorship have been most visible recently in the coordinated efforts of interest groups such as the so-called religious right, but they have often gained wider acceptance as well. As schools have made heavier use of information technologies, including connections to the Internet and CD-ROMs, there have been more and more calls to censor digital content even among groups who have not traditionally been pro-censorship; the usual rationale for such efforts is the fear that children will have ready access to pornographic or "indecent" materials.

To our way of thinking, the major current responses to this situation typify the technocratic mindset. The equation is typically framed in simple, straightforward terms. Access to the Internet is a benefit because it connects students to enormous amounts of information; but the cost is that some of that information is inappropriate at best, pornographic at worst. How can we have the good without the bad? What is the appropriate balance between the benefits of free access to information and the costs of potential harm to children? Having defined the problem in terms of benefits and costs, the next step is easy: eliminate or minimize the objectionable material without abandoning what is beneficial. The only question is a technical one: How?

One approach is to attack the problem on the side of supply. This was the approach, for example, of the Communications Decency Act of 1996, recently ruled unconstitutional. With that ruling, censors have been looking for new ways to restrict access to objectionable materials on the Internet. But there are good reasons to doubt whether the problem can ever be solved on the supply side, given the vast, decentralized nature of the Internet, the speed with which new provider sites can be established, the internationalization of content, which places many suppliers outside the grasp of national laws or regulations, and so forth. The predominant response among censorship advocates has been merely to rewrite the CDA to see if it can withstand constitutional review. (Give a kid a hammer....)

The most highly touted alternative method of limiting access has been through the use of filtering software. This approach seeks to address the problem on the side of demand, through software that blocks searching for certain terms or visiting known sites that contain designated kinds of "objectionable" material. However, early returns with this approach have revealed myriad cases of filters knocking out too much (for example, all pages mentioning "breasts," so that people cannot access pages with information on breast cancer detection and treatment) or of applying filtering criteria that have other unintended effects (for example, the access provider in Vietnam whose filter picked up Vietnamese tonal marks rendered as the letters "sex," knocking out 85% of all messages and overloading the software). The predominant response has been that such software is in its infancy, and can be expected to improve with further development.
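
To illustrate how easily such filtering misfires, here is a deliberately naive sketch (in Python) of keyword blocking. The blocklist and page titles are hypothetical and do not represent any actual filtering product; the point is only that string matching cannot read context.

    # A minimal sketch of demand-side keyword filtering and its overblocking.
    # The blocklist and pages below are hypothetical, for illustration only.

    BLOCKED_TERMS = ["breast", "sex"]

    def is_blocked(text):
        # Naive substring matching, roughly the strategy described above.
        lowered = text.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    pages = [
        "Breast cancer detection and treatment resources",
        "Middlesex County public library hours",
        "Equity issues in women's collegiate sports",
    ]

    for page in pages:
        print("BLOCKED" if is_blocked(page) else "allowed", "-", page)

    # The first two pages are blocked even though neither is pornographic:
    # one is health information, the other merely contains "sex" inside a
    # place name - the same kind of accident as the Vietnamese tonal marks.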

The problem with both the supply and demand approaches is that they abstract the technical problems from a larger social context; they analyze the problem as one of filtering out the "bad" to protect the "good"; and they both see the failure of technical solutions as simply requiring more and better technical solutions. Sometimes these results are merely laughable. On a deeper level, however, we want to argue that such approaches to censoring access to information reveal a deeply anti-educational bias and have the potential to cause real harm.

First of all, while the desire to protect young children from accidentally encountering crude or even dangerous material on the Internet is entirely understandable, as is the desire more generally of people not to have to deal personally with upsetting or offensive content, there is no general shortcut to solving these problems. The risk of exposure to unexpected and unwanted material is inherent to the structure of the Internet itself, and while there are specific things that informed users can do to limit such nuisances, they are as much a condition of this public space as foul graffiti or overheard profanity are of any other public space. For very young children it is possible to erect fairly reliable walls to limit their access to broad categories of material, but this is mainly due to the limits of children's abilities to exploit the technological resources. There is no way - no way - to prevent motivated, sophisticated adolescents or teenagers from accessing such materials if they are determined to do so, especially when (as is often the case) they are pooling their skills and information and sharing what they find. This means that the only intervention that can have any significant impact on this issue is an educational approach: parents or teachers talking with young people about their curiosities, interests, peer relations, sexual feelings, and how they act them out. Neither technology nor censorship laws will solve this problem.

Second, as discussed earlier, the attempt to neatly demarcate "good" and "bad" (or "useful" and "indecent" material) is fraught with difficulties. Part of the problem is with the vague and subjective connotations of terms like "indecent" - which in fact was used in the CDA in place of terms like "obscenity," which has a better-defined set of legal precedents for interpretation and application, precisely in order to broaden the range of what could be limited under the scope of the law. Such language inevitably brings in substantive social and political assumptions that are not merely technical in nature.

But this problem goes even further than simply calling for better-defined criteria. The Internet is a hypertextual, fundamentally relational information environment; its defining feature is the link, the association of material and the opening of multiple pathways of getting from point to point within the information space. Both supply-side restrictions and filtering software will inevitably block access to unexpected sources of information because of that information's tangential relationship to subjects that someone believes to be objectionable. For example, attempts to prevent access to material about sex may inadvertently limit information about gender issues in general, about health care issues, or about equity issues in women's sports.

An additional danger, beyond the inability to obtain information, is that users who are denied access to information will probably never know it. For example, a student writing a term paper searches for references to "abortion" and finds none. What does this mean to the student? Perhaps she will know the information has been censored; or perhaps she will be left with the belief that the subject is not sufficiently important to warrant an entry. Or perhaps she will only find materials discussing the issue from a particular moral or political perspective. The user is in a perpetual quandary of not knowing whether some information does not exist, is not important, or has been censored. You can't see what you can't see.

On a conventional level, this is not entirely different from other forms of censorship in schools and libraries - the book not assigned, the empty space on the shelf, or the space that isn't conspicuously "empty" because there was nothing allowed there in the first place. However, in cyberspace censorship is much more difficult to discern. Given the nature of how information is stored, searched, and retrieved with hypertechnologies, the practice of censoring information tears holes in the fabric of knowledge and understanding. Knowledge, creativity, critical thinking, wisdom - these are not about the accumulation of "facts," they are about the relations among ideas, information, ethics, and culture. As one searches using hypertechnologies, points of information are not so much destinations as they are nodes - points that are linked to other points of information. Navigation proceeds from point to point based on the idiosyncratic interests or needs of the user, and suggests, or creates, new relations of significance. It is not that someone goes to "abortion" so much as they move through "abortion" on their way to somewhere else - somewhere where they (and perhaps only they) see an important connection. If we close the door marked abortion, we don't just close a door, we close off an entire hallway of possibilities.
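
The point about nodes and pathways can be shown schematically. In the sketch below (Python), topics form a small hypothetical graph of links - ours, not any actual index; removing a single node also cuts off routes to topics that were never themselves the target of censorship.

    # A minimal sketch of information as linked nodes, and of how censoring
    # one node severs pathways to others. The topic graph is hypothetical.

    from collections import deque

    links = {
        "women's health": ["abortion", "breast cancer"],
        "abortion": ["medical ethics", "constitutional law"],
        "breast cancer": ["treatment options"],
        "medical ethics": [],
        "constitutional law": [],
        "treatment options": [],
    }

    def reachable(start, graph, removed=frozenset()):
        # Breadth-first search for every topic still reachable from `start`.
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, []):
                if nxt not in removed and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    print(sorted(reachable("women's health", links)))
    print(sorted(reachable("women's health", links, removed={"abortion"})))

    # With "abortion" censored, "medical ethics" and "constitutional law"
    # also disappear from this pathway: closing one door closes a hallway.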

Such censorship is antithetical to the sorts of educational and democratic ideals society holds for schools. How can students learn to discern, discriminate, synthesize, or evaluate? How can they learn to make good choices, social and intellectual, if the choices are made for them by restricting what information they can and cannot see? Censorship in a technical environment doesn't just remove information; it unpredictably prevents access to other information. Moreover, and at a deeper level, the development of skills of discernment, judgment, criticality, and so forth requires that one encounter and deal with material that is unpleasant, misleading, or offensive. It is through engagement with such material that one can become more resistant to it: by judging it unworthy or immoral. Plato, in the Republic, famously argued that if certain topics or points of view were simply never presented to young learners they would never arise as issues. Not only is this demonstrably false, it is patently self-defeating; for the refusal to expose learners to "infectious" material simply guarantees that they never develop the "antibodies" against it. In an open society with widespread media, enormous diversity of viewpoints in public spaces, and the myriad content of the Internet at its disposal, any attempt to deal with these issues solely by censorship strategies must fail.

Attempts to restrict suppliers of "indecent" materials, like the application of filtering software to limit demand, will create many more problems than they will ever solve, because they are the wrong kinds of responses to the problems they attempt to address. These problems, in fact, can't be "solved." The blanket approach of trying to weed out the bad while retaining the good cannot take into account the complexity of learning and knowledge, or the diversity of learners and their differing needs. This, then, is the educational challenge: helping students learn to operate in an environment that is inherently "dangerous," to deal with what may be unexpected or unpleasant, to make critical judgments about what they find. Such a task cannot be framed as simply sorting out the "good" from the "bad," and excluding all that is "bad." Educationally, we need some of the "bad" in order to create some of the "good."

The evaluation of information technologies continually presents society with issues that cannot be analyzed in terms of simple dichotomies of good and bad. Such thinking promotes technocratic solutions that preclude the important educational questions that need to be asked when discussing the retrieval of information in hypertextual environments. What children and students will do, see, and collect in cyberspace raises numerous educational issues beyond the capacities of the technology itself. Rather than discussing how to keep information from young people, educators need to focus their attention on a host of questions, including society's attitude toward sexual matters, the issue of what constitutes appropriate and inappropriate materials in general, how to help students become more responsible and learn to exercise critical judgment, and young people's rights to access information whether adults want them to or not. It is not difficult to understand why most educators have shied away from such controversies; but in shying away from them, they have shied away from the deeper educational issues at stake.

1 Bertram Bruce calls this the "transactional" perspective on new technologies: Bertram C. Bruce, "Literacy Technologies: What Stance Should We Take?" Literacy Research Vol. 29 No. 2 (1997): 289-309.

2 Douglas Kellner, "Media Literacies and Critical Pedagogy in a Multicultural Society," Educational Theory (forthcoming).

3 Jane Kenway, "The Information Superhighway and Postmodernity: The Social Promise and the Social Price," Comparative Education Vol. 32 No. 2 (1996): 217-231.

