NEXT: Seeing the Unforeseen Effects of Technology


Why are some future effects of technology “unforeseen,” and is there anything that can be done to prevent this? 

A recent white paper from the consulting firm McKinsey explored the future of the “Bio Revolution.” Their analysis anticipated that the leading sectors in biotech over the next decade would include biomolecules, meaning the engineering of intracellular molecules; biosystems, or the engineering of cells, tissue and organs; biomachine interfaces, where the nervous systems of living organisms are connected to machines; and biocomputing, where cells are used to store information and thereby engage in computation. The white paper usefully imagines what kinds of businesses will emerge from these scientific and technological breakthroughs.  

The McKinsey report correctly considers the degree to which the technological futures examined are “scientifically conceivable.” The report states, “Over time, if the full potential is captured, 45 percent of the global disease burden could be addressed using science that is conceivable today.”

Careful futurists ask “under what conditions might the scenario not happen?” That is, what trends might interrupt the application of this science, even if it is conceivable? One way to miss “unforeseen effects” of technological development is to focus attention solely on “driving trends” to the exclusion of “blocking trends.”  

The report continues, “Funding basic science or helping promising applications accelerate through research pipelines could directly influence the number of commercial applications in the future.” This is another reasonable assumption, but one that bears some scrutiny.

Consider the state of science today: scientific evidence and the pronouncements of scientists are distrusted by wide swaths of the U.S. population, and science has been defunded by the present administration. Why assume that we will continue to fund basic science at historic rates? Why assume that global scientific leadership will continue to reside in the U.S.? Might China emerge as the new scientific superpower, with divergent interests as to the future direction scientific and technological developments might take?

It seems to me that part of any consideration of the future of technological development must include an assessment of what we might term the degree of “social and economic conceivability.” 

“About 70 percent of the total potential impact,” reports McKinsey, “could hinge on societal attitudes and the respective mechanisms employed to govern use, such as regulations and societal norms.” 

There is little doubt that “societal attitudes” toward technological adoption and reception are often ignored or ill-considered when assessing the future effects of technology. “Society” might reject a particular technology, or “society” might employ a technology in ways antithetical to its developers’ intentions. Unforeseen effects of technology often arise from a lack of consideration of societal attitudes in the early stages of research and development. Too many technologies are released into a societal context on the strength of unfounded or unexamined assumptions about how “society” and “societal attitudes” will shape their development.

“Unequal access could perpetuate socioeconomic disparity, with potentially regressive effects. Biological advances and their commercial applications may not be accessible to all in equal measure, thereby exacerbating socioeconomic disparity,” the report says.

Although it does raise some of these questions, the McKinsey report nevertheless states that, “Beyond the many risks are significant ethical questions that exceed the scope of this report.” Why aren’t these questions a central feature, even the central feature, of the report? Why are ethical questions treated as an afterthought, issues to be addressed at some later date and by unnamed others? 

“The challenge of cooperation and coordination of value systems across cultures and jurisdictions is no easy task, particularly when advances in these scientific domains could be seen as a unique competitive advantage for businesses and economies,” the report states.

If cooperation and coordination across value systems loom so large, with whom does this responsibility rest? Is it the scientists and technologists themselves? The lack of consideration of ethical and cultural questions at the fuzzy front end of scientific and technological research is one source of later, downstream, unforeseen effects of technology.  

“Global cooperation and coordination could help level the playing field but will be difficult to achieve when disparate value systems exist,” the report continues.

This is indeed the case. Politics is defined as the negotiation of competing interests, and there is clearly a politics of technology that must be considered with any new technological advance. In what forum will these competing interests be negotiated? In the laboratory? In the halls of Congress? Via the United Nations or some other supranational organization? As long as we assume that technological advance occurs outside social, cultural, economic or geopolitical contexts, then we surely will not anticipate unintended consequences. 

One of the McKinsey recommendations is that, “Civil society, governments, and policy makers need to inform themselves about biological advances and to provide thoughtful guidance.” 

What form does this guidance take? In what forums is this guidance provided? After a technology has been introduced into the market it may already be too late for such guidance. And even if there were such a mechanism, what kind of effective authority would it have? Let’s say that the cultural effects of some technology would be too disruptive, or would exacerbate xenophobia or would have disastrous environmental consequences. Would civil society have any recourse to halt the advance of the technology in order to address these concerns? 

“Several governments, including those of China, the United Kingdom, and the United States, published strategic plans and goals intended to catalyze innovation and capture its benefits,” the report says. But do these national plans imagine, anticipate and plan for unforeseen consequences, or do they focus only on the preferred futures, absent any consideration of the unintended effects? If you do not actively look for the unintended, you cannot claim to be surprised by the unforeseen.  

Imagining unintended consequences of technology demands interventions at the earliest stages of scientific and technological research, before products are unleashed upon the market. These interventions might take the form of questions, deep and penetrating questions that reveal the unexamined assumptions of researchers and developers, especially around ethical, societal, environmental and socioeconomic implications. Who would be permitted to ask these questions, and in what forum? Would it be the technologists themselves? Regulators, proactively imagining and anticipating unintended consequences? A think tank or other research organization with enough stature and influence in the public sphere that its questions and anticipations could slow down or redirect scientific and technological development, forestalling harmful unforeseen consequences and accelerating the beneficial ones? 

Perhaps it comes from futurists asking difficult but necessary questions.  

David Staley is Director of the Humanities Institute and a professor at The Ohio State University. He is host of the “Voices of Excellence” podcast and president of Columbus Futurists.


