Next: Why I am a Futurist
I was reading a book the other day by a pioneer in artificial intelligence, and came upon a passage that just floored me. He was describing the potential risks of developing artificial intelligence, and the concerns some have expressed about the threat artificial intelligence poses to humanity.
“This is not the first time an emergent technology has seemed to pose an existential threat,” he writes.
“The invention, development, and stockpiling of nuclear weapons threatened to blow up the world, but somehow we have managed to keep that from happening, at least until now. When recombinant DNA technology first appeared, there was fear that deadly engineered organisms would be set loose to cause untold suffering and death across the globe. Genetic engineering is now a mature technology, and so far we have managed to survive its creations.”
“Somehow we have managed to keep that from happening.” I found this attitude stunning and irresponsible. “Yes, yes, new technologies are threatening,” he seemed to be saying, “but somebody (else) always finds a way to deal with any problems.”
This breezy, casual attitude belies the very hard work of establishing systems, regulations, laws, norms, and an ethical infrastructure to keep the worst effects of technological development in check. The writer, a technologist, would simply and blithely pass this responsibility off to others.
An important part of this ethical infrastructure is the imagination necessary to foresee the challenges, not just the benefits, of technological change. Some thoughtful scientists and technologists have engaged in such imaginative activity, as did Albert Einstein and Leo Szilard at the dawn of the Atomic Age. Jennifer Doudna is at once a leader in the development of the CRISPR gene editing technique and a careful critic of the problems such a technology could cause.
“What’s hard to imagine are the uses to which a new invention will be put, and inventors are no better than anyone else at predicting what those uses will be,” the technologist continues. “There is a lot of room between utopian and doomsday scenarios that are being predicted for deep learning and AI, but even the most imaginative science fiction writers are unlikely to guess what their ultimate impact will be.”
Most of the time, when we say we cannot imagine the future, 1) we aren’t trying hard enough, and 2) we press on anyway, throwing up our hands at our imaginative impotence.
Yes, of course, prediction is hard, perhaps even impossible. I tell every audience that I speak to: Anyone who says they can predict the future is lying to you. When we say we want to know the future, what we are usually saying is that we want to know what some complex system is going to look like at some point in the future. By their nature, complex systems are inherently unpredictable. But that does not absolve us of our responsibility to imagine the many possible behaviors of such systems. We may not be able to predict, but we can — and indeed must — expansively imagine the future.
We must accept the responsibility to imagine. As the above quotation from the technologist suggests, imagination is undervalued in our society. We associate imagination with the activities of children, or mere poets and dreamers. Imagination is not something that serious people engage in.
We are sometimes admonished, “Don’t let your imagination get away from you,” scolded as if for childlike behavior. Nevertheless, I believe there is an imperative, a responsibility, to engage in imaginative thought, if for no other reason than to anticipate and forestall the problems new technologies might pose.
As a futurist, I am in the imagination business. So, when I write a Next column that looks at “The Future of Mobility” or “Tech That Generates Fake News,” or when I ask “Will Artificial Intelligence Have Civil Rights?,” I am attempting to use my imagination to identify possibilities. Unforeseen effects of technological change are often the result of a willfully stunted imagination. These “Next” essays are not predictions; they are imaginative projections, considerations of the implications and consequences before we act.
The futurist has an ethical responsibility to engage in the difficult work of imagining the effects of technological, social, cultural, economic, and political change.
David Staley is Director of the Humanities Institute and a professor at The Ohio State University. He is host of the “Voices of Excellence” podcast and host of CreativeMornings Columbus.