
    NEXT: The Future of Robocalls

I was on the phone the other day with the cable company, trying to deal with a billing issue. After working through several menu prompts, I was finally connected to a human being somewhere who answered my questions to my satisfaction.


As I think about a potential future where more and more jobs are automated, I consider this call-center worker. The menu steps I followed to reach her were all automated. (“OK, tell us what the problem is. You can say ‘billing…’”) I wonder, however, how much autonomy the human call-center worker actually has. That is, her responses to my queries may be so “programmed” that she has little leeway in how she addresses my problem.

We are just a few steps away from a world where that final voice will be a form of artificial intelligence, perhaps masquerading as an actual human being. In the same way I converse with Siri and Alexa today, such disembodied voices will be a regular and seamless feature of many of the systems that guide our lives.

In the future, the most common way we will interact with artificial intelligence will be through disembodied voices like these. That is, while there will certainly be embodied AI (in a form like Sophia the robot), most AI will be embedded in systems, dispersed across networks. Indeed, we will reach a stage when we will be unable to determine whether the voice we are communicating with is human or AI. And in some circumstances, it won’t really matter to us. When we want a cable bill settled, for example, we will only be seeking an outcome, not a conversation.

Into what other kinds of systems will AI seep? Bureaucracy is a kind of system. In a bureaucracy, humans carry out the functions assigned to them. In carrying out these assigned roles, how much creativity, judgment and intellectual independence is such a functionary actually permitted? Many of the actions of bureaucracies are carried out anonymously and automatically. While some functionaries will certainly retain that kind of flexibility, the actions of many bureaucracies will very likely be carried out by autonomous agents rather than human beings.

Christian Brose imagines that future militaries will be “distinguished by the quality of their software, especially their artificial intelligence.” The military of the (near) future will consist of “swarms of intelligent machines that distribute sensing, movement, shooting, and communications away from vulnerable single points of failure and out to the edges of vast, dispersed networks.” Humans will continue to be a crucial part of these systems, but fewer human beings will be required, and those who remain will have to learn to work and communicate with the autonomous intelligence that courses through those systems. “These systems,” Brose concludes, will be “unmanned and autonomous to the extent that is ethically acceptable.”

Think of the way in which the stock market has already been infiltrated by automated systems that carry out trades without human intervention. Anytime a “flash crash” occurs, when the market drops 1,000 points in a matter of minutes before rebounding almost as quickly, analysts blame automated trading. What usually happens in these cases is that computers may trigger the crash, but panic from human investors exacerbates and accelerates the decline.

I don’t think I can say with certainty that these trading algorithms are intelligent or have anything like a will or intent. But imagine what will happen when artificially intelligent agents are making trades on the stock market. Will they make riskier decisions than human stockbrokers? How will human traders respond to and interact with the actions of these autonomously intelligent traders?

In the 1980s, French theorists developed “actor-network theory,” which holds that human societies are enmeshed in networks that include the natural world. Controversially, the theory holds that non-human “actors” exhibit as much agency in these networks as human beings. So, for example, Bruno Latour wrote a book about the Aramis mass transit system.

Aramis, the transit technology itself, was part of the network along with the technicians who ran and maintained the system and the politicians who funded it. Aramis was as much an “actor” as the humans were. Latour and other actor-network theorists were not suggesting that the non-human components of the system were in any way intelligent. But when artificial intelligence is added to myriad systems, these non-human actors, in addition to having agency, will also exhibit intelligence. They might pursue their own intentions and their own interests.

Our everyday experience of artificial intelligence will likely go unnoticed. Because it will be embedded in systems, most of its decisions will occur without our being aware of them (except when they fail). When we do encounter artificial intelligence directly, we will likely be uncertain whether the voice on the robocall is human or algorithm. And we will likely grow indifferent to the distinction.

    David Staley is Director of the Humanities Institute and a professor at The Ohio State University. He is host of the “Voices of Excellence” podcast and host of CreativeMornings Columbus.


