
    NEXT: Mechanized Warfare

    The videos are certainly impressive, and just a little creepy. If you go on YouTube and search “Boston Dynamics,” you can see some of their work. Boston Dynamics is developing robots—both four-legged and bipedal—that are capable of impressive feats of locomotion and balance. One example is a four-legged robot that can maneuver over a variety of terrains; even after an engineer kicks it onto its side, the robot is able to regain its balance. The bipedal robots walk like humans and are capable of parkour-like leaps across asymmetrical obstacles.

    When I first started watching these videos a couple of years ago, I surmised that one of the earliest applications would be military. And indeed, it seems the military is now imagining robots as soldiers.

    The Robotics Collaborative Technology Alliance has been working for years now on developing military robots that, in contrast to the robots currently used in warfare, are able to “follow orders.” The Alliance is led by the Army Research Laboratory (ARL) and includes researchers from MIT and Carnegie Mellon—who’ve long been at the forefront of robotics research in universities—along with NASA’s Jet Propulsion Laboratory and—you guessed it—Boston Dynamics.

    This initiative aims to combine robotics with artificial intelligence, making the robots more adaptable in the battlespace. Once developed, these robots might play a role similar to military dogs—like the one that cornered Islamic State leader Abu Bakr al-Baghdadi—maneuvering ahead of human troops to look around corners in urban warfare or to sniff out IEDs. If successful, these robots might mean less need for human soldiers on the ground.

    Political leaders are usually circumspect about escalating conflicts by sending in ground forces. “Boots on the ground” often means a high likelihood of casualties, of soldiers returning to Dover Air Force Base—and the optics that represents—to say nothing of the tragedy of those veterans who return physically and psychologically maimed by warfare. These images, and the consequences of sending in “boots on the ground,” often act as a brake on military action. Politicians might refrain from escalating a conflict out of concern for American lives.

    The use of drones in warfare escalated during the Obama administration. Drones—operated by soldiers at some distance from the battlespace—are viewed as a “cleaner” way to fight a war, cleaner in the sense that such mechanized warfare does not place American lives in jeopardy. But the U.S. is far from the only power employing drones in warfare today.

    “Three decades ago, drones were available to only the most technologically developed state military organizations. Today they’re everywhere, being used by weaker states and small military forces, as well as many non-state actors, including Islamic State and al-Qaeda.”

    The Drone Wars Are Here Already – Bloomberg Businessweek

    Given the escalating use of drones, and now with the development of robotic soldiers, does this mean that future war will be automated, devoid of human soldiers?  

    “Drones will definitely be taking more important roles in the next few years, but they aren’t about to replace soldiers,” according to Ben-Gurion University’s Ben Nassi.

    Philosopher Daniel Statman from the University of Haifa asserts that “The fantasy that many of us have is to have units of robots going in, and that will take a lot of time… Let’s say that in 30 years, we will see more and more automated tools on the battlefield but still see a lot of soldiers.”

    Eventually, and especially as artificial intelligence improves, we will see, as stated in the Bloomberg Businessweek article, “robots in the air and on the ground and underwater, and all of them will be based on artificial intelligence and be completely autonomous and very well programmed and know—so to speak—the rules of war, the general conventions on how to distinguish between combatants and noncombatants. We will give them everything we know, and how to identify legitimate military targets, and they will do a much better job than human beings.”

    Military leaders often say that an air war is rarely sufficient to obtain victory. Eventually ground forces must intervene.  

    Robots as ground forces will make it easier for politicians to escalate conflict into military action. Losing a robot on the battlefield would not carry the emotional and ethical consequences of losing an American service member. Sending a robotic soldier into harm’s way would be a capital expenditure, an economic calculation that many politicians might make with little hesitation. The brake that the loss of American life represents would be removed. One result of this could very well be more warfare, more endless war.  

    It is possible that a new ethics of warfare would emerge: the potential loss of capital, rather than of human life, would become the new ethical brake on military action. Politicians might instead weigh a technological cost-benefit analysis of conflict. At the same time, they will no doubt debate whether the resources spent on mechanized warfare might be better spent on domestic programs.

    Recently, the Defense Innovation Board—an advisory board to the Pentagon made up of industry leaders—published a set of guidelines for the application of military AI. These guidelines are intended to be forward-looking and anticipatory. Like any good futurist, the Innovation Board is looking to anticipate unforeseen consequences. The Board suggested five principles for guiding military AI:

    1) Humans should remain responsible for the development, use, and outcomes of the department’s AI systems. That is, there should be a “human in the loop” when deploying lethal force.

    2) AI systems should be tested for reliability.

    3) Experts building AI systems should understand and document what they’ve made.

    4) The Defense Department should take steps to avoid bias in AI systems that could inadvertently harm people.

    5) Military AI should be able to detect unintended harm and automatically disengage if it occurs, or allow deactivation by a human.

    What is striking to me is that these guidelines say nothing about whether it is ethical to employ military robotics and AI at all, or about principles that might restrain their use in combat. That robots will be used to fight future wars now seems to be a given.

    The above scenario assumes an asymmetry between a mechanized army and a traditional human army. But as we are seeing with drones, we should expect that all sides of future conflicts will have mechanized ground troops. Will this mean that warfare will be largely devoid of human combatants? That warfare will be a violent clash of machines, wars of technological attrition? As is frequently the case in modern warfare, humans would become vulnerable bystanders in a battlespace dominated by machines fighting other machines.

    David Staley is Director of the Humanities Institute and a professor at The Ohio State University. He is host of the “Voices of Excellence” podcast and host of CreativeMornings Columbus.
