NEXT: Tech That Generates Fake News

David Staley

The new U.S. ambassador to the Netherlands, Peter Hoekstra, was recently asked by Dutch interviewers to clarify statements he made in 2015 claiming that Dutch cities were in chaos because of “the Islamic movement.”

“There are cars being burned. There are politicians that are being burned…And yes, there are no-go zones in the Netherlands,” he told a conservative group. (None of this is true, of course.)

Hoekstra has attempted to deflect the controversy, largely by ignoring such questions, and has made no effort to retract or correct his statements. Indeed, he ultimately denied having made them at all, calling the reports that he had “fake news.”

As you might guess, YouTube and other outlets very quickly shared video footage of Hoekstra making precisely those statements. In an era of the rapid, viral spread of media, it should be fairly easy to discredit claims of “fake news” by hapless public figures.

If some new technologies are permitted to develop, however, it will very soon become next to impossible to use video to verify or refute such claims. Baidu, China’s answer to Google, recently announced a new app that can mimic any voice in a matter of minutes. Another company, Lyrebird, has developed a tool that can do the same, and for demonstration purposes recreated the voices of Trump and Obama. The artificial intelligence behind these applications is reaching the stage where it can quickly and convincingly impersonate anyone.

Face2Face is an application that alters video footage. Imagine a video headshot of Urban Meyer taken from a press conference. The application employs a depth-sensing camera: I would train the camera on my own face and start making funny faces or twisting my mouth in odd ways.

The application maps my facial distortions onto Urban Meyer’s face to produce a composite in which Meyer appears to make the same strange contortions. I could, of course, have the depth-sensing camera record me moving my lips and saying, “Michigan is a far superior team to Ohio State. I don’t see how we will be able to win this season,” and Meyer could convincingly be made to mouth those words. As a public figure, Urban Meyer has a voice that is easy to capture; with a minute or two of recordings, I could use Lyrebird or some other technology to add matching audio and create a video of Urban Meyer conceding victory to Michigan.

Now imagine that video going viral. On the one hand, Meyer could be made to look ridiculous. He could, like Peter Hoekstra, deny ever having made that statement at all. Then, of course, people would point out the “obvious” video evidence of him making this statement. Meyer’s protests would look pathetic in the face of such “evidence.”

Our society would quickly descend, however, into one in which no one could trust “evidence,” video or otherwise. Once consumer-grade versions of facial and voice mimicry become widely available, there would be little reason to believe that any video or audio recording is truthful. Of course, we might just as likely choose to believe any fake video that confirms our beliefs and prejudices.

How would we discern falsehood from truth when AI-enhanced technologies can make such convincing fakes? How would we know that the video we are watching of the president isn’t some crafty manipulation? We have already seen Hollywood insert virtual actors into movies: Peter Cushing, who died more than 20 years earlier, was digitally recreated in Rogue One: A Star Wars Story. Indeed, we should expect to see many more deceased actors resurrected on film.

Using Face2Face or similar technologies, someone like Stephen Colbert could mock the President in ways that don’t require creating a “Cartoon President.” We might have some assurance that, because he is a satirist, the video we are seeing is probably fake, and that that is part of the humor. Employing facial and voice mimicry satirically will require viewers educated and discerning enough to read the larger contextual clues around such video recreations. Given how susceptible people are to believing the Photoshopped forgeries that spread virally today, I am not optimistic this will happen, and decontextualized doctored videos will proliferate.

Early press reports extolled the virtues of Lyrebird’s application, noting that a user could alter the voice of the narrator of an audiobook to one of their favorite actors, or customize the voice of a personal assistant. That is, rather than Alexa’s voice or Siri’s voice, you could create a voice of your choosing.

There is a certain “creep factor” here, of course: that recreated personal assistant’s voice could be one you’ve captured from the person you are stalking. After a minute of recording their voice, you could be regularly conversing with and hearing the voice of your “unrequited love.” I shudder at this new kind of invasion of privacy. Actors are seeking intellectual property protections of their voices and likenesses: will ordinary citizens similarly seek such protections? My voice and likeness are my property after all…right?

I bring up the creep factor here to make a more important point about facial/voice mimicry: what problem are these technologies solving? Ideally, we develop new technologies to solve a problem, understanding of course that new problems are a possible side-effect of any technological development.

It is a common refrain that “technologies are just neutral tools; it is the use people put them to that is either good or bad.” That is true, I think, for some technologies, but it is certainly not true of all. I instruct my history students that the guillotine was developed for one purpose: to kill many people more quickly and efficiently than an executioner’s sword. To say that the guillotine was “just a tool” or was “neutral” until it fell into the right or wrong hands is laughable: no one used that technology to chop lettuce or lumber. The tool was not neutral; it was designed to kill people.

What problem do technologies like Lyrebird and its ilk solve? Is it really a problem that I can’t personalize my virtual assistant? Not hearing Samuel L. Jackson read my audiobook is a problem? I would contend that Lyrebird and Face2Face have been designed to create falsehoods.

David Staley is interim director of the Humanities Institute and a professor at The Ohio State University. He is president of Columbus Futurists and host of CreativeMornings Columbus.

The next Columbus Futurists monthly forum will be Thursday, March 22 at 7:30 p.m. at Ohio State’s Thompson Library, Room 165. The topic for the evening will be “Continuing the Sustainability Conversation: Discounting the Future.”

The next CreativeMornings Columbus will be Friday, March 16 at 8:30 a.m. at Pat and Gracie’s. Sandra Lopez will speak on the theme “Courage.”
