‘Deepfake’ Videos: A New Weapon in Disinformation Wars

This image, taken from a fake video featuring former President Barack Obama, shows elements of the facial mapping used in new technology that lets anyone make videos of real people appearing to say things they never said. (AP Photo)


New technology makes it possible to create realistic videos of people appearing to say things they never actually said.

The videos are known as “deepfakes.”

The name comes from the process of deep learning, a form of artificial intelligence, or AI. Deepfakes can be created with computer programs available for sale or on the internet. The technology uses face mapping and AI to produce false videos that look almost real.

Lawmakers, intelligence officials and media experts have expressed concern about deepfakes. They warn the false videos could be used to threaten America’s national security or interfere in elections.

The videos are created by loading a complex set of instructions into a computer, along with lots of images and audio recordings. The computer program then learns how to copy the person’s facial expressions, movements, voice and speaking patterns.

Technical experts say, with enough video and audio of a person, the system can produce fake video of the person saying anything.

Sen. Marco Rubio, R-Fla., arrives to testify at a Senate Judiciary Committee hearing on Wednesday, March 14, 2018, on Capitol Hill in Washington. (AP Photo/Jacquelyn Martin)

Florida Senator Marco Rubio is one of several U.S. lawmakers warning of the dangers of such technology. He says that, so far, deepfakes have mainly been used against famous people or to create humorous videos. But he says he can imagine foreign nations finding ways to use them to harm American democracy and society.

Rubio told the Associated Press that a foreign intelligence service could use the technology. Such an actor could produce a fake video of a politician using racist language or doing something illegal. Or they could produce fake video of a U.S. soldier killing civilians overseas or a foreign leader threatening nuclear war.

“It’s a weapon that could be used - timed appropriately and placed appropriately - in the same way fake news is used,” Rubio said. But in video form, such news could create distrust and chaos before an election or any other major U.S. decision, he added.

“We know there are people out there that are trying to divide society, influence elections, and we know this capacity exists. So it's only logical that at some point someone's going to take the next step and sort of weaponize it.”

The issue got attention earlier this year when the website BuzzFeed published a deepfake political video. The false video appeared to show former President Barack Obama giving an address that criticized President Donald Trump.

“You see, I would never say these things, at least not in a public address. But, someone else would,” the fake Obama says in the video.

The video was created with a combination of professional video editing software and a free program that uses machine learning.

Hany Farid is a digital forensics expert at Dartmouth College in Hanover, New Hampshire. He agrees there is a great possibility that deepfakes will be used to try to influence America’s politics. “I expect that here in the United States we will start to see this content in the upcoming midterms and national election, two years from now.”

The problem, Farid says, is that it will be very easy for almost anybody to create a realistic-looking fake video of world leaders. “We have entered a new world where it is going to be difficult to know how to believe what we see,” he told the Associated Press.

He added that the opposite result is also worrying. People will become so used to seeing false videos that they will be more likely to doubt a real video. Farid expects the problem to spread worldwide.

The U.S. Defense Advanced Research Projects Agency (DARPA) is already working to develop technologies to identify fake images and videos. But Senator Rubio says, currently, the identification process is complex and takes a very long time.

“It takes some real forensic capability, technical capabilities, to be able to show that it's not real. And by the time that's done, it's been widely disseminated.”

Rubio and other lawmakers say people will need to take more responsibility to identify fakes.

I’m Bryan Lynn.

Bryan Lynn wrote this story for VOA Learning English, based on reports from the Associated Press and other sources.

We want to hear from you. Write to us in the Comments section, and visit our Facebook page.


Words in This Story

artificial intelligence – n. the ability of a machine to reproduce human behavior

fake – adj. not real; false

appropriately – adv. in a way that is suitable or right for a particular situation or person

chaos – n. a situation in which there is no order at all and everyone is confused

logical – adj. based on sound reasoning; sensible

forensics – n. scientific methods used to examine objects or substances related to a crime

disseminate – v. to spread or give out news, information, ideas, etc. to many people