Everything you need to know about the deepfake phenomenon


SALT LAKE CITY — When you first click play on this YouTube video of a speech by German Chancellor Angela Merkel, it looks normal. She's speaking at a gathering of the Christian Social Union in Munich. But then you look closer.

About 10 seconds in, her face changes. In fact, it's not her face at all — it's President Donald Trump's. Merkel, like other world leaders, including Barack Obama and Argentina's president Mauricio Macri, has had her video edited to create what has been coined a "deepfake."

What is a deepfake?

A deepfake is an image, audio clip or video, created using artificial intelligence software, that seems real — but isn't.

The word combines "deep learning" — a type of machine-learning method — with "fake."

How do you create a deepfake?

Digital film editing has been around since the 1980s but was mostly used only in big-budget movies. It was often very expensive and time-intensive. Adding CGI to movies to create a fake landscape or person could take months and a team of specialists.

Now, machine learning does this intensive work. With artificial intelligence, images and videos can be superimposed on an existing image frame-by-frame, with little input from the human editor — though it does require somewhat extensive source footage.

For a closer look at how this technology works, watch Michael Zollhoefer's "Deep Video Portraits." Zollhoefer is a visiting assistant professor in the computer science department at Stanford University.

The software that makes this possible is open-source — meaning that it's free and available on the internet for anyone who wants it. Tools like Google's TensorFlow and Facebook's DensePose allow hobbyists and those without much technical skill to make deepfakes.

Almost anyone can also create deepfake audio clips using software like Lyrebird or Adobe's forthcoming Project Voco. This means that, in addition to Donald Trump's head being superimposed on Angela Merkel, it could also be his voice.

While there is some promise to this type of technology, deepfakes still pose major concerns.

How did deepfakes start?

Like other new technologies, some of the first applications of deepfakes were in pornography. Most often the targets of these deepfakes were female celebrities like Olivia Wilde and Gal Gadot who had their faces superimposed on porn stars in videos in late 2017.

Now, those wishing to create revenge porn or embarrassing videos of others can do so relatively easily through apps that allow users to create deepfakes with enough source footage. Lawmakers are already investigating the legal implications of these actions.

PornHub, Reddit and Twitter all banned deepfake and AI-generated porn content, but it's difficult to manually monitor, and the content still exists and has moved to other corners of the internet, according to Vice News.

Vice News did some foundational reporting on deepfakes in late 2017, beginning a flood of reports from other news outlets across the world about the trend.

What effects have deepfakes had?

Perhaps more concerning than deepfakes' use in pornography, however, is their effect on trust. Trust is declining in the United States, according to the 2018 Edelman Trust Barometer, which monitors trust in institutions globally.

The United States had a deeper decline than any other country with a staggering 37-point drop in all institutions, including the media. Across the world, seven out of 10 respondents to the Edelman survey expressed concerns about false information being used as a weapon.

If a viewer cannot trust audio or video, it becomes more difficult to trust all media and erodes trust in institutions like the legal system, government and more.

However, a 2018 Pew Research study found that individuals who had more trust in the media were more likely to identify factual statements.

Though deepfake media is relatively new, it can be combatted with some critical thinking skills. Parents and educators can encourage media literacy skills by asking such questions as:

  • Who created the content?
  • How might others interpret what the content says?
  • What possible biases and opinions are shared in this content?
  • How is this video or image trying to get my attention?
  • What purpose does this content have?

Deepfakes can also be flagged by the same technology that makes them possible: artificial intelligence. According to tech news platform Wired, GIF-hosting company Gfycat can run deepfakes through its AI tool and flag a clip that may resemble someone but that does not render perfectly in each frame.

But technology will have a harder time keeping up with deepfakes as they become more sophisticated, making it increasingly critical for media consumers to be thoughtfully critical of everything they encounter.


About the Author: Carrie Rogers-Whitehead

Carrie Rogers-Whitehead is the CEO of Digital Respons-Ability, which trains parents, educators and students on digital citizenship. She is also a college instructor, mother and author of the forthcoming book Digital Citizenship in Schools.

