AI is an amazing tool. But will it supercharge the spread of 'fake news'?

Artificial intelligence service ChatGPT is pictured on a computer in Salt Lake City on Jan. 18 in this photo illustration. News consumers will need to be more skeptical than ever as the use of AI becomes more prevalent. (Spenser Heaps, Deseret News)


Editor's note: This is part of a KSL.com series looking at the rise of artificial intelligence technology tools such as ChatGPT, the opportunities and risks they pose and what impacts they could have on various aspects of our daily lives.

SALT LAKE CITY — Like social media and the internet before it, artificial intelligence will likely transform the world in profound and unexpected ways. But like those technologies, it has the potential to unleash chaos at the hands of both witting and unwitting users.

Take ChatGPT, for instance. The AI chatbot took the world by storm after its release in November 2022, and it can generate paragraphs of relatively complex prose almost instantaneously.

ChatGPT is built on OpenAI's GPT-3.5 large language model, which — put very simply — was trained on billions of text samples from the internet and uses that knowledge base to generate its own text. Although it can mimic the writing styles of high school English students, academic researchers and journalists, the technology is only trying to predict what sequence of words should follow a particular prompt — like a super-powered version of the predictive text already used in smartphones and email services.
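
For readers curious what "predicting the next word" looks like in practice, here is a minimal sketch in Python. It uses the small, open-source GPT-2 model through the Hugging Face transformers library as an illustrative stand-in; ChatGPT's own, much larger models are not publicly available, so the specific model and library here are assumptions chosen for demonstration only.

```python
# A minimal sketch of next-word prediction, the core idea behind chatbots
# like ChatGPT. The small open-source GPT-2 model is used as a stand-in;
# the models behind ChatGPT work on the same principle but are far larger.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score to every word in its vocabulary,
    # at every position in the prompt.
    logits = model(**inputs).logits

# Look only at the scores for the position after the last word:
# these are the model's ranked guesses for what comes next.
next_word_scores = logits[0, -1]
top_guesses = torch.topk(next_word_scores, 5).indices
print([tokenizer.decode(int(t)) for t in top_guesses])
# The top guesses will typically include " Paris" -- not because the model
# "knows" geography, but because that word most often followed similar
# phrases in its training text.
```

The model never looks anything up; it only ranks likely continuations, which is why a fluent, confident answer can still be wrong.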

Generative AI models like ChatGPT rely on information that already exists on the internet, so while they can be remarkably accurate at times, they can be prone to ingesting and then recreating a lot of dubious — or completely false — information.

And although most people have been trained to be skeptical of unsourced information on social media and the internet, generative AI's confidence and ability to "write" in an authoritative and definitive tone could fool even the most cautious information consumers.

Jim Tabery, professor of philosophy at the University of Utah, said while it's too early to say for certain what generative AI will mean for society, there are two general categories of concern when it comes to misinformation.

"One is for people who are going to be looking for accurate information. Because of how the systems work, they're getting inaccurate information, and you might have all sorts of concerns about how people are making decisions with that information," he said. "The other side of it, though, is people who turn to these with the intention of creating disinformation."

Starting with the latter, here's how Tabery's concerns could play out.

The weaponization of AI

Misinformation is often spread unintentionally on social media by users who think they're sharing a genuine article or post. But those "fake news" articles are often created intentionally by people who know exactly how to confuse or mislead.

AI could make it easier for everyday people to flood social media with bad information, even if they lack the know-how to run a successful propaganda campaign.

"One genuine concern is that this makes the bar for producing this information so low, and then anybody can circulate that," Tabery said. "You see these cases where people can go to the chatbot and say, 'Write me a story in the style of Alex Jones that debunks the Sandy Hook shooting,' or say 'Write me an essay that references a research paper in the Journal of the American Medical Association that tells us why COVID-19 vaccines don't work.' And these things do that. They give a sort of produced text, which sounds like the kind of thing that you see on KSL.com."

Because artificial intelligence can produce paragraphs of text in a fraction of a second, in the style of legitimate news sources, it could conceivably create an ecosystem of disinformation that evades our best media literacy practices.

In The Atlantic, Matteo Wong imagines the way AI could be used to create a web of misinformation about the bird flu outbreak, which is not currently spreading between humans.

Wong writes: "A political operative — or a simple conspiracist — could use programs similar to ChatGPT and DALL-E 2 to easily generate and publish a huge number of stories about Chinese, World Health Organization, or Pentagon labs tinkering with the virus, backdated to various points in the past and complete with fake 'leaked' documents, audio and video recording, and expert commentary."

These rewritten histories could be made to look remarkably realistic, mirroring the way news outlets link back to previous articles or studies to back up their reporting.

This would make it even more critical that people find and rely on trustworthy outlets and sources, Tabery said, at a time when trust in media is in a yearslong decline.

"It's going to be continuing to make it harder for the average information consumer to tease out in any kind of way what's true and what's not, particularly if they don't trust experts, right?" he said. "This stuff is made for people quote-unquote 'doing their own research' to find and confirm biases that they already have."

Will good information be harder to find?

Tabery's first concern — that generative AI models could inadvertently share inaccurate information — may be less dystopian than the weaponization of the technology, but it's still potentially disruptive.

For better or worse, Americans turn to search engines for quick information on a variety of topics, from general trivia to medical advice. Search results can turn up a broad range of inaccurate information, but to varying degrees, people have learned to sift through Google's results and identify the more reliable sources.

Chatbots, though, don't always cite their sources, making it harder to determine whether the information they provide is accurate, Tabery said. He said search engines compile results from National Geographic or Encyclopaedia Britannica along with blogs and other less reliable sources, but the user is in control of what they click on for more information.

With a chatbot, answers may read like an encyclopedia entry but draw on completely fabricated sources, with no way for the user to see where the information originated.

"I think the worry with these chats, is they give the illusion of a kind of objectivity that makes you think you don't have to worry about that," Tabery said. "(You think), 'This is just a computer, it's just spitting out information — going out there and finding the truth and giving it back to me. And so if I ask a question, and it gives me something back, it's not because it's a Republican or a Democrat, it's because it found the answer.' And so that's the worry: that it gives this sense of objectivity that in fact isn't there."

Leelila Strogov, founder and CEO of AtomicMind, a college admissions coaching company, said that while ChatGPT is "notoriously inaccurate" and likely won't ever be completely reliable, it's still a valuable tool for learning.

"I think we just have to understand that we as humans ... need to work on training ourselves to work with the tools we have as effectively as possible," she said. "It's another tool in a toolbox. Nothing more, nothing less."

She likened it to Wikipedia, which can be unreliable as a source, but can serve as a jumping-off point for someone trying to learn about a new topic.

"I think when you turn to ChatGPT for just about anything, you are setting yourself up for an enormous amount of fact-checking and research if you actually want to get things right," she said. "So, I think it can serve as a starting point for curious minds, but it very quickly needs to go into investigation mode. What here is true? What is not true? And which sources can I actually turn to that will likely be reliable?"

How to live with AI

Whether you like it or not, artificial intelligence is here to stay. And while it's good to be cautious and aware of the downsides, that doesn't mean it's something to be feared.

Strogov and Tabery have both spent time with ChatGPT since November, and say one of the best ways to address any fears is to simply experiment with it.

As confident as ChatGPT may seem in its answers, Tabery said it frequently provides answers that are obviously wrong, but may seem true to anyone who isn't well-versed in a particular subject. He encouraged people to ask chatbots questions specific to their line of work or study, in order to appreciate just how imperfect the answers can be.

"I think there are nuances to a human conversation, there are nuances to tracking down information that there is on a scientific question that still requires a lot of active user involvement," he said. "I think people should keep in mind how many experiences they've had with these chatbots already, and then sort of extrapolate from that what they can expect as these things become more common."

Strogov recommended creating your own propaganda using ChatGPT, to better understand the ways it can be manipulated.

"I think some great things to do would be feed it a variety of material, then look at what it turns out critically. I would ask it to write a piece of propaganda, ask it to write a completely false piece about a historical figure, and see what it can do so that you're better able to recognize it," Strogov said. "I go on websites now all the time where I see very clearly based on the mistakes that are made. I'm like, 'Oh, an AI wrote that article.' It's clear as day. So use it to build your critical thinking skills and hone your own detection systems for when something is just purely false or purely AI written."

Bridger Beal-Cvetko is a reporter for KSL.com. He covers politics, Salt Lake County communities and breaking news. Bridger has worked for the Deseret News and graduated from Utah Valley University.