Utah lawmakers, tech leaders discuss potential regulation of artificial intelligence

Utah policymakers discussed potential regulation of artificial intelligence to protect privacy rights and increase transparency.




Editor's note: This is part of a KSL.com series looking at the rise of artificial intelligence technology tools such as ChatGPT, the opportunities and risks they pose and what impacts they could have on various aspects of our daily lives.

SALT LAKE CITY — Artificial intelligence is likely here to stay, and the technology is only going to get much, much better.

That's the assumption behind a recent panel discussion hosted by the Utah Policy Innovation Lab to address the pros and cons of the rapidly emerging technology, and how the state can harness the power of advanced artificial intelligence.

The panel included two state lawmakers, state agency leaders and several CEOs of tech and AI companies, all of whom touted the potential AI has to greatly expand productivity in business, governance and academia.

Although the models that support software like ChatGPT are still far from perfect, innovation could rapidly improve artificial intelligence, and lead to major breakthroughs in research and business in the process.

"Some people are predicting it cures cancer in the next 5-10 years using AI to help solve different forms of disease," said Matthew Poll, CEO of GTF. "I think cancer is definitely on the list. I also think from a geopolitical standpoint, this may be our saving grace."

Margaret Woolley Busse, executive director at the Utah Department of Commerce, said companies that integrate new technology well can see dramatic increases in productivity. One bank in China saw productivity per worker increase to around $16 million after it adopted AI, she said, "which is just mind-boggling, that it could be that extreme."

An AI-led transformation like that would be "very disruptive" to workers and companies, she said, "but I just find that scale really amazing."

The disruption that AI could cause in a variety of sectors is why some lawmakers are already considering the best ways government can get involved to prevent the most damaging outcomes without limiting the positive transformation the technology could spark.

Education and development

The release of ChatGPT last year prompted widespread concern that more sophisticated language models would automate away large portions of the white-collar workforce. While the panelists didn't ignore the potential for disruption, several said the outlook isn't that simple — or that gloomy.

"It's pretty clear that this is a game-changer," said Alan Fuller, Utah's chief information officer. "I've heard this thing, that AI is not going to replace doctors, but doctors who use AI are going to replace doctors who don't use AI. And I think you will certainly see that."

Nick Pelikan, CEO of Piste.AI, cautioned against getting swept up by the "hype cycle" around AI — "people are saying that we're a couple months away from Skynet and Terminator and Arnold Schwarzenegger, so ... take everything you hear with a grain of salt" — but said it's important to prepare students and workers for a future when AI use is more common.

Panelists floated the idea of creating an AI curriculum in higher education to help prospective workers understand how to implement the technology in their careers. Alex Lawrence, an associate professor at Weber State University, said it can be daunting for students to face the "tectonic shift" across industries, "but the easiest thing that I can say is, this willingness to be open-minded that your job has been changed dramatically."

User privacy

Sen. Kirk Cullimore, R-Sandy, spoke of two potential paths for government regulation of AI: protecting user privacy and countering AI-created disinformation. He was clear that the proposals aren't official recommendations, but could translate to concrete policies in the future.

Online privacy has become an increasingly important issue across the political spectrum, and privacy for minors on social media was a key part of the Utah Legislature's regulations of Big Tech platforms this past legislative session.

Cullimore said AI models may be able to quickly collect lots of personal information from users, and users may not realize the prompts or other data they enter can be stored and used by the company. Utah has several privacy laws on the books, he said, but doesn't have a comprehensive way to deal with privacy and artificial intelligence.

"If you're a small company, but you start using AI and have a model that has hundreds of thousands of people's data in it, but you're a small startup, how do you do privacy in that?" he said. "So, we're really considering — we don't have the whole strategy yet — how do we incorporate privacy into our laws to ensure users and Utahns are protected?"

"You don't want to stop the industry from growing, but what we need to figure out what are those guardrails, what are those rules that companies have to play by," he continued.

Although privacy laws and other regulations will likely need to be implemented at the federal or even the international level, Cullimore said Utah is in a position to lead out on policy, much as it was the first state to pass major social media regulations.

"States like Utah are primed to deal with this type of stuff. We have consumer data privacy laws, we have the technical force and the technical expertise, the entrepreneurship here in Utah to deal with these things, and set these types of models that then become ubiquitous," he said.

With the rise of social media and other online spaces, Busse said the business model of selling user data has "essentially gone unfettered."

"I think we're only now sort of waking up to what that is, and how we have ceded a lot of our autonomy because we've allowed all that data to be collected," she said.

In many ways it's too late to claw back control of personal data from big companies because it's "baked in" to how the industry works, she said, and AI will only make it easier for companies to analyze and utilize all that data.

"So, we've got to get on top of it now, in my view," Busse said.

Will AI harm the public trust?

The internet has long been a safe haven for conspiracy theories and outright untruths, but AI has the potential to make it even harder to trust what you see online.

Fuller said AI image generators could be used to sway elections or erode trust in government officials, offering the hypothetical of a completely fake image of a state legislator in the middle of conducting a drug deal.

"It was a generated image, but with our human eyes ... we assume, because we see the picture, that it's worth 1,000 words," he said.

Generated images have already confused some online users. Images showing former President Donald Trump resisting arrest circulated widely on Twitter, although their creator was quick to point out they were not real.

On the other hand, Busse warned that exposure to generated images, text or recordings may desensitize some to actual news events they should be concerned about.

"Then you have kind of a 'never cry wolf' problem," she said, where it becomes easy to dismiss things as "fake news" because of the proliferation of misinformation.

Cullimore said the state needs to look at ways, whether through blockchain or other technologies, of verifying where information comes from and whether it was created by an AI.

"That's one thing that it will be important for the state to do is set up whatever the parameters are as structured, so that we can verify the authenticity of whatever data it is that we're looking at," he said.

Keeping pace

AI advancements could continue to accelerate as models become more sophisticated, which means governments will have their work cut out for them in trying to keep up.

When thinking about regulation and education around AI, Busse said a word that often comes to mind is "agility," which she admits is often out of step with the usual way of doing things in government.

"In this job, I thought a lot about regulation. And at first, people used to talk about 'reasonable and reliable' as what you wanted. What you really need is relevant," she said. "And it's hard to do that, because what happens is you pass a law, and you're like, 'OK, we're done,' and then things change like that."

Rep. Jefferson Moss, R-Saratoga Springs, who previously spoke to KSL.com about the future of AI technology, said Utah is well-equipped to respond because the various branches of government work well with agencies and other stakeholders in the state.

"This is something that's moving so fast, there's so many variables, and I think you guys are really in prime position, if we can come up with the right mix of being innovative while also protecting our privacy and civil liberties," he said.

Bridger Beal-Cvetko covers Utah politics, Salt Lake County communities and breaking news for KSL.com. He is a graduate of Utah Valley University.
