- Utah is developing AI regulations, led by Margaret Woolley Busse, to address public concerns about trust and safety.
- The state emphasizes transparency and safety, with penalties for non-compliance in AI use.
- Utah's AI initiatives include a pilot program allowing AI to handle prescription renewals under human oversight.
Editor's note: This is the second of three stories in a series that examines Utah's approach to artificial intelligence and its ongoing governance.
SALT LAKE CITY — If you want to understand why Utah — traditionally a state that shies away from overregulation — is suddenly the national architect of artificial intelligence policy, you have to look at the lawsuits sitting on Margaret Woolley Busse's desk.
As executive director of the Utah Department of Commerce, Busse has been on a four-year "social media journey," leading the charge against tech giants like TikTok, Meta and Snap. But for Busse, those legal battles aren't just about the past; they are a cautionary tale for the future.
"We haven't really solved the social media problem, and now we have this," Busse says, referring to the explosion of generative AI. "We are seeing the same business models — collecting data, using it to monetize — that made social media bad. We decided we weren't going to let that happen again."
The 'political crisis' of trust
Busse described AI as facing a "political crisis." While Silicon Valley is focused on velocity, the public is increasingly anxious. Recent tragedies involving AI "companions" and teen self-harm have shifted the needle; for many families, the excitement of a smarter world has been replaced by the fear of an extractive one.
"Anxiety has actually overcome excitement in terms of how Americans view AI," Busse noted. "If we don't have trust in the technology, there's going to be a massive backlash. We have to do it in a way that builds trust — not through voluntary 'pinky swears' from tech companies, but through high-stakes transparency."
Legislative 'teeth' in 2026
Utah's answer is a framework built on six distinct pillars: regulatory policy, public protection, learning, workforce, academia and state government. The current 2026 legislative session is moving forward with HB286, the "Artificial Intelligence Transparency Act," spearheaded by Rep. Doug Fiefia, R-Herriman. Unlike earlier social media laws, this bill treats advanced AI models as product features rather than "free speech," requiring "frontier model" developers to:
- Publish child protection plans: Detailing exactly how they prevent harmful targeting or emotional exploitation of minors.
- Provide whistleblower protections: Ensuring employees within these AI firms can report safety incidents or "catastrophic risk" without fear of retaliation.
The bill also carries an enforcement "hammer." Busse was clear that the state isn't just offering slaps on the wrist: violations carry civil penalties of $1 million for a first offense and $3 million for subsequent ones. If a company in Utah's "learning lab" misses a safety mark, the state can immediately revoke its regulatory relief, leaving it fully exposed to legal liability.
Doctor, not device
Nowhere is this "trust-first" model more visible than in Utah's recent pilot with Doctronic, the first state-approved program allowing AI to participate in medical decision-making for prescription renewals.
While the idea of a "bot" signing off on heart medication might cause a double-take, Busse argues it's actually safer than the status quo. "This would probably be more thorough than what (physicians) do ... it asks questions a doctor often doesn't," she explained.
The guardrails, however, are absolute: The AI cannot handle controlled substances such as opioids, and it operates under a strict "phased review." A human doctor must manually validate every single prescription for the first 250 patients in the pilot before the system moves to the next level of autonomy. It is automation designed to support the doctor, not replace the doctor.
The human-enhancing future
Busse's ultimate goal for AI is to be a "human-enhancing" technology — tools that solve cancer rather than "parking us on a couch," she said. This includes a push into the classroom with partners like SchoolAI, ensuring technology serves as a "human-in-the-loop" tutor for students and an aide for teachers, rather than a replacement for mentorship.
As Utah leads the national conversation, Busse's message to the industry is clear: In the Beehive State, quality and safety aren't hurdles to innovation — they are the only way to ensure innovation survives the public's growing skepticism.
The final installment of this three-part series will examine how Lehi-based SchoolAI is being used in the classroom.