Artificial intelligence compromises better than humans, BYU researchers say


(Phonlamai Photo, Shutterstock)




PROVO — Researchers have known for years that computers can compete with humans, but two BYU faculty members and staff from other universities say artificial intelligence may be able to compromise more effectively than humans, too.

Now those researchers are hoping their study can be used to improve how humans communicate with each other.

"We found that if you paired two machines with each other, they cooperate better than when you pair humans with each other," said Jacob Crandall, an associate professor of computer science, who worked on the study along with BYU professor Michael Goodrich, colleagues at MIT and other international universities.

"They have much more profitable relationships, and so that gives us the opportunity to take a look at ourselves and say, 'Hey, what is it that people do in their relationships that causes them to break down, so that they can't make these effective compromises?'" he added.

Crandall started working on the study about 15 years ago and has worked on the project off and on since. The researchers were hoping to discover whether computers and artificial intelligence could cooperate and compromise much like humans do.

The group experimented with various algorithms as part of the study. Many of those algorithms excelled at competition, such as playing chess or checkers, but the researchers found they failed at compromise or cooperation. That's when the researchers shifted their focus to what they would need to develop to get the computers to cooperate, Crandall said.

From there, they developed different mathematical algorithms that could allow the computers to perform the function they were looking for. The researchers then needed to find if different machines could cooperate using that algorithm.

They found that the machines would work well with each other; however, when the researchers paired a machine with a human, the cooperation would sour because humans weren't always completely honest. The relationship would then deteriorate to a state where neither party profited.
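The article doesn't name the specific game or algorithm the researchers used, but the dynamic it describes can be illustrated with a standard repeated game from this research area. The sketch below, an assumption for illustration only, pits a conciliatory "tit-for-tat" strategy against itself and against a consistently dishonest "always defect" partner in an iterated prisoner's dilemma: two cooperative machines settle into a profitable relationship, while pairing one with a defector drags both scores down.

```python
# Illustrative sketch only -- NOT the researchers' actual algorithm.
# An iterated prisoner's dilemma: "C" = cooperate, "D" = defect.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(partner_history):
    """Cooperate first, then copy the partner's last move."""
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    """A dishonest partner: defect no matter what."""
    return "D"

def play(agent_a, agent_b, rounds=20):
    """Return total payoffs for each agent over repeated play."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = agent_a(hist_b)  # each agent reacts to the other's history
        move_b = agent_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Pairing two tit-for-tat agents yields steady mutual cooperation and a high joint payoff, whereas the defector gains a small one-time edge and then locks both sides into the low mutual-defection payoff, much like the souring relationships the researchers describe.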

"That's when we realized the key point to getting people to cooperate is to be able to talk to them, express what you plan to do and express your displeasure for things they've done or compliment them for things they've done," Crandall said. "Once the machine starts to talk and voice these different strategies to the humans that listened to them, all of a sudden something flips in the person and they think very differently about the relationship and they, in turn, act very differently."

After adjusting the machine to understand human "cheap talk" phrases, the machines were able to double their cooperation.

Now the group is hoping their results can help humans find ways to improve the way they cooperate with each other, Crandall said. That includes situations such as friends falling out over an incident or divorce.

"That would be one implication down the road," Crandall said. "There's still a lot of work to be done to kind of help understand how we can use the A.I. to teach people but it seems like a possible implication."

Carter Williams, KSL
Carter Williams is a reporter for KSL. He covers Salt Lake City, statewide transportation issues, outdoors, the environment and weather. He is a graduate of Southern Utah University.