U. helps identify bias in job, loan application software

SALT LAKE CITY — The idea that computers, since they aren't human, can make decisions free of bias is an appealing one, but it doesn't always work out that way.

Unfortunately, the algorithms used by companies to make hiring decisions, loan approvals and other life-altering judgments can be unintentionally biased just like their flesh-and-blood counterparts, according to researchers. The good news is that a team led by the University of Utah's Suresh Venkatasubramanian says it has created a tool to identify these biases in software algorithms.

"The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations," Venkatasubramanian said in a statement.

The problem lies with some machine-learning algorithms that "change and adapt like humans" as they scan through data, according to researchers. Often, these algorithms perform tasks like scoring and ranking resumes based on applicants' grade point averages.
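As a toy illustration of that kind of automated scoring (the scoring rule and field names below are invented for this sketch, not taken from any real screening system):

```python
# Toy resume scorer of the sort the article describes: it ranks
# applicants by a single feature (GPA on a 4.0 scale). Everything
# here is fabricated for illustration.

def score_resume(resume: dict) -> float:
    """Return a score in [0, 100] based only on GPA."""
    return 100.0 * resume["gpa"] / 4.0

applicants = [
    {"name": "A", "gpa": 3.9},
    {"name": "B", "gpa": 3.2},
    {"name": "C", "gpa": 3.6},
]

# Sort best-first, as a screening tool might.
for r in sorted(applicants, key=score_resume, reverse=True):
    print(r["name"], round(score_resume(r), 1))
```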

Machine-learning algorithms aren't always problematic. Similar algorithms are used to create targeted advertisements or to make recommendations for customers on websites like Netflix. However, it is possible for human biases to creep into the programs.


Venkatasubramanian and his team created their own machine-learning algorithm to test whether algorithms may be discriminating against people of different races, religions and other groups protected by U.S. anti-discrimination laws. They said there is potentially a problem if their algorithm is able to predict a person's race or gender by looking at the data being analyzed.
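A minimal sketch of that test, assuming synthetic data and a standard off-the-shelf classifier (the features and dataset below are fabricated; this is not the team's actual code): if a model can recover the protected attribute from the supposedly neutral features, the data may carry bias.

```python
# Sketch of the core test as the article describes it: try to predict
# a protected attribute (here, a synthetic binary "group" label) from
# the other application data. High accuracy flags potential bias.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)           # protected attribute, hidden from the decision
# A feature that happens to correlate with group membership:
income = rng.normal(50 + 10 * group, 5, n)
gpa = rng.normal(3.0, 0.4, n)           # a feature unrelated to group
X = np.column_stack([income, gpa])

X_tr, X_te, g_tr, g_te = train_test_split(X, group, random_state=0)
clf = LogisticRegression().fit(X_tr, g_tr)
acc = accuracy_score(g_te, clf.predict(X_te))

# Accuracy far above the 50% base rate means the "neutral" features
# leak the protected attribute, flagging potential discrimination.
print(f"group predictable from features with accuracy {acc:.2f}")
```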

"There's a growing industry around doing resume filtering and resume scanning to look for job applicants, so there is definitely interest in this," he said. "If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair."

To fix the problem, the researchers said, the data being processed by a machine-learning algorithm simply needs to be redistributed. Once the telltale information is masked, the algorithm can no longer see it and shouldn't be able to reproduce the bias.
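One way to picture that redistribution, as a simplified sketch rather than the team's published procedure: align each group's feature distribution to a common pooled one, preserving rank order within each group, so the feature no longer reveals group membership.

```python
# Hedged sketch of the "redistribute the data" idea. The repair maps
# each value to the pooled distribution at its within-group quantile,
# so the two groups become indistinguishable on this feature while
# within-group rankings are preserved. A simplification, not the
# researchers' actual code.
import numpy as np

def repair_feature(values: np.ndarray, groups: np.ndarray) -> np.ndarray:
    """Map each value to the pooled distribution at its within-group quantile."""
    pooled = np.sort(values)
    repaired = np.empty_like(values, dtype=float)
    for g in np.unique(groups):
        mask = groups == g
        ranks = values[mask].argsort().argsort()      # within-group rank
        quantiles = ranks / max(mask.sum() - 1, 1)    # rank -> quantile in [0, 1]
        idx = np.round(quantiles * (len(pooled) - 1)).astype(int)
        repaired[mask] = pooled[idx]                  # quantile -> pooled value
    return repaired

rng = np.random.default_rng(1)
groups = rng.integers(0, 2, 1000)
income = rng.normal(50 + 10 * groups, 5, 1000)   # correlates with group before repair
fixed = repair_feature(income, groups)

# After repair the two groups' income distributions match, so income
# can no longer be used to infer group membership.
print([round(fixed[groups == g].mean(), 1) for g in (0, 1)])
```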

The team presented its research Aug. 12 at the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.

Natalie Crofts
