
AI Can Amplify Sexist Bias in Datasets, Warn Microsoft Researchers

Microsoft researchers and a team from the University of Virginia have discovered that AI makes biased assumptions from photo training sets and amplifies them. The company has set up an internal ethics board to counteract the problem.


As humans, we form assumptions all the time based on experience and existing data. It helps keep us and others safe. However, it can also have a negative effect, with incomplete or skewed data leading to things like sexism and racism.

Researchers are starting to see the same problem with AI. The issue came to light when University of Virginia computer science professor Vicente Ordóñez noticed a disturbing pattern in his image recognition software: if it saw a picture of a kitchen, it was much more likely to associate it with women.

Concerned, he formed a team to investigate two collections of AI training photos backed by Facebook and Microsoft. Both showed gender bias, with activities like shopping and cooking linked to women, and sports to men. Worse, software trained on the data didn't just reflect the bias; it amplified it further.
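To make "amplification" concrete, here is a minimal sketch of the idea: compare how often an activity is paired with women in the training labels against how often a trained model predicts that pairing. The counts below are purely hypothetical illustrations, not figures from the study.

```python
# Hypothetical illustration of dataset-vs-model bias amplification.
from collections import Counter

# Made-up training labels: (activity, gender) pairs from an image dataset.
train_labels = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

# Made-up predictions the trained model produces on a test set.
model_predictions = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

def woman_ratio(pairs, activity):
    """Fraction of images of `activity` associated with women."""
    counts = Counter(gender for act, gender in pairs if act == activity)
    return counts["woman"] / (counts["woman"] + counts["man"])

dataset_bias = woman_ratio(train_labels, "cooking")       # 0.66 in the data
model_bias = woman_ratio(model_predictions, "cooking")    # 0.84 in predictions

# If the model's ratio exceeds the dataset's ratio, the bias has been amplified.
print(f"dataset bias: {dataset_bias:.2f}, model bias: {model_bias:.2f}")
print("amplified" if model_bias > dataset_bias else "not amplified")
```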

It’s an important drawback to be aware of, and Microsoft researchers have done their own work in this area. In an earlier study, the software giant teamed up with Boston University.

The team trained an AI on Google News text and found that when asked “Man is to computer programmer as woman is to X?”, it replied “homemaker.”
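Analogy questions like that are typically posed to word embeddings as simple vector arithmetic: subtract “man”, add “woman”, and look for the nearest word. The sketch below shows the general technique using the gensim library and Google's publicly released News vectors; the study used its own embeddings trained on Google News, so the file name and the token "computer_programmer" here are assumptions for illustration.

```python
# Rough sketch of a word-embedding analogy query (not the study's own code).
from gensim.models import KeyedVectors

# Pre-trained vectors released by Google (large download, assumed to be on disk).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man : computer_programmer :: woman : ?" becomes
# vector(computer_programmer) - vector(man) + vector(woman),
# then find the words closest to that point.
answers = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
)
for word, similarity in answers:
    print(word, round(similarity, 3))
```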

Fighting the Bias

Researchers are now looking for ways to minimize such biases. However, they have to be able to identify a bias before they can counteract it.

As a result, Microsoft Research director Eric Horvitz is pushing others to adopt bias-detection tools of this kind. His team has an internal ethics committee that focuses on finding and addressing biases.

“I and Microsoft as a whole celebrate efforts identifying and addressing bias and gaps in datasets and systems created out of them,” he said.

He’s also considering the introduction of more idealized data sets. As with educational content for children, it may be worth presenting AI with an equal world.

“It’s a really important question–when should we change reality to make our systems perform in an aspirational way?” he added.

Princeton researcher Aylin Caliskan says no. “We risk losing essential information,” she argues. “The datasets need to reflect the real statistics in the world.”

Source: Wired
Ryan Maskell (https://ryanmaskell.co.uk)
Ryan has had a passion for gaming and technology since early childhood. Fusing the skills from his Creative Writing and Publishing degree with profound technical knowledge, he enjoys covering news about Microsoft. As an avid writer, he is also working on his debut novel.
