
Researchers Show How AI Can Develop Prejudices without Faulty Datasets

A joint study by Cardiff University and MIT explores how AI can form prejudices. A donation game with 100 AI agents revealed increasingly fractured groups that actively avoided each other.


Previous research revealed that AI can amplify sexist bias, but that was primarily a dataset issue. If an AI learns from images of kitchens scraped from Google, it will mostly see women in them and associate the two as a result. It's an important issue, but one that can be fixed by more careful dataset curation.

However, a new study by MIT and Cardiff University shows that the formation of bias isn't always so obvious. It found that AIs can form prejudices simply by learning and copying behavior from one another.

To observe this, the researchers devised a game. 100 AI bots were split into groups and could choose whether to donate fake money to their own group or to another. Donations were based on a system of reputation and each bot's personal strategy. As the simulation progressed, individuals began to learn donation strategies from others and carried the accompanying biases with them.

By copying those chasing a bigger short-term payoff, the AIs became more prejudiced towards those in other groups. It's a fascinating look not just at computers, but at how prejudice evolves in general.

According to the study, it starts with a mutation. Certain ‘in-group’ agents begin to discount an ‘out-group’ to reduce the risk of making donations that may not be returned. As the tactic spreads, agents that weren't previously prejudiced are exposed to it and copy it, promoting prejudice further.
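To make that setup more concrete, here is a minimal Python sketch of this kind of donation game. The group count, payoffs, thresholds, and mutation rate are illustrative assumptions rather than the study's actual model; it only mirrors the mechanics described above: donate based on a partner's reputation, discount out-group partners, then copy the strategies of better-performing agents.

```python
import random

# Illustrative parameters -- not the study's actual values.
N_AGENTS = 100
N_GROUPS = 5
ROUNDS = 500
COST, BENEFIT = 1.0, 3.0   # donating costs the donor, rewards the recipient
MUTATION_RATE = 0.05

class Agent:
    def __init__(self, group):
        self.group = group
        # Strategy: the minimum reputation a partner needs before this agent donates.
        # A higher out-group threshold than in-group threshold acts as "prejudice".
        self.in_threshold = random.uniform(0, 1)
        self.out_threshold = random.uniform(0, 1)
        self.reputation = random.uniform(0, 1)
        self.payoff = 0.0

agents = [Agent(i % N_GROUPS) for i in range(N_AGENTS)]

for _ in range(ROUNDS):
    # Donation phase: each agent meets a random partner and decides whether to give.
    for donor in agents:
        recipient = random.choice([a for a in agents if a is not donor])
        threshold = (donor.in_threshold if recipient.group == donor.group
                     else donor.out_threshold)
        if recipient.reputation >= threshold:
            donor.payoff -= COST
            recipient.payoff += BENEFIT
            donor.reputation = min(1.0, donor.reputation + 0.1)  # generosity builds reputation
        else:
            donor.reputation = max(0.0, donor.reputation - 0.1)  # refusals cost reputation

    # Social-learning phase: copy the strategy of a better-performing agent,
    # with occasional random mutation (how prejudice first appears and then spreads).
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.in_threshold = model.in_threshold
            agent.out_threshold = model.out_threshold
        if random.random() < MUTATION_RATE:
            agent.out_threshold = min(1.0, max(0.0,
                agent.out_threshold + random.uniform(-0.1, 0.1)))

# "Prejudice" here is simply the extra reputation an out-group partner needs
# before receiving a donation, averaged across the population.
prejudice = sum(a.out_threshold - a.in_threshold for a in agents) / N_AGENTS
print(f"Average out-group discount after {ROUNDS} rounds: {prejudice:.2f}")
```

In this toy version, prejudice is measured as the gap between in-group and out-group donation thresholds; the real study's model and parameters differ, but the same feedback loop applies: discounting outsiders pays off in the short term, and successful strategies get copied.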

“Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivized in virtual populations, to the detriment of wider connectivity with others. Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse,” said Cardiff University's Professor Roger Whitaker.

Unintelligent AI Can Still Be Prejudiced

Further, it shows that AI doesn't have to be advanced or possess human-level intelligence to form such biases. Autonomous vehicles and IoT devices already learn from and are influenced by other machines around them.

“It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population,” Whitaker warns.

It's a warning that rings particularly true after Twitter CEO Jack Dorsey's testimony to Congress this week, during which his company was accused of censoring conservative voices with its algorithms. Though the evidence for that is lacking, the study highlights how even simple systems could do so in the future.

Such research could be instrumental in finding ways to combat it, too. The teams found that increasing the number of distinct subpopulations lessened prejudice. When agents were encouraged to co-operate outside of their group, non-prejudicial subpopulations could form alliances without being exploited.

“By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it,” said Whitaker.

Source: Nature
