
How AI Search Errors Brought Discredited Race Science to the Forefront

Google’s and Microsoft’s AI tools face criticism for surfacing IQ scores from discredited studies, raising concerns over AI’s role in spreading bias.


AI search results from tech giants Google and Microsoft, along with Perplexity, have stirred debate after presenting racially charged IQ data sourced from a discredited dataset compiled by Richard Lynn, a figure long associated with race science. Patrik Hermansson, an investigator for the UK antiracism group Hope Not Hate, discovered the biased results while researching national IQ scores using Google’s AI Overviews, a feature meant to simplify search by offering condensed answers. Instead, Google reported specific IQ scores, such as 80 for Pakistan, figures matching Lynn’s controversial national IQ study, long criticized for its inaccuracies and racially biased methodology.

Google’s AI Overview and Misleading National IQ Data

Launched this year, Google’s AI Overviews are an experimental search feature that delivers AI-generated summaries directly on the search page. When Hermansson searched for “Pakistan IQ”, the answer was concise and startlingly precise: an IQ of 80.

Other searches, like those for Sierra Leone and Kenya, yielded similar numbers, all traceable to Lynn’s national IQ dataset. While Google’s intent with AI Overviews is to improve search efficiency, the unintended result of surfacing pseudoscientific data underscores the risks of automated content curation.

After the incident, Google responded by confirming that these Overviews had bypassed content quality filters. “These Overviews violated our policies and have been removed,” Google spokesperson Ned Adriance told WIRED. Google has since modified the Overviews to minimize occurrences of harmful content; however, some AI Overview results still refer to Lynn’s dataset through indirect sources, highlighting ongoing issues with source transparency and automated content accuracy.

AI Search Tools by Microsoft and Perplexity Also Implicated

Microsoft’s AI-powered Copilot, integrated into Bing search, was similarly found to provide results referencing national IQ scores from sources indirectly based on Lynn’s work. When asked about countries like Pakistan, Copilot cited IQ statistics similar to those in Google’s responses, attributing them to sites that themselves lack transparent sourcing.

Microsoft says Copilot generates answers by condensing multiple sources into a single response, with linked citations allowing users to verify details. Nonetheless, these instances raise questions about how effectively AI search tools monitor for biased or unreliable content. “Copilot answers questions by distilling information from multiple web sources into a single response,” Microsoft spokesperson Caitlin Roulston confirmed to WIRED.

Perplexity, another AI-powered search service, faced comparable scrutiny after queries about IQ scores led to responses citing social media threads and websites that ultimately referenced Lynn’s discredited research. Perplexity declined to comment on these findings, leaving concerns around transparency and information quality unresolved.

Race Science Legacy and the Pioneer Fund’s Influence on Modern AI Searches

Lynn’s dataset is rooted in his association with the Pioneer Fund, established in 1937 with the intent to promote research supporting racial hierarchies, often using manipulated data. The fund, originally an American non-profit, directed substantial financing toward eugenics research and studies purporting to prove racial intelligence differences, making it a central player in the race science movement.

Lynn, a long-time proponent of this ideology, compiled “national IQ” scores using questionable methodology, often relying on small, unrepresentative samples, such as drawing conclusions about Angola’s IQ from data on just 19 individuals.

The Pioneer Fund’s work, though largely discredited, has continued under the Human Diversity Foundation (HDF), a private entity reportedly funded by tech entrepreneur Andrew Conru. Hope Not Hate’s investigation traces Conru’s contributions to HDF, which supports similar studies and race science projects aligned with Pioneer’s original goals. Through connections with groups such as Germany’s Alternative für Deutschland, HDF has built partnerships aimed at advancing ethnically exclusive ideologies under a scientific guise.

Broader Issues with AI Transparency and Content Validation

Google’s AI Overviews, initially designed to improve search clarity, highlight an important challenge: handling niche topics with few high-quality sources. In such cases, AI can inadvertently prioritize biased or poorly sourced data. As the IQ figures from Lynn’s research show, niche sources can sometimes evade the usual content checks.

Despite Google’s stated intent to filter out low-quality data, its AI occasionally pulls from these flawed sources, as it did in Hermansson’s case. Testing on Google’s Gemini chatbot revealed a different approach: rather than citing specific scores, the AI offered a nuanced response explaining potential cultural biases in IQ testing.

Last Updated on November 7, 2024 2:20 pm CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
