A recent study by NewsGuard has uncovered that most of the leading AI chatbots are contributing to the spread of Russian disinformation. According to the findings, users seeking factual information through these chatbots often encounter falsehoods, satire, and fabricated stories disguised as real news.
Tests with 10 Leading Chatbots
NewsGuard’s research involved testing 57 different prompts on 10 well-known chatbots, among them OpenAI’s ChatGPT, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google Gemini, and Perplexity AI.
A recent NewsGuard audit found that generative AI chatbots repeat Russian disinformation narratives in one-third of their responses. The audit tested 10 leading AI models and discovered that they often cite fake local news sites created by John Mark Dougan, an American fugitive…
— NewsGuard (@NewsGuardRating) June 18, 2024
The investigation revealed that these chatbots repeated Russian disinformation about 32% of the time. Often, this disinformation traced back to fictitious local news sites established by John Mark Dougan, an American fugitive notorious for spreading Russian propaganda from Moscow. Dougan is a former Florida deputy sheriff who fled to Moscow after being investigated for computer hacking and extortion.
NewsGuard has identified 167 Russian disinformation websites that appear to be part of Dougan’s network, masquerading as independent local news publishers in the U.S., as well as 15 videos on Dougan’s since-removed YouTube channel.
Steven Brill, co-CEO of NewsGuard, voiced concern about how regularly the chatbots repeated well-known hoaxes from such sources. He urged the tech industry to pay close attention to accuracy in news-related content and cautioned users against relying on chatbots for answers on controversial issues.
Specific Falsehoods
Among the falsehoods flagged were fabricated stories about a wiretap at Donald Trump’s Mar-a-Lago estate and a fictitious Ukrainian troll factory meddling in U.S. elections. NewsGuard reached out to the companies for comment but did not receive any responses.
The surge in AI chatbot usage coincides with a sensitive period for global elections, including those in the U.S. Covert influence campaigns are increasingly leveraging chatbots, according to a recent report by OpenAI. Senator Mark Warner has pointed to a growing threat of disinformation, warning that the current climate leaves Americans more vulnerable to conspiracy theories than before.
Political Implications
The report has drawn the attention of Republican U.S. House Oversight Committee Chair James Comer, who announced an investigation into NewsGuard. Comer raised concerns about NewsGuard’s potential involvement in censorship, which the organization has denied, clarifying that its collaboration with the Defense Department is focused solely on countering foreign government disinformation.
NewsGuard criticized the committee’s actions as an attempt to intimidate the news watchdog and affirmed its commitment to defending First Amendment rights. The study underscores the necessity for vigilance and critical thinking when using AI chatbots, particularly amidst rising disinformation and political division.