
OpenAI’s AI Models Exploited in Global Manipulation Campaigns

OpenAI's investigations have targeted actors leveraging AI models for various malicious aims.


OpenAI has identified and disrupted five covert influence operations that sought to manipulate public opinion and elections using its AI models. These operations, linked to Russia, China, Iran, and Israel, were found to have minimal impact on audience engagement and failed to significantly amplify manipulative messages.

Over the past three months, OpenAI's investigations have targeted actors leveraging AI models for various tasks, including generating multilingual comments and articles, creating fake social profiles, conducting open-source research, debugging code, and translating and proofreading texts. According to a report by OpenAI, the operations were traced to two campaigns in Russia, one in China, one in Iran, and a commercial entity in Israel.

Specific Campaigns and Tactics

  • The Russian operation “Doppelganger” utilized OpenAI's models to generate headlines, convert news articles to Facebook posts, and create comments in multiple languages to undermine support for Ukraine.
  • Another Russian group, dubbed “Bad Grammar,” used OpenAI's models to debug code for a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US, and the Baltic States.
  • The Chinese network “Spamouflage” employed OpenAI's models to research social media activity and generate text-based content in multiple languages across various platforms.
  • The Iranian “International Union of Virtual Media” used AI to generate content in multiple languages.
  • The Israel-based firm STOIC created content related to the Gaza conflict and Israel's trade union, Histadrut.

According to OpenAI, these influence operations scored only a two on Brookings' Breakout Scale, which measures the impact of influence operations from one (spreads within one community on a single platform) to six (provokes a policy response or violence). A rating of two indicates that the fake content appeared on multiple platforms but did not reach authentic audiences. OpenAI's report highlights that these operations often exposed themselves through errors that human operators failed to correct. For example, “Bad Grammar” posted content that included refusal messages from the AI model, revealing the content as AI-generated.

Concerns about Impact on Elections

OpenAI's findings come amidst growing concerns about the impact of generative AI on multiple elections around the world, including in the US. The report sheds light on how networks of individuals engaged in influence operations have used generative AI to produce text and images at much higher volumes than before. These networks also used AI to generate fake comments on social media posts.

Thomas Rid, a professor of strategic studies and founding director of the Alperovitch Institute for Cybersecurity Studies at Johns Hopkins University, noted that while it was anticipated that bad actors would use large language models (LLMs) to enhance their covert influence campaigns, the initial attempts were surprisingly weak and ineffective.

OpenAI worked with various stakeholders across the tech industry, civil society, and governments to disrupt these influence operations. Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, stated in a media briefing that the report aims to provide insights into the use of generative AI in influence operations.

On Wednesday, Meta released its latest report on coordinated inauthentic behavior, detailing how an Israeli marketing firm had used fake Facebook accounts to run an influence campaign targeting people in the US and Canada.

Source: OpenAI
Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
