HomeWinBuzzer NewsOpenAI's ChatGPT Plugins Pose Security and Privacy Risks

OpenAI’s ChatGPT Plugins Pose Security and Privacy Risks

ChatGPT plugins are a powerful way to extend the functionality of ChatGPT, but they also pose a number of security and privacy risks.

-

In March 2023, OpenAI launched a plugin platform for ChatGPT that allows developers to create plugins that extend the functionality of . However, Wired reports that security researchers have warned that ChatGPT plugins pose a number of security and privacy risks.

One of the main risks is that plugins could be used to inject malicious code into ChatGPT sessions. This could allow attackers to steal data, install malware, or even take control of a user's computer.

Another risk is that plugins could be used to collect user data without their knowledge or consent. For example, a plugin could be used to track a user's browsing activity or record their conversations with ChatGPT.

A security hobbyist and red team director at Electronic Arts, Johann Rehberger, is one of the researchers who has been exploring ChatGPT Plugins in his spare time. He has uncovered how they can be exploited to access someone's chat history, obtain personal information, and execute code on someone's computer without their consent.

He has mainly concentrated on plugins that use OAuth, a web standard that enables you to share data between online accounts. Rehberger says he has privately notified several plugin developers of the problems, and has also tried to contact a few times. “ChatGPT cannot trust the plugin,” Rehberger says. “It fundamentally cannot trust what comes back from the plugin because it could be anything.”

OpenAI has taken some steps to mitigate these risks. For example, the company requires that all plugins be reviewed by OpenAI before they can be published. However, security researchers say that more needs to be done to protect users from the risks posed by ChatGPT plugins.

Here are some of the security and privacy risks posed by ChatGPT plugins

  • Malicious code injection: Plugins could be used to inject malicious code into ChatGPT sessions, which could allow attackers to steal data, install malware, or even take control of a user's computer.
  • Data collection: Plugins could be used to collect user data without their knowledge or consent. For example, a plugin could be used to track a user's browsing activity or record their conversations with ChatGPT.
  • Phishing attacks: Plugins could be used to launch phishing attacks. For example, a plugin could be used to generate a fake ChatGPT session that looks like the real thing, which could trick users into entering their personal information.

In May, Rehberger said ChatGPT users should beware of “prompt injections,” a new security threat that allows outsiders to manipulate your ChatGPT queries without your consent. Rehberger demonstrated this by inserting new prompts into a ChatGPT query that he did not initiate.

He used a ChatGPT plugin that summarizes YouTube transcripts and edited the transcript to add a prompt that instructed ChatGPT to call itself by a specific name at the end. ChatGPT followed the prompt and changed its name accordingly. While these prompts injections are fairly safe, they highlight how the method can be used for malicious purposes.

Last Updated on July 26, 2023 7:00 pm CEST by Luke Jones

SourceWired
Luke Jones
Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.

Recent News