
OpenAI Struggles With GPT Store Moderation as 100+ Custom GPTs Violate Policies

A Gizmodo analysis found that over 100 tools on OpenAI's GPT marketplace violate the company's content policies, including explicit content generators, academic cheating aids, and tools offering unvetted medical and legal advice.


An investigation by Gizmodo has revealed that more than 100 tools on OpenAI's GPT marketplace contravene the company's own content regulations. These include apps that produce explicit material, aid in academic cheating, and provide unvetted medical and legal counsel.

Policy Breaches in OpenAI's GPT Store

The GPT Store was rolled out nine months ago, allowing users to create and distribute customized variants of ChatGPT. However, several of the available GPTs reportedly break OpenAI's rules, which forbid explicit content, unapproved medical and legal advice, and tools that support cheating or gambling.

On September 2, several GPTs showcased on the homepage were found to contravene these guidelines, including a “Therapist – Psychologist” bot, a fitness trainer, and BypassGPT, which helps students evade AI plagiarism detection. A search for “NSFW” surfaced the NSFW Generator, a tool designed to produce pornographic AI art.

Content Moderation Hurdles

Milton Mueller, director of the Internet Governance Project at the Georgia Institute of Technology, pointed out the contradiction between OpenAI's stated aims and the current situation: despite the company's mission to shield humanity from AI-related risks, it struggles to enforce basic content moderation on its own platform.

After Gizmodo alerted OpenAI to more than 100 GPTs that violated policies, the company removed several problematic tools, including AI porn generators and sports betting advisors. Nonetheless, GPTs that offer dubious medical advice and cheating tools are still available and even featured on the marketplace's homepage.

OpenAI's Reaction and Ongoing Challenges

OpenAI spokesperson Taya Christianson indicated that the company uses a mix of automated systems, human review, and user reports to detect and address policy violations. She told Gizmodo that OpenAI has acted against violators and continues to provide in-product reporting tools for users to flag inappropriate content.

However, this strategy has not fully resolved the problem. The store still lists GPTs that give false medical and legal advice. For example, a GPT named AI Immigration Lawyer claims to offer expert legal insights, while research from Stanford University highlights that OpenAI's models frequently produce inaccurate legal information.

OpenAI plans to launch a revenue-sharing model that rewards developers based on their tools' usage. This approach could complicate content moderation further, as developers might prioritize building popular but potentially harmful tools.

I reported in March that the GPT Store has faced significant challenges related to spam and copyright infringement. As the store grew rapidly, the quality of offerings declined, with some GPTs infringing on copyrights or promoting academic dishonesty. OpenAI says it has been working to address these issues through various measures, but the scale of the problem remains a challenge.

Source: Gizmodo
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
