Instagram Tests AI-Content Labeling to Address Misinformation Concerns

Instagram is testing a way to label AI-generated content. The feature would add notifications to posts created or edited using AI.

Instagram is reportedly testing a new way to label AI-generated content. The feature, first spotted by app researcher Alessandro Paluzzi, would add a notification to posts that have been created or edited using AI. The notification would read, “The creator or Meta said that this content was created or edited with AI.”

The feature, which is part of Meta's efforts to promote the ethical use of AI, will reportedly use AI to produce labels that match the style and tone of the original post.

The labels will appear on posts identified as AI-generated, such as deepfakes, neural art, and synthetic text. They will explain what generative AI is, how it works, and how to spot it. The goal is to increase the transparency and accountability of the platform while also acknowledging the creative potential of AI.

The feature is currently being tested with a small group of Instagram users, and Meta has not announced when it will be rolled out more widely. It is one of the outcomes of Meta's commitment to the White House around the responsible development of AI, which also includes investing in research on bias and discrimination and developing a watermarking system to inform users when content is AI-generated.

Meta's CEO Mark Zuckerberg has said that generative AI is “literally going to touch every single one of our products” and that he expects these tools will be valuable for everyone from regular people to creators to businesses. Meta has recently “open-sourced” its large language model LLaMA 2, but it has yet to widely release consumer-facing generative AI features for products like Instagram.

The Challenges of Flagging AI Content

However, detecting AI-generated content is not an easy task, as the technology is becoming more sophisticated and human-like. There is no foolproof method to distinguish between AI and human-written content. It is unclear what methods Meta will use and whether the company can confirm AI content with certainty. The feature could lead to some legitimate content being flagged as AI-generated.

Of course, AI content is not inherently bad, and it is not illegal on Instagram or other platforms. The main concerns come from the accuracy of AI services such as ChatGPT, Google Bard, and Bing Chat. All of these generative AI services are capable of presenting inaccuracies in their content while doing a good job of making their factual mistakes look like accurate information.

Some studies show that chatbots like ChatGPT are getting worse. Companies like OpenAI refute this and say the models are constantly improving as they learn from growing datasets. Still, it is clear the content exodus I discussed months ago is becoming real. It will be hard to know what is AI, what is not, what is accurate, and what is incorrect. Meta and other companies may think they can flag such content, but I am not convinced.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.