Meta’s AI detection system, launched in February, has been incorrectly flagging authentic photos as “AI-generated,” causing frustration among photographers. PetaPixel reports that the issue has surfaced on Facebook, Instagram, and Threads, with the label appearing primarily on mobile devices. One such incident involved an image from former White House photographer Pete Souza; another involved a photo of the Kolkata Knight Riders celebrating their cricket victory.
AI Editing Tools Causing Mislabels
Photographers speculate that certain editing tools might be triggering the mislabeling. Pete Souza told TechCrunch he thinks cropping and flattening images using Adobe software could be part of the problem. Even minor adjustments, such as removing small objects with Adobe’s Generative Fill, have led to photos being falsely labeled. PetaPixel’s tests confirmed that minimal edits are enough to cause the “Made by AI” tag.
Photographer Peter Yan asked Instagram head Adam Mosseri on Threads why his genuine photograph of Mount Fuji had been tagged “Made with AI.” Mosseri responded by asking, “Was this label applied automatically?” The tag was later removed from Yan’s image.
Similar posts from other photographers illustrate that the mislabeling of real photos is a common occurrence.
Meta has acknowledged the problem to media outlets including The Verge and TechCrunch and says efforts are underway to improve the labeling mechanism.
The company uses metadata indicators from sources like Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock to identify AI-generated content. Adobe’s Content Credentials system, which embeds metadata about the content’s origins, is one tool used by Meta.
Origins of Meta’s AI Labeling
Meta began labeling AI-generated photos on its social networks this February and rolled the system out broadly by May. Its aim is to mitigate misinformation, notably during election periods. However, the current mislabeling issues have sparked demands for a more refined system. Photographer Noah Kalina remarked that tagging retouched photos as “Made with AI” undermines the term’s integrity, suggesting that by that standard, all photos might need a disclaimer about their authenticity.
Meta is developing tools to detect imperceptible markers at scale, specifically information embedded through the C2PA and IPTC technical standards. While some photographers agree that the use of AI tools should be disclosed, Meta currently offers no separate labels to distinguish a tool used for minor edits from one used for complete AI generation. When tapped, the label reads, “Generative AI may have been used to create or edit content in this post.”
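To make the mechanism concrete: the IPTC Photo Metadata Standard defines a “digital source type” vocabulary that editing tools can embed in a file’s XMP metadata, with terms such as `trainedAlgorithmicMedia` (fully AI-generated) and `compositeWithTrainedAlgorithmicMedia` (AI-assisted edits like Generative Fill). The following is a minimal illustrative sketch of how a platform might scan a file for those markers; it is not Meta’s actual detector, which operates at a far more sophisticated level, and the function name is hypothetical.

```python
# Illustrative sketch only -- not Meta's real pipeline.
# IPTC digital-source-type terms that signal generative AI involvement.
# Checking the longer composite term is redundant for a boolean result
# (it contains the shorter one as a substring), but listing both keeps
# the vocabulary explicit.
AI_SOURCE_TYPE_MARKERS = (
    b"trainedAlgorithmicMedia",               # fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",  # AI-assisted edits
)

def has_ai_metadata(path: str) -> bool:
    """Return True if the file's raw bytes contain an IPTC
    digital-source-type marker associated with generative AI."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_SOURCE_TYPE_MARKERS)
```

A naive substring scan like this also illustrates the over-triggering problem photographers describe: a file carries the `compositeWithTrainedAlgorithmicMedia` marker whether Generative Fill rewrote the entire frame or removed a single speck of dust, so a detector that keys on the marker alone cannot tell the difference.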
Despite efforts, many AI-generated images on Meta’s platforms still evade proper labeling. As U.S. elections draw near, social media companies face increasing pressure to handle AI-generated content accurately. Meta says it is working closely with industry partners to enhance the precision of its AI detection algorithms, aiming to reduce the misclassification of genuine photographs.
Last Updated on November 7, 2024 3:47 pm CET