Google Photos users will soon see clear indicators when images are edited using AI-powered tools. Launched this week, the new feature places an “AI info” label in the image details to specify when editing tools like Magic Editor or Magic Eraser have modified a photo. Google says the update reflects broader industry momentum toward making AI content recognizable, addressing rising concerns over the trustworthiness of digital images altered by machine learning.
Metadata Standards: Bringing AI Transparency to Google Photos
Google’s approach embeds AI-editing information in image metadata using the International Press Telecommunications Council (IPTC) standard, which ensures that these labels are visible when viewing the photo’s details. IPTC is a consortium of the world’s major news agencies, other news providers, and news industry vendors. It acts as the global standards body of the news media.
Although metadata marking AI involvement was already present, it was previously accessible only through metadata-viewing tools. According to John Fisher, Google Photos’ engineering lead, this added visibility underscores Google’s goal of making AI modifications transparent and accessible to users within the app.
The IPTC standard plays a vital role here by providing uniform labeling for digital media. It is widely used across the media industry to embed metadata fields, such as the time and place of capture, into images, and it gives platforms a consistent way to signal when content has been modified or AI-assisted.
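To make the mechanism concrete, here is a minimal sketch that reads the IPTC digital source type field from a photo’s embedded metadata. It assumes the open-source exiftool CLI is installed; the file name edited.jpg is hypothetical, and the tag shown reflects IPTC’s published vocabulary rather than any Google documentation.

```python
# Minimal sketch: read the IPTC digital source type from an image's XMP
# metadata via the exiftool CLI (must be installed separately).
# The file name is hypothetical.
import json
import subprocess

def read_digital_source_type(path: str) -> str | None:
    """Return the IPTC/XMP DigitalSourceType value, if present."""
    result = subprocess.run(
        ["exiftool", "-json", "-XMP-iptcExt:DigitalSourceType", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(result.stdout)[0]
    return tags.get("DigitalSourceType")

if __name__ == "__main__":
    value = read_digital_source_type("edited.jpg")
    # IPTC's controlled vocabulary marks AI-edited images with values such as
    # "compositeWithTrainedAlgorithmicMedia" (printed as a URI or a
    # human-readable form, depending on exiftool's conversion settings).
    print(value or "No AI-source metadata found")
```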
Broader Context: AI Labeling Beyond Google
Google’s transparency measures align with recent industry moves by Microsoft, Meta, and OpenAI, which are also adopting metadata labels and watermarks for AI-generated content.
Microsoft began embedding a small Bing watermark in images created through Bing Image Creator in June 2023, aimed at helping users distinguish human-made content from AI-produced visuals. The watermark, a small Bing icon at the bottom of the image, indicates that the picture was created using Bing Image Creator; the change was announced at Microsoft’s Build 2023 conference.
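As a rough illustration of how such a visible badge can be composited onto a generated image, the sketch below uses Pillow. It is emphatically not Microsoft’s implementation; the file names and the corner placement are assumptions for the example.

```python
# Minimal sketch of a visible corner watermark, in the spirit of Bing's badge.
# NOT Microsoft's implementation; file names and placement are hypothetical.
# Requires Pillow (pip install Pillow).
from PIL import Image

def add_corner_badge(image_path: str, badge_path: str,
                     out_path: str, margin: int = 12) -> None:
    """Paste a small badge near the bottom-left corner of an image."""
    base = Image.open(image_path).convert("RGBA")
    badge = Image.open(badge_path).convert("RGBA")
    position = (margin, base.height - badge.height - margin)
    base.paste(badge, position, badge)  # use the badge's alpha as the mask
    base.convert("RGB").save(out_path, "JPEG")

add_corner_badge("generated.png", "badge.png", "watermarked.jpg")
```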
OpenAI and Meta have taken similar steps. OpenAI added C2PA (Coalition for Content Provenance and Authenticity) watermarks to its DALL-E 3-generated images in early 2024. Unlike the IPTC standard, C2PA leverages cryptographic metadata, providing additional traceability for images across platforms.
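For readers who want to see this provenance data, the open-source c2patool CLI from the Content Authenticity Initiative can print an image’s C2PA manifest. The sketch below simply shells out to it; the tool must be installed separately, and the file name is hypothetical.

```python
# Hedged sketch: dump an image's C2PA manifest using the open-source
# c2patool CLI (github.com/contentauth/c2patool), installed separately.
# The file name is hypothetical; c2patool errors if no manifest exists.
import subprocess

result = subprocess.run(
    ["c2patool", "dalle_output.png"],
    capture_output=True, text=True,
)
# On success, stdout holds a JSON manifest: the claim generator,
# assertions (e.g., which AI tool was used), and signature details.
print(result.stdout or result.stderr)
```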
Even so, OpenAI recognizes that users may manipulate the provenance information associated with DALL-E outputs by cropping, taking screenshots, or altering image pixels. It is possible to remove watermarks and obfuscate the original source. Additionally, many social media platforms remove metadata such as C2PA upon image upload, further hindering the ability to trace the authenticity of images disseminated online.
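The fragility of metadata-based provenance is easy to demonstrate: decoding an image to raw pixels and saving a fresh file, which is effectively what many upload pipelines do, carries over none of the containers where C2PA manifests live. A minimal sketch with Pillow, using hypothetical file names:

```python
# Why re-encoding defeats metadata provenance: writing a new file from raw
# pixels preserves none of the original metadata segments (EXIF, XMP, or
# the JUMBF boxes that hold C2PA manifests). File names are hypothetical.
from PIL import Image

original = Image.open("provenance_tagged.jpg")
print("EXIF present before:", "exif" in original.info)  # True if tagged

# Rebuild the image from pixel data alone, then save a fresh JPEG.
pixels_only = Image.frombytes(original.mode, original.size, original.tobytes())
pixels_only.save("reuploaded.jpg", "JPEG")

print("EXIF present after:", "exif" in Image.open("reuploaded.jpg").info)  # False
```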
Meta, for its part, has rolled out mandatory labeling of AI-generated images on Facebook and Instagram, marking media with “Imagined with AI” to help users understand content origins. Meta’s initiative covers multiple AI tools, including its own, and plans are underway to expand it to video and audio content.
Challenges in Watermark Effectiveness and Industry Response
Despite these developments, AI watermarking’s limitations remain a topic of debate. Research published in October 2023 by the University of Maryland found that current watermarking techniques can be circumvented. The study, led by Professor Soheil Feizi, revealed weaknesses in watermark defenses against deepfakes, noting that “diffusion purification”—a method that slightly distorts an image and applies a denoising process—can effectively bypass watermark detection.
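To convey the idea behind diffusion purification, the toy sketch below perturbs a watermarked image with Gaussian noise and then denoises it, leaving the picture visually similar while destroying fragile watermark signals. The actual attack uses a diffusion model as the denoiser; the Gaussian blur here is a crude stand-in, and the file names are hypothetical.

```python
# Toy illustration of "diffusion purification": add noise, then denoise,
# so fragile watermark signals are destroyed while the image stays similar.
# A real attack denoises with a diffusion model; Gaussian blur is a crude
# stand-in, and file names are hypothetical. Requires numpy + Pillow.
import numpy as np
from PIL import Image, ImageFilter

rng = np.random.default_rng(0)
img = np.asarray(Image.open("watermarked.png").convert("RGB"), dtype=np.float32)

# Step 1 ("diffusion"): perturb the image with moderate Gaussian noise.
noised = np.clip(img + rng.normal(0.0, 20.0, img.shape), 0, 255).astype(np.uint8)

# Step 2 ("purification"): denoise; the real attack uses a diffusion model.
purified = Image.fromarray(noised).filter(ImageFilter.GaussianBlur(radius=1.5))
purified.save("purified.png")
```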
Watermarking becomes more challenging for social platforms like Meta, where metadata may be stripped out during uploads, further complicating efforts to track AI-altered content. According to Meta, additional partnerships are being explored to enhance real-time identification and provide context, particularly around high-stakes events like elections.
Regulatory Pressure and Legislative Frameworks in AI Labeling
Rising regulatory oversight has also driven the push for AI transparency measures. In a landmark decision, the European Union passed its AI Act, which mandates clear labeling for AI-generated media and sets guidelines for “high-risk” AI applications. The Act holds platforms accountable for AI content that could impact individual safety, public health, or democratic processes, requiring companies like Google, Microsoft, and Meta to mark AI-generated content and document the data used to train their models.
Alongside the AI Act, the EU’s Digital Services Act (DSA) includes stipulations for online platforms to prevent the spread of harmful AI-generated content, like synthetic media or deepfakes. Tech companies such as Google and Meta have acknowledged these requirements, though they also raise concerns about balancing compliance with continued innovation.
Expanding Scope of AI Transparency with New Technologies
This latest update to Google Photos follows the company’s earlier introduction of SynthID, an AI watermarking tool that can label and detect AI-generated images, text, video, and audio. SynthID initially launched for images and was extended to cover video content in May 2024.
Unlike typical metadata, SynthID applies a subtle digital imprint directly to the pixels of an image, and to each frame of a video, offering a way to detect AI-generated content without degrading the original image quality. This evolution in watermarking technology sets SynthID apart from traditional labeling methods, although it is not immune to determined removal attempts, such as the purification attacks described above.
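As a conceptual contrast with metadata labels, the toy example below hides a keyed pseudorandom bit pattern in an image’s pixels and later tests for it. This is not SynthID’s proprietary algorithm, only an illustration of in-pixel marking; a naive least-significant-bit mark like this is fragile to compression, which is precisely the robustness problem production watermarks try to solve. File names are hypothetical.

```python
# Toy pixel-domain watermark: hide a pseudorandom bit pattern in the blue
# channel's least significant bits, then test for it. NOT SynthID's
# proprietary algorithm, only an illustration of in-pixel (non-metadata)
# marking; file names are hypothetical. Requires numpy + Pillow.
import numpy as np
from PIL import Image

def embed(img: np.ndarray, seed: int = 42) -> np.ndarray:
    """Overwrite each pixel's blue-channel LSB with a keyed bit pattern."""
    pattern = np.random.default_rng(seed).integers(0, 2, img.shape[:2], dtype=np.uint8)
    out = img.copy()
    out[..., 2] = (out[..., 2] & 0xFE) | pattern
    return out

def detect(img: np.ndarray, seed: int = 42) -> float:
    """Fraction of pixels matching the keyed pattern (~1.0 marked, ~0.5 not)."""
    pattern = np.random.default_rng(seed).integers(0, 2, img.shape[:2], dtype=np.uint8)
    return float(((img[..., 2] & 1) == pattern).mean())

frame = np.asarray(Image.open("frame.png").convert("RGB"))
print(f"match rate: {detect(embed(frame)):.2f}")  # ≈ 1.00 for a marked frame
```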
The development of AI labeling protocols highlights the tech industry’s varied approaches to AI transparency. Google’s SynthID stands apart from Microsoft’s visible Bing watermark in its ability to adapt to multiple content formats. While Google and OpenAI work to refine SynthID and C2PA labeling, respectively, Microsoft’s approach continues to prioritize clarity by visibly marking AI-created images with a recognizable Bing icon, in use since June 2023.