OpenAI Is Testing Watermarks for Its GPT-4o Image Generation Mode

OpenAI is testing a visible watermark for images created with ChatGPT-4o as part of its broader efforts to improve AI content traceability and policy compliance.

As OpenAI rolls out the new GPT-4o image generation mode to free-tier ChatGPT users, the company is also experimenting with both visible and invisible watermarking techniques to help better identify AI-generated visuals. The visible watermark—appearing as an “ImageGen” label—is being tested on outputs from free accounts, while paid users will reportedly continue to receive images without the mark.

As Tibor Blaho shared on Threads, a recent update to the ChatGPT web app introduces an experimental feature that mentions a “watermarked asset pointer” and provides an option to “Save image without watermark”.

The testing coincides with the general availability of ChatGPT-4o’s image generation tool for all users, including those on the free plan, who now have access with daily usage limits. Image generation was previously limited to paid tiers. As the technology expands, so does OpenAI’s effort to make its content traceable, signaling a broader strategy to strengthen content attribution amid mounting scrutiny of AI-generated media.

Visible Marks, Invisible Metadata

While the visible watermark is the most obvious change, OpenAI also embeds metadata in its images using the C2PA standard. These machine-readable identifiers include timestamps, software labels, and origin markers that help verify the content’s provenance. The approach builds on OpenAI’s prior use of C2PA metadata in DALL·E 3 image generation, which began in early 2024.
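For readers curious what that metadata looks like on disk, the sketch below illustrates one way to check a JPEG for an embedded C2PA manifest. It is a simplified heuristic rather than a real validator: C2PA carries its manifest in JUMBF boxes inside APP11 segments, and the code merely scans for that signature. The file name is a placeholder, and genuine provenance verification requires a C2PA-aware tool that checks the manifest’s cryptographic signatures.

```python
# Minimal heuristic check for an embedded C2PA manifest in a JPEG.
# C2PA stores its manifest in JUMBF boxes carried in APP11 (0xFFEB)
# segments; this only detects their presence and does NOT verify the
# manifest's cryptographic signatures. "image.jpg" is a placeholder.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":               # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                   # reached entropy-coded data
            break
        marker = data[i + 1]
        if marker == 0xD9:                    # EOI: end of image
            break
        if marker in (0x01, 0xD8) or 0xD0 <= marker <= 0xD7:
            i += 2                            # standalone markers carry no length
            continue
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 2:i + 2 + seg_len]
        # C2PA manifests live in JUMBF boxes inside APP11 segments
        if marker == 0xEB and b"jumb" in segment and b"c2pa" in segment:
            return True
        i += 2 + seg_len
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1] if len(sys.argv) > 1 else "image.jpg"))
```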

However, the limitations of metadata-based systems are well known. As OpenAI has previously acknowledged, simple manipulations such as cropping, screenshotting, or uploading images to platforms that strip metadata can remove these invisible markers. Despite these shortcomings, the company supports legal frameworks requiring watermarking. OpenAI, alongside Adobe and Microsoft, has backed California’s AB 3211 bill, which would mandate labeling of AI-generated content to help mitigate the risks of misinformation.
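How fragile this is can be shown in a few lines. In the illustrative Pillow snippet below (file names are placeholders), simply re-saving an image produces a copy without the original metadata segments, which is roughly what happens when platforms re-encode uploads.

```python
# Re-saving an image with Pillow, without explicitly copying metadata,
# writes a new file that no longer carries the original EXIF/XMP/C2PA
# segments. "generated.jpg" stands in for any provenance-tagged image.
from PIL import Image

with Image.open("generated.jpg") as im:
    im.save("stripped.jpg", quality=95)  # metadata is not copied by default
```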

OpenAI’s experiments with watermarking date back well before this latest test. In addition to its C2PA metadata rollout for DALL·E 3, the company also developed a text watermarking system for ChatGPT, announced in mid-2024. The tool embedded imperceptible linguistic patterns to help flag AI-generated text. But concerns about accuracy and unintended consequences led OpenAI to delay the rollout.

OpenAI acknowledged the tool’s limitations, noting that the watermark could be removed or obfuscated with simple strategies like rephrasing or machine translation. The company also cited potential fairness concerns for users who rely on AI for second-language writing support.

These earlier efforts provide essential context for the ChatGPT-4o watermarking test. By combining visible and invisible indicators, OpenAI is attempting to strike a balance between usability and traceability.

Industry-Wide Moves Toward Attribution

OpenAI is not alone in rethinking content authentication. Other tech giants have taken parallel steps to mark AI-generated media. In February 2025, Google expanded its SynthID system—originally developed by DeepMind—to Google Photos. The technology now applies to edited images as well as fully generated ones, embedding imperceptible watermarks directly into pixel data. SynthID can survive basic transformations like resizing or light filtering but is less effective against heavy edits or cropping.

Microsoft also adopted watermarking in September 2024 through its Azure OpenAI Service, embedding cryptographically signed metadata into images generated by DALL·E. These metadata entries note the generation source, creation date, and software identifier. The system is part of a broader initiative involving partnerships with Adobe, Truepic, and the BBC to standardize content authentication across platforms.

Meta, meanwhile, took a more direct approach. In February 2024, the company rolled out mandatory visible watermarks on AI-generated content across Facebook, Instagram, and Threads. The label “Imagined with AI” appears on any image created by Meta’s tools or third-party models like Midjourney and DALL·E. Meta has signaled it will soon require similar disclosure for synthetic video and audio, with potential enforcement actions for noncompliance.

Technical Limitations and Research Challenges

Despite growing adoption, watermarking still has weaknesses. In October 2023, researchers from the University of Maryland published a study examining the limits of AI image watermarking. Their findings showed that common watermarking methods can be defeated by adversarial techniques.

For instance, a process called diffusion purification—where Gaussian noise is added to an image and then denoised—can effectively strip imperceptible watermarks from generated images. More concerning is that attackers can “spoof” a watermark onto an unmarked image, making it appear to be AI-generated when it’s not.
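The attack’s structure is simple to sketch. The toy NumPy/SciPy version below adds Gaussian noise and then denoises; note that the paper’s actual attack uses a diffusion model as the denoiser, for which a plain Gaussian blur stands in here purely to illustrate the shape of the technique, not its strength.

```python
# Conceptual sketch of "diffusion purification": add Gaussian noise,
# then denoise. The Maryland paper uses a diffusion model as the
# denoiser; gaussian_filter is a simplistic stand-in for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def purify(img: np.ndarray, noise_std: float = 0.1, blur_sigma: float = 1.5) -> np.ndarray:
    """img: float array scaled to [0, 1]. Returns a 'purified' copy in
    which an imperceptible watermark signal is likely destroyed."""
    noisy = img + np.random.normal(0.0, noise_std, img.shape)  # perturb
    denoised = gaussian_filter(noisy, sigma=blur_sigma)        # reconstruct
    return np.clip(denoised, 0.0, 1.0)
```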

In their paper, the team also described a detection tradeoff: improving accuracy to avoid false negatives (missing a watermark) increases the risk of false positives (flagging unmarked content). These findings suggest that watermarking alone may not be a reliable safety net against manipulated media or misinformation.
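A toy calculation makes the tradeoff concrete. In the illustrative sketch below, detector scores for watermarked and clean images are modeled as overlapping Gaussians with made-up parameters; lowering the decision threshold misses fewer watermarks but flags more clean images.

```python
# Toy illustration of the detection tradeoff: scores for marked and
# unmarked images overlap, so any threshold trades false negatives
# (missed watermarks) against false positives (clean images flagged).
import numpy as np

rng = np.random.default_rng(0)
marked   = rng.normal(1.0, 0.5, 10_000)   # scores for watermarked images
unmarked = rng.normal(0.0, 0.5, 10_000)   # scores for clean images

for threshold in (0.25, 0.5, 0.75):
    fnr = np.mean(marked < threshold)     # watermarks missed
    fpr = np.mean(unmarked >= threshold)  # clean images flagged
    print(f"t={threshold:.2f}  FNR={fnr:.3f}  FPR={fpr:.3f}")
```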

Balancing Transparency With Usability

The decision to limit the visible “ImageGen” watermark to free users has prompted questions about consistency. If paid subscribers continue to receive watermark-free outputs, the result would be a two-tier system that could complicate content tracking and moderation across platforms. An image without a visible tag might circulate more freely, regardless of whether it is AI-generated.

OpenAI’s dual-layered strategy of combining visible marks with metadata may reflect an attempt to mitigate these issues without imposing harsh usability tradeoffs. However, as synthetic media proliferates, watermarking systems will likely need to evolve further to remain effective. Whether the current balance meets the expectations of regulators, platforms, and users is still unclear.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
