OpenAI Enhances DALL-E 3 with C2PA Watermarks for Transparent Image Origins

OpenAI adds watermarks to AI-generated images to combat misinformation. The watermark features the C2PA logo and the image's creation date.

OpenAI has upgraded its DALL-E 3 image generation API to include watermarking, allowing AI-generated images to be identified. The watermark carries the Coalition for Content Provenance and Authenticity (C2PA) logo and the image's creation date, and it is applied without compromising image quality or generation speed. The change helps users recognize whether an image was created by artificial intelligence or by a human artist.

Challenges in Provenance Detection

Nonetheless, concerns remain about how easily these watermarks can be circumvented. OpenAI acknowledges that users can strip the provenance information simply by cropping the image, taking a screenshot of DALL-E output, or manipulating the image's pixels. Furthermore, many social media platforms strip metadata such as C2PA manifests upon upload, which further complicates tracing the authenticity of images online.
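The metadata-stripping problem described above can be illustrated with a minimal sketch using Pillow. Here a plain EXIF tag stands in for C2PA provenance data (this is a hypothetical stand-in; real C2PA manifests are embedded differently, in JUMBF containers). Re-saving the image without explicitly forwarding its metadata, as many upload pipelines do, silently drops the provenance information:

```python
import io
from PIL import Image

# Create a tiny image and attach EXIF metadata as a stand-in for
# C2PA provenance data (illustrative only; real C2PA manifests
# are stored in JUMBF boxes, not plain EXIF).
img = Image.new("RGB", (64, 64), color="white")
exif = Image.Exif()
exif[0x010E] = "AI-generated"  # ImageDescription tag

buf = io.BytesIO()
img.save(buf, format="JPEG", exif=exif)

# Round-trip: load and re-save WITHOUT forwarding the metadata,
# mimicking what many social media upload pipelines do.
buf.seek(0)
loaded = Image.open(buf)

stripped = io.BytesIO()
loaded.save(stripped, format="JPEG")  # exif not passed through

stripped.seek(0)
result = Image.open(stripped)
print(result.getexif().get(0x010E))  # prints None: metadata was dropped
```

The same effect occurs with a screenshot, which produces entirely new pixels with no embedded metadata at all, which is why visible watermarks and platform-side labeling are being pursued alongside embedded provenance.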

Industry-wide Transparency Efforts

In parallel with OpenAI’s efforts, Microsoft has incorporated the C2PA standard into Bing Image Creator, using an invisible digital watermark to certify that images are AI-generated. Meta, meanwhile, is moving toward transparency by labeling content uploaded to Facebook, Instagram, and Threads that was produced using AI, signaling a broader push within the tech industry to develop clear standards for AI content labeling. These initiatives reflect a growing commitment to give users tools to discern the origin of digital content as generative AI advances.

The policy aims to help users recognize when the content they are viewing may not depict reality but is instead the product of machine learning models capable of generating photorealistic images. Meta will display an “Imagined with AI” tag on images created with its own AI image generation feature and plans to do the same for media generated by other companies’ tools. Meta will also soon require users to disclose when they share realistic AI-generated video or audio. Failing to provide proper disclosure may result in action against the user’s account, including warnings or post takedowns.

Last Updated on November 7, 2024 10:37 pm CET

Source: OpenAI
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
