
OpenAI Enhances DALL-E 3 with C2PA Watermarks for Transparent Image Origins

OpenAI adds watermarks to AI-generated images to combat misinformation. The watermark features the C2PA logo and the image's creation date.


OpenAI has upgraded its DALL-E 3 image generation API to add watermarking, allowing AI-generated images to be identified. The company now embeds a watermark carrying the Coalition for Content Provenance and Authenticity (C2PA) logo and the creation date of the image, which remains visible without compromising the image's quality or generation speed. This change helps users recognize whether an image was crafted by artificial intelligence or by a human artist.
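C2PA provenance data is embedded in the image file itself, inside JUMBF boxes whose type tag ("jumb") and manifest label ("c2pa") appear as ASCII bytes. As a rough illustration, the sketch below (my own hypothetical helper, not an OpenAI or C2PA library API) scans a file's raw bytes for those markers; a real check would parse and cryptographically validate the manifest instead.

```python
def has_c2pa_markers(image_bytes: bytes) -> bool:
    # C2PA manifests live in JUMBF boxes; the box type ("jumb") and the
    # manifest label ("c2pa") show up as ASCII strings in the file.
    # Scanning for both is only a crude presence heuristic -- it does
    # not parse the box structure or verify any signatures.
    return b"jumb" in image_bytes and b"c2pa" in image_bytes


# Example: a file with both markers is flagged, plain pixel data is not.
print(has_c2pa_markers(b"\x00\x00jumb...c2pa...\xff"))  # True
print(has_c2pa_markers(b"ordinary pixel data"))         # False
```

A tool like `c2patool` or a C2PA SDK would be the proper way to read and verify the manifest; this scan only suggests whether one might be present.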

Challenges in Provenance Detection

Nonetheless, concerns remain about how easily these watermarks can be circumvented. OpenAI acknowledges that users can remove the provenance information by simply cropping the image, taking a screenshot of the output, or manipulating the image's pixels. Furthermore, many platforms strip metadata such as C2PA manifests when images are uploaded, which further complicates tracing the authenticity of images online.

Industry-wide Transparency Efforts

In parallel with OpenAI's efforts, Microsoft has similarly incorporated the C2PA standard into Bing Image Creator, featuring an invisible digital watermark to certify that images are AI-generated. In addition, Meta is taking steps toward transparency by labeling content uploaded to Facebook, Instagram, and Threads that was produced using AI technology, signaling a broader move within the industry to develop clear standards for AI content labeling. These initiatives reflect a growing commitment to give users tools to discern the origin of digital content as generative AI advances.

The policy aims to help users recognize when the content they are viewing may not depict reality but is instead the product of machine learning algorithms capable of generating photorealistic images. Meta will display the "Imagined with AI" tag on images created with its own AI image-generation feature and plans to do the same for media generated by other companies' tools. Furthermore, Meta will soon require users to disclose when they are sharing realistic AI-created videos or audio. Non-compliance, such as failing to provide proper disclosure, may result in actions against the user's account, including warnings or post takedowns.

Source: OpenAI
Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
