OpenAI has upgraded its DALL-E 3 image generation API to include watermarking, allowing AI-generated images to be identified. Images now carry metadata following the Coalition for Content Provenance and Authenticity (C2PA) standard, including the image's creation date, along with a visible watermark bearing the C2PA mark, added without compromising image quality or generation speed. This change helps users recognize whether an image was crafted by artificial intelligence or by a human artist.
Challenges in Provenance Detection
Nonetheless, concerns remain about how easily these watermarks can be circumvented. OpenAI acknowledges that users can remove the provenance information by simply cropping a DALL-E output, taking a screenshot of it, or manipulating its pixels. Furthermore, many social media platforms strip metadata such as C2PA manifests when images are uploaded, which further complicates tracing the authenticity of images online.
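To see why a screenshot or re-encode defeats this scheme, it helps to know that C2PA embeds its manifest in a JPEG's APP11 marker segments (as JUMBF boxes): any operation that re-renders the pixels produces a new file without those segments. The sketch below is a simplified, hypothetical checker, not a real C2PA validator (which would also parse and cryptographically verify the manifest); it only tests whether an APP11 segment survives in the byte stream at all.

```python
def has_app11_segment(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains at least one APP11
    segment, the marker C2PA uses to embed its JUMBF manifest.

    Simplified sketch: a real verifier would parse the JUMBF boxes and
    validate the manifest's signature, not just find the segment.
    """
    if not data.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:               # lost sync with marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                # SOS: entropy-coded data follows
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB:                # APP11 carries JUMBF/C2PA payloads
            return True
        i += 2 + length                   # skip marker + segment body
    return False


# A minimal synthetic JPEG with an APP11 segment retains the manifest;
# a re-encoded copy without that segment (e.g. after a screenshot) does not.
original = b"\xff\xd8" + b"\xff\xeb\x00\x06JP\x00\x00" + b"\xff\xda"
screenshot = b"\xff\xd8" + b"\xff\xda"
print(has_app11_segment(original))    # True
print(has_app11_segment(screenshot))  # False
```

Because the check depends entirely on bytes that re-encoding discards, it illustrates why OpenAI describes the provenance data as easy to remove: nothing ties the manifest to the pixels once the file is rewritten.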
Industry-wide Transparency Efforts
In parallel with OpenAI's efforts, Microsoft has incorporated the C2PA standard into Bing Image Creator, using an invisible digital watermark to certify that images are AI-generated. Meta, meanwhile, is moving toward transparency by labeling AI-generated content uploaded to Facebook, Instagram, and Threads, signaling a broader push within the tech industry to establish clear standards for AI content labeling. These initiatives reflect a growing commitment to give users tools to discern the origin of digital content as generative AI advances.
Meta's policy aims to help users recognize when the content they are viewing may not depict reality but is instead the product of machine learning models capable of generating photorealistic images. Meta will display an "Imagined with AI" tag on images created with its Meta AI feature and plans to do the same for media generated by other companies' tools. Meta will also soon require users to disclose when they share realistic AI-generated video or audio; failure to do so may result in actions against the user's account, including warnings or post takedowns.