
Meta Introduces Mandatory AI-Generated Media Watermarks Across Platforms

Meta labels AI-generated images with "Imagined with AI" to curb misinformation. Policy applies to Facebook, Instagram and eventually videos and audio.


Meta has announced a new content policy across its platforms, including Facebook and Instagram. The initiative, designed to increase transparency, will label images and media that have been generated by artificial intelligence (AI). Under the policy, media created by popular AI generators, such as Meta's own tools, Midjourney, DALL-E, and Bing Image Creator, will bear a watermark stating "Imagined with AI." The watermark indicates that the media is not an original photograph but has been synthesized using AI technology.

Enhancing User Awareness

The policy aims to help users recognize when the content they are viewing may not depict reality but is instead the product of machine learning algorithms capable of generating photorealistic images. Meta will display the "Imagined with AI" tag on images created with its own AI tools and plans to do the same for media generated by other companies' tools. Furthermore, Meta will soon require users to disclose when they are sharing realistic AI-created videos or audio. Failing to provide proper disclosure may result in actions against the user's account, including warnings or post takedowns.

Challenges and Precautions Around Synthetic Media

Despite efforts to mitigate risks, detecting synthetic video and audio completely remains difficult. Meta acknowledges ongoing challenges with content that escapes detection and has therefore engaged in collaborations with partners to improve real-time identification and labeling of synthesized media. As the company's President of Global Affairs, Nick Clegg, emphasizes, there is a particular concern regarding the use of manipulated media to deceive the public, especially around critical times such as elections. To address high-risk situations, Meta may employ more prominent labeling to provide additional information and context to the audience.

The introduction of these labeling mechanisms is part of a broader effort to establish industry standards for clearly marking AI-generated content. YouTube, another major content platform, announced plans in 2023 to help its users identify whether a video was created with the aid of generative AI. Together, these measures reflect the industry's growing commitment to transparency and responsibility in the age of AI-generated content.

Microsoft's Watermarks and Doubts Over Their Effect

Microsoft already watermarks images generated by Bing Image Creator, a step intended to help users identify AI-generated content. AI-generated images are becoming increasingly realistic, and it can be difficult to tell them apart from human-made images. The watermark lets users know that an image was created using AI and credits Microsoft's tool for its creation.

In October, results from a study suggested watermarks on AI images have little effect. Several major tech companies have recently added watermarking – a method of adding metadata to digital content to establish its origin – to bolster security measures against synthetic content produced by their AI models.
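To make the metadata idea concrete, here is a minimal, simplified sketch of labeling an image's provenance by inserting a text metadata chunk into a PNG file. This is not Meta's or Microsoft's actual scheme (those rely on standards such as IPTC metadata and invisible watermarks, and are more robust); the keyword `AI-Label` is a hypothetical name chosen for illustration.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build a PNG chunk: 4-byte length, type, data, CRC-32 over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def add_text_metadata(png: bytes, keyword: bytes, text: bytes) -> bytes:
    """Insert a tEXt metadata chunk immediately after the IHDR chunk."""
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    # IHDR is always first: 8-byte signature + 4 (length) + 4 (type)
    # + 13 (IHDR data) + 4 (CRC) = 33 bytes in.
    ihdr_end = 33
    label = png_chunk(b"tEXt", keyword + b"\x00" + text)
    return png[:ihdr_end] + label + png[ihdr_end:]

# Build a minimal 1x1 grayscale PNG to demonstrate on.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
minimal_png = (b"\x89PNG\r\n\x1a\n"
               + png_chunk(b"IHDR", ihdr)
               + png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))
               + png_chunk(b"IEND", b""))

labeled = add_text_metadata(minimal_png, b"AI-Label", b"Imagined with AI")
```

A viewer or platform could then read the chunk back to decide whether to display a label. The study discussed below illustrates the weakness of this kind of scheme: metadata like this survives only cooperative handling, and re-encoding or stripping the file removes it entirely.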

The research paper "Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks", published on arXiv, details the findings. The team was co-led by Soheil Feizi, Associate Professor of Computer Science at the University of Maryland. He said in an email to The Register that the study reveals "fundamental and practical vulnerabilities of image watermarking as a defense against deepfakes."

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
