Google Launches SynthID Detector to Identify AI-Made Media

Google has unveiled SynthID Detector at I/O 2025, a tool to identify AI-generated media via digital watermarks, tackling deepfake and misinformation concerns

Google unveiled its SynthID Detector at its I/O conference. This public tool identifies AI-created media by checking for embedded digital watermarks in images, video, audio, and text. The SynthID Detector is Google’s response to the rapid increase in AI-generated content, which raises concerns about deepfakes and misinformation.

Estimates suggest a 550% rise in deepfake videos between 2019 and 2024, and an increasing number of highly viewed social media posts are now AI-created. According to Google, when users upload content, the tool returns whether the “entire file or just a part of it has SynthID [watermark] in it.”

The tool is currently rolling out to early testers. A waitlist is available for journalists, researchers, and developers, as the detector is still in its development phase.

How SynthID Detector Identifies AI Content

SynthID Detector scans files for invisible watermarks embedded by Google’s AI. When a watermark is found, the tool pinpoints the parts of the content most likely to contain it: for audio files, it identifies the specific segments where it detects a SynthID watermark, and for photos, it highlights the areas where a watermark is most likely present.
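Conceptually, this means the detector returns a per-segment report rather than a single yes/no verdict. The sketch below is purely illustrative — Google has not published a SynthID Detector API, so every name here (`Segment`, `DetectionReport`, `summarize`) is hypothetical — but it shows the shape of the output the article describes:

```python
from dataclasses import dataclass

# Hypothetical illustration only: Google has not published a public
# SynthID Detector API. This models the kind of per-segment report
# described above (time ranges in audio, highlighted regions in
# images) rather than any real interface.

@dataclass
class Segment:
    start: float        # start offset (e.g. seconds into an audio file)
    end: float          # end offset
    confidence: float   # estimated likelihood a watermark is present

@dataclass
class DetectionReport:
    media_type: str          # "image", "audio", "video", or "text"
    watermarked: bool        # does any part of the file carry a watermark?
    segments: list[Segment]  # the parts most likely to contain it

def summarize(report: DetectionReport) -> str:
    """Render a human-readable verdict from a per-segment report."""
    if not report.watermarked:
        return f"No SynthID watermark found in this {report.media_type}."
    return (f"SynthID watermark detected in {len(report.segments)} "
            f"segment(s) of this {report.media_type}.")

report = DetectionReport(
    media_type="audio",
    watermarked=True,
    segments=[Segment(start=12.0, end=45.5, confidence=0.93)],
)
print(summarize(report))
# SynthID watermark detected in 1 segment(s) of this audio.
```

The key design point the article implies is that detection is localized: a file can be flagged as only partially AI-generated, which a single boolean could not express.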

Google claims SynthID “acts as a robust watermark that remains detectable even when the content is shared or undergoes a range of transformations.”

This watermarking approach isn’t entirely new for Google. The company previously integrated SynthID into Google Photos for images altered with its AI-powered Magic Editor. These watermarks were readable using Google’s “About this image” tool.

Expanding Watermarks Across Google’s AI Suite

Google plans to embed SynthID watermarks across its latest generation of creative AI tools like Veo 3, a video generator now capable of creating synchronized audio. Eli Collins, Google DeepMind’s product vice president, explained that Veo 3 capably handles prompts for complex scenes, physics, and accurate lip-syncing. This marks a significant step, with DeepMind CEO Demis Hassabis stating, “we’re emerging from the silent era of video generation.”

Other tools receiving SynthID watermarking include Imagen 4, for enhanced image detail and text rendering, and Lyria 2, for music generation. Lyria 2 is also featured in the Music AI Sandbox. Flow, Google’s new AI filmmaking assistant based on Veo 3 and accessible via a Google portal, will also integrate these watermarked outputs. Access to some of these advanced tools comes through new subscription plans, such as the Google AI Ultra plan. Further extending SynthID’s reach, Google is partnering with NVIDIA to mark media from the NVIDIA Cosmos model. GetReal Security, a service provider that detects malicious digital content and deepfakes, will also be able to verify SynthID watermarks.

The Broader Challenge of AI Authenticity

The introduction of SynthID Detector comes as Google acknowledges the evolving landscape. As a Google statement noted, while generative AI enables new forms of content creation, as these capabilities advance, “questions of authenticity, context and verification emerge.”

However, the system has limitations. Its effectiveness is primarily within Google’s ecosystem. Google also admits that SynthID is not infallible. It can be bypassed, particularly with text or through extreme modifications to images.

Concerns about watermark robustness are shared by experts. A University of Maryland study found that adversarial techniques can often remove AI watermarks. The researchers concluded, “Watermarks offer value in transparency efforts, but they do not provide absolute security against AI-generated content manipulation.” The broader tech industry is also grappling with AI content verification. Microsoft, Meta (including its Video Seal framework), and OpenAI are developing their own labeling and watermarking methods, leading to a somewhat fragmented detection landscape.

Regulatory Landscape and Ongoing Debates

Regulatory bodies are increasingly scrutinizing AI-generated content. The Biden administration issued an executive order in October 2023 mandating stronger watermarking, as reported by The Verge. Similarly, the European Union’s AI Act imposes requirements for labeling and detecting AI-created media.

Beyond technical solutions, ethical questions persist. Transparency regarding the datasets used to train these powerful AI models remains a key discussion point. Google’s Gemini privacy policy mentions data collection, while copyright protection is another major concern, highlighted by RIAA lawsuits against AI music startups. The debate also extends to the impact on human creativity.

This sentiment was captured by author Joanna Maciejewska, who said, “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
