Google Unveils SynthID Detector to Combat AI-Generated Content

Google has launched SynthID Detector, a verification portal that identifies content watermarked with SynthID technology. Announced at Google I/O 2025, the tool can detect AI-generated images, text, audio, and video created using Google's AI models. With over 10 billion pieces of content already watermarked, Google is initially rolling out access to journalists, media professionals, and researchers through a waitlist system.
In an era where distinguishing between human-created and AI-generated content has become increasingly challenging, Google has introduced a critical new tool in the fight against misinformation and deepfakes.

Announced at Google I/O 2025 in May, SynthID Detector provides a centralized verification portal where users can upload media to determine if it contains Google's invisible SynthID watermarks. The system can analyze images, text, audio, and video, highlighting specific portions most likely to be AI-generated.

"As these capabilities advance and become more broadly available, questions of authenticity, context and verification emerge," stated Pushmeet Kohli, Vice President of Science and Strategic Initiatives at Google DeepMind. The portal addresses these concerns by providing essential transparency in the rapidly evolving landscape of generative media.

SynthID technology has already watermarked over 10 billion pieces of content since its initial launch in 2023. Originally focused on AI-generated imagery, Google has expanded the technology to cover text, audio, and video content generated by its Gemini, Imagen, Lyria, and Veo models.

While SynthID represents a significant advancement in content verification, it has limitations. The system works primarily within Google's ecosystem, though the company has partnered with NVIDIA to watermark videos from its Cosmos model. Google also acknowledges that SynthID is not infallible: the watermark can be bypassed, particularly in text or through extreme modifications to images.

A University of Maryland study found that adversarial techniques can often remove AI watermarks, with researchers concluding that "watermarks offer value in transparency efforts, but they do not provide absolute security against AI-generated content manipulation."

To expand SynthID's reach, Google has open-sourced its text watermarking framework and partnered with GetReal Security for third-party verification. The company is currently rolling out SynthID Detector to early testers, with journalists, media professionals, and researchers able to join a waitlist for access.
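To see how statistical text watermarks of this general family can be detected, consider the sketch below. It is a deliberately simplified "green-list" scheme, not SynthID's actual algorithm (SynthID Text uses a different, tournament-based sampling approach): a secret key and the previous token pseudo-randomly partition the vocabulary, the generator is biased toward the "green" half, and a detector holding the key measures how often tokens land in that half. All names (`greenlist`, `green_fraction`) are illustrative.

```python
import hashlib
import random


def greenlist(prev_token: str, vocab: list[str], key: str, frac: float = 0.5) -> set[str]:
    """Derive a keyed, pseudo-random 'green' subset of the vocabulary
    from the previous token. The same (key, prev_token) pair always
    yields the same subset, so a detector with the key can recompute it."""
    seed = hashlib.sha256((key + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    k = int(len(vocab) * frac)
    return set(rng.sample(vocab, k))


def green_fraction(tokens: list[str], vocab: list[str], key: str) -> float:
    """Fraction of tokens falling in the green list for their context.
    Unwatermarked text hovers near the list fraction (0.5 here); text from
    a generator biased toward green tokens scores noticeably higher."""
    hits = sum(
        tokens[i] in greenlist(tokens[i - 1], vocab, key)
        for i in range(1, len(tokens))
    )
    return hits / max(1, len(tokens) - 1)
```

This toy also illustrates the robustness limits the Maryland researchers describe: paraphrasing replaces tokens, pulling the green fraction back toward chance and weakening the statistical signal.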
