YouTube has rolled out new AI detection technologies designed to protect creators from unauthorized use of their likenesses. The tools build on YouTube's existing Content ID system to identify AI-generated renditions of creators' voices.
Updated Content ID Capabilities
Previously used to spot copyrighted material, the Content ID system is being expanded to detect AI-generated renditions of artists' voices, alongside new technology for identifying AI-generated faces. Additionally, YouTube is working on mechanisms to regulate the use of its platform for training AI models, responding to creators' concerns about unauthorized usage by companies like Apple, Nvidia, Anthropic, OpenAI, and Google.
In a blog post, YouTube says it is developing new ways for creators to control third-party use of their content. While detailed plans have yet to be shared, YouTube promises more clarity later this year. This follows earlier efforts to compensate musicians whose work is used to create AI-generated content, notably in collaboration with Universal Music Group (UMG).
Safeguards Against AI Abuse
The platform is also building a system to help prominent figures detect AI-generated content that uses their faces without permission. The initiative aims to curb unauthorized endorsements and misinformation. Though still in development, YouTube emphasizes its commitment to ensuring AI supplements rather than substitutes human creativity.
YouTube further stresses its dedication to preventing unauthorized data scraping from its platform that contravenes its Terms of Service. The company plans to begin testing the synthetic-singing detection technology early next year in collaboration with UMG.