YouTube is offering video creators more control over how their work is used for artificial intelligence training. A new setting introduced in YouTube Studio lets creators and rights holders explicitly approve or block third-party companies from accessing their videos to train generative AI models.
“Over the next few days, we’ll be rolling out an update where creators and rights holders can choose to allow third-party companies to use their content to train AI models directly in Studio Settings under ‘Third-party training,’” YouTube stated in its official announcement.
The move comes amid escalating concerns from creators about AI companies using publicly available content—without consent or compensation—to improve their systems. YouTube’s response, described as a “first step” toward addressing these frustrations, offers creators the ability to decide how and when their videos are used.
Related: Shutterstock Now Offers Research Licenses For AI Video Model Training
How the New Opt-In System Works
The new feature, found under “Third-party training” in YouTube Studio, is off by default, ensuring no third-party AI developers can use videos for training unless creators specifically opt in. Eligible users—those with administrator roles in YouTube’s Content Manager—can review the list of named third-party companies and authorize or deny access for each.
Related: Meta Introduces Video Seal Framework for Hidden AI Video Watermarks
The list includes well-known generative AI developers such as OpenAI, Anthropic, Meta, Microsoft, Adobe, Apple, Stability AI, and Nvidia. In total, 18 companies are featured initially, all of which YouTube described as “sensible choices for collaboration with creators.”
Creators also have the option to select “All third-party companies,” granting blanket approval for generative AI training, even for firms not listed. Notably, YouTube clarified that this setting applies only to public videos and does not change existing protections under Content ID.
Google’s Use of Content and Creator Frustrations
While creators now control third-party AI access, YouTube says that Google itself will continue to train its own AI models using some YouTube content. This aligns with YouTube’s existing Terms of Service, which permit the company to use uploaded videos for various purposes, including machine learning.
Related: Google Unveils Veo 2 AI Video Generation in 4K; Improves Imagen 3 Image Creator
Creators have long argued that AI companies are benefiting from their work without providing fair compensation. Unauthorized scraping—automated methods to extract large amounts of content—has exacerbated these concerns. YouTube reiterated that such actions remain prohibited, stating:
“Accessing creator content in unauthorized ways, such as unauthorized scraping, remains prohibited.”
However, YouTube did not confirm whether its new system could apply retroactively to existing AI models trained on content without explicit approval.
AI Detection Tools and Expanding Content ID
YouTube’s opt-in system builds on its earlier announcement of plans to expand its Content ID system. Originally designed to protect copyrighted content, Content ID compares uploaded videos against a database of reference files to identify and manage unauthorized use.
The company is now working to develop AI detection tools capable of identifying generative AI replicas of creator content, including voices, faces, and likenesses. This technology aims to address rising concerns about AI systems producing content that closely mimics the work of creators without their approval.
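YouTube has not published the internals of Content ID or its planned likeness-detection tools, but both rest on the same general pattern: an upload is reduced to a fingerprint and compared against fingerprints registered by rights holders. The Python sketch below is a purely conceptual illustration of that reference-matching idea, using made-up string fingerprints and a simple similarity ratio in place of the perceptual audio and video hashing a production system would actually use.

```python
# Conceptual sketch only: YouTube has not disclosed Content ID internals.
# Real systems derive perceptual fingerprints from audio and video frames;
# here, fingerprints are just placeholder strings compared by similarity.
from difflib import SequenceMatcher

# Hypothetical reference database: fingerprints registered by rights holders.
REFERENCE_DB = {
    "ref_video_001": "a1b2c3d4e5f6a7b8c9d0",
    "ref_video_002": "ffeeddccbbaa99887766",
}

def find_matches(upload_fingerprint: str, threshold: float = 0.9):
    """Return (reference_id, similarity) pairs that meet the match threshold."""
    matches = []
    for ref_id, ref_fp in REFERENCE_DB.items():
        similarity = SequenceMatcher(None, upload_fingerprint, ref_fp).ratio()
        if similarity >= threshold:
            matches.append((ref_id, similarity))
    return matches

# A near-identical fingerprint is flagged; an unrelated one is not.
print(find_matches("a1b2c3d4e5f6a7b8c9d0"))   # [('ref_video_001', 1.0)]
print(find_matches("0123456789abcdef0123"))   # []
```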
Related: Amazon Unveils Nova Multimodal AI Models For Text, Image, and Video
Broader Industry and Legal Context
The issue of AI model training on publicly available content has sparked global debate and legal challenges. In the United States, artists suing Stability AI and Midjourney recently cleared a key hurdle when a federal judge allowed their core copyright claims over AI training to move forward.
Meanwhile, in the UK, a coalition of publishers, photographers, and writers has called for stricter government regulation to ensure AI developers compensate creators fairly. The coalition advocates for a formal licensing system to manage the use of creative works in AI development.
These legal disputes highlight the growing tension between generative AI’s rapid development and content ownership rights. YouTube’s update may set a precedent for platforms balancing AI innovation with creator protections.
Related: OpenAI Releases Sora AI Video Generator to ChatGPT Plus and Pro Subscribers
A Step Toward Collaboration
YouTube framed the new opt-in controls as part of its effort to support creators while enabling AI companies to build their systems responsibly. By providing an official, permission-based framework, YouTube aims to foster clearer relationships between video creators and AI developers.
The company says the update is an initial step to help creators gain control over how their content is used in AI model development, and potentially to open up new opportunities for their videos.
Creators globally will be notified of the new feature via alerts in YouTube Studio over the coming days, on both desktop and mobile platforms.