Google DeepMind is broadening access to its Music AI Sandbox, equipping the platform with its updated Lyria 2 music generation model and introducing new features aimed at musicians, songwriters, and producers. This expansion positions the Music AI
Sandbox as a collaborative toolset for creators, arriving shortly after Lyria’s initial debut in a limited enterprise preview and against a backdrop of ongoing industry contention over the use of copyrighted material for training AI music models.
The updated Music AI Sandbox provides a suite of experimental tools built upon Google’s latest music generation model, Lyria 2, which the company claims produces high-fidelity, professional-grade 48kHz stereo audio outputs across diverse genres.
A related model, Lyria RealTime, allows for interactive music creation and manipulation on the fly. Key features within the Sandbox include ‘Create’, which generates musical parts from text descriptions or user-supplied lyrics; ‘Extend’, designed to generate musical continuations from existing audio clips, aimed at sparking ideas; and ‘Edit’, which provides controls to transform the mood or style of audio clips using presets or text prompts and can also blend different sections.
Google frames this development as an extension of its long-term engagement with the music community, referencing work dating back to the Magenta project in 2016 and incorporating feedback gathered through YouTube’s Music AI Incubator. For now, broader access beyond initial testers is limited to US-based creators who sign up via a waitlist.

Artists involved in early testing offered positive initial reactions. Isabella Kensington, a TuneCore artist, described it as a “fun and unique experience,” highlighting the ‘Extend’ feature for helping “formulate different avenues for production while providing space for my songwriting.”
The Range noted its potential for overcoming writer’s block: “I’ve found it really useful to help me cut writer’s block right at the point that it hits as opposed to letting it build.” Adrie, a Believe artist, found it useful for experimenting but added that “music will always need a human touch behind it.” Sidecar Tommy remarked on generating orchestrations, stating it “gave me fuel to go down a path I wouldn’t have gone!”
Navigating the Copyright Minefield
Google’s expansion of its music AI tools comes as the industry confronts the legal implications of training such models. In June 2024, the Recording Industry Association of America (RIAA), representing the major labels, filed lawsuits against AI music startups Suno and Udio, alleging mass copyright infringement through the unauthorized scraping and use of protected songs. RIAA Chairman and CEO Mitch Glazier stated at the time, “Unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work… set back the promise of genuinely innovative AI for us all.” The complaints seek statutory damages of up to $150,000 per infringed work.
Suno and Udio formally responded in August 2024, invoking the “fair use” doctrine as a defense. Udio specifically argued its system ‘listens’ to music akin to a human student, learning underlying “musical ideas” to create “new musical ideas,” claiming it is “completely uninterested in reproducing content in our training set.”
The RIAA countered forcefully, calling the companies’ admission of training on recordings a “major concession” and reiterating that using artists’ work without licenses to compete against them is not fair. This legal standoff highlights the complex terrain Google also faces. The company emphasizes a responsible approach, stating that Lyria 2 outputs are watermarked using its SynthID technology.
SynthID, Google’s watermarking technology that now spans images, text, video, and audio, embeds an imperceptible digital signal directly into the audio’s spectrogram. The mark is designed to survive common modifications such as MP3 compression, potentially helping trace the origin of generated audio. However, like many AI developers under scrutiny, Google has not detailed the specific datasets used to train Lyria.
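Google has not published the SynthID audio algorithm, but the general principle of spectrogram-domain watermarking can be sketched in a few lines. The Python sketch below (using NumPy and SciPy) is a hypothetical illustration under simple assumptions: a keyed pseudorandom ±1 pattern nudges the short-time Fourier transform magnitudes at an amplitude too small to hear, and detection correlates a suspect clip’s log-magnitudes against the same keyed pattern. Every function name and parameter is illustrative; this is not Google’s implementation.

```python
import numpy as np
from scipy.signal import stft, istft

FS = 48_000       # matches Lyria 2's stated 48kHz output; any rate works
NPERSEG = 1024    # STFT window length (illustrative choice)

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Nudge STFT magnitudes by a keyed +/-1 pattern at low amplitude.

    Purely illustrative, not SynthID: it only shows how a key-derived
    signal can hide in the spectrogram rather than in raw samples.
    """
    _, _, Z = stft(audio, fs=FS, nperseg=NPERSEG)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=Z.shape)
    marked = np.abs(Z) * (1.0 + strength * pattern) * np.exp(1j * np.angle(Z))
    _, out = istft(marked, fs=FS, nperseg=NPERSEG)
    return out[: audio.shape[0]]  # trim padding so re-analysis stays aligned

def detection_score(audio: np.ndarray, key: int) -> float:
    """Correlate log-magnitudes against the keyed pattern.

    Marked audio scores near `strength`; unmarked audio, or the wrong
    key, scores near zero. Embedding in the magnitude domain is what
    gives such marks a chance of surviving lossy steps like MP3
    compression, which preserve perceived spectra rather than samples.
    """
    _, _, Z = stft(audio, fs=FS, nperseg=NPERSEG)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=Z.shape)
    return float(np.mean(np.log(np.abs(Z) + 1e-9) * pattern))

# Demo on stand-in "audio" (two seconds of noise):
clip = np.random.default_rng(0).standard_normal(FS * 2)
marked = embed_watermark(clip, key=42)
print(detection_score(marked, key=42))  # noticeably positive
print(detection_score(marked, key=7))   # near zero
```

Production watermarks layer far more onto this idea (synchronization, error-correcting codes, perceptual shaping) to withstand heavier editing than this toy correlation test would.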
Part of a Broader AI Media Push
The wider availability of the Music AI Sandbox follows Lyria’s initial appearance in early April 2025 on Vertex AI, Google Cloud’s managed machine learning platform for enterprise users.
This phased rollout suggests a strategy of offering advanced AI capabilities to business clients via Vertex AI while providing tools like the Sandbox to individual creators, possibly funneled through platforms such as Google’s AI Studio. Lyria joins other Google generative media models recently updated or introduced, including the Veo 2 video generator, the Chirp 3 audio model (with voice cloning features), and the Imagen 3 image generator, all aimed at enhancing Vertex AI’s suite.
The Evolving AI Audio Landscape
Google’s tools enter a competitive and rapidly evolving space. Competitor Stability AI released Stable Audio 2.0 in April 2024, offering free web access for generating tracks up to three minutes long and letting users upload their own audio samples for AI transformation, a feature conceptually similar to the Sandbox’s ‘Edit’ function.
Stability AI partnered with Audible Magic for copyright checks. In contrast, NVIDIA announced its Fugatto audio model in November 2024 but chose not to release it publicly due to potential misuse concerns. “Any generative technology always carries some risks, because people might use that to generate things that we would prefer they don’t,” NVIDIA’s Bryan Catanzaro said at the time.
These technological advances continue to fuel debate about AI’s role. While some creators view tools like the Music AI Sandbox as augmenting human ability, others worry about the displacement of human creativity, echoing author Joanna Maciejewska’s widely shared sentiment: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”
The ease of generation also raises questions about the potential devaluation of music, recalling David Bowie’s prediction that music might become like “running water or electricity,” with value shifting towards unique human elements like live shows. The perceived quality and authenticity of AI music versus human creation remain points of discussion as the technology progresses.