Amid a rapidly evolving field of artificial intelligence tools for audio creation and ongoing legal battles over copyright, Suno has outlined the next iteration of its music-generation model, v4.5.
The company positions this update as its “newest and most expressive model,” aiming to improve the process of turning musical ideas into finished tracks. Suno suggests the update focuses on delivering more dynamic music, handling genre descriptions with greater accuracy, and producing vocals with more emotional depth and range.
The company frames its mission, saying “To make your own music is to transform inspiration into reality—bringing to life what your mind has always heard.”
Refined Prompting and Feature Upgrades
A key focus of Suno v4.5 appears to be improved user control through text prompts. The company claims the model can better interpret descriptive language, allowing nuances like “uplifting nostalgic tones,” “leaf textures,” or “melodic whistling” to shape the generated music.
Suno v4.5 is also said to capture more subtle musical elements, including “natural tone shifts” and “instrument layering.” It reportedly handles genre mashups, such as combining midwest emo with neosoul, more effectively than previous versions. To aid users, a new “prompt enhancement helper” feature aims to expand simple genre ideas into more detailed style instructions suitable for the AI.
Beyond core generation, existing Suno features receive upgrades. “Covers,” a tool that reimagines user-uploaded audio based on a new prompt, and “Personas,” which captures the sonic character (vocals, style, vibe) of one track to apply to another, now utilize the v4.5 engine.
Suno suggests the updated Covers feature should retain more melodic detail from the source audio and perform better when switching genres. Furthermore, users can now combine the Covers and Personas features.
Technical improvements cited include faster generation speeds, the ability to produce coherent songs up to eight minutes long, and enhanced overall audio quality, featuring more balanced mixes and fewer sonic artifacts like degradation or shimmer effects. Users can try the new model via Suno’s creation interface.
Navigating Legal and Ethical Currents
Suno’s technological advancements arrive as the company navigates serious legal challenges. Since June 2024, Suno and competitor Udio have faced lawsuits filed by major record labels, coordinated by the RIAA, alleging mass copyright infringement.
The core accusation is that Suno trained its AI on vast libraries of copyrighted music without securing licenses, constituting what RIAA Chief Legal Officer Ken Doroshow termed “unlicensed copying of sound recordings on a massive scale.”
While Suno and Udio invoked the “fair use” doctrine in their August 2024 court responses, the RIAA has pushed back strongly. The industry group later called the companies’ admission of likely using copyrighted recordings in training a “major concession” and argued that such large-scale use does not qualify as fair use.
Suno CEO Mikey Shulman publicly responded to the lawsuit’s filing, calling the technology “transformative” and stating it was “designed to generate completely new outputs, not to memorize and regurgitate pre-existing content,” while accusing the labels of resorting to an “old lawyer-led playbook.”
This legal fight mirrors broader industry concerns about AI training data and intellectual property rights, extending to areas like voice cloning. Legislative proposals like the NO FAKES Act, reintroduced in the US Congress around April 2025, seek to establish federal protections for voice and likeness against unauthorized AI replicas, drawing support from both music industry groups and tech companies like Google and OpenAI.
The potential for misuse within music streaming itself was starkly illustrated by the September 2024 federal indictment of Michael Smith, who allegedly used AI-generated tracks and bots to fraudulently obtain nearly $10 million in royalties.
The Competitive AI Audio Space
Suno operates within a competitive environment. Google expanded access to its Music AI Sandbox featuring the Lyria 2 model in April 2025, offering tools for professional creators and utilizing its SynthID watermarking technology—an imperceptible digital signal embedded in audio to trace its origin.
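Google has not published SynthID's internals, so any concrete code here is purely illustrative. As a rough sketch of the general spread-spectrum idea behind imperceptible audio watermarking, the toy below embeds a keyed, very-low-amplitude pseudorandom pattern into a signal and later detects it by correlation; all function names and parameters are hypothetical, not Google's API:

```python
import numpy as np

STRENGTH = 2e-3  # watermark amplitude, far below the signal level used here

def embed_watermark(audio: np.ndarray, key: int) -> np.ndarray:
    """Add a keyed pseudorandom noise pattern at very low amplitude."""
    pattern = np.random.default_rng(key).standard_normal(audio.shape)
    return audio + STRENGTH * pattern

def detect_watermark(audio: np.ndarray, key: int) -> bool:
    """Correlate with the keyed pattern: averaging over many samples
    suppresses the audio itself, leaving the embedded pattern visible."""
    pattern = np.random.default_rng(key).standard_normal(audio.shape)
    score = float(np.dot(audio, pattern) / audio.size)
    return score > STRENGTH / 2

# Demo on 10 seconds of synthetic 48 kHz "audio"
clean = 0.1 * np.random.default_rng(0).standard_normal(480_000)
marked = embed_watermark(clean, key=42)

print(detect_watermark(marked, key=42))  # watermark present
print(detect_watermark(clean, key=42))   # no watermark
print(detect_watermark(marked, key=7))   # wrong key, no detection
```

Production systems like SynthID operate on learned representations and are designed to survive compression and editing; this toy version would not. It only illustrates why such a mark can be inaudible (tiny amplitude) yet reliably detectable (correlation averages the host signal away).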
Stability AI released Stable Audio 2.0 in April 2024, while NVIDIA announced its sophisticated Fugatto audio model in November 2024 but has withheld it from public release, citing ethical concerns. NVIDIA’s Bryan Catanzaro told Reuters, “Any generative technology always carries some risks, because people might use that to generate things that we would prefer they don’t.”
Music streaming platforms are also defining their positions. Spotify CEO Daniel Ek stated in September 2023 that the platform would allow AI-generated music but police unauthorized artist impersonation, acknowledging a “contentious middle ground.”
That same month, Universal Music Group and Deezer announced an “artist-centric” streaming model designed to prioritize human artists in royalty calculations, partly in response to concerns about AI-generated content flooding platforms. While some creators embrace these tools, others echo the sentiment shared by Believe artist Adrie during early tests of Google’s Lyria: “music will always need a human touch behind it.”