Adobe’s latest update, announced at its Adobe Max conference, brings the Firefly Video Model into Premiere Pro, making AI-driven video creation a reality for creators. The new tools aim to enhance video editing workflows by letting users extend footage or generate new video content from simple prompts, bringing a fresh approach to video editing through the power of artificial intelligence.
The release of the Firefly Video Model is part of Adobe’s broader push to integrate AI into its suite of creative software. The company revealed over 100 new features across its products, with AI playing a significant role in these updates.
New AI-Driven Features in Premiere Pro
One of the newly added features, Generative Extend, helps users lengthen short clips or adjust video frames. Imagine a situation where a clip is too short to fit into a sequence: this tool lets you extend it by two seconds, either at the start or the end. Users can also fix subtle issues in shots, like shifting eye direction or other minor movements, without having to redo the whole scene.
While these tweaks may seem small, they are especially useful for fine-tuning scenes quickly. The feature currently supports 720p or 1080p footage at 24 frames per second, and it is limited to short clip extensions. Beyond visual adjustments, Generative Extend can also modify audio, extending background noise or ambiance by up to 10 seconds, though this won’t work for spoken word or music tracks.
Web-Based Tools: Text-to-Video and Image-to-Video
Adobe also brings AI-powered video generation to the web. The Text-to-Video and Image-to-Video tools are accessible via the Firefly web app and allow users to create short video clips from text descriptions or images. While the Text-to-Video option lets users type a scene description, choosing from various cinematic styles such as animation or stop-motion, the Image-to-Video feature goes a step further by letting users upload a reference image to steer the generated clip.
However, the clips produced by these tools are capped at five seconds in length and a resolution of 720p. These limitations suggest that the tools are intended for rough drafts or initial concepts rather than polished final products. Adobe is reportedly working on speeding up generation, but for now it takes about 90 seconds to produce each video.
Content Credentials for AI-Generated Media
One concern Adobe aims to address with these tools is the transparency around AI-generated content. Each video made using Firefly’s video model comes with Content Credentials, a feature that marks the media as AI-generated and helps clarify the rights and ownership for creators. This is part of Adobe’s push for ethical AI, especially as companies like Meta and OpenAI face scrutiny over the data used to train their models. Unlike some other AI tools, Adobe claims its Firefly models were trained on licensed content, making the resulting videos safe for commercial use.
This could make Adobe’s offering more appealing to creators concerned about copyright, especially as other AI-generated video tools raise legal questions around the source of their training data.
Competition in AI Video Tools
Although Adobe’s Firefly Video Model offers some promising features, other tech companies are working on similar tools. For instance, OpenAI’s Sora promises to generate longer clips with higher visual quality, though it remains unavailable to the public. Likewise, Meta and Google have AI-driven video tools in development, but like Sora, they haven’t yet been released for public use.
Adobe’s early public beta puts it ahead of the curve, but there’s a lot of competition on the horizon, especially as more AI video tools come to market. Still, Firefly’s current limitations—such as the short clip length and resolution—may be a sticking point for users looking for more robust capabilities.
Last Updated on November 7, 2024 2:35 pm CET