At its annual MAX conference, Adobe announced an update to the models that power its Firefly AI image-generation service. The updated Firefly Image 2 is designed to produce more realistic and accurate renderings of human figures, with particular attention to facial features, skin, hands, and body structure. The enhancement targets a recurring weakness in models of this kind: precisely rendering details such as hands.
Rapid Growth and Next Steps
According to Adobe, Firefly users have generated three billion images since the model launched earlier this year, one billion of them in the past month alone. Firefly became generally available in Photoshop last month, and the company highlights that an impressive 90% of Firefly users are new to Adobe's family of products. Adobe also recently integrated Firefly into its Creative Cloud service, transforming what had been a demo site into a full-service offering.
Enhanced AI Capabilities and Future Prospects
Adobe continues to invest in growing Firefly. Alexandru Costin, Adobe's VP for generative AI and Sensei, said the models received substantial training on recent images from Adobe Stock and other commercially safe sources. He noted that the new models are three times larger, akin to a "brain that's three times larger." The expanded capacity lets Firefly generate more refined image detail and deliver a more natural, engaging user experience without inflating cloud costs.
While the company has prioritized quality, it is also working to balance performance: the larger model may demand more resources, but it should run at the same speed as the first model. Adobe's long-term play with generative AI is generative editing, focusing more on enhancing existing assets than on generating new ones.
The upgraded model will initially be available in the Firefly web app and will soon extend to Creative Cloud applications such as Photoshop. Firefly also gains new controls that let users adjust depth of field, motion blur, and field of view, along with an image style-match feature and an auto-complete feature for prompts.
Adobe's move into the generative AI space reflects a broader trend in AI adoption across the tech industry. Other companies offering generative AI image tools include:
- OpenAI has introduced Shap-E, a generative model that creates 3D models from text, opening up new possibilities for AI in image creation. The company also recently launched DALL-E 3, its latest image-generation model.
- Microsoft has partnered with OpenAI on Bing Image Creator, which was recently upgraded to use DALL-E 3.
- Stability AI, a startup focused on generative AI, has released StableStudio, an open-source web app that uses its Stable Diffusion model to generate images from text prompts. Users can also turn to DreamStudio to create multiple variations of an image with different styles and attributes.
- Meta, the company formerly known as Facebook, has unveiled I-JEPA, an AI vision model built on its Joint Embedding Predictive Architecture. Rather than generating images from text prompts, I-JEPA learns the structure of images by predicting missing parts of an image in an abstract representation space.
- Alibaba, the Chinese e-commerce giant, has launched Tongyi Wanxiang, a generative AI image generator that handles both Chinese and English prompts. Users can customize the image output parameters using Composer, a large model developed by Alibaba Cloud.
- Chip giant Nvidia debuted its Perfusion AI art creation tool in August.
Last Updated on November 8, 2024 10:41 am CET