Meta Platforms, the parent company of Facebook and Instagram, has announced a new AI vision model called I-JEPA. Rather than generating pictures from text prompts, the model learns to understand images by predicting their missing parts at an abstract level, an approach Meta says makes it more human-like than other AI image models.
The Image Joint Embedding Predictive Architecture (I-JEPA) was developed by Meta’s AI research team and is the first model built on chief AI scientist Yann LeCun’s proposal for more human-like AI. Instead of reconstructing every pixel, I-JEPA is trained on a large dataset of images and learns to predict the representations of hidden image regions from the surrounding context, which lets it capture the semantic structure of a scene rather than low-level detail.
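To make the idea concrete, here is a minimal, simplified sketch of a joint-embedding predictive objective in PyTorch. It is an illustration only: the tiny MLP encoders, the pooling, and the layer sizes are assumptions chosen for brevity, not Meta’s released implementation, which uses Vision Transformers, multi-block masking, and an exponential-moving-average target encoder.

```python
# Simplified sketch of a joint-embedding predictive objective (not Meta's code).
# Real I-JEPA uses Vision Transformer encoders, multi-block masking, and an
# exponential-moving-average target encoder.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Embeds flattened image patches into a representation space."""
    def __init__(self, patch_dim=768, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, patches):                       # (batch, num_patches, patch_dim)
        return self.net(patches)

context_encoder = PatchEncoder()
target_encoder = PatchEncoder()                       # in practice an EMA copy of the context encoder
predictor = nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256))

patches = torch.randn(8, 196, 768)                    # fake batch: 8 images, 196 patches each
visible, hidden = patches[:, :150], patches[:, 150:]  # "context" patches vs. masked regions

# Predict the *representations* of the hidden patches from the visible context,
# rather than reconstructing their pixels.
context_repr = context_encoder(visible).mean(dim=1)   # (8, 256) pooled context representation
with torch.no_grad():
    target_repr = target_encoder(hidden).mean(dim=1)  # (8, 256) target to predict

loss = nn.functional.mse_loss(predictor(context_repr), target_repr)
loss.backward()
print(f"prediction loss: {loss.item():.4f}")
```

The key point is that the loss compares predicted and actual representations of the hidden region, not pixels, which is what pushes the model toward semantic rather than low-level features.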
For example, if I-JEPA is shown a photo of a cat sitting on a chair with a large region of the picture masked out, it can infer what belongs in the missing area, such as the rough shape and pose of the cat, without having to render every pixel of fur or fabric.
Meta says the representations I-JEPA learns can be reused across a variety of computer vision tasks, such as image classification, object counting, and depth estimation, without extensive fine-tuning, and the company has open-sourced the training code and model checkpoints. It is also exploring ways the approach could improve computer vision features in its existing products, such as Facebook’s photo tagging.
I-JEPA is still a research model, but it marks a significant step for the field. A system that can learn about the visual world in a more human-like, data-efficient way could feed into better computer vision across a wide range of industries.
As with other advances in AI, the work is also a reminder of the technology’s potential dangers. Powerful image models can be misused to produce fake news, propaganda, or deepfakes that damage reputations, so it is important to understand these risks and to deploy the technology responsibly.
Meta Going All in on AI Across its Services
The announcement of I-JEPA follows a recent interview in which Meta CEO Mark Zuckerberg said the company will build AI models into all of its products. Speaking to AI researcher and podcaster Lex Fridman, Zuckerberg offered a wide-ranging state of play for Meta overall and, across the three-hour-plus conversation, discussed the company’s plans for AI in detail.
Zuckerberg said that Meta has made “amazing advances” in generative AI recently and that the company is eager to bring the technology to its users, arguing that generative AI can “change how we make, distribute, and enjoy content.”
Discussing Meta’s LLaMA large language model (LLM) and the company’s response to the growth in generative AI through solutions such as ChatGPT from OpenAI, Bing Chat from Microsoft, and Bard from Google, Zuckerberg said the following:
“We are working on a follow on model (of LLaMA) now that we’ve had time to work on a lot more of the safety and the pieces around that. But overall, I just kind of think that it would be good if there were a lot of different folks who had the ability to build state of the art technology and not just a group of big companies.
“A lot of what we are doing is taking the first version of LLaMA and trying to now build a version that has all of the latest state of the art safety precautions. You’ll have an assistant that you can talk to in WhatsApp… every creator will have an AI agent. Every small business will have an AI agent they can talk to for commerce and customer support.”
Meta intends to apply generative AI to let users alter their own photos and post them to Instagram Stories. For instance, a user could use a text command to change their hair color or swap in a new background. Meta is also developing AI assistants that can help or entertain users; these assistants will have different personalities and skills and will be available in Messenger and WhatsApp.
Big Tech Rivals Competing in the Image-Generating AI Space
DALL-E 2 and Midjourney are two of the most popular AI image generators today. OpenAI created DALL-E 2, which can make realistic and original images from natural-language descriptions. Midjourney is an independent research lab whose text-to-image service, accessed mainly through Discord, lets users generate high-quality images for different purposes and platforms.
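As a rough illustration of how such a service is used programmatically, here is a minimal sketch against OpenAI’s image endpoint using the pre-1.0 openai Python library. The prompt and key handling are placeholders, and newer library versions expose the same capability through a different client interface.

```python
# Minimal sketch: generating an image from a text prompt with OpenAI's API.
# Uses the pre-1.0 openai Python library; exact parameters and model access
# depend on your account and library version, so treat this as illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Image.create(
    prompt="a cat sitting on a chair, natural lighting, photorealistic",
    n=1,                   # number of images to generate
    size="1024x1024",      # supported sizes include 256x256, 512x512, 1024x1024
)

print(response["data"][0]["url"])  # URL of the generated image
```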
In March, Microsoft made a big improvement in AI image creation by adding OpenAI’s DALL-E image generator to Bing Chat, creating the Bing Image Creator. The tool makes images based on a user’s written prompts. In June 2023, Bing Image Creator was extended to the chatbot’s “Precise” and “Balanced” conversation modes, giving users more options for how images are generated.
In April, Microsoft made another advance by opening its Designer tool, which is powered by OpenAI’s DALL-E, to the public. The tool can likewise turn text descriptions into images, offering a straightforward way to create personalized designs.
At the same time, NVIDIA was pushing ahead with its own generative AI research, developing new methods to improve the quality and realism of AI-generated images. OpenAI also extended AI image creation into three dimensions with the release of Shap-E, a generative model that can create 3D assets from text descriptions.
Stability AI introduced StableStudio in May, an open-source version of its DreamStudio web app. StableStudio uses Stability AI’s Stable Diffusion model to make images from text prompts, enabling users to create a variety of images. Stability AI also improved its Clipdrop tool by launching Reimagine XL, a feature that lets users create multiple versions of an image using Stable Diffusion.
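For readers who want to see what text-to-image generation with Stable Diffusion looks like in practice, here is a minimal sketch using the Hugging Face diffusers library. This is not StableStudio or Clipdrop itself; the checkpoint name and settings are common public defaults rather than Stability AI’s exact configuration.

```python
# Minimal sketch: text-to-image with Stable Diffusion via Hugging Face diffusers.
# Illustrative defaults only, not Stability AI's StableStudio/Clipdrop setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a widely used public Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # a GPU is strongly recommended

image = pipe(
    "a cat sitting on a chair, soft light, 35mm photo",
    num_inference_steps=30,             # fewer steps are faster, more can improve quality
    guidance_scale=7.5,                 # how strongly to follow the prompt
).images[0]

image.save("cat_on_chair.png")
```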