
Mistral’s Mixtral 8x7B Emerges as a Groundbreaking Open Source AI Model

Mistral's Mixtral 8x7B, a "mixture of experts" language model, matches top competitors like GPT-3.5 and runs on non-GPU devices, potentially democratizing AI access.


Mistral AI, reportedly the best-funded startup in Europe's history, has become more competitive in the AI market with the introduction of its new language model, Mixtral 8x7B. The latest offering employs an innovative technique called a “mixture of experts,” which integrates a set of specialized sub-models to handle diverse categories of tasks. Notably, the model's performance is head-to-head with OpenAI's proprietary GPT-3.5 and Meta's Llama 2 family—the latter of which previously led the open-source AI race.
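The core idea behind a sparse mixture of experts is that a small "router" network picks a subset of expert networks for each input, so only a fraction of the model's parameters are active at once. The toy sketch below illustrates top-k routing in plain NumPy; the dimensions, class names, and the choice of 8 experts with top-2 routing are illustrative assumptions, not Mistral's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ToyMoELayer:
    """Illustrative mixture-of-experts layer: a router scores all experts,
    but only the top-k experts actually process the input (sparse compute)."""

    def __init__(self, dim, n_experts=8, top_k=2):
        self.router = rng.normal(size=(dim, n_experts))   # gating weights
        self.experts = [rng.normal(size=(dim, dim))       # one weight matrix
                        for _ in range(n_experts)]        # per "expert"
        self.top_k = top_k

    def forward(self, x):
        scores = softmax(x @ self.router)                 # routing probabilities
        chosen = np.argsort(scores)[-self.top_k:]         # indices of top-k experts
        weights = scores[chosen] / scores[chosen].sum()   # renormalize over chosen
        # Only the selected experts run; their outputs are mixed by router weight.
        return sum(w * (x @ self.experts[i]) for w, i in zip(weights, chosen))

layer = ToyMoELayer(dim=16)
y = layer.forward(rng.normal(size=16))
print(y.shape)  # (16,)
```

Because only two of the eight experts run per input, inference cost scales with the active parameters rather than the full parameter count, which is how a model of Mixtral's total size can still run quickly on modest hardware.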

Comparative Performance and Accessibility

Mixtral 8x7B has demonstrated exceptional results in several AI benchmark tests, as revealed in a recent blog post from Mistral. It matches or surpasses the performance of renowned competitors such as GPT-3.5. Moreover, the model is efficient enough to operate on devices that lack a dedicated graphics processing unit (GPU), including the latest Mac computers powered by Apple's M2 Ultra chip.


The swift performance and ease of deployment without high-end hardware requirements signify that Mixtral 8x7B could substantially democratize the use of large language models. Enthusiasts and professionals have begun experimenting with the model, marveling at its capabilities, which have been likened to those of GPT-3.5. Early adopters can access the model under an Apache 2.0 license, an openness and commercial friendliness that could disrupt the current AI landscape.

Implications for AI Policy and Safety Measures

However, Mixtral 8x7B's lack of safety protocols has sparked discussions among academics and AI influencers. It poses a conundrum for policymakers and experts, since unrestricted language models can potentially generate harmful or unsafe content. A professor from the University of Pennsylvania's Wharton School highlighted the model's absence of safety guardrails, pointing out that while this allows for more extensive content creation, it lets the proverbial genie out of the bottle where regulation is concerned.

Despite these concerns, the HuggingFace platform offers a version of Mixtral 8x7B with built-in safety measures. The platform's implementation rejects prompts to create content that could be harmful or not safe for work (NSFW), effectively addressing some of the safety issues highlighted by critics.

As part of its growing suite of AI tools, Mistral is also reportedly testing an alpha version of a larger and more powerful model through its API. This development, coupled with Mistral's successful $415 million Series A funding round led by prominent venture capital firm A16z, at a valuation of $2 billion, positions the company as a pivotal force in the AI industry.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
