
Dell Collaborates with Meta towards Streamlining the Deployment of Llama 2 AI

Dell and Meta partner to simplify on-premises deployment of the Llama 2 large language model, making AI easier to access.


Dell has partnered with Meta to ease the implementation of the Llama 2 large language model (LLM) on-premises for their clients. This collaboration aims to curtail reliance on cloud-based access and shift towards using enterprise-level IT infrastructures. Dell positions itself to become the preferred provider of equipment required for this transformation.

Dell's Validated Design for Generative AI is central to this initiative. This extensive and pre-tested hardware build, launched this year, leverages the GPU manufacturing capabilities of Nvidia. Dell further extends support through deployment and configuration guidance, significantly reducing the roll-out timeframe for clients. An integral component of this support is the integration of models into Dell's system sizing tools, facilitating a tailored configuration to meet client requirements.

Dell's Chief AI Officer, Jeff Boudreau, expressed his optimism regarding the collaboration in a prepared statement: "Models including Llama 2 have the potential to transform how industries operate and innovate. We are striving to make GenAI more accessible to all customers by providing detailed implementation guidance in synergy with the optimal software and hardware infrastructure for deployments of all sizes."

Llama 2 Language Model: Parameters and Sizes

The Llama 2 model was launched in July as a comprehensive set of models, available in both pretrained and fine-tuned variants. It comes in three sizes: 7 billion, 13 billion, and 70 billion parameters, each with different hardware requirements. The model can be freely downloaded for academic purposes, and some commercial usage is also permitted. Meta had previously collaborated with Microsoft and Amazon to make the model available on the Azure and AWS cloud platforms.

Despite its utility, Llama 2's classification as an open-source model has been a subject of debate, primarily because it is not available under a license endorsed by the Open Source Initiative (OSI). Meta maintains that it is open about its AI, offering a community license for Llama 2 and arguing that openness leads to more innovation and safety in AI. The company invites the community to test Code Llama, find problems, and fix them.

But a recent study argues that Meta and other companies are not as open about their AI models as they claim. The study, conducted by AI researchers at Radboud University in Nijmegen, Netherlands, finds that some of the most capable LLMs are effectively hidden from the public because the code used to train them is not shared.

The study names Meta among the most closed LLM makers and argues that this secrecy hurts the AI community. It calls for more honesty and openness from companies, so that others can learn from their work and improve upon it.

Dell's Validated Designs and Hardware Requirements

Unveiled in August, Dell's Validated Designs for Generative AI marries the company's server expertise with Nvidia's GPUs and software, such as the Nvidia AI Enterprise suite. Alongside this offering, Dell extends professional assistance to ensure that clients derive maximum benefit from generative AI applications.

The validated designs are particularly suited to inference workloads such as natural language generation, virtual assistance, and marketing content creation. Dell has diversified the portfolio to facilitate the customization and tuning of models.

The hardware prerequisites for Llama 2 differ by model size: the 7 billion parameter model can run on a single GPU, the 13 billion parameter model needs two GPUs, and the 70 billion parameter variant requires eight. Dell has detailed the deployment of the 7 billion and 13 billion versions on its PowerEdge R760xa system in a recent blog post. The 70 billion version, with its eight-GPU requirement, calls for a larger system such as the PowerEdge XE9680 server.
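As a rough illustration of why the larger models demand more GPUs, the memory needed just to hold the weights at 16-bit precision scales linearly with parameter count. The sketch below is a back-of-the-envelope estimate, not Dell's sizing methodology; real configurations also budget headroom for activations, the KV cache, and throughput, which is part of why the GPU counts above exceed the bare memory floor:

```python
# Rough fp16/bf16 weight-memory estimate for each Llama 2 size.
# Assumes 2 bytes per parameter for the weights alone; activations,
# KV cache, and serving overhead are deliberately ignored here.
SIZES_BILLION = {"7B": 7, "13B": 13, "70B": 70}

def fp16_weight_gb(params_billion: float) -> float:
    """GB (1e9 bytes) needed for weights at 2 bytes per parameter."""
    return params_billion * 1e9 * 2 / 1e9  # simplifies to params_billion * 2

for name, billions in SIZES_BILLION.items():
    print(f"Llama 2 {name}: ~{fp16_weight_gb(billions):.0f} GB of GPU memory for weights")
# → Llama 2 7B: ~14 GB, 13B: ~26 GB, 70B: ~140 GB
```

The 14 GB figure for the 7B model explains why it fits on a single modern data-center GPU, while the 70B model's roughly 140 GB of weights must be sharded across multiple accelerators.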

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.