
Hugging Face Introduces SafeCoder: An Enterprise Code Assistant

Hugging Face has launched SafeCoder, a secure, self-hosted code assistant for enterprises. It is now available on the VMware Cloud platform.


Hugging Face has unveiled SafeCoder, a code assistant tailored for enterprises. The new tool is designed to boost software development efficiency by providing a secure, self-hosted pair programming solution. SafeCoder emphasizes security, ensuring that code remains within the customer's Virtual Private Cloud (VPC) during both training and inference. Because SafeCoder can be deployed on-premises, enterprises retain full ownership of their code, making it akin to a personalized GitHub Copilot.

Furthermore, Hugging Face has entered into a partnership with VMware, enabling SafeCoder to be available on the VMware Cloud platform. VMware is not only offering SafeCoder on its platform but is also utilizing it internally. They have shared a blueprint that facilitates swift deployment on their infrastructure.

Code assistants, such as GitHub Copilot, which is built on OpenAI Codex, have been instrumental in enhancing productivity. By customizing Large Language Models (LLMs) with their own code, enterprises can amplify this productivity further. However, using closed-source LLMs raises potential security concerns, especially during the training and inference phases. SafeCoder addresses these concerns by letting enterprises create proprietary LLMs built on open models and fine-tuned on internal code, all without sharing that code externally.

SafeCoder supports over 80 programming languages and adapts code suggestions through collaborative training with Hugging Face. The proprietary data of enterprises remains secure, leading to a personalized model. SafeCoder's inference capability is versatile, supporting a range of hardware options, from NVIDIA Ampere GPUs to Xeon Sapphire Rapids CPUs.

Debuting on VMware for Enterprise Users

According to the official announcement by Hugging Face, SafeCoder is not just a model but a comprehensive commercial solution. It has been built with a focus on security and privacy. The code never leaves the VPC during training or inference, and it is designed for self-hosting by customers on their infrastructure.
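Since SafeCoder is self-hosted inside the customer's VPC, completions would be requested from an internal endpoint rather than an external API. The article does not document SafeCoder's API, so the following is only a hypothetical sketch of what such a request body might look like, loosely modeled on the general shape of a text-generation inference request; the endpoint URL, parameter names, and values are illustrative assumptions.

```python
import json

# Placeholder URL: a SafeCoder-style endpoint would live inside the VPC,
# so no traffic (or code) ever leaves the customer's infrastructure.
ENDPOINT = "http://safecoder.internal.example:8080/generate"  # hypothetical


def build_completion_request(prefix: str, max_new_tokens: int = 60) -> str:
    """Build a JSON request body asking the model to continue `prefix`.

    Parameter names here are assumptions for illustration, not
    SafeCoder's documented API.
    """
    payload = {
        "inputs": prefix,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": 0.2,  # low temperature for deterministic code
        },
    }
    return json.dumps(payload)


body = build_completion_request("def fibonacci(n):")
# A real deployment would POST `body` to ENDPOINT (e.g. via urllib.request);
# the network call is omitted because the endpoint only exists inside a VPC.
print(body)
```

The key design point the announcement stresses is that both this request and the model's response stay on infrastructure the enterprise controls.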

Chris Wolf, Vice President of VMware AI Labs, expressed his views on the collaboration, stating, “Our collaboration with Hugging Face around SafeCoder fully aligns to VMware's goal of enabling customer choice of solutions while maintaining privacy and control of their business data. In fact, we have been running SafeCoder internally for months and have seen excellent results. Best of all, our collaboration with Hugging Face is just getting started, and I'm excited to take our solution to our hundreds of thousands of customers worldwide.”

SafeCoder is now available to VMware enterprise customers, and VMware has also released a reference architecture to ensure a seamless deployment experience on their infrastructure.

Hugging Face Grows as an AI Leader

Hugging Face is becoming an increasingly important player in the growing AI market. In February, the company announced a strategic partnership with Amazon Web Services (AWS), selecting Amazon as its preferred cloud provider for the future.

With this partnership, developers and companies can harness the power of machine learning models and deliver NLP features more quickly. The Hugging Face community can tap into AWS's machine learning offerings and infrastructure. They can use Amazon SageMaker, the cloud machine-learning platform, AWS Trainium, the custom machine-learning processor, and AWS Inferentia, the machine learning accelerator, to train, fine-tune and deploy their models.

In March, Microsoft announced that Azure Machine Learning now offers Hugging Face foundation models. Microsoft and Hugging Face have been in partnership since last year. That initial collaboration focused on building Hugging Face Endpoints – a machine learning inference service underpinned by Azure ML Managed Endpoint.

Last month, Hugging Face also struck a partnership with chip giant AMD. The partnership allows developers to train and deploy large language models (LLMs) on AMD hardware, with the aim of improving performance and reducing costs.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
