OpenAI Partners with Scale AI to Enhance GPT-3.5 Fine-Tuning Capabilities for Enterprises

OpenAI and Scale AI have partnered to enhance the fine-tuning capabilities of OpenAI's advanced models, starting with GPT-3.5.

OpenAI has officially announced its collaboration with Scale AI to enhance the fine-tuning capabilities of its advanced models. This partnership aims to help enterprises customize OpenAI’s most potent models using their proprietary data. OpenAI emphasizes that all data sent through the fine-tuning API remains the property of the customer and is not utilized by OpenAI or any other entity for training other models.

Scale AI, recognized for its expertise in data labeling and AI solutions, has been named as OpenAI’s “preferred partner” for this initiative. Scale’s role will involve leveraging its enterprise AI expertise and Data Engine to benefit companies that wish to fine-tune OpenAI models. This collaboration will allow businesses to achieve robust enterprise-grade functionality, which necessitates rigorous data enrichment and model evaluation.

The partnership comes in the wake of OpenAI’s recent launch of fine-tuning for its GPT-3.5 Turbo model. OpenAI also revealed plans to introduce fine-tuning capabilities for its GPT-4 model this fall. Fine-tuning allows businesses to tailor AI models to specific tasks, enhancing their utility and performance. For instance, companies can modify a model to align with their brand voice or to respond in a particular language.

Developers now have the flexibility to optimize the GPT-3.5 Turbo model by training it on custom datasets. This enhancement aims to make the model more adaptable to specific tasks. For instance, a health chatbot powered by a fine-tuned model could produce more accurate and relevant responses compared to a generic system. 
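As a sketch of what training on a custom dataset can look like, the snippet below builds a small JSONL file in the chat-message format that OpenAI's fine-tuning endpoint accepts, using hypothetical example data for an imagined health chatbot. The API calls that would upload the file and start a fine-tuning job are shown as comments, since they require an account and API key.

```python
import json

# Hypothetical training examples in the chat format used for
# fine-tuning: one JSON object per line, each holding a full
# "messages" conversation (system, user, assistant).
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant for Acme Health."},
        {"role": "user", "content": "Can I take ibuprofen with food?"},
        {"role": "assistant", "content": "Yes, taking ibuprofen with food can reduce stomach upset."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant for Acme Health."},
        {"role": "user", "content": "How do I reschedule an appointment?"},
        {"role": "assistant", "content": "Use the Appointments tab in the patient portal, or call the front desk."},
    ]},
]

# Write the dataset as JSONL, the upload format for fine-tuning.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Uploading the file and launching the job would then look roughly
# like this (not executed here, requires an API key):
#
#   from openai import OpenAI
#   client = OpenAI()
#   upload = client.files.create(file=open("training_data.jsonl", "rb"),
#                                purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(training_file=upload.id,
#                                        model="gpt-3.5-turbo")
```

In practice a real dataset would contain many more examples, and OpenAI's documentation covers validation and pricing details; the above only illustrates the shape of the data.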

Alexandr Wang, the CEO of Scale AI, expressed his enthusiasm about the collaboration in a press release, stating, “Prompting alone — atop even the best LLMs like GPT-3.5 — is not enough model customization to produce the most accurate, efficient results. As with software, an incredible amount of value comes from fine-grained optimizations, and fine tuning is critical for that.”

Brex’s Success with Fine-Tuning

A notable success story from this partnership is the fintech company Brex’s utilization of the fine-tuned GPT-3.5 model. Brex has employed large language models to generate high-quality expense memos, which assist in easing compliance requirements for its employees.

The company transitioned from using GPT-4 to a fine-tuned GPT-3.5 model to explore potential improvements in cost, latency, and quality. The results were promising, with the fine-tuned GPT-3.5 model outperforming the standard GPT-3.5 Turbo model 66% of the time. Henrique Dubugras, CEO at Brex, commented on the partnership, noting that fine-tuning GPT-3.5 “unlocks a whole new set of capabilities for us that were previously not viable.”

Last Updated on November 18, 2024 11:33 am CET

Source: OpenAI
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
