
Microsoft Releases Synthetic Data For AI Training, Shows Performance Gains Across Benchmarks

Microsoft's new AgentInstruct framework automates data generation for AI training, improving efficiency and overcoming data limitations.


Microsoft’s new Orca-AgentInstruct framework, released last week, could change the landscape of AI training with its approach to synthetic data generation.

As data scarcity challenges further advancement of AI, this development may mark a notable shift in how AI models can be trained efficiently. Microsoft says that, using agentic workflows, AgentInstruct can produce high-quality, varied datasets, addressing the need for diverse training inputs without intensive human involvement.

Synthetic Data and the Concept of Generative Teaching

AgentInstruct leverages Generative Teaching, a process where models use synthetic data to help train other models in new skills. This method, highlighted in a July 2024 paper by Microsoft researchers, uses raw inputs like text files and code to create datasets automatically, allowing AI models to practice tasks ranging from text editing to complex coding.

The 25-million-pair dataset generated by AgentInstruct was used to post-train a Mistral 7-billion-parameter base model. The resulting model, Orca-3, demonstrated notable performance improvements: 40% on AGIEval and 54% on GSM8K.

AGIEval is a human-centric benchmark designed to evaluate the general abilities of foundation models, especially in tasks related to human cognition and problem-solving. GSM8K is a benchmark dataset consisting of 8,500 high-quality, linguistically diverse grade school math word problems.
 
(Image: AgentInstruct performance improvements on AGIEval, via Microsoft)

The agentic flows within AgentInstruct enable the system to self-refine its output, employing tools and reflection to maintain quality while reducing human intervention. This design counters potential issues such as model collapse.

AI Model Collapse refers to a degenerative phenomenon that occurs when generative AI models are trained on synthetic data—data generated by other AI models—rather than on real-world, human-generated data.

Over successive generations of training, this process leads to a gradual degradation in the model’s performance, resulting in outputs that become increasingly nonsensical and disconnected from reality. This phenomenon is particularly concerning for large language models (LLMs), variational autoencoders (VAEs), and other generative models.

The phenomenon is well documented in a recent study published in Nature, which found that “indiscriminate use of model-generated content in training causes irreversible defects in the resulting models.” 

In one experiment, the researchers used Meta’s OPT model to generate synthetic data from Wikipedia articles and then retrained the model using this synthetic data. After five generations, the model’s outputs had deteriorated into incoherent text.
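The degradation the researchers observed can be illustrated with a toy simulation: a "model" that merely fits a Gaussian to its training data, then generates the next generation's training data by sampling from itself. This is a simplified sketch of the recursive-training setup, not the Nature study's actual methodology; the distributions and sample sizes are arbitrary choices for illustration.

```python
import random
import statistics

def train(data):
    """'Train' a toy model: estimate a Gaussian from the data."""
    return statistics.mean(data), statistics.stdev(data)

def generate(model, n):
    """Sample synthetic data from the fitted model."""
    mu, sigma = model
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]  # real, human-like data

for generation in range(5):
    model = train(data)
    # Each generation trains only on the previous model's samples;
    # estimation noise compounds and the distribution's tails erode.
    data = generate(model, 200)  # smaller samples amplify the drift
    print(f"gen {generation}: mean={model[0]:+.3f} stdev={model[1]:.3f}")
```

Run over many generations, the estimated parameters drift away from the original distribution purely through compounded sampling error, which mirrors (in miniature) why indiscriminate training on model-generated content degrades real models.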
 
(Image: AI model collapse when training on synthetic data)

The Mechanics Behind AgentInstruct

AgentInstruct’s process is structured in phases that ensure a comprehensive data generation pipeline. The framework begins with content transformation, converting raw data into a form suitable for instruction.

Next, seed instructions are produced by specialized agents, each handling specific subcategories to promote diversity. The final step involves instruction refinement, where agents iteratively improve the data quality and complexity. This approach allows models to learn broadly, avoiding overfitting to any single benchmark.
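The three phases described above can be sketched as a simple pipeline. This is an illustrative outline only: the function names, the subcategory list, and the `call_llm` placeholder are assumptions for the sketch, not Microsoft's actual implementation.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a call to a teacher model such as GPT-4."""
    return f"<response to: {prompt[:40]}>"

def transform_content(raw_text: str) -> str:
    # Phase 1: convert raw input (articles, code files) into an
    # intermediate form that is easier to write instructions against.
    return call_llm(f"Summarize and segment this text: {raw_text}")

def generate_seed_instructions(transformed: str, subcategories: list[str]) -> list[str]:
    # Phase 2: specialized agents each cover one subcategory,
    # which keeps the generated instructions diverse.
    return [
        call_llm(f"Write a {sub} task based on: {transformed}")
        for sub in subcategories
    ]

def refine_instructions(instructions: list[str], rounds: int = 2) -> list[str]:
    # Phase 3: iteratively increase quality and complexity.
    for _ in range(rounds):
        instructions = [
            call_llm(f"Make this task harder and clearer: {inst}")
            for inst in instructions
        ]
    return instructions

raw = "Bubble sort repeatedly swaps adjacent out-of-order elements."
seeds = generate_seed_instructions(
    transform_content(raw),
    subcategories=["text editing", "question answering", "coding"],
)
dataset = refine_instructions(seeds)
print(len(dataset))  # one refined instruction per subcategory
```

In a real system each phase would be backed by an actual LLM call, and the refinement loop is where the self-refinement and reflection mentioned above would live.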
 

Tools like GPT-4 and code interpreters are integrated into AgentInstruct’s operation to create well-rounded datasets. The framework’s ability to autonomously generate prompts and responses helps it stand out from traditional data curation methods. Microsoft’s release of a 1-million-pair subset on Hugging Face, complete with detailed generation reports, signals its intent to advance AI research collectively.

The Great Hunger for AI Training Data

As the demand for high-quality data outpaces availability, companies like OpenAI struggle to find new high-quality data to improve their models. OpenAI’s Orion model is said to have seen slowed progress due to these shortages.

Analysts project that by 2026, accessible language data of sufficient quality might be depleted. Orion’s development highlighted the reliance on synthetic data to fill the gaps, a move mirrored by Microsoft’s strategic use of AgentInstruct.

NVIDIA’s contribution to synthetic data solutions began with the launch of the Nemotron-4 340B series in June 2024. These models are designed to provide custom, high-quality datasets for training in sectors such as finance and healthcare, with their quality demonstrated on benchmarks such as RewardBench on Hugging Face.

Wider Industry Implications

Microsoft’s emphasis on synthetic data and generative teaching positions the company among major players seeking solutions to modern AI constraints. OpenAI’s approach with Orion has shown that post-training optimization can help fine-tune models without needing a constant influx of new data.

By integrating agentic flows and generative teaching, AgentInstruct not only addresses data shortages but also sets a blueprint for scalable, autonomous data creation. This marks an evolution in AI model training, where synthetic data plays a pivotal role in driving future advancements.

Last Updated on December 7, 2024 5:38 pm CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
