
EU Economists See AI “Market Failure”; Urge Public Fund Milestone Model

In a policy brief, German economists suggest an EU milestone-based funding program to reduce societal risks of generative AI while boosting safety-focused innovation.


Generative AI, the technology behind tools like OpenAI’s ChatGPT and image generators such as DALL·E, is rapidly transforming digital interaction and creative production.

OpenAI just released its Sora AI video generator, which enables millions of ChatGPT subscribers to create photorealistic short clips.

However, the societal risks associated with these systems, ranging from misinformation to unintended bias, remain largely unaddressed by developers.

Economists from the Leibniz Centre for European Economic Research (ZEW) consider the safety of generative AI a case of market failure and, in their December policy brief, urge the European Union to adopt a targeted funding model to address these challenges.

In economic theory, “market failure” refers to a situation where the free market fails to allocate resources efficiently, leading to suboptimal outcomes for society as a whole.

They propose a milestone-based incentive program to align developer priorities with public safety needs.

“Current market incentives are insufficient to ensure the development of safe generative AI systems, as companies bear only a fraction of potential safety failure costs while capturing most of the benefits from capability improvements,” the policy brief states, identifying a critical market failure that leaves safety advancements underfunded.
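
The underlying economics can be made concrete with a stylized model. The sketch below is illustrative only and not taken from the brief: a developer chooses a level of safety investment but internalizes only a fraction of the harm that investment prevents.

```latex
% Stylized externality model (illustrative; not from the ZEW brief).
% A developer chooses safety investment s at cost c(s). Expected
% societal harm H(s) falls as s rises, but the developer bears only
% a share \alpha < 1 of that harm.
\[
  \underbrace{\min_{s}\; c(s) + \alpha H(s)}_{\text{private problem}}
  \qquad\text{vs.}\qquad
  \underbrace{\min_{s}\; c(s) + H(s)}_{\text{social problem}}
\]
% First-order conditions: c'(s_{priv}) = -\alpha H'(s_{priv}) versus
% c'(s_{soc}) = -H'(s_{soc}). With \alpha < 1 and diminishing returns,
% s_{priv} < s_{soc}: safety investment is underprovided, which is
% the market failure the brief describes.
```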

Related: EU Releases First Draft of General-Purpose AI Code of Practice

Market Failures in Generative AI Development

The rapid growth of generative AI has been fueled by billions in public and private investments. However, these systems, which learn patterns from extensive datasets to generate text, images, or audio, lack built-in safeguards against misuse.

According to ZEW researchers, this imbalance allows developers to prioritize capabilities over safety, leaving the societal costs of risks—such as large-scale disinformation campaigns or failures in critical applications like healthcare—externalized.

The policy brief highlights that safety challenges in generative AI cannot be resolved through conventional training enhancements alone. It argues that pre-training dataset curation or post-training measures like fine-tuning algorithms cannot adequately address the core safety deficits inherent in the technology stack. These gaps underscore the need for substantial technological breakthroughs.

Related: Meta Leads Tech Giants Urging EU to Rethink AI Regulation Strategy

Proposal for a Milestone-Based Funding Model

To close this gap, the ZEW economists propose an EU-funded program offering milestone-based incentives to developers who achieve predefined safety benchmarks. This “pull” funding mechanism would reward outputs rather than inputs, providing predictable pathways for advancing safety in generative AI systems.

“We propose a milestone-based incentive scheme where pre-specified payments would reward the achievement of verifiable safety milestones,” the brief explains. “The scheme would use robust safety metrics and competitive evaluation to prevent gaming while ensuring meaningful progress.”

The proposed model would remain technology-neutral, allowing developers to choose innovative methods to meet safety criteria. Metrics for evaluation would include factual accuracy, harm prevention, and resilience under adversarial testing.

Public and independent scrutiny would ensure transparency, accountability, and alignment with societal needs.
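
In practice, such a scheme could be as simple as pre-registered metric thresholds with fixed payouts. The Python sketch below is hypothetical: the metric names, thresholds, and amounts are invented for illustration and do not appear in the brief.

```python
# Hypothetical sketch of a milestone-based "pull" funding check.
# Metric names, thresholds, and amounts are invented for illustration;
# the brief specifies no implementation.
from dataclasses import dataclass

@dataclass
class Milestone:
    name: str
    metric: str        # e.g. "factual_accuracy"
    threshold: float   # minimum verified score to qualify
    payment_eur: int   # pre-specified reward, fixed in advance

MILESTONES = [
    Milestone("M1", "factual_accuracy", 0.95, 5_000_000),
    Milestone("M2", "harm_prevention", 0.99, 8_000_000),
    Milestone("M3", "adversarial_robustness", 0.90, 12_000_000),
]

def awarded_payments(scores: dict[str, float]) -> list[tuple[str, int]]:
    """Reward outputs, not inputs: pay only for independently
    verified scores that clear a pre-registered threshold."""
    return [(m.name, m.payment_eur)
            for m in MILESTONES
            if scores.get(m.metric, 0.0) >= m.threshold]

# Scores as they might arrive from an independent evaluator.
scores = {"factual_accuracy": 0.96,
          "harm_prevention": 0.97,
          "adversarial_robustness": 0.92}
print(awarded_payments(scores))  # [('M1', 5000000), ('M3', 12000000)]
```

Fixing thresholds and payouts in advance is what makes this a “pull” mechanism: the public purse pays for verified results rather than for research effort, leaving developers free to choose how they get there.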

Public Involvement and Accountability

A cornerstone of the proposal is the emphasis on public participation in testing AI models. Adversarial testing, or “red teaming,” would challenge these systems to identify vulnerabilities and ensure robustness.
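
A red-teaming pass can be thought of as a loop over attack prompts. The sketch below is purely illustrative; `model` and `is_harmful` stand in for a system under test and an independent harm classifier, neither of which the brief specifies.

```python
# Illustrative red-teaming loop (hypothetical; the brief prescribes
# no specific tooling). `model` is the system under test and
# `is_harmful` an independent harm classifier.
def red_team(model, attack_prompts, is_harmful) -> float:
    """Return the share of adversarial prompts the model resists."""
    failures = sum(1 for p in attack_prompts if is_harmful(model(p)))
    return 1.0 - failures / len(attack_prompts)

# Stub usage: a model that always refuses passes every attack.
robustness = red_team(
    model=lambda p: "I can't help with that.",  # stub model
    attack_prompts=["attack-1", "attack-2"],
    is_harmful=lambda r: False,                 # stub classifier
)
print(robustness)  # 1.0
```

A robustness rate from a loop like this is the kind of score that could feed the adversarial-resilience metric in the milestone scheme sketched above.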

“It is particularly important to align the incentives of all parties involved and to provide the necessary coordination devices for efficient and effective red teaming,” the policy brief notes, emphasizing the importance of collective validation procedures.

Transparency measures would include public disclosure of evaluation results, fostering trust and enabling a shared understanding of generative AI’s potential and limitations. By involving a diverse range of stakeholders, the program aims to ensure that safety innovations align with societal expectations.

Embedding the Proposal in EU Policy Frameworks

The proposed milestone-based program complements existing EU regulatory measures such as the AI Act and the updated Product Liability Directive. These frameworks focus largely on banning unsafe applications and imposing penalties for safety failures.

However, the ZEW policy brief highlights a critical gap: existing initiatives prioritize adoption and capability building rather than directly fostering safety innovations.

“The EU also states that it wants to foster and reward the development of safer generative AI with various initiatives initiated during the last legislative term. However, they seem to be more focused on fostering the adoption of generative AI in general rather than developing safe generative AI in particular,” the brief explains, pointing to programs like “GenAI4EU” and “AI Factories” as examples of efforts that overlook safety-specific challenges.

The Safe Generative AI Innovation Program would add a new dimension to the EU’s AI strategy by explicitly rewarding developers for achieving meaningful safety milestones, thereby bridging the gap between regulation and innovation.

Balancing Safety and Performance

One of the critical challenges for the program is the trade-off between safety and other performance dimensions of generative AI models. Taken to its extreme, the safest system is one that generates nothing at all, and such a system is of no practical use.

To strike a balance, the incentive program must prioritize models that meet high safety standards while maintaining competitive performance.

“An increase in safety might well come at the expense of lower performance in other dimensions. To illustrate this point: The safest generative AI is one that does not generate anything at all, but a competitive level of performance is necessary for a model to be of actual use,” the policy brief notes.

The program would use scaling laws and other predictable development pathways to encourage efficiency in integrating safety and performance.
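
The brief does not spell out which scaling laws it has in mind; the standard empirical form from the language-model literature (Kaplan et al., 2020; Hoffmann et al., 2022) is a power law that makes capability gains roughly predictable from training resources:

```latex
% Standard empirical scaling-law form from the LLM literature
% (the brief itself gives no formula). Test loss falls as a
% power law in training compute C:
\[
  L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha} + L_{\infty}
\]
% with fitted constants C_0, \alpha > 0 and an irreducible loss
% L_\infty. Analogous laws in parameters and data would let funders
% anticipate capability trajectories when calibrating how ambitious
% a safety milestone should be.
```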

Building a Competitive Edge for Europe

Beyond its immediate focus on safety, the proposal aims to bolster the EU’s position in the global AI landscape. Europe currently lags behind the United States and China in generative AI development, with only a handful of companies competing on the international stage.

By prioritizing safety, the EU could establish itself as a leader in ethical AI innovation, a growing priority for consumers and regulators worldwide.

“The Safe Generative AI Innovation Program might also re-energise AI innovation in the EU,” the brief states. “By creating clear incentives for safety innovation, this program could help European companies develop a competitive advantage in an increasingly important market dimension.”

The brief also underscores the broader potential of “pull” funding mechanisms, citing their success in fields like vaccine development and autonomous vehicles. By adapting these approaches to generative AI, the EU could pioneer a new model for responsible technological advancement.

The policy brief concludes, “The primary goal should remain the focus, with secondary objectives included only if they do not alter incentives, to avoid ‘mission creep.’”

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
