EU Approves €1.3B Tech Fund, Tying AI Regulation to Deployment

The EU has approved €1.3B in funding to support AI, cybersecurity, and digital skills as the AI Act enforcement begins across the bloc.

The European Commission has approved €1.3 billion in funding for artificial intelligence, cybersecurity, and digital skills as part of the Digital Europe Programme for 2025–2027. Originally reported as €1.2 billion in local media on March 27, the Commission confirmed the updated figure a day later in its official announcement.

While the initiative targets supercomputing, cloud infrastructure, and quantum communication, it also aims to align deployment efforts with the enforcement of AI regulations. The funding is intended to help developers build technologies that comply with the EU’s AI Act and related digital laws.

According to the Commission, the investment will support the deployment of critical technologies that are strategic for Europe’s digital future. The Commission has opened calls for project proposals under the Digital Europe Programme as part of the program’s implementation.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, said: “Securing European tech sovereignty starts with investing in advanced technologies and in making it possible for people to improve their digital competences. With the opportunities under the Digital Europe Programme, we are ensuring that new technologies – and with them new potential – reach European citizens, businesses and public administrations.”

AI Act Enforcement: What’s Already in Effect

The European AI Act officially entered into force last August, but enforcement began in stages. The first major milestone came on February 2, 2025, when bans on “unacceptable risk” systems took effect. These include systems used for real-time biometric surveillance in public spaces, social scoring, and predictive policing.

Banned practices include using AI to monitor employees’ emotions, manipulating users into financial decisions, social scoring, and the use of biometric data for predictive policing.

Additional requirements will come into force on August 2, 2025, for general-purpose AI models, followed by full applicability of the AI Act on August 2, 2026. Certain high-risk systems will have until 2027 to comply.

Under the general-purpose AI rules, systems such as OpenAI’s models and Meta’s LLaMA series will have to disclose their data sources and development methods. A Stanford study in mid-2023 showed most AI models then still fell short of the AI Act’s expectations.

Compliance Challenges and Enforcement

In June 2023, a coalition of 150 European companies—including Airbus and Renault—warned that overly strict AI regulation could push innovation outside the EU. Smaller startups in particular have expressed concern about affording legal and technical compliance efforts.

Meanwhile, foreign developers are under growing scrutiny. Chinese AI firm DeepSeek was investigated by Italian authorities in early 2025 over privacy and national security issues related to its language models.

Enforcement is being coordinated by the European AI Office along with national regulatory agencies across the bloc.

Code of Practice and Evolving Compliance Tools

The draft General-Purpose AI Code of Practice, published in November 2024, sets out expectations for documenting training data and assessing system risks. Developed by the AI Office and technical experts, the draft includes a “Safety and Security Framework” for models trained with more than 10²⁵ floating-point operations (FLOPs), a threshold that reflects the computational scale of a system’s training.
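Whether a planned model would cross the 10²⁵ FLOPs threshold can be roughly estimated before training with the widely used 6 × parameters × tokens heuristic for training compute. This approximation, and the example model sizes below, are illustrative assumptions and are not part of the Act’s text; a minimal sketch:

```python
# Rough training-compute estimate using the common 6*N*D heuristic:
# ~6 FLOPs per parameter per training token. The heuristic and the
# example figures are illustrative assumptions, not from the AI Act.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs figure cited in the draft Code of Practice

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * parameters * training_tokens

def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    """Check the estimate against the 1e25 FLOPs threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")            # 6.30e+24 FLOPs
print(exceeds_threshold(70e9, 15e12))  # False - below the threshold
```

The point of such a back-of-the-envelope check is that the threshold is knowable in advance: a developer can tell from parameter count and dataset size alone whether the extra systemic-risk obligations are likely to apply.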

Developers must also report serious incidents and maintain updated risk forecasts. Feedback from nearly 1,000 stakeholders was collected via the Futurium platform, with final guidelines expected in May 2025.

In an October 2024 compliance benchmark, ETH Zurich and LatticeFlow found that major models generally performed well on content safety but fell short on fairness and data privacy. These results highlight the need for ongoing compliance support.

Regulatory Coordination and Future Adjustments

To reduce complexity for businesses, the European Commission is assessing overlaps between the AI Act and other legislation such as the Digital Services Act and Digital Markets Act. So far, no major rollback is expected, but simplification may help companies manage obligations more effectively.

Meanwhile, the EU is doubling down on strategic investment. As AI development accelerates globally, the EU is signaling that its model of regulation paired with public investment is here to stay. Whether it can keep startups from relocating, and ensure compliance without slowing down innovation, will soon be tested.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
