Over 150 executives from companies including Renault, Heineken, Airbus, and Siemens have signed an open letter urging the EU to rethink its plans for the AI Act. They argue that the law is too strict and disproportionate, and could stifle AI innovation and drive AI providers away from the European market. They are particularly concerned about the rules for generative AI systems, which would require providers to register their products with the EU, undergo risk assessments, and meet transparency requirements.
In the open letter, the companies argue that these rules are unnecessary and excessive, and could discourage them from using AI to create new products and services. They suggest that the EU should adopt a more flexible and risk-based approach that focuses on the actual use cases of AI rather than the underlying technology.
“We have come to the conclusion that the EU AI Act, in its current form, has catastrophic implications for European competitiveness,” said Jeannette zu Fürstenberg, founding partner of La Famiglia VC and one of the letter's signatories. “There is a strong spirit of innovation that is being unlocked in Europe right now, with key European talent leaving US companies to develop technology in Europe. Regulation that unfairly burdens young, innovative companies puts this spirit of innovation in jeopardy.”
The AI Act, which is currently being negotiated by the European Parliament and the Council of the European Union, would impose stringent regulations on high-risk AI systems, such as those that are used for facial recognition or social scoring. However, the law also has implications for generative AI systems, which can create new content such as text, images, or music. Examples of chatbots using large language models with generative AI include OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing Chat.
What is the AI Act, and What is its Potential Impact?
The Act aims to regulate systems that pose an “unacceptable level of risk”, such as tools that forecast crime or assign social scores. It also introduces new limitations on “high-risk AI”, including systems that could sway voters or damage people's health. Moreover, the legislation also establishes new rules for generative AI, requiring content produced by systems like ChatGPT, Bing Chat or Bard to be labelled.
Additionally, it requires model providers to disclose summaries of the copyrighted data used for training. This could be a major obstacle for systems that generate humanlike text by collecting material from the web, much of which is copyrighted.
Earlier this month, the European Parliament voted in favor of the AI Act. The Act now goes to the Council of the European Union for final approval, with a vote expected in the coming months. OpenAI CEO Sam Altman previously suggested the company might withdraw ChatGPT from Europe, but later backtracked.
The AI Act is seen as a landmark law that will shape the future of AI in Europe and beyond. However, some critics argue that it could hamper innovation and competitiveness in the field of generative AI, which has many potential applications and benefits for society. The debate over how to balance regulation and innovation in AI is likely to continue as the technology evolves and impacts more aspects of our lives.
OpenAI has reportedly been lobbying against some aspects of the AI Act. According to Time, these efforts have been successful to some extent: the current draft does not classify GPT-4 or other general-purpose AI systems as high-risk. However, the Act's requirements for transparency, traceability, and human oversight remain in place.
The AI Act is expected to come into force by 2025, though it could face legal challenges and amendments before then.