The EU is finalizing its AI Act, a set of rules to regulate artificial intelligence. But some companies and organizations, among them GitHub, Hugging Face, and Creative Commons, want more support for the open-source development of AI models and have sent a paper to EU policymakers with their suggestions.
They propose clarifying the definitions of AI components, recognizing that open-source developers and researchers are not profiting commercially from AI, allowing limited real-world testing of AI projects, and setting proportionate requirements for different foundation models. GitHub’s senior policy manager, Peter Cihon, says the group wants to help lawmakers create the best environment for AI innovation. He also hopes other governments will follow the EU’s example when they draft their own AI laws.
“The AI Act holds promise to set a global precedent in regulating AI to address its risks while encouraging innovation,” the companies write in the paper. “By supporting the blossoming open ecosystem approach to AI, the regulation has an important opportunity to further this goal.”
The EU’s AI Act is one of the world’s first attempts to regulate AI. It has drawn criticism, however, for being too vague about what counts as AI technology and too focused on how that technology is used.
What Are the Goals of the AI Act?
AI is the field of computer science that develops systems capable of mimicking human abilities such as seeing, hearing, and reasoning. AI can benefit many industries, including health care, education, and manufacturing, but it also raises concerns about privacy, security, and human dignity.
The EU wants to regulate AI and ban some of its harmful applications, such as facial recognition for mass surveillance. The proposed law, called the AI Act, would also impose heavy fines for breaking the rules. The law is not final yet, but the EU hopes to become a global leader in ethical and human-friendly AI.
Europe’s AI Act also covers generative AI systems, which can create new content such as text, images, or music. The law aims to ban systems deemed dangerous, such as tools that predict crime or assign social scores. Examples of chatbots built on large language models include OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing Chat.
The law also sets new standards for high-risk AI, including systems that could influence voters or affect people’s health. It requires content produced by AI systems to be labeled as such, and it obliges providers to disclose summaries of the copyrighted data used to train their models.