Most AI Models Don't Meet EU AI Act Regulation Standards

A Stanford CRFM study finds that major foundation model providers, including OpenAI and Google, fall well short of the transparency requirements in the EU's draft AI Act.

A recent study conducted by Stanford University's Center for Research on Foundation Models (CRFM) has revealed that most artificial intelligence (AI) models, including Google's PaLM 2 and OpenAI's GPT-4, do not comply with the requirements of the European Union's (EU) upcoming AI Act. The Act, which recently received overwhelming support in the European Parliament, aims to impose explicit obligations on foundation model providers like OpenAI and Google to regulate the use of AI and limit its potential dangers.

Evaluation of AI Models

The CRFM researchers focused on 12 of the 22 requirements directed at foundation model providers, the ones that could be assessed using publicly available information. These requirements were grouped into four categories: data resources, compute resources, the model itself, and deployment practices. The researchers devised a 5-point rubric for each of the 12 requirements and evaluated 10 major model providers, including OpenAI, Google, Meta, and Stability AI.
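
To make the scoring scheme concrete, here is a minimal Python sketch of how a 5-point rubric over 12 requirements can be aggregated into a compliance percentage. The study's exact weighting is not detailed here, and the example scores are hypothetical, so treat this as an illustration of the arithmetic rather than the researchers' actual method.

```python
# Minimal sketch: each provider is graded 0-4 (a 5-point rubric) on each
# of 12 requirements; aggregate compliance is earned points divided by
# the maximum possible. Scores below are hypothetical, not study data.

MAX_SCORE = 4          # 5-point rubric: 0, 1, 2, 3, 4
NUM_REQUIREMENTS = 12  # the 12 assessable AI Act requirements

def compliance_percent(scores: list[int]) -> float:
    """Turn per-requirement rubric scores into a compliance percentage."""
    if len(scores) != NUM_REQUIREMENTS:
        raise ValueError(f"expected {NUM_REQUIREMENTS} scores, got {len(scores)}")
    return 100 * sum(scores) / (MAX_SCORE * NUM_REQUIREMENTS)

# A provider scoring mostly 0s and 1s lands below the 25 percent mark
# mentioned in the study.
example_scores = [0, 1, 0, 2, 0, 1, 1, 0, 2, 1, 0, 1]
print(f"Compliance: {compliance_percent(example_scores):.1f}%")  # 18.8%
```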

Significant Discrepancy in Compliance Levels

The study revealed a significant discrepancy in compliance levels, with some providers scoring below 25 percent, and highlighted a widespread lack of transparency among model providers. Several areas of non-compliance were identified, including the failure to disclose the status of copyrighted training data, undisclosed energy usage and emissions during model training, and the absence of transparent methodologies to mitigate potential risks.

Challenges in Complying with the AI Act

The study indicates that none of the studied foundation models fully comply with the current regulations outlined in the AI Act draft. While there is “ample room for improvement” for providers to align themselves more closely with the requirements, the high-level obligations established in the AI Act may pose a challenge for many companies.

Concerns from Executives

Recently, more than 150 executives from prominent companies expressed their concerns regarding the tight regulations in an open letter addressed to the European Commission, the parliament, and member states. They warned that the proposed rules could burden companies involved in the development and implementation of AI systems, prompt companies to consider leaving the EU, and lead investors to withdraw their support for AI development in Europe.

The study suggests there is still an urgent need for enhanced collaboration between policymakers and model providers in the EU to effectively address the gaps and challenges, and find a common ground to ensure the appropriate implementation and effectiveness of the AI Act.

Stanford Researchers' Perspective

The Stanford researchers emphasized that transparency should be the first priority to hold foundation model providers accountable. They identified four areas where many organizations receive poor scores: copyrighted data, compute/energy, risk mitigation, and evaluation/testing. They also found a clear dichotomy in compliance as a function of release strategy, with open releases offering more comprehensive disclosure of resources than restricted or closed releases.

They concluded that enforcing the 12 requirements in the Act would bring substantive change while remaining within reach for providers. They also highlighted that the Act would yield significant change to the ecosystem, making substantial progress towards more transparency and accountability.

AI Regulation: Recent Developments

  1. Japan Considers Softer AI Regulations than the EU (July 3, 2023): Japan is considering a more lenient approach to AI regulations than the EU's strict AI Act, aiming for a flexible policy that promotes innovation while ensuring ethical standards. Despite these regulatory differences, Japan and the EU are exploring a partnership in AI and chip development to reduce their dependence on China. This collaboration could accelerate the responsible and ethical use of AI technologies.
  2. EU Businesses Express Concern Over AI Act (June 30, 2023): Over 150 executives from top European companies, including Renault, Heineken, Airbus, and Siemens, have signed an open letter expressing concerns about the EU's AI Act. They argue that the Act's strict regulations could stifle AI innovation and discourage companies from using AI to create new products and services. The executives suggest a more flexible, risk-based approach that focuses on AI's actual use cases rather than the underlying technology.

  3. OpenAI's Lobbying Efforts Against the AI Act (June 20, 2023): OpenAI has been lobbying European officials to water down the EU's proposed AI Act. The company argues that its general-purpose AI systems, such as GPT-4, should not be considered “high risk” and should therefore be exempt from the Act's regulations. OpenAI's lobbying efforts have been somewhat successful, as the current draft of the AI Act does not include GPT-4 or other general-purpose AI systems among the list of high-risk AI systems.

  4. EU Parliament Approves AI Act (June 14, 2023): The European Union Parliament has given the green light to the European AI Act. This legislation is designed to regulate AI usage, focusing on systems that pose a high level of risk, such as predictive policing tools and social scoring systems. It also introduces new restrictions on high-risk AI systems that could potentially manipulate voters or endanger health, and outlines new rules for generative AI, requiring AI-generated content to be clearly labeled and summaries of copyrighted training data to be published. The Act's implications are so significant that OpenAI, the maker of ChatGPT, may consider exiting the European market. The Act is still under negotiation with the European Council.

  5. EU Plans to Label AI-Generated Content (June 5, 2023): The European Union has urged companies like Google, Facebook, and Microsoft to start labeling all AI-generated content, such as deepfakes and synthetic media. The EU's Digital Services Act (DSA) requires online platforms to take measures to prevent the spread of harmful content, including AI-generated content that could be used to mislead or deceive users. The DSA is set to come into force in 2024.
  6. Australia Plans Regulatory Framework for AI (June 5, 2023): The Australian government, under the leadership of Industry and Science Minister Ed Husic, has launched a comprehensive review of AI in response to global concerns. The review, which is set to last eight weeks, aims to establish a new regulatory framework for AI, with a particular focus on high-risk areas such as facial recognition. The review will explore the possibility of strengthening existing regulations, introducing new AI-specific legislation, or a combination of both.

  7. Microsoft President Calls for Generative AI Regulations (May 31, 2023): Microsoft's President, Brad Smith, has voiced his support for generative AI regulations. He emphasized the need for a framework that ensures the responsible use of AI technologies, adding his voice to the growing chorus of advocates for AI regulation.

  8. Microsoft Publishes Governance Blueprint for Future Development (May 26, 2023): Microsoft has shared a blueprint outlining its vision for AI governance. The report, titled “Governing AI: A Blueprint for the Future”, presents five key principles that Microsoft believes should guide AI development and usage. The company's proposed five-step blueprint for public AI governance includes the implementation of government-led AI safety frameworks, the establishment of a new federal agency dedicated to AI policy, and the promotion of responsible AI practices across sectors.

  9. G-7 Leaders Initiate ‘Hiroshima Process’ (May 21, 2023): The leaders of the G-7 countries have agreed to establish a governance protocol named the ‘Hiroshima Process’ in response to the rapid advancement of generative AI. This agreement aims to ensure that AI development and deployment align with the shared democratic values of the G-7 nations.

  10. OpenAI CEO Calls for Urgent AI Regulation (May 17, 2023): Sam Altman, the CEO of OpenAI, has testified before a US Senate subcommittee, advocating for the regulation of rapidly advancing AI technologies. Altman suggested the establishment of an agency that would issue licenses for the development of large-scale AI models, enforce safety regulations, and require AI models to pass tests before public release.

Markus Kasanmascheff
Markus is the founder of WinBuzzer and has been playing with Windows and technology for more than 25 years. He holds a Master's degree in International Economics and previously worked as Lead Windows Expert for Softonic.com.
