A recent study by Stanford University's Center for Research on Foundation Models (CRFM) has revealed that most artificial intelligence (AI) models, including Google's PaLM 2 and OpenAI's GPT-4, do not comply with the requirements of the European Union's (EU) upcoming AI Act. The Act, which recently received overwhelming support in the European Parliament, aims to impose explicit obligations on foundation model providers like OpenAI and Google in order to regulate the use of AI and limit its potential dangers.
Evaluation of AI Models
The researchers at CRFM focused on 12 of the 22 requirements directed at foundation model providers that could be assessed using publicly available information. These requirements were grouped into four categories: data resources, compute resources, the model itself, and deployment practices. The researchers devised a 5-point rubric for each of the 12 requirements and evaluated 10 major model providers, including OpenAI, Google, Meta, and Stability AI.
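To make the scoring concrete, here is a minimal Python sketch of how such a rubric could aggregate into the compliance percentages cited below: each requirement earns 0 to 4 points, and a provider's overall score is its share of the maximum. The requirement names approximate the CRFM rubric, and the example scores are hypothetical, not the study's actual data.

```python
# Minimal sketch of rubric aggregation. The category groupings follow the
# article; requirement names approximate the CRFM rubric, and the example
# scores below are hypothetical.

# 12 requirements, grouped into the four categories named in the article.
RUBRIC = {
    "data":       ["data sources", "data governance", "copyrighted data"],
    "compute":    ["compute", "energy"],
    "model":      ["capabilities & limitations", "risks & mitigations",
                   "evaluations", "testing"],
    "deployment": ["machine-generated content", "member states",
                   "downstream documentation"],
}

MAX_POINTS = 4  # a 5-point rubric: each requirement is scored 0-4


def compliance_pct(scores: dict[str, int]) -> float:
    """Overall compliance as a percentage of the maximum possible points."""
    requirements = [r for reqs in RUBRIC.values() for r in reqs]
    total = sum(scores.get(r, 0) for r in requirements)
    return 100 * total / (MAX_POINTS * len(requirements))


# A hypothetical provider scoring 1/4 on nearly every requirement lands
# below the 25 percent threshold mentioned in the study.
example = {r: 1 for reqs in RUBRIC.values() for r in reqs}
example["copyrighted data"] = 0
print(f"{compliance_pct(example):.0f}%")  # -> 23%
```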
Significant Discrepancy in Compliance Levels
The study revealed a significant discrepancy in compliance levels, with some providers scoring below 25 percent, and highlighted a widespread lack of transparency among model providers. Several areas of non-compliance were identified, including failure to disclose the copyright status of training data, undisclosed energy usage and emissions during model training, and the absence of transparent methodologies for mitigating potential risks.
Challenges in Complying with the AI Act
The study indicates that none of the evaluated foundation models fully complies with the current requirements outlined in the AI Act draft. While the researchers see "ample room for improvement" in how providers align with the requirements, the high-level obligations established in the AI Act may pose a challenge for many companies.
Concerns from Executives
Recently, executives from 150 prominent companies expressed their concerns about the stringent regulations in an open letter addressed to the European Commission, the European Parliament, and member states. They warned that the proposed rules could place a heavy burden on companies developing and deploying AI systems, prompting some to consider leaving the EU and investors to withdraw their support for AI development in Europe.
The study suggests an urgent need for closer collaboration between EU policymakers and model providers to address these gaps and find common ground, ensuring that the AI Act is implemented appropriately and effectively.
Stanford Researchers' Perspective
The Stanford researchers emphasized that transparency should be the first priority in holding foundation model providers accountable. They identified four areas where many organizations received poor scores: copyrighted data, compute/energy, risk mitigation, and evaluation/testing. They also found a clear dichotomy in compliance as a function of release strategy, with open releases offering more comprehensive disclosure of resources than restricted or closed releases.
They concluded that enforcing the 12 assessed requirements would bring substantive change while remaining within reach for providers, and that the Act would meaningfully reshape the ecosystem, driving substantial progress toward greater transparency and accountability.