OpenAI, the AI research company behind the powerful language model GPT-4 and the ChatGPT chatbot, has been lobbying European officials to water down the European Union's proposed AI Act. The Act, currently being negotiated by the European Parliament and the Council of the European Union, would impose stringent regulations on high-risk AI systems, such as those used for facial recognition or social scoring.
OpenAI has argued that its general-purpose AI systems, such as GPT-4, should not be considered “high risk” and should therefore be exempt from the Act's regulations. The company has also argued that the Act's requirements for transparency, traceability, and human oversight are too burdensome and would stifle innovation.
According to Time, OpenAI's lobbying efforts have been successful to some extent. The current draft of the AI Act does not include GPT-4 or other general-purpose AI systems among the list of high-risk AI systems. However, the Act's requirements for transparency, traceability, and human oversight remain in place.
It is unclear whether OpenAI's lobbying efforts will be successful in the long run. The European Parliament and the Council of the European Union are still negotiating the AI Act, and it is possible that the final version of the Act will include stricter regulations for general-purpose AI systems.
What is the European AI Act?
The Act aims to regulate systems that pose an “unacceptable level of risk”, such as tools that forecast crime or assign social scores. It also introduces new limitations on “high-risk AI”, including systems that could sway voters or damage people's health.
The legislation also establishes new rules for generative AI, requiring content produced by systems like OpenAI's ChatGPT, Microsoft's Bing Chat, or Google's Bard to be labelled, and requiring model makers to disclose summaries of the copyrighted data used for training. This could be a major obstacle for systems that generate humanlike text by scraping material from the web, much of which is protected by copyright.
Earlier this month, the European Parliament voted in favor of the AI Act. The Act now goes to the Council of the European Union for final approval, with a vote expected in the coming months. OpenAI CEO Sam Altman previously suggested the company might pull ChatGPT out of Europe over the Act, but later walked back the remark.
The final version of the AI Act is likely to include some of the provisions that OpenAI lobbied against, such as requirements for transparency, traceability, and human oversight, alongside some exemptions for general-purpose AI systems such as GPT-4.
The debate over the AI Act highlights the tension between the need to regulate AI in order to protect people from harm and the need to allow for innovation. OpenAI's lobbying efforts suggest that some AI companies are more concerned with protecting their own profits than with ensuring that AI is used safely and responsibly.
A balanced approach to AI regulation matters: rules should be strict enough to protect people from harm, but not so burdensome that they stifle innovation. The AI Act is a significant step forward and is likely to set the standard for AI regulation around the world, so it will be important to monitor its implementation and ensure that it is effective in protecting people from harm.