Sam Altman, the CEO of OpenAI, has clarified his stance on the proposed AI regulation by the European Union, saying that he has no plans to leave the bloc and that he welcomes a proactive approach to AI governance.
Altman's comments came after he sparked controversy earlier this week by saying that OpenAI could “cease operating” in the EU if it was unable to comply with the provisions of the new AI legislation that the bloc is currently preparing. That would mean the company's ChatGPT would no longer be available in Europe.
The legislation, which is still undergoing revisions, aims to create a legal framework for the development and deployment of AI systems in the EU, with a focus on ensuring safety, transparency and accountability. The law also introduces a risk-based classification of AI systems, with stricter requirements for those deemed “high-risk”, such as those used for biometric identification, law enforcement or critical infrastructure.
Altman said that OpenAI had “a lot” of criticisms of the way the act is currently worded, especially regarding the definition and scope of high-risk systems. He argued that OpenAI's general-purpose systems, such as ChatGPT and GPT-4, which are widely used for natural language processing and generation, are not inherently high-risk and should not be subject to additional regulation.
“Either we'll be able to solve those requirements or not,” Altman said on Wednesday at a panel discussion at University College London. “If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible.”
OpenAI Will Work with Regulators in Europe
However, on Friday, Altman tweeted that he was “excited” to continue operating in the EU and that he had no plans to leave. He also said that he had met with EU regulators to discuss the AI act and that he appreciated their willingness to listen and collaborate.
“We are excited to continue to operate here and of course have no plans to leave,” Altman tweeted. “We think AI regulation is an area where we should be proactive.” He also said that his preference for regulation was “something between the traditional European approach and the traditional U.S. approach”, which he described as “too lax”.
very productive week of conversations in europe about how to best regulate AI! we are excited to continue to operate here and of course have no plans to leave.
— Sam Altman (@sama) May 26, 2023
OpenAI is one of the leading companies in the field of AI research and development. Its stated mission is to create artificial general intelligence (AGI), meaning AI that can perform any intellectual task that humans can, and to ensure its safe and beneficial use.
The EU's AI act is expected to be finalized by 2024, after a process of consultation and negotiation with various stakeholders, including member states, industry representatives and civil society groups. The act is part of the EU's broader digital strategy, which aims to foster innovation and competitiveness in the digital sector while protecting fundamental rights and values.
Google Bard is Already Unavailable in the EU
AI chatbots appear to be facing increasing scrutiny in Europe. Google has already announced that its Bard AI will not be available in the European Union, reportedly due to potential GDPR concerns. The General Data Protection Regulation (GDPR) is a law that protects the privacy and personal data of EU citizens. It requires companies to obtain consent from users before collecting and processing their data, and to provide them with the right to access, correct, delete, or transfer their data.
Bard, and other chatbots powered by large language models such as ChatGPT and Microsoft's Bing Chat, rely on massive amounts of data to generate responses. Bard may violate the GDPR by mishandling personal or sensitive data. For instance, it may accidentally expose or store users' identities, locations, preferences, or opinions.
Google has not disclosed the specific GDPR issues that block Bard's launch in the EU, but they may involve the lack of transparency and accountability of its AI system. The GDPR requires companies to explain how their AI systems operate and make decisions that affect users. However, Bard may not be able to do so, as its responses rely on complex and opaque neural networks that are difficult to interpret or audit.
Google may also struggle to ensure that Bard follows the GDPR principles of data minimization, purpose limitation, and accuracy. These principles oblige companies to collect and process only the data that is necessary, relevant, and accurate for their specific purposes. However, Bard may collect and process more data than necessary, use it for purposes other than those users intended, or generate inaccurate or misleading responses.