
AI Regulation: New Paper from OpenAI, Google DeepMind and Others Calls for Urgent Action

The researchers warn about what they term "frontier AI models": highly capable AI models that could pose severe risks to public safety and global security.


In a world where AI technologies are rapidly advancing, the need for effective regulation is becoming increasingly urgent. A recent paper published by a diverse group of researchers across various institutions, including OpenAI, Google DeepMind, the University of Toronto, and the Centre for the Governance of AI, discusses the challenges and potential solutions for regulating what they term “frontier AI models”. These models, due to their high capabilities, could pose severe risks to public safety and global security.

Notable contributors include Jade Leung and Cullen O’Keefe from OpenAI, Markus Anderljung from the Centre for the Governance of AI and the Center for a New American Security, Joslyn Barnhart from Google DeepMind, and Anton Korinek from the Brookings Institution, University of Virginia, and the Centre for the Governance of AI.

Proposal of “Regulatory Building Blocks”

To address these risks, the researchers propose three building blocks for the regulation of frontier AI models. The first is the development of safety standards through expert-driven, multi-stakeholder processes. The second is increased regulatory visibility through mechanisms such as disclosure requirements and monitoring processes. The third is compliance and enforcement, for which the paper suggests that government intervention may be necessary to ensure adherence to standards.

The authors also suggest an initial set of safety standards. These include conducting pre-deployment risk assessments, subjecting model behavior to external scrutiny, using risk assessments to inform deployment decisions, and monitoring and responding to new information about model capabilities and uses after deployment.

The Alignment Problem and Frontier AI Models

The alignment problem, a key challenge in AI, refers to the difficulty of ensuring that AI systems reliably do what humans want them to do. This problem is particularly acute with the so-called frontier AI models, which can develop unexpected and potentially dangerous capabilities. The paper’s authors argue that effective regulation of these models requires intervention at all stages of their lifecycle – from development to deployment and post-deployment.

This aligns with recent efforts by OpenAI to tackle the alignment problem, as evidenced by their launch of a new “superalignment” team dedicated to protecting against rogue AI. However, the alignment problem is not the only challenge that needs to be addressed.

The Global Landscape of AI Regulation

The paper’s release comes at a time when the global landscape of AI regulation is evolving. The European Parliament recently approved its draft of the AI Act, a comprehensive piece of legislation aimed at regulating high-risk AI systems. However, many current AI models do not yet meet the standards set out in the draft Act, and some European businesses have expressed concerns that it could stifle innovation.

Meanwhile, other countries are taking a different approach. Japan, for instance, is considering a more lenient approach to AI regulation, aiming to balance the need for ethical standards and accountability with the desire to avoid imposing excessive burdens on companies.

Aiming for a Balanced Approach

Against this backdrop, the new paper proposes a balanced approach to AI regulation, advocating for the development of safety standards, increased regulatory visibility, and mechanisms for ensuring compliance. It also suggests initial safety standards, including pre-deployment risk assessments and post-deployment monitoring.

However, the paper also acknowledges the uncertainties and limitations of its proposals, highlighting the need for further analysis and input. This reflects the views of Geoffrey Hinton, often referred to as the “Godfather of AI”, who recently expressed doubts about whether “good AI” would triumph over “bad AI”.

Source: OpenAI
Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
