The G-7 summit in Hiroshima, Japan, produced an unprecedented decision that marks a significant step in worldwide AI regulation. Recognizing the rapid and transformative advancement of generative artificial intelligence (AI), the leaders of the Group of Seven countries agreed to establish a governance initiative named the "Hiroshima AI Process". The agreement seeks to ensure that AI development and deployment align with the shared democratic values of the G-7 nations.
Calls for such regulation have come from many sides in recent months, including from within the AI community itself. OpenAI CEO Sam Altman expressed his agreement with lawmakers on the need for regulation at a recent US Senate hearing. Geoffrey Hinton, a former Google researcher often called the "Godfather of AI" for his groundbreaking work on neural networks, has become vocal about the possible dangers of AI and has been calling for regulation since his departure from Google.
Championing Human-Centric and Trustworthy AI
Central to the discussion was the necessity for a “human-centric” approach towards AI development. Japanese Prime Minister Fumio Kishida called for cooperation in the secure cross-border flow of data, committing a financial contribution towards such an effort.
The concern extends beyond Japan: the G-7 leaders collectively stressed the urgency of technical standards that ensure the trustworthiness of AI. The European Union (EU), which is currently advancing legislation to regulate AI technology, added further weight to the discussion. The EU's forthcoming regulation, potentially the world's first comprehensive AI law, sets a compelling example for other advanced economies.
Innovation vs. Regulation: The Delicate Balance
The consensus among G-7 leaders is that AI regulation is not only necessary but also urgent. However, the challenge lies in balancing regulation and innovation. A key concern is how to implement protective measures without stifling technological advancement. Recognizing this delicate balance, the leaders have endorsed a “risk-based” approach for AI regulations. This approach aims to maintain a robust AI development environment while vigilantly addressing potential societal risks associated with AI's rapid advancement.
Global Perspectives on AI Governance
Despite their shared commitment to democratic values, G-7 countries exhibit diverse approaches towards AI governance. Japan, for instance, favors soft guidelines rather than strict laws, while the EU leans towards firmer regulatory legislation.
The United States, another G-7 member, has so far taken a more cautious approach to binding legislation. OpenAI CEO Sam Altman has advised US lawmakers to consider licensing and testing requirements for the development of advanced AI models.
Beyond the G-7, nations such as China exhibit a distinct approach to AI governance, adopting restrictive measures that align generative AI-powered services with socialist values.
Hiroshima AI Process: Towards Collaborative and Inclusive AI Governance
In response to these challenges, the G-7 leaders have agreed to create a ministerial forum named the “Hiroshima AI Process”. This forum will explore key issues surrounding generative AI, including copyright challenges and the potential for disinformation. The initial meeting of this forum is expected to take place before the end of the year.
The G-7 leaders also called upon international organizations, like the Organisation for Economic Cooperation and Development (OECD), to analyze the impact of policy developments in the AI field. This collaborative approach is expected to drive a comprehensive understanding of AI's societal implications, paving the way for more informed, effective policies.
The Hiroshima Process is an expression of the collective will of the G-7 nations to navigate the AI governance landscape together, working towards a human-centric, trustworthy AI future that aligns with shared democratic values. The outcomes of this initiative hold the potential to influence AI governance globally, marking an important chapter in the history of AI development.
With regard to artificial intelligence, the G7 Hiroshima Leaders' Communiqué states:
“We are taking concrete steps to […] advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values. […]
In areas such as AI, immersive technologies such as the metaverses and quantum information science and technology and other emerging technologies, the governance of the digital economy should continue to be updated in line with our shared democratic values. These include fairness, accountability, transparency, safety, protection from online harassment, hate and abuse and respect for privacy and human rights, fundamental freedoms and the protection of personal data.
We will work with technology companies and other relevant stakeholders to drive the responsible innovation and implementation of technologies, ensuring that safety and security is prioritized, and that platforms are tackling the threats of child sexual exploitation and abuse on their platforms, and upholding the children's rights to safety and privacy online. We continue to discuss ways to advance technology for democracy and to cooperate on new and emerging technologies and their social implementation, and look forward to an inclusive, multi-stakeholder dialogue on digital issues, including on Internet Governance, through relevant fora, including the OECD Global Forum on Technology.
We commit to further advancing multi-stakeholder approaches to the development of standards for AI, respectful of legally binding frameworks, and recognize the importance of procedures that advance transparency, openness, fair processes, impartiality, privacy and inclusiveness to promote responsible AI. We stress the importance of international discussions on AI governance and interoperability between AI governance frameworks, while we recognize that approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members.
We support the development of tools for trustworthy AI through multi-stakeholder international organizations, and encourage the development and adoption of international technical standards in standards development organizations through multi-stakeholder processes. We recognize the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors, and encourage international organizations such as the OECD to consider analysis on the impact of policy developments and Global Partnership on AI (GPAI) to conduct practical projects. In this respect, we task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of this year.
These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilization of these technologies. We welcome the Action Plan for promoting global interoperability between tools for trustworthy AI from the Digital and Tech Ministers' Meeting.
We recognize the potential of immersive technologies, and virtual worlds, such as metaverses to provide innovative opportunities, in all industrial and societal sectors, as well as to promote sustainability. For this purpose, governance, public safety, and human rights challenges should be addressed at the global level. We task our relevant Ministers to consider collective approaches in this area, including in terms of interoperability, portability and standards, with the support of the OECD.”