In response to growing global concerns about the potential harm of artificial intelligence (AI), the Australian government has initiated a comprehensive review of the technology. The review, led by Industry and Science Minister Ed Husic, aims to gather insights from various stakeholders to shape a new regulatory framework for AI, according to Sky News Australia.
A Consultative Approach to AI Regulation
The review process, expected to last eight weeks, was launched with the release of two papers. The first, a ‘rapid response’ report commissioned from the Australian National Science and Technology Council (NSTC), examines the opportunities and risks posed by generative AI. The second is a consultation paper that surveys the AI regulation efforts of other nations.
As reported by news.com.au, Husic emphasized the need for a framework that ensures the technology works for the benefit of communities. “We want people to be confident that the technology is working for us and not the other way around,” Husic said. He also highlighted the importance of public involvement in the process, inviting both experts and the community to share their expectations and concerns.
Addressing High-Risk AI Applications
The Australian government is particularly attentive to “high-risk” areas of AI, such as facial recognition technology. If the consultation process identifies such areas that require regulatory intervention, the government is prepared to act. “If facial recognition was being developed and used in ways that were outside what the community think is acceptable, then clearly we will be taking a very deep look at that,” Husic stated.
Building on Existing Laws
While the review is underway, AI will continue to operate under existing rules in Australia, which range from sector-specific regulations (such as in healthcare and energy) to general industry standards (privacy, security, and consumer safeguards). The review will consider whether to strengthen these existing regulations, introduce specific AI legislation, or both. As Husic put it, “We need the framework right, that people are confident that it's working in favour or for the benefit of communities – it's really important.”
The Australian government's move aligns with global efforts to regulate AI. The U.S. and the European Union are also grappling with how to manage AI advancements. The review comes in the wake of a statement by the Center for AI Safety, signed by leading tech experts and academics, warning of a “risk of extinction” from unchecked AI and calling for mitigating that risk to be a global priority.
AI Regulation as a Global Urgency
In the past few months, there have been significant developments in the field of AI regulation. Here's a chronological summary of the key events:
EU's New AI Act (April 28, 2023): The European Union is preparing to finalize its landmark legislation on artificial intelligence, the Artificial Intelligence Act (AI Act). The Act aims to create a common regulatory and legal framework for AI, balancing innovation with the protection of fundamental rights. The legislation faced challenges, particularly over how to treat ChatGPT, OpenAI's natural language processing system. The EU settled on a nuanced approach, classifying such systems as high-risk only when used for purposes that could cause significant harm to individuals or society. The AI Act is expected to come into law later in the year.
OpenAI CEO Calls for Urgent AI Regulation (May 17, 2023): Sam Altman, the CEO of OpenAI, testified before a US Senate subcommittee, agreeing with lawmakers on the need to regulate rapidly advancing AI technologies. Altman proposed creating an agency to license the development of large-scale AI models, alongside safety regulations and tests that AI models must pass before being released to the public. The testimony cemented his standing as a leading figure in the AI policy debate.
G-7 Leaders Initiate ‘Hiroshima Process’ (May 21, 2023): The leaders of the Group of Seven (G-7) countries, recognizing the rapid advancement of generative AI, agreed to establish a governance protocol named the ‘Hiroshima Process’. The agreement seeks to ensure that AI development and deployment align with the shared democratic values of the G-7 nations, marking a significant step toward coordinated AI regulation worldwide.
Microsoft Publishes Governance Blueprint for Future Development (May 26, 2023): Microsoft released a report, “Governing AI: A Blueprint for the Future”, outlining how it believes artificial intelligence should be governed. The report proposes a five-point blueprint for the public governance of AI, including implementing government-led AI safety frameworks, establishing a new federal agency dedicated to AI policy, and promoting responsible AI practices across sectors.
Microsoft President Calls for Generative AI Regulations (May 31, 2023): Microsoft President Brad Smith called for regulations for generative AI, emphasizing the need for a framework that ensures the responsible use of AI technologies. Smith's call added to the growing chorus of voices advocating for AI regulation.