In marked contrast to the typically antagonistic tone of congressional hearings involving tech industry executives, Sam Altman, CEO of OpenAI, the maker of powerful AI tools such as ChatGPT and DALL·E, testified on Tuesday before a Senate subcommittee and agreed with lawmakers on the need to regulate rapidly advancing artificial intelligence (AI) technologies.
Altman’s plea for regulation came amid concerns over AI’s potential to spread misinformation, eliminate jobs, and match human intelligence. It was his first testimony before Congress and marked his emergence as a leading public figure in AI.
Altman’s Vision: A Framework for AI Regulation
Altman, a Stanford University dropout and tech entrepreneur, traded his usual attire for a formal suit and tie as he spoke about his company’s technology at a dinner with House members and met privately with senators before the hearing.
He argued for the need to manage fast-developing systems that could fundamentally transform the economy. “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”
In the hearing, Altman proposed the creation of an agency that would issue licenses for the development of large-scale AI models, set safety regulations, and define tests that AI models must pass before being released to the public. This was in response to concerns about AI’s role in destroying and creating jobs, an impact he acknowledged would require government intervention to mitigate.
Altman began his testimony by introducing OpenAI and its mission to ensure that artificial general intelligence (AGI), which he defined as “AI systems that can do anything humans can do across domains,” benefits all of humanity. He explained that OpenAI is a nonprofit organization that operates a for-profit subsidiary with profit caps and governance provisions that align with its charter. He also highlighted some of the AI tools that OpenAI has created, such as ChatGPT, a conversational agent that can generate coherent and engaging text; Whisper, a speech recognition system that can transcribe and translate spoken language; and DALL·E 2, a system that can generate images from natural language descriptions.
“Regulations Should Be Flexible and Adaptable to the Rapid Pace of AI Development”
The CEO of OpenAI then proposed some principles and recommendations for developing regulations and policies for AI. He suggested that regulations should incentivize AI safety while allowing for innovation and experimentation. He also suggested that regulations should be flexible and adaptable to the rapid pace of AI development and the diversity of AI applications. He recommended that policymakers collaborate with AI researchers, developers, users, and stakeholders to ensure that regulations are informed by technical expertise and public input. He also recommended that policymakers support AI education and research, especially in areas such as AI ethics, governance, and social impact.
Altman concluded his testimony by expressing his interest in working with the government to ensure the safe and beneficial development of AI. He said that OpenAI is committed to cooperating with other research and policy institutions and to serving as a technical leader in AI, and that it is open to sharing its data, code, models, and insights with the government and the public. He stated that he believes “the best way to ensure a positive future for humanity is to create it together.”
Senators Express Concerns
The senators who questioned Altman expressed their concerns and skepticism about AI. They asked him about how ChatGPT could be used for election interference, military applications, copyright issues, and ethical standards. They also compared AI to other innovations that had negative consequences, such as the atomic bomb.
Senator Richard Blumenthal (D) from Connecticut opened the proceedings with an AI-generated audio recording that sounded just like him. He said he did this to demonstrate the power and potential of AI, but also the dangers and risks it poses: “Too often we have seen what happens when technology outpaces regulation. The unbridled exploitation of personal data, the proliferation of disinformation and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want.”
Senator Christopher Coons (D) from Delaware said, “I’m really concerned that generative AI technologies can undermine the faith of democratic values and the institutions that we have. The Chinese are insisting that AI as being developed in China, reinforce the core values of the Chinese Communist Party and the Chinese system. And I’m concerned about how we promote AI that reinforces and strengthens open markets, open societies and democracy. In your testimony, you’re advocating for AI regulation tailored to the specific way the technology is being used, not the underlying technology itself.”
Senator John Kennedy (R) from Louisiana expressed his concerns about AI posing a lethal threat to the human race, asking IBM’s Chief Privacy and Trust Officer Christina Montgomery, who also testified at the hearing: “Hypothesis number one, many members of Congress do not understand artificial intelligence. Hypothesis number two, that absence of understanding may not prevent Congress from plunging in with enthusiasm and trying to regulate this technology in a way that could hurt this technology. Hypothesis number three, that I would like you to assume there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying. Assume all of those to be true. Please tell me in plain English, two or three reforms, regulations, if any, that you would implement if you were queen or king for a day. Ms. Montgomery.”
Senator Josh Hawley (R) from Missouri raised concerns about what the current state of AI could already mean for elections, saying, “I wanna think about this in the context of elections. If these large language models can, even now, based on the information we put into them quite accurately predict public opinion, you know, ahead of time. I mean, predict, it’s before you even ask the public these questions, what will happen when entities, whether it’s corporate entities or whether it’s governmental entities, or whether it’s campaigns or whether it’s foreign actors, take this survey information, these predictions about public opinion and then fine tune strategies to elicit certain responses, certain behavioral responses.”
You can find a full transcript of the Senate hearing with Sam Altman here if you want to dig deeper into what was said, or watch the full session video above.
Last Updated on August 4, 2023 2:06 pm CEST