Generative AI, the technology that can create new content such as text, images, audio, and video, is becoming more powerful and accessible every day. But with that power comes responsibility, and Microsoft president Brad Smith warns that generative AI poses significant ethical and legal challenges that must be addressed soon.
During an interview on Sunday with "Face the Nation" host Margaret Brennan, Smith said that generative AI has the potential to unleash a new wave of creativity and innovation, but also to cause harm and deception. He cited examples such as deepfakes, which are fabricated but realistic videos or images of real people, and ChatGPT, a chatbot that can generate human-like text on virtually any topic.
“I was in Japan just three weeks ago and they have a national AI strategy,” Smith said. “The government has adopted it. And it's about participating in the development and use but also regulating this. The world is moving forward. Let's make sure that the United States at least keeps pace with the rest of the world.”
Smith said that Microsoft is working on developing responsible and trustworthy generative AI systems, such as its Bing chatbot, which generates natural language responses based on the context of a conversation. (Bard, a similar chatbot, is Google's product, not Microsoft's.) He also said that Microsoft is collaborating with other companies and organizations to establish standards and best practices for generative AI.
However, Smith said that self-regulation is not enough, and that governments need to step in and create laws and regulations for generative AI. He argued that generative AI poses challenges similar to those of biotechnology or nuclear technology, which require careful oversight and control.
Smith said that he expects generative AI regulation to emerge in the next few years, especially in areas such as privacy, security, intellectual property and human rights. He said that Microsoft is ready to engage with policymakers and stakeholders to shape the future of generative AI in a responsible and ethical way.
Tech Companies are Finally Stepping Up, but Not Stepping Back
Smith's calls for strict regulation come in the same week that a group of AI executives and experts stressed the importance of regulated AI development.
A group of artificial intelligence (AI) experts has warned that AI could pose a threat to humanity if it is not developed and used responsibly. The group, which includes researchers from Google, Microsoft, and OpenAI, released an open statement on May 30th, 2023, outlining their concerns.
Sam Altman, CEO of OpenAI; Dario Amodei, CEO of Anthropic; and Demis Hassabis, CEO of Google DeepMind, are among the most notable signatories.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads the open letter. The New York Times reports that the statement was released by the Center for AI Safety, a new nonprofit organization. Over 350 AI researchers, engineers, and company executives have co-signed the letter.
AI raises many ethical and legal issues, and voices both inside the field and in politics are giving them weight. Yet there is a strange contradiction in the tech world: the most advanced generative AI is being built by the very companies whose leaders are issuing the warnings. If figures like Sam Altman are genuinely concerned, why do they keep creating AI systems? Is it simply profit, or confidence that the technology can be managed?