
OpenAI and Google DeepMind CEOs Sign Statement Warning That AI Poses “Risk of Extinction”

Over 350 AI industry experts, including figures from OpenAI and Google, are warning that AI is as dangerous as major pandemics and nuclear war.


A group of artificial intelligence (AI) experts has warned that AI could pose a threat to humanity if it is not developed and used responsibly. The group, which includes researchers from OpenAI, Google DeepMind, and Anthropic, published a brief statement on May 30th, 2023, outlining their concerns.

Sam Altman, CEO of OpenAI, Dario Amodei, CEO of Anthropic, and Demis Hassabis, CEO of Google DeepMind, are the most notable signatories.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads the open letter. The New York Times reports the statement will be released by the Center for AI Safety, a new nonprofit organization. Over 350 AI researchers, engineers, and company executives have co-signed the letter.

Geoffrey Hinton and Yoshua Bengio are two notable researchers who have signed. They are two of the three researchers who won the Turing Award for their pioneering work on deep learning, and both are widely known as “Godfathers” of AI. Earlier this month, Hinton left his position at Google so he could speak more freely about the potential dangers of AI.

AI Could Threaten Our Existence

The experts argue that AI could be used to create autonomous weapons that could kill without human intervention. They also worry that AI could be used to manipulate people or spread misinformation. The group calls for a global effort to develop AI in a safe and responsible way.

The warning from the AI experts comes at a time when AI is rapidly becoming more powerful. In recent years, AI has been used to develop self-driving cars, facial recognition software, and even AI-powered chatbots. As AI continues to develop, it is important to consider the potential risks as well as the potential benefits.

The experts behind the statement are not the first to warn about the potential dangers of AI. In 2015, Elon Musk and Stephen Hawking co-signed an open letter calling for a ban on autonomous weapons. And in 2017, the Future of Life Institute, a non-profit organization that promotes research on existential risks, published a report on the dangers of AI.

These warnings are a wake-up call. It is clear that AI is a powerful technology with the potential to do great harm, and it is essential that we develop it responsibly and take steps to mitigate the risks.

Despite Concerns, It Is Full Steam Ahead for AI Development

Lawmakers and even those inside the AI industry are voicing plenty of concerns about AI. Perhaps the biggest paradox in tech at the moment is that development of advanced generative AI continues despite many important figures speaking out about its dangers. The obvious question is: if executives like Sam Altman are worried, why are they still developing AI systems? Is it purely profit-driven, or is it hubris and a belief that we can control this technology?

Sam Altman is a co-founder of OpenAI (along with Elon Musk and others) and the current CEO of the AI research firm. He has become one of the most powerful tech leaders in recent months as ChatGPT became the most mainstream AI chatbot. Altman has also nurtured a deep partnership with Microsoft, following billions of dollars in investments from the Redmond company.

OpenAI and Microsoft have rapidly grown their AI products, such as GPT-4, ChatGPT, Bing Chat, Microsoft 365 Copilot, and Windows Copilot. However, critics claim that OpenAI has been too quick to develop its generative AI products without waiting for proper regulations. Elon Musk, Altman's former partner at OpenAI, is one such critic, saying the company in its current form is not what he intended.

Musk also joined other tech leaders in signing the Future of Life Institute's open letter, which calls on AI developers to pause development of models more powerful than GPT-4 for at least six months. It is worth noting that Altman has recently claimed GPT-5 is not being developed by OpenAI. Earlier this month, Altman spoke at a congressional hearing in the United States and admitted there need to be clear and strict regulations in place for the development of AI.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
