Stanford University professor and AI luminary Andrew Ng recently claimed that Big Tech is misleadingly propagating an artificial threat of human extinction through AI. The purpose of this misleading narrative, according to him, is to instigate heavy regulation that could stifle competition in the AI market.
Ng, known for his work founding Google Brain and his tenure as chief scientist at Baidu's Artificial Intelligence Group, criticized the spread of two erroneous ideas. The first is the unfounded assertion that AI could cause human extinction. The second is that the way to mitigate AI risks is to impose burdensome licensing requirements on the AI industry. These combined misconceptions, if acted upon, could lead to policy proposals requiring the licensing of AI, which Ng condemns as harmful to innovation.
Manipulation of Fear for Regulatory Gains
Ng asserts that the AI extinction narrative has served as a tool for large tech firms that would rather not compete with open-source AI, allowing them to leverage this fear to push for restrictive regulation. In his view, these policy moves would gravely damage the open-source AI community.
Industry figures such as Elon Musk and Apple co-founder Steve Wozniak had earlier called for a six-month freeze on training powerful AI models, suggesting that mitigating the potential extinction risk posed by AI should be a global priority. OpenAI CEO and co-founder Sam Altman has also been vocal about slowing AI development, stating that his company was not yet building GPT-5. Ng refrained from speculating about Altman's motivations, noting only that large tech corporations would find it beneficial not to have to compete with open-source AI.
The Necessity of Thoughtful Regulation
While Ng criticized the current trajectory of AI regulation, he agreed that regulation, in essence, is necessary. He expressed concern over poor-quality regulation, suggesting its presence might be more harmful than its absence.
Rather than eliminating regulation, he advocated for thoughtful regulation that truly benefits the industry. He acknowledged that AI has caused harm: self-driving cars have led to casualties, and automated trading algorithms have triggered market crashes. For him, "good" regulation hinges on transparency from tech companies, which would help avert AI accidents in the future.