In an interview with The New York Times, Geoffrey Hinton, a vice president and engineering fellow at Google, says he is leaving the company after more than a decade. According to Hinton, he has grown frustrated with Google's lack of vision and leadership in AI development. He believes the company is falling behind Microsoft, which has leveraged its partnership with OpenAI to bring artificial intelligence (AI) into the mainstream.
Hinton's departure is another blow for Google, which has been caught off guard by Microsoft's recent AI surge. Regarded as one of the godfathers of the AI industry, Hinton has been at Google as the company went from being the biggest player in AI, with the technology underpinning Google Search, Maps, Photos, and YouTube, to falling behind Microsoft and OpenAI.
In recent years, Google has faced criticism for its ethical lapses, internal conflicts, and stagnation in AI. One of the main sources of tension within Google was its relationship with OpenAI, a nonprofit research lab that was co-founded by Elon Musk and Sam Altman in 2015.
Microsoft has become a major investor in OpenAI, reportedly putting $10 billion into the company earlier this year. The partnership has allowed Microsoft to mainstream AI across its services. GPT-4 now underpins AI solutions such as Office (Microsoft 365 Copilot), Bing (Bing Chat and Bing Image Creator), Microsoft Cloud (Azure OpenAI Service), CRM/ERP (Dynamics 365 Copilot), and programming (GitHub Copilot X).
OpenAI Was Once Close to Google
Google and OpenAI started out as collaborators, with Google providing cloud computing resources and research funding to OpenAI. However, over time, the two organizations drifted apart due to diverging goals and values. Google wanted to commercialize and monetize its AI products and services, while OpenAI wanted to democratize and share its AI research and technology with the world.
Google also became wary of OpenAI's ambition to create AGI, which could pose an existential threat to humanity if not aligned with human values.
OpenAI has since pushed ahead with its goal of developing AGI. While GPT-4/ChatGPT is not that, the company's ambition to create a general intelligence is clear. Ironically, since breaking ties with Google, OpenAI has also become a for-profit entity. Microsoft has taken full advantage of this and is said to be entitled to 49% of OpenAI's profits.
Elon Musk has since left OpenAI and has been highly critical of the direction the company has taken, saying it is not what he intended. Musk also laments Microsoft's involvement and is an outspoken critic of unsafe AI development. He believes Microsoft's drive to make money and generate revenue will hamper the safe development of AGI technology.
Microsoft's partnership with OpenAI has given it a competitive edge over Google in the AI race. Microsoft has been able to access and deploy some of the most advanced and powerful AI models in the world, while also supporting OpenAI's vision of creating AGI that benefits everyone. Microsoft has also been able to position itself as a leader and innovator in AI, attracting top talent and customers.
Hinton says that he was impressed by Microsoft's strategy and execution in AI, and that he felt that Google had lost its direction and momentum. He said that he wanted to work on more ambitious and impactful AI projects that could make a positive difference in the world.
Pointing to the Dangers of AI
Hinton is also taking a fresh approach to AI. Once a flag-waving proponent of ongoing AI development, he is now warning of the dangers of AI that reaches human-level intelligence or beyond:
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
Others share his concerns. In March, more than 1,000 tech and thought leaders, including Musk, signed an open letter calling for a six-month pause on the development of AI systems more powerful than GPT-4. In Italy, regulators temporarily banned ChatGPT and ordered OpenAI to address how it collects and processes user data. Germany is exploring a similar path.
Regulators are taking notice, with the US Commerce Department calling for certification of AI models, while the EU is set to announce its AI Act later this year. Even Google's CEO, Sundar Pichai, recently spoke of the need for regulation and the potential dangers of AI development.
It is expected AI will be a part of the discussions at the upcoming G7 meeting in Hiroshima next month. World leaders are expected to discuss the potential risks of AI and how the technology can be properly regulated.
In a recent interview with Lex Fridman, OpenAI CEO Sam Altman admitted that the AI endgame may not be entirely positive. He says he wants to ensure the safety of OpenAI's products and insists that the organization takes strict steps to ensure its AI is safe.