Over recent months, Google has taken a lot of heat for its muddled and rushed approach to AI. The narrative has been that Microsoft caught its rival off guard with the release of Bing Chat. By leveraging GPT-4 from partner OpenAI, Microsoft has been able to mainstream AI across its services. Google CEO Sundar Pichai now reveals that the company's AI failures also stem from concerns over the safety of the technology.
Pichai has called for a global regulatory framework for artificial intelligence (AI) in a recent interview with 60 Minutes. He said that AI can be “very harmful” if deployed wrongly and that the technology is moving fast.
Pichai, who leads both Google and its parent company Alphabet, said that he worries about the negative side of AI and that society may not be prepared for what's coming. He compared AI to nuclear arms and said that there should be similar treaties to regulate its use.
He also expressed concern about the potential of AI to create disinformation, such as fake videos or audio of people saying things they never said. He said that Google is being responsible by holding back more advanced versions of its AI-powered chatbot, Bard, for testing.
Google's AI Failings May Be More than a Company Struggling to Keep Pace
Google's recently launched Bard chatbot has raised some controversy among content creators, who claim that the chatbot does not properly cite or link to the sources of its information. Bard is now available in limited preview for users in the United States.
It is a generative AI chatbot built on natural language processing (NLP), similar to ChatGPT. However, ChatGPT is more mature and has already been upgraded several times, currently running on OpenAI's GPT-4 model.
Google was taken by surprise by the launch of Bing Chat and rushed to announce Bard. That decision did not sit well with employees and the situation worsened when Bard mostly failed during its introduction demo.
Pichai was also seemingly making strange decisions, such as forcing employees to test Bard for hours each day. In another recent interview, the CEO admits Bard is a prototype and will get better with time. It is clear Google's hand was forced and Pichai now suggests the company would prefer a slower approach to AI development.
Of course, there is an element of Google having its cake and eating it too. The company is lamenting the pace of the AI market while still trying to compete in it. A firmer stance would be nice whereby Google slows down its AI development on moral grounds. However, it seems the company is unwilling to be that drastic.
Microsoft and OpenAI Need to Be More Open
There has been a degree of recklessness in the way OpenAI and Microsoft have mainstreamed AI over recent months. In a recent video interview with Lex Fridman, OpenAI CEO Sam Altman admits that the AI endgame may not be entirely positive. He says he wants to ensure the safety of OpenAI products and insists that the organization takes strict steps to ensure its AI is safe.
Microsoft seems so giddy about the opportunity to leverage AI against rivals across its products that the company is going full steam ahead. For all the headlines Microsoft has made with Bing Chat, Microsoft 365 Copilot, and other integrations of OpenAI's GPT-4, none of them involve the company talking about the long-term safety of AI.
That's because Microsoft has done very little talking on that subject. CEO Satya Nadella excitedly committed to AI becoming a bedrock of the company's entire ecosystem in January. He has been largely silent on how Microsoft plans to regulate its AI and ensure that worst-case scenarios do not happen.
Those worst-case scenarios for the future of AI range from the obvious to the absurd, at least on paper:
- Deepfakes: AI is now becoming sophisticated enough to mimic the faces and voices of real people in video and audio content.
- Cybercrime: If AI is smart enough to be used for good, it will be smart enough to be used for nefarious means.
- Job replacement: AI will take millions of jobs across sectors. Altman and Microsoft insist this will allow people to do more meaningful roles, but don't detail what that will look like.
- Sci-fi scenarios: Rarely has fiction been so eerily close to painting a potential future as with AI. There are real concerns that AI will one day see humans as an obstacle rather than an ally.
Wherever you sit on those points and the potential danger of AI (honestly, I am somewhere in the middle, torn between AI enhancing our lives and destroying them), there is no doubt that not enough is currently being said about how we control AI development. At least not enough from the principal players driving the creation of increasingly powerful AI.
In fact, Microsoft's silence on the subject raises eyebrows. The company firing its whole AI ethics team is just not a good look, even if other divisions within Microsoft handle AI safety and morality. These are conversations we need to be having now, and publicly. Google's approach in that sense is commendable, and hopefully Microsoft will become more vocal in the conversation.
Decisions Are Being Made to Slow Down AI Development
Luckily, it seems the hand of Microsoft, OpenAI, and even Google may be forced in the coming months. We are seeing a growing swell of concern about how AI development is progressing.
The Italian Data Protection Authority has ordered OpenAI to stop providing ChatGPT in the country. Germany is also reportedly considering banning ChatGPT, and it is likely other countries will follow suit.
In the US, the Commerce Department last week called for AI models to require certification before they can launch in the country.
“It is amazing to see what these tools can do even in their relative infancy,” says Alan Davidson, head of the National Telecommunications and Information Administration, the Commerce Department agency that sent the request. “We know that we need to put some guardrails in place to make sure that they are being used responsibly.”
That's right, the smartphone in your hand has been through a massive amount of regulatory checks and certifications to reach you. AI on the other hand – a technology that has the ability to profoundly impact our lives and futures positively and negatively – does not currently go through such scrutiny.
Slowing Development and Regulations
It is worth noting that Microsoft is not against regulations and has even been vocal in calling for them. Even so, the company's willingness to continue with development and not wait is problematic.
Last month, Elon Musk – who co-founded OpenAI with Altman – backed the Future of Life Institute, an organization that wants to place more controls on AI development over concerns about the emergence of artificial general intelligence (AGI). An open letter from the institute urged all AI developers to pause development of AI more powerful than GPT-4 for at least six months.
As for OpenAI, the company insists it takes a safety-first approach. It opened a $20,000 bug bounty program for ChatGPT, the popular chatbot powered by GPT-4, and says it takes great care to ensure the safety of its AI. In fact, the company insists it is an advocate of regulated and secure AI models:
“We believe that powerful AI systems should be subject to rigorous safety evaluations,” OpenAI said in a recent blog post. “Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”
Altman has also recently confirmed that OpenAI is not currently developing a more powerful GPT-5.
Tip of the day: With many reachable wireless access points popping up and disappearing again, the available networks list can become quite annoying. If needed, you can use Windows' allowed and blocked filter lists to block certain WiFi networks or all unknown WiFi networks.
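The filter lists mentioned in the tip are managed with the built-in `netsh wlan` tool from an elevated Command Prompt. A minimal sketch of the common commands follows; "HomeNetwork" is a hypothetical SSID used for illustration:

```
:: Hide a specific network from the available networks list ("HomeNetwork" is a placeholder SSID)
netsh wlan add filter permission=block ssid="HomeNetwork" networktype=infrastructure

:: Alternatively, block all networks except those you explicitly allow
netsh wlan add filter permission=allow ssid="HomeNetwork" networktype=infrastructure
netsh wlan add filter permission=denyall networktype=infrastructure

:: Review the current filters, and undo a block if needed
netsh wlan show filters
netsh wlan delete filter permission=block ssid="HomeNetwork" networktype=infrastructure
```

The `denyall` approach is the stricter option: it hides every unknown network, so remember to add `allow` entries for the networks you actually use before applying it.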