Nearly 200 employees at Google's DeepMind have signed a letter urging the company to abandon its military contracts. The letter, obtained by Time, shows that employees are voicing concerns about the implications of using AI technology for warfare.
Project Nimbus Under Scrutiny
The letter, signed by around 5% of DeepMind's workforce, expresses specific worries about Google's defense contract with the Israeli military, known as Project Nimbus. The initiative involves deploying AI for surveillance and target identification in Gaza. In the letter, employees stress that the issue centers on the ethical use of artificial intelligence, not any specific political conflict.
The letter highlights a growing discord between Google's AI division and its cloud services, which currently serve military clients. Internal conflict first surfaced prominently during the Google I/O event earlier this year, where pro-Palestinian protests focused on AI projects with military ties, including Project Lavender and the "Gospel" AI initiative.
Reaffirming Ethical Commitments
Upon Google's acquisition of DeepMind in 2014, it was agreed that DeepMind's technology would steer clear of military and surveillance applications. DeepMind employees believe that recent developments compromise their ethical stance and contradict their stated mission and AI guidelines.
They are calling for a thorough review of allegations that Google's cloud services are employed by armed forces and defense contractors, a halt to providing DeepMind's tech to the military, and the establishment of governance structures to prevent future misuse.
Official Response from DeepMind
During a town hall meeting in June, executives were asked to address concerns raised in the letter. Chief Operating Officer Lila Ibrahim reiterated that DeepMind would neither create nor deploy AI for weaponry or mass surveillance. She noted that Google Cloud customers are legally bound by terms of service and acceptable use policies, and emphasized her pride in Google's commitment to advancing safe and responsible AI.
Since DeepMind became part of Google, it has maintained significant autonomy from the company's headquarters in California. However, the increasing competition in AI has pulled DeepMind closer into Google's central operations. A move to secure more independent authority in 2021 was unsuccessful, and by 2023, DeepMind merged with Google's AI branch, Google Brain, further integrating it into the company.
AI Ethics and Governance Issues
DeepMind's initial plans for an independent ethics board were short-lived; the board was replaced by Google's overarching AI Principles policy. These principles state that Google will avoid AI developments likely to cause "overall harm," but the wording permits technologies with potential negative impacts if the benefits are deemed to outweigh the risks. The policy does not rule out selling AI to military clients.