Leaked Docs Reveal Google Knew Israel’s AI Use Would Be Uncontrollable

Internal Google documents reveal the company knew it would have limited control over Israel's use of its Project Nimbus AI and cloud technology, yet proceeded with the $3.3B deal despite human rights concerns and consultant warnings.

Internal Google documents reveal the company advanced its controversial Project Nimbus cloud computing and AI contract with Israel fully aware it would have severely limited oversight of, and minimal control over, how the Israeli government and military used these potent technologies. This foreknowledge, detailed in a confidential report obtained by The Intercept, raises profound ethical questions and potential legal liabilities for the tech giant.

The situation highlights a critical tension: a major technology provider seemingly prioritizing a lucrative $3.3 billion deal over explicit warnings about the potential misuse of its advanced tools by a nation facing accusations of human rights abuses, including what The Intercept described as a genocide in Gaza.

The internal assessments, predating Google’s 2021 joint bid for Project Nimbus with Amazon, acknowledged the potential for Google Cloud Services to be “used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank,” as stated in the Google report. Despite this, the tender’s terms meant Google would have “very limited visibility” into software use and was “not permitted to restrict the types of services and information that the Government (including the Ministry of Defense and Israeli Security Agency) chooses to migrate” to its cloud, according to The Intercept’s analysis of the documents.

This lack of control is compounded by contractual obligations reportedly requiring Google to resist foreign legal investigations into Israel’s technology use and allowing Israel to extend the contract up to 23 years with little recourse for Google to withdraw. Google Cloud chief Thomas Kurian personally approved the contract with full understanding of these risks, The Intercept reported.

A third-party consultant, Business for Social Responsibility (BSR), hired by Google, even recommended withholding machine learning and AI tools from the Israeli military due to these risk factors. Nevertheless, Google Cloud’s full suite of AI tools was made available to Israeli state customers, including the Ministry of Defense.

Contractual Constraints And Limited Oversight

Google’s internal report on Project Nimbus, internally codenamed “Selenite,” detailed how the contract’s structure would inherently impede meaningful oversight. Should Project Nimbus face legal scrutiny outside of Israel, Google is contractually bound to notify the Israeli government. Furthermore, the report indicated Google must “Reject, Appeal, and Resist Foreign Government Access Requests,” a stipulation that could place the company in direct conflict with international legal orders, particularly as Project Nimbus falls under Israel’s exclusive legal jurisdiction—a state that does not recognize the International Criminal Court, as reported by The Intercept.

The company’s standard terms of service appear to be superseded by a secret, amended policy for Project Nimbus, according to Israeli government documents cited by The Intercept. An attorney from the Israeli Ministry of Finance reportedly confirmed that under the tender requirements, Google could not terminate the service. A subsequent internal Google report further suggested that if a conflict arose between Google’s terms and the Israeli government’s “extensive and often ambiguous” requirements, they would “be interpreted in the way which is the most advantageous to the customer.”

Deep Security Collaboration And Shareholder Scrutiny

Project Nimbus also mandates an unprecedented level of cooperation between Google and the Israeli security apparatus, including a “Classified Team” of Israeli nationals within Google who hold security clearances. This team is designed to “receive information by [Israel] that cannot be shared with [Google]” and will “participate in specialized training with government security agencies” and “joint drills,” according to the initial report.

While Google asserts Project Nimbus “is not directed at highly sensitive, classified or military workloads,” non-classified workloads from the Ministry of Defense and Shin Bet (Israel’s internal security agency) are part of the deal. A separate, allegedly classified contract, “Natrolite,” reportedly handles other workloads.

Confirmed Project Nimbus customers include state-owned weapons manufacturer Israel Aerospace Industries and the Israel Land Authority, an agency involved in land distribution in the illegally occupied West Bank. This occurs as Google shareholders are set to vote on June 6 on a proposal demanding an investigation into Project Nimbus’s human rights impact, a proposal Alphabet’s board recommended voting against.

Jonathan Greenblatt, CEO of the Anti-Defamation League (ADL), characterized this shareholder proposal as a “thinly disguised ploy to weaken Israel’s national security,” arguing it aimed to undermine the country’s right to defend itself by pressuring Alphabet to withhold vital technology. Ari Hoffnung of JLens echoed this view, arguing that “Alphabet’s shareholders should see this proposal for what it is: an attempt to misuse the proxy process to advance a divisive political agenda that has no place in corporate governance,” according to the ADL. In February, Google also amended its AI Principles to remove prohibitions against weapons and surveillance.

Legal Warnings And Industry Parallels

International law experts have voiced concerns. León Castellanos-Jankiewicz of the Asser Institute told The Intercept that Google’s awareness of the risks, combined with its limited ability to mitigate them, is problematic. He later added, in a quote also highlighted by Al Mayadeen English, “It sounds like Google is giving the Israeli military a blank check to basically use their technology for whatever they want.”

Ioannis Kalpouzos, a visiting professor at Harvard Law, informed The Intercept that “Both the very existence of the document and the language used suggest at least the awareness of the likelihood of violations.”

Andreas Schüller of the European Center for Constitutional and Human Rights stated to The Intercept that “If the risk of misuse of a technology grows over time, the company needs to react accordingly,” adding that ignoring such increasing risks, or failing to react to them at all, raises the company’s own liability risk.

The situation at Google reflects broader industry turmoil. At Microsoft, employee protests over Azure AI use by the Israeli military led to dismissals. Microsoft later released a report claiming no evidence its tech harmed Gaza civilians but acknowledged oversight gaps, a statement activists decried as a “PR stunt.”

Hossam Nasr, a former Microsoft employee, told GeekWire Microsoft’s report was “filled with both lies and contradictions.” The episode parallels Google’s own admission of the “very limited visibility” it knew it would have under Project Nimbus.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
