Microsoft stated on May 16 that its internal and external reviews found no evidence its Azure cloud and AI technologies were used by the Israeli military to harm civilians in Gaza. The announcement follows months of pressure from employees and activists over the company’s contracts with the Israeli government. However, Microsoft also acknowledged significant limitations in its ability to verify how its technology is used on private servers or systems outside its direct cloud services.
The company’s findings, published in an official statement, were swiftly criticized. The employee group “No Azure for Apartheid” labeled the review a “PR stunt.” Hossam Nasr, a group organizer and former Microsoft employee, told GeekWire that the admission of incomplete oversight is a key concern, calling the statement “filled with both lies and contradictions.” The group plans to continue protests, including at Microsoft’s upcoming Build developer conference.
This situation highlights the growing scrutiny on tech companies regarding their roles in global conflicts and the ethical questions surrounding government contracts. While Microsoft aims to reassure stakeholders of its Human Rights Commitments, the acknowledged review gaps and vocal opposition suggest the controversy is ongoing.
Microsoft’s Investigation and Its Scope
Microsoft explained that its reviews responded to employee and public concerns stemming from media reports alleging its Azure platform and AI tools had been used to harm civilians. The company confirmed a standard commercial relationship with the Israeli Ministry of Defense (IMOD), supplying software, cloud infrastructure, and AI services such as language translation. Following the October 7, 2023, Hamas attacks, Microsoft provided “limited emergency support” to the Israeli government to aid hostage rescue, a process it described as carefully managed, with “approval of some requests and denial of others,” according to its statement.
The tech giant asserted its reviews found “no evidence to date that Microsoft’s Azure and AI technologies have been used to target or harm people in the conflict in Gaza” and no indication the IMOD breached Microsoft’s terms of service or its AI Code of Conduct. Microsoft further noted that militaries typically use proprietary software for surveillance, emphasizing “Microsoft has not created or provided such software or solutions to the IMOD.”
Crucially, Microsoft was transparent about its investigation’s limits, stating that it lacks visibility into how customers use software on their own servers. The company also clarified it has no insight into the IMOD’s government cloud operations, which are supported by other providers—Amazon and Google won the Project Nimbus cloud contract in 2021. “By definition, our reviews do not cover these situations,” Microsoft conceded. The external firm involved in the review was not identified.
Activist Rebuttals and Ongoing Concerns
The “No Azure for Apartheid” campaign quickly challenged Microsoft’s conclusions. Hossam Nasr, speaking to GeekWire, pointed to what he sees as a core contradiction: “In one breath, they claim that their technology is not being used to harm people in Gaza,” while also admitting “they don’t have insight into how their technologies are being used” on Israeli military servers.
He argued it would not be ethical to sell technology to an army “plausibly accused of genocide,” noting its leaders face accusations at the International Criminal Court, and further characterized the review as an effort “to make a PR stunt to whitewash their image that has been tarnished by their relationship with the Israeli military.”
The timing of Microsoft’s announcement on Nakba Day also drew criticism from Nasr, who noted the statement did not mention Palestinians. He claimed an internal Microsoft poll showed 90% employee opposition to the company’s Israeli military ties and that the activist group was not consulted for the review.
The “No Azure for Apartheid” campaign website demands full public disclosure of Microsoft’s ties to the Israeli state and military, alongside an independent audit.
These developments occur against a backdrop of increasing internal and external pressure on Microsoft. Protests have become more visible, notably during the company’s 50th-anniversary event in April 2025, where two employees disrupted proceedings, leading to their dismissal.
Software engineer Ibtihal Aboussad voiced fears that her AI transcription work could assist Israeli military surveillance, directly telling Microsoft AI CEO Mustafa Suleyman, “you claim that you care about using AI for good, but Microsoft sells AI weapons to the Israeli military. 50,000 people have died, and Microsoft [is facilitating] this genocide in our region.”
These incidents followed earlier dismissals, including those of Hossam Nasr and data scientist Abdo Mohamed, after an October 2024 vigil for Palestinian victims.
Broader Context of Tech and Military Contracts
Employee concerns were reportedly amplified by media accounts, some citing leaked documents, suggesting a “gold rush” by tech firms to service the Israeli military after October 7. These reports included allegations that Israel spent $10 million on Microsoft engineering support, as well as a significant spike in the military’s use of AI tools from Microsoft and OpenAI.
Allegations also surfaced regarding the use of AI systems like “Lavender” and “Where’s Daddy?” for targeting in Gaza, as reported by AP News. The “No Azure for Apartheid” campaign has issued a petition with specific demands, and this activism contributed to the BDS movement designating Microsoft a priority boycott target in April 2025.
Microsoft, in response to past disruptions, has stated that while it offers avenues for employee voices to be heard, these should not disrupt its business. The company concluded its May 16 statement by reaffirming its belief that it has abided by its Human Rights Commitments in Israel and Gaza.