Elon Musk’s Department of Government Efficiency (DOGE) team is reportedly using the xAI chatbot Grok within the U.S. government. This deployment, first detailed by Reuters, allegedly involves a customized version of Grok analyzing federal data. The move raises serious ethics, security, and conflict-of-interest concerns for Musk and his AI venture.
Core concerns center on Grok accessing sensitive information without transparent oversight or established safeguards. Sources indicated to Reuters that DOGE used Grok internally and encouraged its adoption at the Department of Homeland Security (DHS), despite the chatbot lacking official approval there. Such actions could breach federal conflict-of-interest laws if Musk personally directed Grok’s promotion, potentially enriching xAI.
A DHS spokesperson, however, told Reuters the agency hasn’t pushed employees towards any specific tools. The White House and xAI did not respond to Reuters’ requests for comment.
This governmental use of Grok unfolds as xAI itself faces scrutiny over internal controls and the chatbot’s reliability. Gizmodo has previously noted that conservatives sometimes deem Grok’s responses “too woke,” even though in many cases it follows their agenda. For instance, xAI published Grok’s system prompts on GitHub in May 2025 after “unauthorized modifications” led to controversial outputs regarding “white genocide” in South Africa. xAI attributed this to a rogue employee and announced new review processes.
Data Integrity and Influence Peddling Fears Mount
The introduction of Grok into government systems is particularly troubling given xAI’s recent security lapses. In early May, it was discovered that an xAI employee had inadvertently published a private API key, exposing internal Grok models for about two months.
Some of these models were reportedly fine-tuned with proprietary SpaceX and Tesla data. Philippe Caturegli of Seralys described this exposure as highlighting “weak key management and insufficient internal monitoring.” Such vulnerabilities amplify fears about government data security.
Albert Fox Cahn of the Surveillance Technology Oversight Project told Reuters that, given DOGE’s data access, using Grok presents “as serious a privacy threat as you get.” Five specialists in technology and government ethics also told Reuters that if sensitive data was used, it could violate security and privacy laws.
Experts also highlighted xAI’s potential for an unfair competitive advantage. If government data trains Grok, or if Musk gains insights into federal contracting, it could skew the AI services market.
Cary Coglianese, a University of Pennsylvania professor, noted to Reuters that xAI “has a financial interest in insisting that their product be used.” The initiative allegedly bypasses standard procurement and lacks full agency authorization. If Grok is being trained or refined using federal data, even indirectly, this could represent a significant privacy violation.
These concerns are further contextualized by Musk’s broader history of conflicts and perceived favorable treatment, alongside DOGE’s limited success in its stated mission. This contrasts with OpenAI and Anthropic, which have more formal partnerships with the U.S. government.
Grok’s Evolving Capabilities Amidst Controversies
Despite these issues, Grok is being positioned as a capable AI. Microsoft recently added Grok models to its Azure AI Foundry. The move aims to make Azure a premier cloud for diverse AI models, including those from competitors of Microsoft’s key partner, OpenAI, with which Musk has an ongoing legal feud.
During Microsoft Build, Elon Musk stated via video call that xAI models “aspire to truth with minimal error” and that it’s “incredibly important for AI models to be grounded in reality,” while acknowledging “there’s always going to be some mistakes that are made.” Grok 3 on Azure supports a 131K token context length and is backed by the significant Colossus supercomputer expansion.
xAI has actively developed Grok, rolling out a “Memory” feature in April for conversational recall and Grok Studio for collaborative content creation. However, its commercial API, launched in April, has limitations, including a knowledge cut-off of November 17, 2024. This could impact its utility for government tasks needing current information.
DHS had already halted employee access to all commercial AI tools in May 2025 over data-handling concerns, and Grok was not among the tools previously approved for limited use.
DOGE’s AI Mandate and Alleged Overreach
The reported Grok deployment is part of a wider DOGE effort to embed AI in the federal bureaucracy. In its drive to cut government spending, Musk’s DOGE team has accessed secure federal databases with personal information on millions of Americans. According to Reuters, DOGE engineers installed custom parameters on Grok to “feed it government datasets, ask complex questions, and get instant summaries.”
DOGE staffers Kyle Schutt and Edward “Big Balls” Coristine have spearheaded efforts to use AI to find “waste” and “fraud.” As Reuters has learned, DOGE staff attempted to access DHS employee emails and instructed staff to train AI to identify communications suggesting disloyalty to the administration’s political agenda.
Some Department of Defense employees were also reportedly told an algorithmic tool was monitoring computer activity. The Pentagon denied its DOGE team was involved in such monitoring or directed to use Grok, stating that government computers are inherently subject to monitoring.
Using AI for political loyalty tests could violate civil service protections. Government ethics expert Richard Painter told Reuters that the conflict-of-interest statute is rarely prosecuted but can result in fines or jail time.