API Key Leak from Elon Musk’s xAI Exposes Private AI Models Used by SpaceX and Tesla

Elon Musk's xAI has faced a security breach after an employee leaked an API key providing two months of access to dozens of private and unreleased Grok models.

Amidst a high-stakes period involving a multi-billion dollar corporate merger, ambitious funding rounds, and potential cloud partnerships, Elon Musk’s artificial intelligence venture xAI has suffered a significant security breach.

An employee inadvertently published a private API key—a type of digital credential used to grant software access—on GitHub, leaving it exposed for approximately two months and granting unrestricted access to dozens of internal and unreleased Grok large language models (LLMs), KrebsOnSecurity reported on May 1st.

The exposed models included versions apparently fine-tuned—a process of specializing AI models on specific data—with proprietary information from Musk’s other ventures, including SpaceX and Tesla, triggering concerns about the company’s security posture and internal controls.

Discovery and Delayed Response

The credential exposure, active from early March, was first brought to public attention by Philippe Caturegli of the security consultancy Seralys. His post drew the attention of GitGuardian, a firm that scans code repositories like GitHub for leaked secrets.

GitGuardian’s systems had detected the key and automatically alerted the 28-year-old xAI technical staff member responsible back on March 2nd. Despite this early warning, the key remained valid and accessible for nearly two months. GitGuardian escalated the issue by directly notifying xAI’s security team on April 30th.

According to the Krebs report, xAI initially told GitGuardian to report the matter through its HackerOne bug bounty program, but just a few hours after this exchange, the GitHub repository containing the exposed key was taken down, finally revoking access.

Accessing Internal AI Assets

The leaked key provided access to a substantial collection of AI models not intended for public use. GitGuardian identified at least 60 distinct models accessible via the key, encompassing private, development, and fine-tuned versions of Grok, xAI’s primary LLM. Specific examples cited included grok-spacex-2024-11-04, tweet-rejector, and unreleased development versions like grok-2.5V and research-grok-2p5v-1018.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” GitGuardian’s Eric Fourrier told KrebsOnSecurity.

“I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.” This level of access contrasts sharply with the limitations of the commercial Grok 3 API xAI launched publicly on April 10th, which, according to xAI’s own documentation, features a knowledge cut-off date of November 17, 2024, and a constrained 131,072-token context window – notably less than the 1 million token capacity xAI had previously suggested for Grok 3.

The private models accessed via the leak may have lacked these restrictions or contained more sensitive, up-to-date, or specialized information derived from internal company data.

Security Risks and Broader Concerns

Experts warned of the potential dangers posed by such a leak. GitGuardian’s Carole Winqwist noted that attackers could exploit this access for “prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

Prompt injection involves crafting inputs to trick an AI into unintended actions. Caturegli added, “The fact that this key was publicly exposed for two months and granted access to internal models is concerning… This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”

The incident also resonates with wider anxieties about AI safety, particularly given previous reports concerning the use of AI within government contexts. Both the Washington Post and Reuters reported earlier this year on Musk’s Department of Government Efficiency (DOGE) initiative utilizing AI tools, with Reuters specifically noting the DOGE team “has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government,” potentially analyzing sensitive data.

A security failure involving internal models heightens the perceived risks of such applications. The leak, likely stemming from a developer accidentally committing the key file – described by one commenter on the Krebs article as a potential “rookie mistake” – underscores the operational challenges in securing powerful AI development environments.

Strategic Context of the Breach

This security failure occurred shortly after the formation of XAI Holdings Corp., the $113 billion entity created by merging xAI ($80B valuation) and X ($33B valuation including debt) in late March/early April.

The merged company is reportedly seeking around $20 billion in new funding. Furthermore, talks are reportedly underway for Microsoft to potentially host Grok models on its Azure cloud platform, a deal that could be influenced by perceptions of xAI’s security maturity. These developments, combined with Grok’s history of controversial outputs and moderation issues, place XAI Holdings under increased scrutiny following the exposure of its internal AI assets.

Markus Kasanmascheff
Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He is holding a Master´s degree in International Economics and is the founder and managing editor of Winbuzzer.com.

Recent News

0 0 votes
Article Rating
Subscribe
Notify of
guest
0 Comments
Newest
Oldest Most Voted
Inline Feedbacks
View all comments
0
We would love to hear your opinion! Please comment below.x
()
x