An employee at Elon Musk’s artificial intelligence company xAI leaked a private API key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) that appear to have been custom-made for working with internal data from Musk’s companies, including SpaceX, Tesla, and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.
Philippe Caturegli, “chief hacking officer” at the French security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub repository of a technical staff member at xAI.
Caturegli’s LinkedIn post piqued the interest of researchers at GitGuardian, a company that specializes in identifying and remedying exposed secrets in both public and private environments. GitGuardian’s systems continuously monitor GitHub and other code repositories for exposed API keys, issuing automated alerts to impacted users.
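Secret scanners of this kind generally work by pattern-matching commit contents and then validating any hits against the issuing service. The snippet below is a minimal sketch of the pattern-matching step only; the “xai-” key prefix and the sample diff are assumptions for illustration, not GitGuardian’s actual detection rules.

```python
import re

# Hypothetical pattern for an xAI-style API key: the "xai-" prefix
# followed by a long alphanumeric string is an assumption here,
# not a documented key format.
XAI_KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{20,}\b")

def scan_text_for_keys(text: str) -> list[str]:
    """Return any substrings of `text` that look like xAI API keys."""
    return XAI_KEY_PATTERN.findall(text)

if __name__ == "__main__":
    # Simulated commit diff containing a hard-coded credential.
    sample_diff = 'client = Client(api_key="xai-abc123def456ghi789jkl012")'
    for hit in scan_text_for_keys(sample_diff):
        print(f"Possible exposed secret: {hit[:8]}...")  # redact in output
```

Production scanners layer entropy checks and live validation against the issuing API on top of such patterns to weed out false positives before alerting a repository owner.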
Eric Fourrier of GitGuardian told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.
“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining its findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc.) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”
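To see why a leak like this matters, note that with a bearer token of this kind, enumerating every model an account can reach takes a single authenticated HTTP request. The following is a minimal sketch, assuming xAI’s OpenAI-compatible REST interface (the /models listing endpoint at api.x.ai) and using a placeholder key; it is illustrative, not a reproduction of anyone’s actual testing.

```python
import json
import urllib.error
import urllib.request

# Hypothetical credential for illustration; real xAI keys are opaque
# bearer tokens, and this placeholder will simply be rejected.
API_KEY = "xai-EXAMPLE-DO-NOT-USE"

# xAI's public API is OpenAI-compatible; GET /models returns the
# model identifiers the authenticated account is entitled to call.
req = urllib.request.Request(
    "https://api.x.ai/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

try:
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    for model in payload.get("data", []):
        print(model.get("id"))  # e.g. "grok-2-1212"
except urllib.error.HTTPError as err:
    # A revoked or invalid key yields an HTTP 401/403 here.
    print(f"Request rejected: {err.code}")
```

Under this scheme, a key tied to an individual developer account would surface private and unreleased model IDs in exactly the way GitGuardian described.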
Fourrier found that GitGuardian had alerted the xAI employee about the exposed API key nearly two months earlier, on March 2. But as of April 30, when GitGuardian alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.
“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”
xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.
Carole Winqwist, who leads the research team at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.
“If you’re an attacker and you have direct access to the model and the backend interface for things like Grok, it’s definitely something you can use and abuse,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”
The inadvertent exposure of xAI’s internal LLMs comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported that DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.
The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.
“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” the Post reporters wrote.
In March, Wired reported that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks formerly done by humans as DOGE continues its purge of the federal workforce.
A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to monitor at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters said the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of its work slashing the federal government, although Reuters could not establish exactly how Grok was being used.
Caturegli said that while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details about internal development efforts at xAI, Twitter, or SpaceX.
“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”