
xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

An employee at Elon Musk's artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs), which appear to have been custom made for working with internal data from Musk's companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the French security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli's post about the leak on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian's systems continuously scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
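
At its core, this kind of secret scanning works by pattern-matching committed files against the known formats of API keys and flagging anything that looks like a live credential. The short Python sketch below illustrates the general idea with a hypothetical key pattern; it is a simplified illustration of the technique, not GitGuardian's actual detection engine.

```python
import re
from pathlib import Path

# Hypothetical key pattern: many providers issue keys with a recognizable
# prefix followed by a long random token, which makes leaks easy to spot.
KEY_PATTERN = re.compile(r"\b(?:xai|sk)-[A-Za-z0-9]{32,}\b")

def scan_repo(root: str) -> list:
    """Walk a checked-out repository and report lines that look like API keys."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if KEY_PATTERN.search(line):
                findings.append((str(path), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for file_path, lineno, line in scan_repo("."):
        print(f"possible secret in {file_path}:{lineno}: {line}")
```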

GitGuardian's Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 different datasets.

“The credentials can be used to access the x.ai API with the identity of the user,” GitGuardian wrote in an email explaining its findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc.), but also to what appear to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”
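
For context, xAI's public API follows the common OpenAI-compatible layout, so in a scenario like this a leaked key could in principle be tested simply by listing the models it is entitled to reach. The sketch below assumes the standard https://api.x.ai/v1/models endpoint and uses a placeholder token; it is illustrative only and does not reproduce GitGuardian's actual checks.

```python
import os
import requests

# Placeholder credential; in the incident described above, the real key had
# been committed to a public GitHub repository by mistake.
API_KEY = os.environ.get("XAI_API_KEY", "xai-EXAMPLE-LEAKED-KEY")

# Assumes the OpenAI-style /v1/models endpoint, which returns every model
# the key can use -- roughly how a researcher could enumerate what a
# leaked credential reaches.
resp = requests.get(
    "https://api.x.ai/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()

for model in resp.json().get("data", []):
    print(model.get("id"))
```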

Fourrier found that GitGuardian had alerted the xAI employee about the exposed API key nearly two months earlier, on March 2. But as of April 30, when GitGuardian alerted xAI's security team directly about the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don't think a Grok model that's fine-tuned on SpaceX data should be exposed publicly.”

xAI did not respond to a request for comment, and neither did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, who leads the research team at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you're an attacker and you have direct access to the model and the back-end interface for things like Grok, it's definitely something you can use for further attacks,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model for their own purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs at xAI comes as Musk's so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported that DOGE officials were feeding data from across the Education Department into AI tools to probe the agency's programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software in different parts of the government and then using AI technology to extract information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system's operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

In March, Wired reported that DOGE had deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials have told some U.S. government employees that DOGE is using AI to surveil at least one federal agency's communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team had deployed Grok heavily as part of its work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said that while there is no indication federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and could unintentionally expose details related to internal development efforts at xAI, Twitter or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, and raises questions about safeguards around developer access and broader operational security.”
