close
close

Xai developer accidentally licks the API key, access to SpaceX, Tesla and X LLMS grants

Xai, an employee of Elon Musk's artificial intelligence company, accidentally gave a sensitive API key for Github open and possibly unveiled large -scale proprietary large language models (LLMS), which are linked to SpaceX, Tesla and Twitter/X.

Cybersecurity specialists estimate that the leak remained active for two months and outsiders the ability to access and queried that access the internal data from Musk's flagship companies and question highly confidential AI systems.

The leak appeared for the first time when Philippe Caturgli, “Chief Hacking Officer” at Seralys, marked the endangered login information for an XAI application programming interface in a Github repository of a technical employee at XAI.

– Advertising –

The announcement of Catereglis about LinkedIn quickly noticed the Gitguardian, a company that specializes in automated detection of exposed secrets in code bases.

Eric Fourrier, a co-founder of Gitguardian, told carcers that the exposed API key had access to at least 60 finely coordinated LLMs, including unpublished and private models.

These included developing versions of XAIS GROK chat bot as well as specialized models that finely matched SpaceX and Tesla data, such as “Grok Space-2024-11-04” and “Tweet-Reajector”.

“The login information could be used to access the XAI -API with all permissions granted to the original user,” said Gitguardian.

“This included not only public GROK models, but also the most modern, unpublished and internal tools that are never intended for external eyes.”

Despite an automated alarm that was sent to the XAI employee on March 2, the login information remained active and active until at least April 30, when Gitguardian escalated directly to the Xai security team.

Just a few hours later, the insulting Github repository was put down quietly.

Carole Winqwist, Chief Marketing Officer from Gitguardian, warned that opponents could manipulate or sabotage these voice models for malicious purposes with such access, including quick injection attacks or even planting of code in the operational supply chain of AI.

“Free access to private LLMs is a recipe for a disaster,” Winqwist emphasized.

The leak also shows growing concerns about the integration of sensitive data in AI tools.

The latest reports indicate that the Ministry of Musk (Doge) and other agencies insert federal data into AI and raise questions about more comprehensive security risks.

While there is no direct evidence that federal or user data has been violated by the exposed API key, Citygli emphasizes the seriousness of the incident: “Long-standing exposures like this show weak key management and poor internal surveillance, which triggered alarms through the world's most valuable technology companies.”

Find this News Interesting! Follow us on Google News, LinkedIn, & X to Get Instant Updates!

Leave a Comment