close
close

Immediate injection errors in the Gitlab -Duo emphasize risks in AI assistants

The coding assistant of Gitlab can analyze malicious AI requests for the comments, the source code, the descriptions of requirements requirements and fade news from public repositories. This technology enabled them to make the chatbot to make malicious code proposals, share malicious links and to inject HTML code of villains into answers that secretly closed the code from private projects.

“Gitlab patched the HTML injection, which is great, but the larger lesson is clear: AI tools are now part of the attack area of ​​their app,” said researcher from the application safety company Legit Security in a report. “If you read from the side, this input must be treated by the data supplied by the user-not trustworthy, chaotic and potentially dangerous.”

The fast injection is an attack technology against large voice models (LLMS) to manipulate your edition to users. And although it is not a new attack, it will be increasingly relevant because companies develop AI agents who analyze user-generated data and take measures regardless of this content.

Leave a Comment