
AI arms race fuels increase in cybercrime and data leaks

As artificial intelligence tools proliferate across organizations, security risks are rising and complicating an already challenging cyber defense landscape. According to a new report from cybersecurity firm Check Point Software Technologies, companies eager to capitalize on AI's promise are unprepared to manage threats ranging from data loss to sophisticated phishing attacks.

Check Point surveyed more than 1,000 security professionals and found that 47% of employees uploaded sensitive data to AI tools in the past year. The report underscores a growing sense of urgency among CISOs and IT executives, as enterprise AI use frequently outpaces the rollout of robust guardrails.

“Many organizations have difficulty implementing adequate controls,” said Sergey Shykevich, threat intelligence group manager at Check Point. “The line between curiosity and dangerous data exposure is thin.”

The rapid pace of AI adoption is opening new attack surfaces for threat actors, who are evolving their tactics to exploit growing trust in large language models and automation. A majority (70%) of respondents said criminals are already using generative AI to facilitate phishing and social engineering campaigns, making their attacks more sophisticated and credible.

Security leaders report concrete consequences. Around 16% of those surveyed said their companies had suffered data leak incidents in the past year that were directly tied to the use of generative AI applications. In some cases, employees accidentally entered confidential information such as customer records, source code, or strategic documents into external AI services, exposing it to unintended parties.

This type of incident is not merely hypothetical. In March, Samsung announced it had barred staff from using ChatGPT after an engineer uploaded sensitive internal code to the tool. Since then, a growing list of companies, including in the defense sector, has published similar policies, with some building tailor-made AI in-house to keep data away from external providers.

Yet even companies with strict policies often struggle to keep track of third-party AI tools entering their networks. “Shadow AI,” as the Check Point report describes it, refers to employees bypassing official channels to tinker with AI models, sometimes for benign purposes.

For many IT teams, enforcement remains a hurdle. “If employees can use something that makes them more productive, the risk is that they will find a way, regardless of corporate policy or training,” says Lisa Plaggemier, executive director of the National Cybersecurity Alliance.

Indeed, only 28% of those surveyed by Check Point said their organizations had comprehensive, up-to-date policies specifically governing the use of generative AI. For the rest, security oversight is often reactive, dealing with incidents as they arise rather than preventing them.

Meanwhile, attackers are innovating. The report documents a new tier of cyber threats enabled by AI tools, including deepfake videos and voice imitation, which can be used in spear-phishing schemes to manipulate employees or executives. Detection and attribution are becoming harder because the telltale signs of malicious content are easily masked in AI-generated material.

Regulators are taking notice. In the United States, the Securities and Exchange Commission has pushed companies to disclose AI and cyber risks, while the European Union's AI Act, adopted in March, sets strict standards for transparency and accountability.

Still, the report proposes practical steps to reduce risk: educating employees about AI-specific threats; greater investment in data loss prevention technologies that can monitor the use of AI tools; and the adoption of approved, organization-run AI solutions that keep confidential information in trusted environments.

The stakes, experts say, are only getting higher. “Generative AI is not just a risk amplifier; it is a structural change in how information moves through a company,” said Shykevich. “Defenses have to evolve as quickly as the technology itself.”
