As generative artificial intelligence (GenAI) continues to evolve, new cyber threats are emerging, catching the attention of cybersecurity experts worldwide. One of the latest concerns is “slopsquatting,” a supply chain attack that exploits hallucinations produced by large language models (LLMs). According to research from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma, code-generating AI models often invent fake software packages when writing code. Cybercriminals are now taking advantage of these hallucinations by creating malicious packages with the fabricated names suggested by AI tools.
An analysis of 16 AI code-generation models, including GPT-4, CodeLlama, and DeepSeek, revealed that nearly 20% of package recommendations were non-existent. Since programming languages like Python and JavaScript rely heavily on centralized repositories and open-source contributions, this opens a dangerous avenue for supply chain attacks. As enterprises increasingly turn to AI-generated code, the risk of inadvertently downloading and integrating malicious packages grows significantly.
Broader GenAI Security Risks and Internal Data Exposure of Cyber Threats
Beyond slopsquatting, enterprises must contend with other GenAI-driven vulnerabilities. One major concern is the risk of sensitive internal information being inadvertently exposed by LLMs. As companies train GenAI models using their proprietary data and employees interact with AI systems daily, there is a growing danger that chatbots could reveal confidential files without proper safeguards.
A report from The Banker emphasizes that LLMs are notoriously poor at protecting secrets. Without stringent access controls, employees might inadvertently access data they shouldn’t see, such as HR records or confidential financial projections. Evron, a cybersecurity startup founder, stresses the need for implementing need-to-know access protocols. However, enforcing strict data protections remains challenging due to the indiscriminate nature of today’s LLMs.
Additionally, GenAI-driven cyberattacks, including Cyber Threats and shadow AI usage, are becoming more sophisticated. Threat actors are leveraging GenAI capabilities to craft highly convincing phishing campaigns and automate large-scale attacks, amplifying the risks enterprises face in a landscape where traditional cybersecurity measures are no longer sufficient.
Fighting Prompt Attacks and Preparing for the AI Cybersecurity Era
Prompt attacks—where adversaries manipulate GenAI outputs to produce harmful or unauthorized content—pose yet another growing threat. A detailed whitepaper from Palo Alto Networks, Securing GenAI: A Comprehensive Report on Prompt Attacks, categorizes the different forms of such attacks, including information leakage, guardrail bypass, and goal hijacking. Alarmingly, success rates for these attacks can reach as high as 88% on certain models, highlighting the urgency for businesses to adopt stronger AI defenses.
Palo Alto Networks proposes a framework to help organizations understand, categorize, and mitigate prompt-based attacks, emphasizing the critical need to secure AI systems with AI-driven defenses. As GenAI becomes more integrated into operations, from healthcare decision-making to financial modeling, any manipulation could have catastrophic real-world consequences, including privacy breaches, financial losses, and reputational damage.
Experts urge companies to rethink their approach to cybersecurity in this new GenAI-driven era. The traditional methods are no longer adequate. Just as the advent of the automobile required entirely new infrastructure, the future of cybersecurity demands innovative strategies to protect against rapidly evolving AI threats.