Daniel J Glover

Slopsquatting: AI Supply Chain Attacks

Written by Daniel J Glover

Published 26 January 2026

8 min read

Slopsquatting is a new class of software supply chain attack that exploits a fundamental flaw in AI coding assistants: their tendency to hallucinate package names that do not exist. When an AI model recommends installing a package that has never been published, attackers can register that name on public repositories like PyPI or npm and inject malicious code into any developer who follows the AI's suggestion.

The term was coined by Seth Larson, Python Software Foundation developer in residence, as a play on typosquatting - where attackers register misspelled versions of popular packages. But slopsquatting is potentially worse. While typosquatting relies on human error, slopsquatting exploits trust in AI tools that 92% of US developers now use daily.

The Scale of AI Package Hallucinations

Research from Virginia Tech, the University of Oklahoma, and the University of Texas reveals the alarming scope of this problem. The team tested 16 code-generation LLMs by prompting them to generate 576,000 Python and JavaScript code samples. Their findings, published in March 2025, should concern every organisation relying on AI-assisted development.

Metric               | Finding                    | Implication
Hallucination rate   | 19.7% (roughly 1 in 5)     | One in five package recommendations points to nothing
Unique fake packages | 205,474 names              | Massive attack surface for adversaries to exploit
Reproducibility      | 43% appear every time      | Attackers can predict which fake names will be suggested
Repeat appearances   | 58% appear more than once  | Majority of hallucinations are not random noise

The reproducibility finding is particularly concerning. Attackers do not need to scrape massive prompt logs or brute force potential names. They can simply observe LLM behaviour, identify commonly hallucinated names, and register them on public package registries.

Not All Models Are Equal

The research found significant variation between AI models. Open-source LLMs like CodeLlama, DeepSeek, WizardCoder, and Mistral showed the highest hallucination rates. Commercial tools performed better but still carried risk - GPT-4 hallucinated package names in approximately 5% of cases.

For organisations processing thousands of AI-generated code suggestions daily, even a 5% error rate translates into substantial exposure.
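A quick back-of-envelope calculation makes the exposure concrete. The rates below come from the research cited above; the daily suggestion volume is an illustrative assumption, not a figure from the study:

```python
# Back-of-envelope exposure estimate: expected number of hallucinated
# (non-existent) package suggestions a team encounters per day.

def daily_hallucinations(suggestions_per_day: int, hallucination_rate: float) -> float:
    """Expected count of suggested packages that do not actually exist."""
    return suggestions_per_day * hallucination_rate

# Rates from the Virginia Tech / Oklahoma / Texas study; the volume of
# 1,000 suggestions per day is a made-up illustrative figure.
commercial_rate = 0.05    # ~5% for GPT-4-class commercial models
open_source_rate = 0.197  # 19.7% average across the 16 models tested

print(daily_hallucinations(1_000, commercial_rate))   # roughly 50 per day
print(daily_hallucinations(1_000, open_source_rate))  # roughly 197 per day
```

Even at the "good" commercial rate, dozens of fabricated package names per day reach developers who may install them without checking.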

How Slopsquatting Attacks Work

The attack chain is straightforward, which makes it dangerous:

Step 1: Discovery - An attacker prompts popular AI coding assistants with common development tasks: "Create a Python script to process CSV files", "Build a Node.js authentication module", or similar requests. They record every package name suggested.

Step 2: Verification - The attacker checks which suggested packages actually exist. Research shows roughly 20% will not.

Step 3: Registration - For non-existent packages that appear reproducibly (43% of hallucinations), the attacker registers matching names on PyPI, npm, or other public registries.

Step 4: Payload - The attacker publishes functional-looking code that includes malicious payloads - credential harvesting, backdoors, cryptocurrency miners, or data exfiltration routines.

Step 5: Exploitation - When another developer prompts the same AI model with a similar request, receives the same hallucinated package name, and runs pip install or npm install, the malicious package enters their development environment.

According to Trend Micro's analysis, the hallucinated package names are "semantically convincing" - they look like real packages with plausible naming conventions. Developers cannot easily spot the deception by sight alone.
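The verification step in the chain above cuts both ways: defenders can run the same existence check before installing anything. A minimal sketch, with the registry lookup injected as a function so the logic can be tested offline (in practice the lookup would be an HTTP request, e.g. `https://pypi.org/pypi/<name>/json`, where a 404 means the name is unregistered; the package name "csv-parse-pro" below is made up):

```python
# Triage AI-suggested dependencies: flag names that are not registered,
# i.e. hallucinations an attacker could claim on PyPI or npm.
from typing import Callable, Iterable

def triage_suggestions(
    names: Iterable[str],
    exists: Callable[[str], bool],
) -> dict[str, list[str]]:
    """Split suggested package names into registered and unregistered."""
    result: dict[str, list[str]] = {"registered": [], "unregistered": []}
    for name in names:
        key = "registered" if exists(name) else "unregistered"
        result[key].append(name)
    return result

# Offline demo with a stand-in registry; real code would query PyPI/npm.
fake_registry = {"requests", "flask"}
report = triage_suggestions(
    ["requests", "csv-parse-pro", "flask"],  # "csv-parse-pro" is invented
    exists=lambda n: n in fake_registry,
)
print(report["unregistered"])  # ['csv-parse-pro']
```

Anything landing in the "unregistered" bucket is either a hallucination or a very new package; both deserve scrutiny before `pip install` runs.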

Why Slopsquatting Is Worse Than Typosquatting

Traditional typosquatting attacks rely on developers making typing mistakes: reqeusts instead of requests, or lodahs instead of lodash. Security teams have developed defences against this - spell checkers, package manager warnings, and developer training.

Slopsquatting sidesteps all of these protections:

No typing errors to catch - The AI suggests a package name that looks entirely legitimate. There is no misspelling to flag.

Trust in AI tools - Developers increasingly accept AI suggestions without verification, a practice amplified by vibe coding culture where developers "give in to the vibes" and trust AI outputs.

Reproducibility aids attackers - Because 43% of hallucinated packages appear consistently, attackers can target specific AI models and specific prompts with high confidence.

Scale of AI adoption - With 41% of global code now AI-generated, the attack surface is massive and growing.

According to BleepingComputer's reporting, slopsquatting incidents will continue as AI coding adoption accelerates. The combination of widespread AI tool usage and inherent model limitations creates a persistent vulnerability.

Real-World Risk Scenarios

Consider how slopsquatting could affect your organisation:

Scenario 1: Developer Workstation Compromise - A developer uses an AI assistant to generate a data processing script. The AI suggests a hallucinated package that an attacker has registered. Installation grants the attacker access to the developer's machine, source code repositories, and potentially credentials for production systems.

Scenario 2: CI/CD Pipeline Infiltration - A build process pulls dependencies based on AI-generated requirements files. A slopsquatted package enters the pipeline, gaining access to deployment credentials, secrets, and the ability to inject malicious code into production builds.

Scenario 3: Compliance Violation - Regulated industries require software bill of materials (SBOM) documentation and supply chain verification. AI-hallucinated packages lack provenance, auditable histories, or security reviews. Their presence in production code creates compliance gaps.

Scenario 4: Intellectual Property Theft - A malicious slopsquatted package exfiltrates source code, configuration files, or API keys to attacker-controlled infrastructure. By the time the breach is discovered, sensitive data has already been compromised.

Mitigation Strategies for CISOs and IT Leaders

Defending against slopsquatting requires a multi-layered approach that spans developer practices, tooling, and governance.

1. Mandatory Dependency Verification

Establish a policy requiring developers to verify every AI-suggested package before installation:

  • Check the package exists on the official registry
  • Review the package's publish history and maintainer details
  • Examine download statistics - legitimate popular packages have substantial usage
  • Look for security audits or vulnerability disclosures

This adds friction to the development process, but the alternative is accepting unvetted code from potentially malicious sources.
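Parts of this checklist can be automated. The sketch below scores registry metadata against simple red-flag heuristics; the field names and thresholds are illustrative assumptions, not any vendor's API:

```python
# Heuristic red-flag check over package registry metadata, mirroring the
# manual verification checklist. Fields and thresholds are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class PackageMetadata:
    name: str
    first_release: date
    release_count: int
    weekly_downloads: int
    maintainers: int

def red_flags(pkg: PackageMetadata, today: date) -> list[str]:
    """Return human-readable warnings for a reviewer to weigh."""
    flags = []
    if (today - pkg.first_release).days < 90:
        flags.append("registered less than 90 days ago")
    if pkg.release_count <= 1:
        flags.append("only a single release")
    if pkg.weekly_downloads < 1000:
        flags.append("minimal download history")
    if pkg.maintainers < 2:
        flags.append("single maintainer")
    return flags

# A freshly registered, barely downloaded package trips every check.
suspect = PackageMetadata("csv-parse-pro", date(2026, 1, 10), 1, 40, 1)
print(red_flags(suspect, today=date(2026, 1, 26)))
```

None of these flags proves malice on its own, but a brand-new, single-release, low-download package suggested by an AI assistant is exactly the slopsquatting profile.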

2. Implement Dependency Scanning Tools

Deploy software composition analysis (SCA) tools that can identify suspicious packages. Solutions from Snyk, Socket, and similar vendors now include specific detection capabilities for slopsquatted packages, looking for:

  • Newly registered packages with names similar to common hallucination patterns
  • Packages with minimal download history but wide AI-suggested distribution
  • Code that exhibits suspicious behaviour (network calls, file system access, credential reading)

3. Use Lockfiles and Hash Verification

Package lockfiles (package-lock.json, Pipfile.lock, poetry.lock) pin dependencies to specific versions with cryptographic hashes. This prevents the silent substitution of legitimate packages with malicious ones and makes supply chain tampering detectable.

Require all projects to maintain lockfiles and fail builds if lockfile verification fails.
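The underlying check is simple to reason about. This sketch shows the comparison that hash-checking installers (pip's `--require-hashes` mode, `npm ci` against `package-lock.json`) perform before installing; the artifact bytes here are a made-up example:

```python
# Verify a downloaded artifact against the hash pinned in a lockfile.
import hashlib

def artifact_matches_lock(data: bytes, pinned_sha256: str) -> bool:
    """True only if the artifact's SHA-256 digest matches the pinned hash.

    A substituted or tampered package produces a different digest, so
    the install fails instead of silently running attacker code.
    """
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# Made-up artifact and its pinned hash for demonstration.
artifact = b"print('hello')\n"
pinned = hashlib.sha256(artifact).hexdigest()

print(artifact_matches_lock(artifact, pinned))           # True
print(artifact_matches_lock(b"tampered bytes", pinned))  # False
```

For Python projects, `pip install --require-hashes -r requirements.txt` enforces exactly this: every requirement must carry a `--hash=sha256:...` entry, and any mismatch aborts the install.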

4. Configure AI Tool Settings

Research shows that LLM "temperature" settings affect hallucination rates. Higher temperature (more randomness) increases hallucinations. Where configurable, set AI coding assistants to lower temperature settings to reduce the frequency of fabricated package suggestions.

This is not a complete solution - even low-temperature models hallucinate - but it reduces exposure.

5. Establish AI Code Governance

Integrate slopsquatting defence into your broader AI governance controls. Define policies for:

  • Which AI coding assistants are approved for use
  • Required verification steps before accepting AI-generated dependency lists
  • Audit trails for AI-suggested code entering production systems
  • Incident response procedures if a slopsquatted package is detected
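Part of this policy can be enforced mechanically, for example by gating AI-suggested dependencies against an approved list in CI. A minimal sketch; the approved list and the package name "fast-csv-utils" are illustrative:

```python
# CI gate: reject AI-suggested dependencies that are not on the
# organisation's approved list, leaving an auditable record.

def unapproved_dependencies(suggested: list[str], approved: set[str]) -> list[str]:
    """Return suggested packages absent from the approved list, in order,
    so the build can fail and log exactly what was blocked."""
    return [pkg for pkg in suggested if pkg not in approved]

approved = {"requests", "flask", "pandas"}            # illustrative policy
suggested = ["requests", "fast-csv-utils", "pandas"]  # "fast-csv-utils" is invented

violations = unapproved_dependencies(suggested, approved)
if violations:
    print(f"Blocked unapproved packages: {violations}")
```

A hallucinated name will, by definition, never be on a curated approved list, so this single gate neutralises the slopsquatting install step even when the developer misses it.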

6. Sandbox AI-Generated Code

Never run AI-generated code directly in production environments or on developer workstations with access to sensitive resources. Test all AI suggestions in isolated sandboxes where malicious code cannot reach valuable targets.

Container-based development environments and virtual machines provide isolation layers that limit blast radius if a slopsquatted package is accidentally installed.

7. Educate Development Teams

Developers need to understand that AI coding assistants are not security tools. Georgetown's CSET research highlights that AI models do not understand your application's risk model, internal standards, or threat landscape. Every AI suggestion - especially package recommendations - requires human verification.

Training should cover:

  • What slopsquatting is and how it works
  • How to verify package legitimacy before installation
  • Red flags that indicate suspicious packages
  • Reporting procedures for potential supply chain attacks

The Bigger Picture: AI Code Security

Slopsquatting is one manifestation of a broader challenge. AI coding assistants introduce multiple security risks beyond hallucinated packages - including insecure code patterns, missing security controls, and logic errors that can compromise applications.

The research is clear: over 40% of AI-generated code contains security vulnerabilities, and this rate has not improved as models have scaled. Security must be a deliberate, human-driven layer on top of AI productivity gains - not an afterthought.

For CISOs and IT leaders, the imperative is clear. AI coding tools are here to stay, and their adoption will only accelerate. The organisations that thrive will be those that harness AI's productivity benefits while implementing robust controls to catch the security gaps that AI cannot see.

Slopsquatting is a solvable problem. But solving it requires acknowledging that AI assistants, however helpful, can be vectors for supply chain attacks - and building defences accordingly.


Want to discuss AI security strategy for your organisation? Connect with me on LinkedIn or explore more on vibe coding security risks.
