
Operation Bizarre Bazaar: First Attributed LLMjacking Campaign with Commercial Marketplace Monetization

By Eilon Cohen and Ariel Fogel

January 28, 2026

Executive Summary

Between December 2025 and January 2026, the Pillar Security Research team uncovered a disturbing evolution in AI-focused cyber threats. Our honeypots captured 35,000 attack sessions targeting exposed AI infrastructure.

We have named this campaign Operation Bizarre Bazaar. It represents the first public documentation of a systematic campaign targeting exposed LLM and Model Context Protocol (MCP) endpoints at scale, featuring complete commercial monetization. The investigation reveals how cybercriminals discover, validate, and monetize unauthorized access to AI infrastructure through a coordinated supply chain spanning reconnaissance, validation, and commercial resale.

This post covers the main parts of the operation. Click here for the full report, including IOCs and the complete analysis.

What is LLMjacking?

LLMjacking refers to the unauthorized access and exploitation of Large Language Model (LLM) infrastructure. Similar to how cryptojacking operations steal compute resources to mine cryptocurrency, LLMjacking operations target exposed or weakly authenticated AI endpoints to:

  • Steal compute resources for unauthorized LLM inference requests
  • Resell API access at discounted rates through criminal marketplaces
  • Exfiltrate data from LLM context windows and conversation history
  • Pivot to internal systems via compromised Model Context Protocol (MCP) servers and exploitation of traditional cloud and application security vulnerabilities

Organizations running self-hosted LLM infrastructure (Ollama, vLLM, local AI implementations) or deploying MCP servers for AI integrations face active targeting. Common attack vectors include:

  • Exposed endpoints on default ports of common LLM inference services
  • Unauthenticated API access without proper access controls
  • Development/staging environments with public IP addresses
  • MCP servers connecting LLMs to file systems, databases, and internal APIs
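
As a quick self-check against the exposures above, the sketch below probes a host's default Ollama and OpenAI-compatible ports and reports whether they answer without credentials. This is a minimal sketch, not Pillar tooling; the host list, paths, and timeout are illustrative assumptions to adapt to your own inventory.

```python
# Minimal exposure self-check: probe default LLM service ports on hosts you own.
# Host list and paths are illustrative assumptions, not Pillar tooling.
import requests

HOSTS = ["203.0.113.10"]          # replace with your own external IPs
CHECKS = {
    11434: "/api/tags",           # Ollama default port and model-list path
    8000: "/v1/models",           # common OpenAI-compatible API port
}

for host in HOSTS:
    for port, path in CHECKS.items():
        url = f"http://{host}:{port}{path}"
        try:
            r = requests.get(url, timeout=5)
        except requests.RequestException:
            continue              # closed or filtered port: nothing to report
        if r.status_code == 200:
            print(f"EXPOSED: {url} answered without authentication")
        else:
            print(f"{url} responded with HTTP {r.status_code}")
```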

The threat differs from traditional API abuse because compromised LLM endpoints can generate significant costs (inference is expensive), expose sensitive organizational data, and provide lateral movement opportunities.

The Criminal Supply Chain

Three interconnected threat actors comprise a complete attack supply chain:

The Scanner: A distributed bot infrastructure systematically probes the internet for exposed AI endpoints. Every exposed Ollama instance, every unauthenticated vLLM server, every accessible MCP endpoint gets cataloged.

The Validator: Once scanners identify targets, infrastructure tied to silver.inc validates the endpoints through API testing. During a concentrated operational window, the attacker tested placeholder API keys, enumerated model capabilities, and assessed response quality.

The Marketplace: silver.inc operates as "The Unified LLM API Gateway"—a commercial marketplace reselling discounted access to 30+ LLM providers without legitimate authorization. Hosted on bulletproof infrastructure in the Netherlands, the service markets on Discord and Telegram while accepting cryptocurrency and PayPal.

We named this campaign Operation Bizarre Bazaar: The silver.inc Operation.

Attack Volume and Targeting Patterns

During our investigation, we captured 35,000 attack sessions—averaging 972 attacks per day. The sustained high-volume activity confirms systematic targeting of exposed AI infrastructure rather than opportunistic scanning.

Common misconfigurations under active exploitation:

  • Ollama running on port 11434 without authentication
  • OpenAI-compatible APIs on port 8000 exposed to the internet
  • MCP servers accessible without access controls
  • Development/staging AI infrastructure with public IPs
  • Production chatbot endpoints (customer support, sales bots) without authentication or rate limiting

The attackers aren't guessing. They're using Shodan and Censys to find you. Once your endpoint appears in scan results, exploitation attempts begin within hours.

Attribution: Meet "Hecker"

We traced the operation to a threat actor operating under the alias "Hecker" (also known as Sakuya, LiveGamer101). The evidence is direct:

  • The administrative panel at admin.silver.inc displays: "Hiii I'm Hecker"
  • Infrastructure overlap with nexeonai.com, a service publicly accused of DDoS attacks against competitors
  • Shared Cloudflare nameservers and DMARC records between silver.inc and nexeonai.com
  • Bulletproof hosting with thousands of abuse reports

Timing analysis reveals that silver.inc validation attempts follow public scanning activity by 2-8 hours on average—indicating the operation monitors public scan results or operates its own reconnaissance infrastructure to identify targets for commercial resale.

Organizational Risk: Beyond Compute Theft

LLMjacking operations present risks beyond unauthorized API usage:

Compute Theft: Your infrastructure generates revenue for criminals. silver.inc resells access at 40-60% discounts while you pay full retail for unauthorized usage.

Data Exfiltration: LLM context windows may contain sensitive organizational data. Conversation history, customer information, source code—all accessible through compromised endpoints.

Lateral Movement: Exposed MCP servers become pivot points. Attackers use LLM integrations to navigate file systems, query databases, and access cloud APIs.

Supply Chain Compromise: MCP servers bridge AI systems to internal infrastructure. Any MCP integration—whether connecting to repositories, databases, or internal APIs—becomes a potential entry point when exposed.

A Separate Threat: MCP Reconnaissance Campaign

In addition to Operation Bizarre Bazaar, we observed a distinct campaign targeting Model Context Protocol (MCP) endpoints. By late January, 60% of total attack traffic came from MCP-focused reconnaissance operations—representing a separate threat actor with different objectives.

Why does this matter? MCP servers don't just provide LLM access—they connect AI to your infrastructure:

  • File systems - Read source code, plant backdoors
  • Databases - Dump credentials, exfiltrate customer data
  • Shell access - Execute commands on host systems
  • API integrations - Access Slack, GitHub, cloud providers
  • Kubernetes - Pod execution, secret extraction

A single exposed MCP endpoint can bridge to your entire internal infrastructure. The systematic MCP reconnaissance we observed represents a distinct campaign focused on lateral movement preparation, separate from the silver.inc marketplace operation.

Recommended Mitigation Actions

Organizations can defend against LLMjacking operations through the following controls:

Immediate Actions (Critical Priority)

Enable authentication on all LLM endpoints. Requiring authentication eliminates opportunistic attacks from commercial operations like silver.inc. Organizations should verify that Ollama, vLLM, and similar services require valid credentials for all requests.
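
One way to verify this control is in place is to send an unauthenticated request and confirm it is rejected rather than served. A minimal sketch, assuming a gateway or reverse proxy in front of the model server; the endpoint URL is an assumption.

```python
# Sanity check: an unauthenticated request to a protected LLM endpoint
# should be rejected, not answered. The endpoint URL is an assumption.
import requests

ENDPOINT = "https://llm.internal.example.com/api/generate"

resp = requests.post(ENDPOINT, json={"model": "llama3", "prompt": "ping"}, timeout=10)
if resp.status_code in (401, 403):
    print("OK: endpoint rejects unauthenticated requests")
else:
    print(f"ALERT: endpoint answered HTTP {resp.status_code} without credentials")
```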

Audit MCP server exposure. MCP servers must never be directly accessible from the internet. Verify firewall rules, review cloud security groups, confirm authentication requirements.
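
If your MCP servers run in AWS, one way to audit exposure is to flag security groups that open the MCP port to the whole internet. A hedged sketch under stated assumptions: the port number (3001) and region are illustrative, not part of the MCP specification.

```python
# Flag AWS security groups that expose an assumed MCP port (3001) to 0.0.0.0/0.
# Port and region are illustrative assumptions; adapt to your deployment.
import boto3

MCP_PORT = 3001
ec2 = boto3.client("ec2", region_name="us-east-1")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        from_port = rule.get("FromPort")
        to_port = rule.get("ToPort")
        if from_port is None or not (from_port <= MCP_PORT <= to_port):
            continue
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"ALERT: {sg['GroupId']} opens port {MCP_PORT} to the internet")
```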

Block known malicious infrastructure. Add the 204.76.203.0/24 subnet (silver.inc/Operation Bizarre Bazaar) to your deny lists. For the MCP reconnaissance campaign, block AS135377 ranges. Complete IOCs for both campaigns are available in the full report.
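
Primary blocking belongs at the firewall or WAF; as a defense-in-depth backstop, an application can also drop traffic from the published range. A minimal sketch using Python's standard library; the request-handling hook it would plug into is an assumption.

```python
# Application-level backstop: drop requests from the published silver.inc range.
# Primary blocking should happen at the firewall/WAF; this is defense in depth.
from ipaddress import ip_address, ip_network

BLOCKED_NETWORKS = [ip_network("204.76.203.0/24")]   # Operation Bizarre Bazaar

def is_blocked(client_ip: str) -> bool:
    addr = ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("204.76.203.17"))   # True
print(is_blocked("198.51.100.4"))    # False
```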

Implement rate limiting. Throttle burst exploitation attempts and deploy WAF/CDN rules tuned to AI-specific traffic patterns.
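
A per-IP sliding-window limiter is often enough to blunt burst enumeration. A minimal in-memory sketch follows; the window and threshold are assumptions, and production deployments would enforce this at the WAF/CDN or with a shared store such as Redis.

```python
# Minimal in-memory sliding-window rate limiter, keyed by client IP.
# Thresholds are illustrative; production traffic belongs behind a WAF/CDN.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 30
_requests = defaultdict(deque)

def allow(client_ip: str) -> bool:
    now = time.monotonic()
    q = _requests[client_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                      # drop entries outside the window
    if len(q) >= MAX_REQUESTS:
        return False                     # over the limit: reject or challenge
    q.append(now)
    return True
```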

Audit production chatbot exposure. Every customer-facing chatbot, sales assistant, and internal AI agent must implement security controls to prevent abuse.

Short-Term Actions (High Priority)

Monitor for placeholder API key patterns. Alert on authentication attempts using sk-test, test-token, dev-key patterns.
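
These patterns are easy to surface from authentication or access logs. A minimal sketch that scans a log file for the placeholder key strings; the log path and line format are assumptions to adapt to your logging pipeline.

```python
# Surface authentication attempts that use placeholder API keys.
# Log path and line format are assumptions; adapt to your logging pipeline.
import re

PLACEHOLDER_KEYS = re.compile(r"(sk-test|test-token|dev-key)", re.IGNORECASE)

with open("/var/log/llm-gateway/access.log") as log:
    for line in log:
        if PLACEHOLDER_KEYS.search(line):
            print(f"placeholder-key attempt: {line.strip()}")
```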

Deploy behavioral detection. Alert on multi-provider enumeration—single IPs attempting to access multiple LLM frameworks.
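
Multi-provider enumeration shows up as a single source IP touching the API paths of several different LLM frameworks in a short window. A minimal sketch over parsed access-log records; the record format, path list, and threshold are assumptions.

```python
# Flag source IPs that probe the API paths of multiple LLM frameworks,
# a signature of multi-provider enumeration. Record format is an assumption.
from collections import defaultdict

FRAMEWORK_PATHS = {
    "/api/tags": "ollama",
    "/api/generate": "ollama",
    "/v1/models": "openai-compatible",
    "/v1/chat/completions": "openai-compatible",
    "/mcp": "mcp",
}

def flag_enumerators(records, threshold=2):
    """records: iterable of (source_ip, request_path) tuples."""
    seen = defaultdict(set)
    for ip, path in records:
        framework = FRAMEWORK_PATHS.get(path)
        if framework:
            seen[ip].add(framework)
    return [ip for ip, frameworks in seen.items() if len(frameworks) >= threshold]

# Example: one IP touching both Ollama and OpenAI-compatible paths gets flagged.
print(flag_enumerators([("198.51.100.7", "/api/tags"),
                        ("198.51.100.7", "/v1/models")]))
```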

Conduct security audits. Enumerate all AI endpoints in production and development. Verify authentication. Confirm firewall rules.

Protecting Public AI Endpoints 

These attackers target the path of least resistance: endpoints with no friction. Even publicly accessible AI services can deter opportunistic abuse through rate limiting, usage caps, and behavioral monitoring. The goal is making your infrastructure less attractive than the next target. For internal services, the calculus is simpler: if it shouldn't be public, verify it isn't, and scan your external attack surface regularly.

The Threat is Active and Ongoing

silver.inc continues to operate. The scanner infrastructure maintains consistent targeting. The attack infrastructure remains online.

We're releasing this research because transparency accelerates defense. Security teams need to understand the threat landscape, implement appropriate controls, and share intelligence with industry partners.

Pillar Security Research continues to monitor this operation. We'll provide updates as the threat evolves.

Mapping the findings to MITRE and OWASP

MITRE ATLAS Techniques - The operation demonstrates the following adversarial machine learning techniques:

  • AML.T0049 Exploit Public-Facing Application: Direct API abuse
  • AML.T0034 Cost Harvesting: Unauthorized compute usage
  • AML.T0006 Active Scanning: Systematic endpoint scanning
  • AML.T0040 AI Model Inference API Access: Model family enumeration
  • AML.T0051 LLM Prompt Injection: System prompt extraction attempts
  • AML.T0054 LLM Jailbreak: "Ignore previous instructions" patterns
  • AML.T0056 Extract LLM System Prompt: System prompt leakage attempts

OWASP Vulnerabilities - The campaign exploits critical weaknesses outlined in the OWASP LLM Top 10 (2025) and OWASP Top 10 for Agentic AI Applications:

  • LLM01:2025 Prompt Injection
  • LLM06:2025 Excessive Agency
  • LLM07:2025 System Prompt Leakage
  • ASI02 Tool Misuse & Exploitation
  • ASI04 Agentic Supply Chain Vulnerabilities

Read the Full Report

Read the full report for the complete technical analysis, including a detailed attack timeline and the full set of indicators of compromise.

How Pillar Helps

Pillar Security provides AI security solutions for enterprise organizations deploying LLM infrastructure. Our platform is designed to combat threats like Operation Bizarre Bazaar by:

  • Discovering Shadow AI: Our engine scans your environment to identify exposed endpoints and unmanaged AI infrastructure.
  • Validating Security Posture: We perform adversarial testing to find vulnerabilities before attackers do.
  • Protecting Runtime Operations: We enforce governance policies and apply adaptive guardrails to detect and block active attacks in real time.
