The Agent Economy: Who Commands The Fleet

By Eilon Cohen and Ziv Karliner

April 8, 2026

Why the headcount paradigm is dead, the kill chain is collapsing, and the only question left is who commands the fleet.

The modern work paradigm is fracturing. For decades, the equation in a company growth cycle was simple: if a company wanted to move faster, build more, or defend a larger perimeter, it hired more people. Headcount was the ultimate proxy for capability and growth.

The equation is broken.

We have entered the Agent Economy. Small, sharp squads operating large fleets of autonomous or semi-autonomous agents are replacing massive hierarchical organizations. That is not a prediction; it is the current operating reality. And nowhere is it more visible, or more dangerous, than in the security trenches.

The narrative about "AI replacing humans" misses the point. Organizations don't need fewer people. They need different ones: people who can direct, orchestrate, and make judgment calls across autonomous systems at speed and at scale. From 2026 onwards, deep specialization in isolation no longer cuts it.

The View from the Trenches: Security as a First-Class Citizen

In cybersecurity, the shift carries an existential mandate. Threat actors are already leveraging agents to attack. Digesting hundreds of terabytes of data to find vulnerable patterns across millions of codebases, network logs, and telemetry feeds is the baseline requirement for defending an AI-augmented enterprise. Not a theoretical challenge. A daily one.

At Pillar Security, we don't theorize about agent-driven attacks. We instrument the AI stack and catch them in the wild. Our honeypots and threat research teams track a landscape where theoretical vulnerabilities have given way to machine-speed exploitation at a scale never seen before.

The attack surface looks nothing like it did two years ago, even half a year ago. We defend against agents executing end-to-end campaigns, not human operators typing commands into terminals or trading initial access vectors.

The Chaos Agent: Hackerbot-Claw

The "Chaos Agent" campaign, publicly known as Hackerbot-Claw [1], was the first attributed campaign in which an AI agent, operating on what appears to be natural-language instructions, conducted an end-to-end attack against production open-source infrastructure, setting the stage for one of the most severe compromises Trivy has suffered.

CI/CD pipelines are the automated assembly lines that build and ship software. Compromise one, and every product it touches potentially becomes a weapon. Within hours, the agent identified vulnerable open-source projects across Microsoft, DataDog, Aqua Security, and a CNCF project. It crafted targeted exploits, compromised those pipelines, and published a malicious VSCode extension that turned developers' own AI coding tools into credential-stealing accomplices [1].

The operational tempo was terrifying:

  • 11 seconds between fork creation and first push
  • 59-second probe cycles
  • 11 minutes from confirmed code execution to stealing sensitive data and escalating access

Machine-speed operation, guided by a human strategist. Our researchers call it "promptware in its purest form": millions of lines of sophisticated exploit code replaced by a single natural-language prompt instructing legitimate AI tools to commit the crime [1]. The attack surface has expanded from binaries and code to, well, plain English.

From Probe to Fortune 500 Blast Radius

The Chaos Agent is only the beginning. The access gained through Hackerbot-Claw's CI/CD exploitation was never fully contained, and Rami McCarthy and the Wiz research team tracked the breach downstream, documenting how a threat actor known as TeamPCP turned it into a devastating supply chain cascade [2].

TeamPCP compromised Aqua Security's Trivy vulnerability scanner, injecting credential-stealing malware into official releases and GitHub Actions [2]. They expanded to hit the Checkmarx KICS GitHub Action [3]. Then, using a PyPI API token stolen during the Trivy incident to publish malicious Trivy versions, they compromised downstream consumers like LiteLLM, an open-source proxy server present in 36% of AI cloud environments [4].

One probe and one retained credential produced a cascade touching Fortune 500 infrastructure within weeks.

| Date | Event | Source |
| --- | --- | --- |
| Feb 27-28, 2026 | Hackerbot-Claw probes CI/CD across Microsoft, DataDog, Aqua Security, CNCF | Pillar Security [1] |
| Mar 3, 2026 | Pillar Security publishes "Chaos Agent" analysis | Pillar Security [1] |
| Mar 19, 2026 | TeamPCP compromises Trivy (scanner, GitHub Actions, setup-trivy) | Wiz [2] |
| Mar 23, 2026 | TeamPCP compromises KICS GitHub Action | Wiz [3] |
| Mar 24, 2026 | TeamPCP trojanizes LiteLLM | Wiz [4] |

The Kill Chain Is Collapsing

The cascade above didn't follow the textbook kill chain. It didn't need to.

The traditional kill chain assumes attackers have to earn every inch of access, and they do, but the model was built for human adversaries moving through sequential stages. The framework isn't entirely obsolete, but it fails against the new class of threat. When an AI agent becomes the weapon, the chain collapses.

AI agents already hold access, permissions, and legitimate reasons to move across systems and transfer data every single day. Compromise an agent already living inside your environment, and the early stages of the kill chain simply don't apply.

There is no "initial access" to detect. The agent was already there.

We saw the infrastructure-level version of the collapse with the OpenClaw crisis. Pillar Security's honeypots showed exposed AI gateways under attack within minutes of deployment [6]. Our Operation Bizarre Bazaar research captured 35,000 attack sessions targeting exposed AI infrastructure [5].

The Broader Economic Shift

The pattern extends well beyond security. The Agent Economy rewards orchestration over headcount everywhere, across R&D orgs, startups, and research labs alike.

The Federal Reserve estimates that 78% of the U.S. labor force now works at firms adopting AI, and 54% works at firms actively using LLMs [7]. In Q1 2026 alone, companies like HubSpot reported that 97% of committed code was written with AI assistance, while Meta saw a 30% increase in output per engineer [8].

And the workforce implications are already here. Jack Dorsey's Block cut 40% of its headcount, over 4,000 people, in a single day, explicitly betting AI can replace the traditional org chart [9]. StackBlitz's CEO publicly stated his goal to have more AI agents than human employees by the end of 2026 [10].

The consequences reach further than org charts. For a CEO, engineering velocity decouples from hiring. For an individual contributor, output gets measured by what you can direct, not what you can type. The "one-person unicorn" is becoming a viable business model: a few talented people commanding fleets of specialized agents instead of managing rows of desks.

We see it in our own research operation. A small team of focused researchers, armed with the right agent tooling, now generates a backlog of validated security findings requiring a team three or four times the size eighteen months ago. The bottleneck shifted from discovery to triage. We find more than we can publish, faster than we can responsibly disclose. The Agent Economy plays out inside our own walls, and every R&D-heavy organization is about to confront the same dynamic.

Govern the Fleet

An agent is faster than anyone on your team. The focus has to shift from managing headcount to governing autonomous systems.

When attackers hit exposed AI infrastructure within minutes of deployment, and a single campaign generates 35,000 attack sessions [5], traditional static perimeters fail. Defenders need automated, identity- and intent-scoped controls so they know exactly which agent is allowed to do what, plus the ability to reconstruct what happened, at machine speed.
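
To make "identity- and intent-scoped" concrete, here is a minimal sketch of the idea (all names are hypothetical, not a Pillar product API): each agent identity is bound to an allowlist of actions and target scopes, and every requested action is denied unless both the action and the target are in scope.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Identity- and intent-scoped allowlist for a single agent (illustrative)."""
    agent_id: str
    allowed_actions: frozenset  # e.g. {"read_repo", "open_pr"}
    allowed_targets: frozenset  # e.g. {"repo:app/frontend"}

def is_allowed(policy: AgentPolicy, action: str, target: str) -> bool:
    # Deny by default: both the action AND the target must be in scope.
    return action in policy.allowed_actions and target in policy.allowed_targets

# A CI agent scoped to reading and opening PRs on one repository.
ci_agent = AgentPolicy(
    agent_id="ci-bot",
    allowed_actions=frozenset({"read_repo", "open_pr"}),
    allowed_targets=frozenset({"repo:app/frontend"}),
)

print(is_allowed(ci_agent, "open_pr", "repo:app/frontend"))      # True
print(is_allowed(ci_agent, "push_release", "repo:app/frontend")) # False
```

The deny-by-default shape is the point: a compromised agent that suddenly attempts `push_release` fails the policy check even though its identity is legitimate.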

You need visibility into which agents operate in your environment, what they connect to, what permissions they hold, and what their normal behavioral baseline looks like. An attacker riding an agent's existing workflow looks normal to traditional detection systems, so organizations need a security layer working regardless of which AI model or vendor powers the agent.
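
One simple way to express a behavioral baseline, purely as an illustrative sketch (the event shapes and threshold are assumptions, not a real detection pipeline): record the relative frequency of each action type an agent performs, then flag any action whose share in a recent window deviates sharply from that baseline, including actions never seen before.

```python
from collections import Counter

def baseline(events):
    """Relative frequency of each action type in a window of observed events."""
    counts = Counter(e["action"] for e in events)
    total = sum(counts.values())
    return {action: c / total for action, c in counts.items()}

def anomalies(base, window, threshold=0.10):
    """Actions whose share in the new window exceeds the baseline share
    by more than `threshold`; unseen actions have a baseline share of 0.0."""
    current = baseline(window)
    return [action for action, share in current.items()
            if share - base.get(action, 0.0) > threshold]

# Hypothetical history: mostly repo reads, occasional PRs.
history = [{"action": "read_repo"}] * 95 + [{"action": "open_pr"}] * 5
# Recent window: a never-before-seen action dominates.
recent = [{"action": "read_repo"}] * 6 + [{"action": "exfil_secrets"}] * 4

print(anomalies(baseline(history), recent))  # ['exfil_secrets']
```

Real systems would baseline far richer signals (targets, data volumes, timing), but the principle is the same: the agent's identity stays constant, so it is the deviation from its own normal behavior that exposes the rider.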

The question for every leadership team has changed. It moved from "how many people do we have" to "how many agents operate in our environment, what is their blast radius if compromised, and can we detect the difference between an agent doing its job and an agent someone weaponized."

The organizations getting it right will count fleets, not heads. But commanding a fleet requires knowing exactly what the fleet is doing and why. Inventory every AI agent with write access to your production environment. If you can't produce the list today, your adversaries are already mapping it for you.
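
As a starting point, that inventory can be as simple as a structured record per agent, filtered for production write access. The agent names and fields below are hypothetical, a sketch of the minimum an inventory should capture, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """Minimum facts worth tracking per agent in the fleet (illustrative)."""
    name: str
    connects_to: list   # systems the agent talks to
    can_write_prod: bool

fleet = [
    AgentRecord("ci-bot", ["github", "artifact-registry"], True),
    AgentRecord("support-summarizer", ["ticketing"], False),
    AgentRecord("deploy-agent", ["k8s-prod"], True),
]

# The list leadership should be able to produce on demand:
prod_writers = [a.name for a in fleet if a.can_write_prod]
print(prod_writers)  # ['ci-bot', 'deploy-agent']
```

If producing even this crude list requires a multi-week archaeology project, that gap is itself the finding.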

References

[1]: Pillar Security. "Hackerbot-Claw: Adversarial Agent Targets Top GitHub Repos." March 3, 2026.

[2]: Wiz Blog (Rami McCarthy). "Trivy Compromised: Everything You Need to Know about the Latest Supply Chain Attack." March 20, 2026.

[3]: Wiz Blog. "Checkmarx KICS GitHub Action Compromised." March 23, 2026.

[4]: Wiz Blog. "Three's a Crowd: TeamPCP Trojanizes LiteLLM in Continuation of Campaign." March 24, 2026.

[5]: Pillar Security. "Operation Bizarre Bazaar: First Attributed LLMjacking Campaign." January 28, 2026.

[6]: Pillar Security. "Caught in the Wild: Real Attack Traffic Targeting Exposed Clawdbot Gateways." January 29, 2026.

[7]: Federal Reserve. "Monitoring AI Adoption in the US Economy." FEDS Notes, April 3, 2026.

[8]: Port.io. "63 earnings calls. 0 engineering outcomes tied to AI." March 31, 2026.

[9]: Forbes. "Jack Dorsey Flags 4000 Job Cuts As AI Reshapes Block's Org Chart." April 1, 2026.

[10]: Business Insider. "Tech CEO Wants AI Agents to Outnumber His Human Employees." February 10, 2026.
