Blog
min read
Pillar Security researchers have uncovered a dangerous new supply chain attack vector we've named "Rules File Backdoor." This technique enables hackers to silently compromise AI-generated code by injecting hidden malicious instructions into seemingly innocent configuration files used by Cursor and GitHub Copilot—the world's leading AI-powered code editors.
By exploiting hidden unicode characters and sophisticated evasion techniques in the model facing instruction payload, threat actors can manipulate the AI to insert malicious code that bypasses typical code reviews. This attack remains virtually invisible to developers and security teams, allowing malicious code to silently propagate through projects.
Unlike traditional code injection attacks that target specific vulnerabilities, “Rules File Backdoor” represents a significant risk by weaponizing the AI itself as an attack vector, effectively turning the developer's most trusted assistant into an unwitting accomplice, potentially affecting millions of end users through compromised software.
A 2024 GitHub survey found that nearly all enterprise developers (97%) are using Generative AI coding tools. These tools have rapidly evolved from experimental novelties to mission-critical development infrastructure, with teams across the globe relying on them daily to accelerate coding tasks.
This widespread adoption creates a significant attack surface. As these AI assistants become integral to development workflows, they represent an attractive target for sophisticated threat actors looking to inject vulnerabilities at scale into the software supply chain.
While investigating how development teams share AI configuration, our security researchers identified a critical vulnerability in how AI coding assistants process contextual information contained in rule files.
Rule files are configuration files that guide AI Agent behavior when generating or modifying code. They define coding standards, project architecture, and best practices. These files are:
Here's a Rules File example from Cursor's documentation:
Aside from personally creating the files, developers can also find them in open-source communities and projects such as:
During the research it was found that the review process for uploading new rules for these shared repos is also vulnerable as hidden unicode chars also appear invisible on the GitHub platform pull request approval process.
Our research demonstrates that attackers can exploit the AI's contextual understanding by embedding carefully crafted prompts within seemingly benign rule files. When developers initiate code generation, the poisoned rules subtly influence the AI to produce code containing security vulnerabilities or backdoors.
The attack leverages several technical mechanisms:
What makes “Rules Files Backdoor” particularly dangerous is its persistent nature. Once a poisoned rule file is incorporated into a project repository, it affects all future code-generation sessions by team members. Furthermore, the malicious instructions often survive project forking, creating a vector for supply chain attacks that can affect downstream dependencies and end users.
Cursor's "Rules for AI" feature allows developers to create project-specific instructions that guide code generation. These rules are typically stored in a .cursor/rules directory within a project.
Here's how the attack works:
Step 1: Creating a Malicious Rule File
We created a rule file that appears innocuous to human reviewers:
However, the actual content includes invisible unicode characters hiding malicious instructions:
Step 2: Generate an HTML File
We used Cursor's AI Agent mode with a simple prompt: "Create a simple HTML only page”
Step 3: Observe the Poisoned Output
The generated HTML file now contains a malicious script sourced from an attacker-controlled site.
What makes this attack particularly dangerous is that the AI assistant never mentions the addition of the script tag in its response to the developer. The malicious code silently propagates through the codebase, with no trace in the chat history or coding logs that would alert security teams.
The attack payload contains several sophisticated components.
Let’s go over the different parts and explain how it works:
Together, these components create a highly effective attack that remains undetected during both generation and review phases.
The video below demonstrates the attack in a real environment, highlighting how AI-generated files can be poisoned via manipulated instruction files.
Cursor Demonstration
The following video demonstrates the same attack flow within the GitHub Copilot environment, showing how developers using AI assistance can be compromised.
Github Copilot Demonstration
The "Rules File Backdoor" attack can manifest in several dangerous ways:
Because rule files are shared and reused across multiple projects, one compromised file can lead to widespread vulnerabilities. This creates a stealthy, scalable supply chain attack vector, threatening security across entire software ecosystems.
Our research identified several propagation vectors:
To mitigate this risk, we recommend the following technical countermeasures:
The responses above, which place these new kinds of attacks outside the AI coding vendors' responsibility, underscore the importance of public awareness regarding the security implications of AI coding tools and the expanded attack surface they represent, especially given the growing reliance on their outputs within the software development lifecycle.
The "Rules File Backdoor" technique represents a significant evolution in supply chain attacks. Unlike traditional code injection that exploits specific vulnerabilities, this approach weaponizes the AI itself, turning a developer's most trusted assistant into an unwitting accomplice.
As AI coding tools become deeply embedded in development workflows, developers naturally develop "automation bias"—a tendency to trust computer-generated recommendations without sufficient scrutiny. This bias creates a perfect environment for this new class of attacks to flourish.
At Pillar Security, we believe that securing the AI development pipeline is essential to safeguarding software integrity. Organizations must adopt specific security controls designed to detect and mitigate AI-based manipulations, moving beyond traditional code review practices that were never intended to address threats of this sophistication.
The era of AI-assisted development brings tremendous benefits, but also requires us to evolve our security models. This new attack vector demonstrates that we must now consider the AI itself as part of the attack surface that requires protection.
SCAN NOW: https://rule-scan.pillar.security/
Tags (Unicode block): https://en.wikipedia.org/wiki/Tags_(Unicode_block)
ASCII Smuggler Tool: Crafting Invisible Text and Decoding Hidden Codes: https://embracethered.com/blog/posts/2024/hiding-and-finding-text-with-unicode-tags/
Subscribe and get the latest security updates
Back to blog