Blog

min read

From Shift Left to Shift Up: Securing the New AI Abstraction Layer

By

Dor Sarig

and

July 14, 2025

min read

The limits of looking left in the intelligence age

The DevSecOps lifecycle and its core "shift-left" principle are foundational for software security. The approach is powerful because it finds and fixes vulnerabilities in human-written, deterministic code before it reaches production. Its value is undeniable. However, its effectiveness diminishes when applied directly to the unique nature of AI systems.

AI introduces characteristics that traditional security models were not designed to handle. These systems are not merely executing pre-written instructions; they are learning, adapting, and acting in ways that create a new, higher-level risk surface.

The Emergence of the AI Abstraction Layer

Three defining characteristics of the AI abstraction layer challenge our conventional security thinking:

  • Autonomous Decisions: AI agents can now execute complex, multi-step business processes without needing direct human oversight. An AI might approve financial transactions, manage customer service workflows, or even provision its own cloud resources. This autonomy means a security vulnerability can manifest directly as a flawed business decision, creating immediate operational risk that goes far beyond a typical software bug.
  • Unhuman Scale: The sheer volume of AI-generated code, content, and decisions makes manual review and oversight impossible. A single model can generate thousands of lines of code or handle millions of customer interactions a day. Your security team cannot possibly inspect every output or action. This scale creates a significant potential blind spot where subtle vulnerabilities or policy drifts can go undetected.
  • Opaque Logic: The "black box" nature of many sophisticated models makes it difficult to fully understand why an AI arrived at a specific conclusion. The decision-making process can be so complex that even the data scientists who built the model cannot always trace the exact path of its logic. Opaque reasoning complicates traditional risk assessment and makes it harder to predict how a system might fail or be exploited by an adversary.

A New Dependency Stack Pushes Risk Upward

The rise of agentic and generative AI has introduced a new plane of operation that sits between an application’s code and its business outcomes. This is the AI abstraction layer. It is the space where AI models interpret instructions, form judgments, and execute tasks independently. 

The AI abstraction layer creates a new and deeply interconnected dependency stack. A flaw anywhere in the technology stack no longer remains isolated at its layer. Instead, the risk is pushed upward, where it can be amplified and exploited by an autonomous system. A minor misconfiguration in cloud infrastructure, a vulnerability in a web application, or a piece of poisoned data can now become a vector to compromise the business-level decisions made by the AI.

Example: The Unmonitored Parallel Lifecycle

Much of modern AI development happens in a parallel lifecycle, outside of governed CI/CD pipelines. Through iterative prompting and configuration developers guide AI systems to generate code, orchestrate workflows, and interact directly with other enterprise tools. This activity often occurs on local workstations, completely bypassing the security controls that monitor the traditional SDLC and creating a form of shadow AI development that introduces unvetted risks. Because this lifecycle lacks the traditional gates of security reviews and staged deployments, a flawed prompt or a misconfigured agent can push a vulnerability from a developer's idea to a live business process in minutes, allowing risks to shift up to the business layer almost instantly.

Example: The Compromised Data Supply Chain

Consider a scenario within the financial sector. A consortium of global banks develops a sophisticated fraud detection system using federated learning, allowing the model to learn from transaction data across all member banks without centralizing the sensitive data itself. The system relies on various external data feeds, including a third-party API that provides market news and sentiment analysis, to enrich its decision-making.

An adversary recognizes that the model's logic is influenced by this external data. They compromise the third-party market-news API, a component entirely outside the banks' direct control. Over several weeks, the attacker subtly injects poisoned sentiment signals into the news feed. These signals are designed to gradually retrain the global fraud model, teaching it that transactions originating from certain shell accounts are "low-risk."

No traditional security tool would catch this. The application code at each bank is secure. The infrastructure is sound. But the AI's decision-making logic has been corrupted. During a coordinated event, the compromised model fails to flag a massive money-laundering operation, and the institution suffers significant financial and reputational damage. The security failure did not happen at the code level; it happened at the AI abstraction layer, proving that a horizontal security view is no longer sufficient.

Introducing the "Shift Up" Imperative

Because risk now flows vertically up the technology stack into the AI abstraction layer, security must follow. This necessity is the foundation of the "Shift Up" philosophy, a core principle of the SAIL Framework. Shifting up means elevating the focus of security from the code and infrastructure to the AI-driven business logic, decisions, and processes that the AI now controls.

Adding a Vertical Axis to Security

A modern security strategy requires thinking in two dimensions. The horizontal axis, covered by "shift left" and "shift right" (runtime security), addresses the software development lifecycle. The "Shift Up" principle introduces the critical vertical axis. This vertical plane of security ensures that protection is applied at every layer of the new dependency stack, from the foundational infrastructure all the way up to the autonomous decisions made by the AI.

Securing Logic, Not Just Lines of Code

The most fundamental change AI introduces is that data is now executable. Prompts, tool responses and documents fed into an LLM as context that could be interpreted as direct commands, and even the configuration files for an AI agent are no longer passive information. They are instructions that directly command the AI's behavior. The focus of security must therefore expand from validating static lines of code to validating the dynamic instructions, data, and logic given to the AI in real time.

What "Shifting Up" Looks Like in Practice

Operationalizing a "Shift Up" strategy requires a new set of tools and practices designed specifically for the AI abstraction layer. It involves moving beyond code scanning and firewall rules to implement controls that can govern autonomous, intelligent systems. 

This strategy requires proactive and continuous testing of the AI's logic and decision-making processes themselves, using adversarial simulations to find exploitable weaknesses in how the system reasons and behaves. It also involves implementing adaptive, real-time controls that govern the autonomous actions of AI agents. These controls are designed to enforce business policies and contain agentic risks, ensuring that an AI's operations align with an organization's intent, even when its behavior is not fully predictable.

How Pillar Security Enables a "Shift Up" Strategy

The Pillar Security platform was built from the ground up on the "Shift Up" philosophy. It provides a unified, adaptive solution designed to secure the entire AI lifecycle, with a specific focus on the new risks introduced at the AI abstraction layer.

Gaining Visibility into the Abstraction Layer

Building an effective AI security roadmap that addresses the new AI abstraction layer requires the discovery of all AI assets. The Pillar platform provides this foundational visibility by integrating across an organization's entire technical environment to discover and catalog every component. This includes:

  • Code, Data, and AI/ML Platforms: Pillar integrates with code repositories, cloud environments, and MLOps tools to find where AI/ML models, datasets, prompts, and pipelines are built and stored.
  • No-Code Agentic Platforms: The platform connects with the no-code tools where business teams often build and deploy AI agents, uncovering workflows that can create unmonitored risk.
  • Developer Endpoints and Workstations: Discovery extends directly to developer workstations, identifying local AI assets like models, datasets, MCP servers, notebooks and coding agents that might otherwise remain invisible to central security teams.

This process creates a complete inventory of every model, dataset, and prompt, providing the unified visibility needed to secure the full AI abstraction layer.

Analyzing Risk Where It Matters Most

Building on this comprehensive inventory, Pillar’s platform provides a multi-faceted approach to risk analysis:

  • AI Security Posture Management (AI-SPM): The platform maps the connections between internal assets to visualize the complete AI attack surface, analyzing it for policy violations, misconfigurations, prompt/model/tool and supply chain risks.
  • Tailored Red Teaming: Automated, adversarial simulations proactively test this new AI app attack surface. These tests are tailored to the specific business logic of each AI application, moving beyond generic checks to find the most relevant vulnerabilities—from prompt injections to complex data extractions—that could impact core operations.
  • 3rd Party Red Teaming: This analysis extends to external dependencies, which create significant security blind spots. Pillar's red teaming capabilities rigorously evaluate third-party AI apps and agentic flows for data oversharing and security risks before they are integrated in the organization's ecosystem. This provides security teams with comprehensive risk assessments and vendor security scorecards to make informed decisions about their AI supply chain.

Applying Purpose-Built Controls  

Pillar’s platform provides the definitive "Shift Up" controls needed to secure the AI abstraction layer through a combination of real-time prevention and containment:

  • Adaptive Guardrails: These operate in real time to monitor and contain the autonomous actions of AI agents. They enforce business policies and prevent threats like prompt injection and data leakage as they happen.
  • Continuous Learning Loop: The guardrails are uniquely tailored and strengthened for each application through a closed feedback loop. This loop integrates intelligence from three sources: the findings from our ongoing AI Red Teaming, a proprietary threat intelligence feed, and an analysis of live user interactions to automatically adapt defenses.
  • AI Activity Tracing: The platform continuously monitors system activity and collects detailed telemetry on all AI interactions, including prompts, outputs, and tool calls . This provides deep traceability for incident response and the audit trails required for regulatory compliance, while also helping to detect anomalies and potential attacks

Conclusion 

The future of effective AI security is multi-dimensional, extending horizontally across the software lifecycle and shifting vertically to protect the new AI abstraction layer.

For CISOs and security leaders, this means re-evaluating enterprise risk management frameworks. Governance must now extend to the autonomous business processes that AI controls, with a clear understanding that a vulnerability anywhere in the stack can lead to a compromise at the highest level of business logic.

For AI product leaders, this new paradigm is an opportunity. Embracing a "Shift Up" approach and building applications with AI-native security controls from the start leads to safer, more reliable, and more trustworthy autonomous systems. In the AI era, robust security is not a barrier to innovation; it is the foundation upon which it is built, and it will become a powerful competitive advantage.

Subscribe and get the latest security updates

Back to blog

MAYBE YOU WILL FIND THIS INTERSTING AS WELL