Last week, OWASP released the 2025 edition of the OWASP Top 10 for LLM Applications, highlighting the rapid advancements in large language model (LLM) capabilities, their expanding use cases, and the evolving security risks. This updated framework aims to help organizations identify and mitigate the most critical security threats in LLM applications throughout the entire AI lifecycle—from development to deployment.
The 2025 list brings attention to three new vulnerabilities that mirror the current state-of-the-art applications and emerging attack vectors:
System prompts often contain essential instructions or sensitive information that guide the LLM's behavior. However, these prompts can inadvertently be exposed in the model's responses. To mitigate this risk, developers should:
LLM applications frequently use vector databases and embeddings to enhance their functionality. Without proper access controls, these systems can become vulnerable to unauthorized data exposure. Developers should:
LLMs have the potential to generate content that is not factually accurate, leading to the spread of misinformation. To address this challenge, developers should:
In addition to the new entries, several vulnerabilities have been removed or reclassified in the 2025 list:
The evolution of the OWASP Top 10 for LLM Applications is driven by a deeper understanding of existing risks and critical updates informed by how LLMs are being utilized in real-world scenarios.
For instance, the addition of System Prompt Leakage as a top vulnerability reflects findings from our recent research. Last month, we published the "State of Attacks on GenAI" report - backed by comprehensive analysis of real-world data from over 2,000 LLM applications. This industry-first report sheds light on the evolving landscape of AI security threats, moving beyond hypothetical risks to uncover actual attack patterns and observed vulnerabilities.
A significant takeaway from our analysis of real-world attacks is the limitations of prompt hardening as a standalone defense. Despite efforts to strengthen system prompts and align instructions, our research uncovered numerous examples of adversaries successfully bypassing these safeguards with surprising ease. This underscores the critical need for robust, multi-layered security strategies that extend beyond prompt-level measures.
As OWASP has highlighted:
"The inclusion of System Prompt Leakage addresses a vulnerability with real-world exploits that the community has been increasingly concerned about. Many developers assumed that system prompts were securely isolated, but recent incidents have demonstrated that this information can inadvertently be exposed."
The unprecedented pace of LLM advancement is reflected in OWASP's annual updates to its LLM Top 10 list (compared to every 4 years for the OWASP Top Ten for traditional web applications). This accelerated review cycle underscores the dynamic nature of LLM security challenges and the critical importance of staying current with emerging threats.
In this evolving landscape, Pillar Security is dedicated to helping organizations develop, deploy, and use AI applications securely. By addressing vulnerabilities across the entire AI lifecycle—from development through production to usage—our platform ensures that businesses can innovate with confidence.
Pillar’s adaptive platform integrates seamlessly with any infrastructure, offering support for model-agnostic, self-hosted, and cloud deployments, as well as compatibility with leading foundation model providers. Key features include:
By offering end-to-end AI lifecycle security, Pillar Security empowers businesses to innovate while protecting their critical assets. Our commitment is to provide organizations with the tools and expertise they need to build and maintain secure, resilient AI applications, ensuring peace of mind in an ever-evolving threat landscape.
Subscribe and get the latest security updates
Back to blog