OWASP Top 10 for Large Language Model Applications – AI Security in Practice
Published: 2025-01-01

The OWASP Top 10 for Large Language Model Applications began in 2023 as a community-driven effort to identify and address security risks unique to AI applications. As this technology rapidly expands across industries, the associated risks are growing just as quickly.
The 2025 list introduces critical updates that reflect a better understanding of existing risks. Unbounded Consumption now covers resource management and the unexpected costs of large-scale LLM deployments. Vector and Embedding Weaknesses addresses the need to secure retrieval-augmented generation (RAG) and other embedding-based techniques. System Prompt Leakage was added in response to real-world leaks of confidential prompt contents. Finally, Excessive Agency was expanded due to the rise of autonomous LLM agents, which can take risky actions without proper oversight.
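As a rough illustration of the resource-management concern behind Unbounded Consumption, the sketch below caps per-user token usage within a sliding time window before a request ever reaches the model. The budget values and the call_llm helper are hypothetical placeholders, not part of the OWASP guidance.

```python
import time
from collections import defaultdict

# Hypothetical per-user budget guard: cap how many tokens a single user may
# consume within a time window before the request reaches the (costly) model.
MAX_TOKENS_PER_HOUR = 50_000
WINDOW_SECONDS = 3600

_usage = defaultdict(list)  # user_id -> list of (timestamp, tokens)

def within_budget(user_id: str, requested_tokens: int) -> bool:
    """Return True if the user still has token budget left in the current window."""
    now = time.time()
    # Drop entries that fell out of the sliding window.
    _usage[user_id] = [(t, n) for t, n in _usage[user_id] if now - t < WINDOW_SECONDS]
    used = sum(n for _, n in _usage[user_id])
    if used + requested_tokens > MAX_TOKENS_PER_HOUR:
        return False
    _usage[user_id].append((now, requested_tokens))
    return True

# Usage sketch (call_llm stands in for whatever client the application uses):
# if within_budget(user_id, estimated_tokens):
#     response = call_llm(prompt, max_tokens=estimated_tokens)
# else:
#     response = "Request rejected: usage limit reached."
```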
Applications and Vulnerabilities
Large language models, such as those powering ChatGPT and other advanced AI systems, are being deployed in diverse areas, from customer support to optimizing internal processes. As they become more deeply integrated into business systems, new vulnerabilities emerge that bad actors might exploit. The rapid advancement of this technology therefore calls on developers and security experts to craft effective countermeasures to mitigate these threats.
OWASP Top 10 for LLMs
The OWASP Top 10 for LLMs serves as a curated list of the most significant security risks, helping organizations safeguard their AI-driven applications. Through the collaborative efforts of the global OWASP community, it provides a practical guide to understanding and mitigating key vulnerabilities.
Collaboration and Evolution
As AI technology continues to evolve, so does the range of associated risks and protective strategies. This underscores the importance of community collaboration and OWASP's role in ensuring the secure development of artificial intelligence technologies.
LLM01:2025 Prompt Injection – Manipulating inputs to an LLM can lead to unauthorized access, data breaches, and compromised decision-making (a minimal mitigation sketch follows this list).
LLM02:2025 Sensitive Information Disclosure – Failure to protect against revealing sensitive data in LLM outputs can result in legal consequences or loss of competitive advantage.
LLM03:2025 Supply Chain – Dependence on compromised components, services, or datasets undermines system integrity, risking data breaches or failures.
LLM04:2025 Data and Model Poisoning – Malicious manipulation of training, fine-tuning, or embedding data can result in unsafe or unethical model behavior, leading to harmful outcomes.
LLM05:2025 Improper Output Handling – Failure to validate and sanitize model outputs before passing them downstream can lead to code execution, system compromise, and data leakage.
LLM06:2025 Excessive Agency – Granting LLM-based agents broad permissions or autonomy without proper oversight may result in actions that are misaligned with ethical standards or business goals.
LLM07:2025 System Prompt Leakage – Exposure of system prompts can reveal sensitive configuration or business logic and help attackers craft more effective attacks.
LLM08:2025 Vector and Embedding Weaknesses – Flaws in how vectors and embeddings are generated, stored, or retrieved, for example in RAG pipelines, can be exploited to inject harmful content or leak data.
LLM09:2025 Misinformation – The potential for LLMs to generate or propagate false or misleading information can have significant social, legal, and ethical consequences.
LLM10:2025 Unbounded Consumption – Uncontrolled, resource-intensive usage of an LLM can disrupt service availability and drive up operational costs.
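As a minimal, non-exhaustive sketch of the first and fifth themes above (Prompt Injection and Improper Output Handling): keep untrusted user text in its own message role, screen it against a few known injection patterns, and escape model output before it reaches an HTML context. The system prompt and patterns below are illustrative assumptions, not an OWASP-prescribed mitigation.

```python
import html
import re

# Illustrative defence-in-depth for prompt injection and output handling:
# 1. Never concatenate user text into the system prompt; keep it in the user role.
# 2. Reject input that matches obvious injection patterns (assumed examples).
# 3. Treat model output as untrusted data and escape it before rendering.

SYSTEM_PROMPT = "You are a support assistant. Answer only questions about our product."

SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal .*system prompt",
]

def build_messages(user_input: str) -> list[dict]:
    """Reject obviously suspicious input and keep it isolated in the user role."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, user_input, re.IGNORECASE):
            raise ValueError("Input rejected by prompt-injection filter")
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

def render_output(model_output: str) -> str:
    """Escape model output before inserting it into an HTML page."""
    return html.escape(model_output)
```

Pattern matching alone cannot stop prompt injection; it only raises the bar, which is why output-side controls such as escaping and least-privilege downstream actions matter just as much.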
When designing systems that leverage advanced language models such as LLaMA or GPT-4, it is crucial to ensure that their integration with business logic is secure and cohesive. In many cases, pairing the language model with a platform such as Rasa for managing user interactions and decision-making provides a more controlled and transparent way to build solutions. A key aspect is putting proper security measures in place so that the AI operates within organizational requirements without exposing the system to unauthorized changes or data leaks. Designing such systems requires careful control over input data and continuous monitoring of the outputs generated by LLMs so that potential risks can be identified and eliminated. This approach enables applications that are not only innovative but also sound from an information security perspective.
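In practice, the paragraph above comes down to wrapping every model call in input control and output monitoring. The hypothetical guarded_completion wrapper below sketches that idea: it assumes call_llm is whatever client function the application already uses, and the blocked-output markers are placeholders for a real policy.

```python
import logging

logger = logging.getLogger("llm_gateway")

# Placeholder markers standing in for a real output policy.
BLOCKED_OUTPUT_MARKERS = ["BEGIN PRIVATE KEY", "password:"]

def guarded_completion(call_llm, user_input: str) -> str:
    """Wrap an arbitrary LLM client with input checks and output monitoring.

    `call_llm` is assumed to take a prompt string and return a string; swap in
    whatever client the application already uses.
    """
    # Input control: strip non-printable characters and enforce a length limit.
    cleaned = "".join(ch for ch in user_input if ch.isprintable())[:2000]
    logger.info("LLM request: %d chars", len(cleaned))

    output = call_llm(cleaned)

    # Output monitoring: log every response and block obviously sensitive
    # content before it reaches downstream business logic.
    logger.info("LLM response: %d chars", len(output))
    if any(marker.lower() in output.lower() for marker in BLOCKED_OUTPUT_MARKERS):
        logger.warning("Response blocked by output policy")
        return "The response was withheld by the security policy."
    return output
```

Keeping this logic in a single gateway function also gives the monitoring team one place to audit prompts and responses, rather than scattering checks across the codebase.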
This blog post uses materials licensed under Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0). You are free to share, adapt, and use the material, provided that you give appropriate credit, indicate changes, and distribute any adaptations under the same license.