STAMP: 2025-01-01 // OPERATOR: Christopher Keruac

OWASP Top 10 for Large Language Model Applications – AI Security in Practice

OWASP Top 10 AI Security

The OWASP Top 10 list for Large Language Model Applications was created in 2023 as a community initiative to identify and address security issues unique to AI applications. As this technology rapidly evolves across industries, the associated risk factors are growing just as fast.

The 2025 list introduces critical updates that reflect a better understanding of existing threats. Unbounded Consumption now covers resource management and unexpected costs in large-scale LLM deployments. Vector and Embedding Weaknesses addresses the need to secure RAG techniques and embedding stores. System Prompt Leakage has been added in response to real-world leaks of system prompts. Finally, Excessive Agency has been expanded due to the rise of agentic LLM architectures that can take risky actions without proper oversight.
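
To make the Unbounded Consumption category concrete, here is a minimal sketch of a per-user request and token budget enforced before each model call. The class and limit names (UsageTracker, MAX_REQUESTS_PER_MINUTE, MAX_TOKENS_PER_HOUR) are illustrative assumptions, not part of the OWASP material.

import time
from collections import defaultdict

# Assumed limits; tune them to the deployment's cost and availability targets.
MAX_REQUESTS_PER_MINUTE = 20
MAX_TOKENS_PER_HOUR = 50_000

class UsageTracker:
    """Sliding-window budget for requests and tokens per user."""

    def __init__(self):
        self.requests = defaultdict(list)  # user_id -> request timestamps
        self.tokens = defaultdict(list)    # user_id -> (timestamp, token_count)

    def allow(self, user_id: str, estimated_tokens: int) -> bool:
        now = time.time()
        # Drop entries that fell out of the sliding windows.
        self.requests[user_id] = [t for t in self.requests[user_id] if now - t < 60]
        self.tokens[user_id] = [(t, n) for t, n in self.tokens[user_id] if now - t < 3600]

        if len(self.requests[user_id]) >= MAX_REQUESTS_PER_MINUTE:
            return False
        if sum(n for _, n in self.tokens[user_id]) + estimated_tokens > MAX_TOKENS_PER_HOUR:
            return False

        self.requests[user_id].append(now)
        self.tokens[user_id].append((now, estimated_tokens))
        return True

A gateway would call allow() before forwarding a prompt to the model and return an error (or queue the request) once the budget is exhausted.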

Applications and vulnerabilities

Large language models such as ChatGPT, along with other advanced AI systems, are deployed in many areas, from customer service to internal process optimization. The more deeply they are integrated into business systems, the more new vulnerabilities emerge that malicious actors can exploit. The pace of this development also requires developers and security experts to design effective countermeasures to mitigate these threats.

OWASP Top 10 for LLM

The OWASP Top 10 for LLM serves as a curated list of the most relevant security threats, helping organizations secure their AI-powered applications. Through the collaborative effort of the global OWASP community, it provides a practical guide to understanding and mitigating key vulnerabilities.

Collaboration and evolution

As AI technology evolves, so does the scope of associated threats and protection strategies. This highlights the importance of community collaboration and OWASP's role in ensuring the safe development of AI technology.

LLM01:2025 Prompt Injection – Manipulating inputs to the LLM can lead to unauthorized access, data breaches, and compromise of decision-making processes.

LLM02:2025 Sensitive Information Disclosure – Failure to protect confidential data (personal information, credentials, proprietary business details) in LLM outputs can result in privacy violations, legal consequences, or loss of competitive advantage.

LLM03:2025 Supply Chain – Dependency on compromised components, services, pre-trained models, or datasets undermines system integrity, risking data breaches or failures.

LLM04:2025 Data and Model Poisoning – Malicious manipulation of training, fine-tuning, or embedding data can introduce backdoors and biases, resulting in unsafe or unethical model behavior.

LLM05:2025 Improper Output Handling – Insufficient validation and sanitization of LLM-generated output before it is passed to downstream components can lead to cross-site scripting, code execution, and system compromise (see the sketch after this list).

LLM06:2025 Excessive Agency – Granting LLM-based systems too much functionality, permission, or autonomy without proper oversight can lead to damaging actions inconsistent with ethical standards or business goals.

LLM07:2025 System Prompt Leakage – Exposure of system prompts can reveal internal rules, configuration details, or credentials and help attackers craft more effective attacks.

LLM08:2025 Vector and Embedding Weaknesses – Weaknesses in how vectors and embeddings are generated, stored, or retrieved in RAG pipelines can allow attackers to inject harmful content or extract sensitive data.

LLM09:2025 Misinformation – LLMs that generate false or misleading information which appears credible can cause serious social, legal, and ethical consequences.

LLM10:2025 Unbounded Consumption – Uncontrolled, resource-intensive inference can disrupt service availability, enable model extraction, and drive up operational costs.
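
Several of these risks can be reduced with straightforward engineering controls. As a hedged example for LLM05 (Improper Output Handling), the sketch below treats model output as untrusted data and encodes or validates it before it reaches a browser or a downstream interpreter; the function names and the identifier pattern are illustrative assumptions.

import html
import re

# Strict pattern for cases where the model is only allowed to return an identifier.
ALLOWED_IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def render_model_output(text: str) -> str:
    # Escape LLM output before embedding it in an HTML page (mitigates XSS).
    return html.escape(text)

def validate_identifier(text: str) -> str:
    # Accept only a plain identifier when the model is asked for a table or column name.
    candidate = text.strip()
    if not ALLOWED_IDENTIFIER.match(candidate):
        raise ValueError("Model output rejected: not a valid identifier")
    return candidate

The same principle applies to the other output-driven risks: anything an LLM produces should pass through the same validation and encoding layers as ordinary user input.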

When designing systems that use advanced language models such as LLaMA, GPT-4, and others, it is crucial to ensure their secure and consistent integration with business logic. In many cases, combining hosted models such as Gemini with local SLMs for managing user interactions and decision-making processes provides a more controlled and transparent way to build solutions. A key aspect is putting proper security measures in place so that the AI operates within organizational requirements without exposing the system to unauthorized changes or data leaks. Designing such systems requires careful control over inputs and continuous monitoring of LLM outputs to identify and eliminate potential threats. This approach enables the creation of applications that are not only innovative but also secure from an information security perspective.
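
The following sketch illustrates that input-control and output-monitoring loop. The filters, the logger, and the call_model stub are assumptions standing in for whichever backend (GPT-4, LLaMA, a local SLM) the application actually uses.

import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-guard")

# Illustrative filters: an obvious injection phrase on input, a crude card-number pattern on output.
SUSPICIOUS_INPUT = re.compile(r"ignore (all|previous) instructions", re.IGNORECASE)
SENSITIVE_OUTPUT = re.compile(r"\b(?:\d[ -]*?){13,16}\b")

def call_model(prompt: str) -> str:
    # Placeholder for the real LLM backend call.
    return "stubbed model response"

def guarded_completion(user_input: str) -> str:
    # 1. Input control: reject obvious injection attempts before they reach the model.
    if SUSPICIOUS_INPUT.search(user_input):
        log.warning("Blocked suspicious input: %r", user_input[:80])
        return "Request rejected by policy."

    answer = call_model(user_input)

    # 2. Output monitoring: log and redact responses that look like sensitive data.
    if SENSITIVE_OUTPUT.search(answer):
        log.warning("Redacted possible sensitive data in model output")
        answer = SENSITIVE_OUTPUT.sub("[REDACTED]", answer)

    return answer

In a production system these checks would typically be backed by dedicated guardrail tooling and centralized log analysis rather than two regular expressions, but the control points stay the same: before the prompt reaches the model and before the response reaches the user.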

This blog post uses materials licensed under Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0). You are free to share, adapt and use the materials, provided you give appropriate credit, provide a link to the license, and distribute any adaptations under the same license.