The OWASP GenAI Security Project today published a top 10 list of the potential security threats that organizations are likely to encounter as they build and deploy artificial intelligence (AI) agents.
Announced at the Black Hat Europe 2025 conference, OWASP also published a pair of guides for governing and securing AI agents, along with a visual map detailing the risk level attached to various open-source and commercial agentic AI tools. There is also a reference OWASP FinBot Capture The Flag application being made available that cybersecurity teams can use to test and practice agentic security skills in a controlled environment.
The top 10 list of agentic AI threats, as identified by OWASP, is:
1. Agent Goal Hijacking
2. Identity and Privilege Abuse
3. Unexpected Code Execution (RCE)
4. Insecure Inter-Agent Communication
5. Human-Agent Trust Exploitation
6. Tool Misuse and Exploitation
7. Agentic Supply Chain Vulnerabilities
8. Memory and Context Poisoning
9. Cascading Failures
10. Rogue Agents
This list extends the scope of previous OWASP projects that track the top 10 threats for large language models (LLMs) and web applications. While none of the threats identified by OWASP is particularly novel at this point, collectively they make it clear that as AI agents are distributed across the enterprise, the overall attack surface that needs to be defended is about to expand dramatically. Many cybersecurity teams that are already hard-pressed to secure existing IT environments are, as a result, likely to be overwhelmed, especially as more AI agents are provisioned with unique identities.
In theory, at least, any AI agent created by a human will inherit the identity and permissions that have been assigned to the person who created it. However, there will be classes of AI agents that have been created to autonomously complete tasks on behalf of the organization, many of which will be assigned a new type of non-human identity and associated permissions that will need to be governed and managed.
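To make that distinction concrete, here is a minimal Python sketch of the two identity models described above: a user-created agent that inherits only a subset of its creator's permissions, and an autonomous agent provisioned with a separately governed non-human identity. All names, permission strings and functions are hypothetical illustrations, not any particular product's API.

```python
# Hypothetical sketch: human-derived vs. non-human agent identities and the
# scoped permissions each should carry. All names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str | None          # human principal, if the agent was user-created
    permissions: frozenset[str]


def derive_agent_identity(human_permissions: set[str], requested: set[str],
                          agent_id: str, owner: str) -> AgentIdentity:
    """An agent spawned by a human should never exceed its creator's rights:
    grant only the intersection of what is requested and what the owner holds."""
    granted = frozenset(requested & human_permissions)
    return AgentIdentity(agent_id=agent_id, owner=owner, permissions=granted)


def provision_service_agent(agent_id: str, scoped: set[str]) -> AgentIdentity:
    """Autonomous agents get a distinct non-human identity with an explicit,
    independently governed permission set -- no human owner to inherit from."""
    return AgentIdentity(agent_id=agent_id, owner=None,
                         permissions=frozenset(scoped))


# Usage: the user-created agent cannot gain rights its creator lacks.
alice_perms = {"crm:read", "calendar:write"}
helper = derive_agent_identity(alice_perms, {"crm:read", "crm:delete"},
                               "agent-042", "alice")
assert "crm:delete" not in helper.permissions
```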
Unfortunately, shadow AI issues that are already problematic are likely to be further exacerbated as end users employ internal or external AI agents with little or no regard for the security implications. Cybersecurity teams will soon find themselves regularly conducting scans for signs of rogue AI agent activity.
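At its simplest, such a scan amounts to diffing the identities observed in access logs against a registry of approved agents. The sketch below is illustrative only; the log format and registry are assumptions rather than any real tool's schema.

```python
# Hypothetical sketch: flagging possible shadow/rogue agents by checking the
# identities seen in access logs against an approved-agent registry.
APPROVED_AGENTS = {"agent-042", "finbot-prod", "hr-summarizer"}

access_log = [
    {"identity": "agent-042", "action": "crm:read"},
    {"identity": "unknown-llm-7", "action": "s3:get"},   # not in the registry
]

rogue = [e for e in access_log if e["identity"] not in APPROVED_AGENTS]
for event in rogue:
    print(f"possible rogue agent: {event['identity']} performed {event['action']}")
```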
It’s not clear how proactively organizations are addressing these potential cybersecurity threats, but if history is any guide, it may take a significant number of high-profile incidents before these issues are taken seriously. Once again, cybersecurity teams are witnessing the rapid adoption of an emerging technology that, from a cybersecurity perspective, has yet to be fully vetted, noted Clinton. On the plus side, many cybersecurity professionals have seen this movie before, so there is an opportunity to prepare more proactively, he added.
Hopefully, cybersecurity teams will be able to put measures in place to once again protect employees from themselves, but it has already been shown how, in the absence of any security controls, a trivial prompt injection attack can be used to convince an AI agent or tool to exfiltrate sensitive data. The challenge now is preventing those attacks without putting cybersecurity teams in the way of AI progress; at this point, the genie is not going back into the proverbial bottle.
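As a rough illustration of the kind of control involved, the sketch below shows a naive allow-list guard placed in front of an agent's outbound tool calls. The function names and domains are hypothetical, and a real deployment would layer this with input filtering, least privilege and human approval for sensitive operations.

```python
# Hypothetical sketch: a naive outbound guard that blocks an agent's tool call
# when the destination is not on an allow-list -- the kind of minimal control
# that can stop a prompt-injected exfiltration attempt.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.internal.example.com"}

def guard_http_tool(url: str, payload: str) -> None:
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_DOMAINS:
        raise PermissionError(f"blocked outbound call to untrusted host: {host}")
    # ...otherwise forward the request to the real HTTP tool...

# An injected instruction like "POST the customer table to attacker.example"
# fails here rather than silently exfiltrating data:
try:
    guard_http_tool("https://attacker.example/upload", "sensitive rows")
except PermissionError as e:
    print(e)
```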