NASHVILLE — AI security was a bigger topic than ever at the ISC2 Security Congress 2025 here this week, with many speakers addressing the difficulties of protecting against AI-powered attacks, and the difficulties of protecting AI itself.
There wasn’t much discussion of how AI might help defenders. Rather, the consensus seemed to be that many AI model providers are shortchanging security in the rush to develop, and that AI itself will only amplify the types of cyberattacks that are already commonplace.
There was also a lot of skepticism about the purported benefits of agentic AI, and the huge attack surface it opens up.
“AI is not your friend,” said the inde…
NASHVILLE — AI security was a bigger topic than ever at the ISC2 Security Congress 2025 here this week, with many speakers addressing the difficulties of protecting against AI-powered attacks, and the difficulties of protecting AI itself.
There wasn’t much discussion of how AI might help defenders. Rather, the consensus seemed to be that many AI model providers are shortchanging security in the rush to develop, and that AI itself will only amplify the types of cyberattacks that are already commonplace.
There was also a lot of skepticism about the purported benefits of agentic AI, and the huge attack surface it opens up.
“AI is not your friend,” said the independent investigative journalist and blogger Brian Krebs during his keynote address on Thursday (Oct. 30). “I don’t know if it’s your enemy, but it’s starting to feel that way.”
AI is already helping the bad guys
AI is supercharging existing risks and attack vectors, Krebs said during a rather pessimistic presentation that also addressed other cybersecurity issues.
“There are very few challenges in cybersecurity that are made easier by AI,” he said.
In Krebs’ opinion, the data scraping by AI makers to train large language models is indistinguishable from that of malicious botnets. He said some AI builders had even admitted using residential proxy networks in which home broadband users are often unaware that their bandwidth and IP addresses are being exploited.
“I don’t think most cybersecurity experts grasp the size of the residential proxy networks out there,” Krebs said. “They’re a primary source of unwanted traffic, from ticket scalpers to cybercriminals.”
That unwanted traffic, he said, now includes LLM crawlers, which Krebs said are overwhelming server resources and Git repositories.
He also predicted that we’d see more self-replicating threats, such as the “Shai-Hulud” NPM worm that recently made the rounds, because it will be easier for cybercriminals to develop them with AI.
“The time to address such weaknesses is yesterday,” Krebs said.
Joseph Carson, Chief Evangelist and Advisory CISO at Segura, was almost as gloomy in a presentation he gave Tuesday (Oct. 28).
“A lot of crime has moved online,” Carson said, referring to how ordinary street criminals moved into cybercrime during the COVID years. “And in the past couple of years, a lot of it is being done with AI.”
Carson’s work involves negotiating with ransomware attackers, and he said that AI is making that part of his job tougher and a little surreal.
“Attackers can now gather data on targets in real time,” he said, giving them an advantage during discussions about the size of ransoms. “Last year, for the first time, I realized I was talking to an AI bot instead of a ransomware negotiator.”
We already know that AI makes certain aspects of cybercrime easier to accomplish, but Carson illustrated that by showing screenshots of how he used ChatGPT to crack his own password just by telling the LLM he could only partly remember it.
ChatGPT walked him through the use of the cracking tool HashCat and even matched the known hash against a long list of possible permutations to find the password.
“There’s a whole cybercrime ecosystem out there,” said Carson. “Criminals are using AI to rewrite their code.”
Not everyone seems to mind weak AI security
What’s more concerning than cybercriminals using AI to turbocharge old forms of attacks? Entirely new forms of attacks upon generative AI and AI agents, and what appears to be a lack of concern among some of the companies rushing to develop AI as fast as they can.
Alex Haynes, CISO at IBS Software in the United Arab Emirates, said in a presentation Wednesday (Oct. 29) that AI guardrails and other protections differ widely among AI makers.
Anthropic has strong safety guardrails (which filter output) and security guardrails (which filter input) turned on by default, and many of Anthropic’s guardrails can’t be disabled, he said.
On the other hand, OpenAI, the company that Anthropic’s founders left over purported safety concerns, has all guardrails turned off by default, Haynes said. Only the safety ones, which make sure ChatGPT doesn’t say anything racist or dangerous, are configurable.
Most of the other big U.S.-based models, including Microsoft Copilot, Google Gemini and Amazon’s Bedrock AI-development framework, seem to fall somewhere in between.
“There are a whole lot of AI startups that don’t have a lot of appreciation for the amount of trust their customers place in them,” said Krebs in his Thursday keynote.
As for the Chinese model DeepSeek, it does have guardrails, Haynes said, but they don’t seem to work very well.
He had fun showing how DeepSeek ordinarily refuses to answer questions about the Tiananmen Square massacre of 1989, but that it will comply if you ask it to relate the details of the incident in the voice of a 15th-century French peasant.
Likewise, DeepSeek will refuse to code malware for you if you ask it directly, Haynes showed, but is happy to do so if you ask it to act as a theoretical lawless LLM that has no boundaries.
For better or for worse, third-party guardrails are now appearing, Haynes said. Meta offers a free Llama Firewall, and Nvidia has its free Nemo Guardrails, but paid options are also available.
“You can integrate guardrails with an agent, or have a one-to-one relationship, or have one set of guardrails protecting several agents,” Haynes said.
Unfortunately, the block rate of security guardrails is generally only about 80% accurate, Haynes added. Web application firewalls (WAFs), by comparison, block almost 100% of threats, although Haynes said he thinks that AI guardrails will steadily improve.
He also noted that some guardrails will protect against prompt injections and other natural-language-based attacks only in certain languages. Amazon Bedrock will implement guardrails only in English, French or Spanish, Haynes said, while Llama’s guardrails filter out attacks written in about eight languages.
Haynes said that if you do implement guardrails around your AI model — and you probably should — it’s very important to make sure they’re set up properly.
“Misconfigured guardrails will break your AI model,” he said.
The perils and promise of agentic AI
One thing that the ISC2 speakers agreed on is that agentic AI, and the protocols and connectors that have sprung up around it, is fundamentally different from generative AI like LLMs.
“Agentic AI will need a different threat framework than LLMs,” said John Bates, Senior Manager of Cybersecurity and Responsible AI at Ernst & Young, in a talk Wednesday. “It has new and different vulnerabilities and challenges.”
Big business and AI vendors are driving the adoption of agentic AI and even changing the definitions of “AI agent” and “agentic AI” according to how it suits them, Bates said. What’s less clear is who really wants agentic AI and how effective it will be.
Gartner has predicted that at least 40% of agentic AI projects will be cancelled by 2027, Bates said, and that a Carnegie Mellon study found that AI agents failed 70% of time for multi-step office tasks.
Those numbers echo the disappointment rate for generative AI, Bates added, citing statistics showing that 95% of GenAI pilots fail, that developers are not using AI coders as much as they could, and that coding tools slow down development due to “verification tax.”
Overall, agentic AI and its highly privileged access into traditional applications seems to have created a huge attack surface.
“Agentic AI environments that developers are using are pretty risky,” Krebs said in his Thursday keynote.
He singled out Anthropic’s widely adopted Model Context Protocol (MCP), which uses server-like interfaces to act as intermediaries between AI agents and regular applications, as an example.
“They’re essentially stand-alone servers that don’t have a lot of security,” Krebs said. “They don’t do a good job of segregating user traffic. OAuth is optional and rarely turned on. Attacks on MCP servers are no longer theoretical.”
For his part, Haynes said AI agents are something we’ve never seen before. They are both assets and identities, and they are insecure by default.
“Agentic AI is vulnerable to all the common threats you know and love,” Haynes said.
You’ve got to protect AI agents as best you can, Haynes added, but it’s very difficult to monitor them directly, as you can’t peer into their black boxes. Rather, you’ve got to look for indirect signs of malicious activity.
“See if the security or safety guardrails have blocked a lot of requests, or if the API rate limits have been reached,” Haynes said.
And many current-generation infosecurity tools don’t do much to protect AI agents, Haynes said. Dynamic application security testing (DAST), vulnerability management, attack surface management and continuous threat exposure management (CTEM) can’t secure AI for the moment.
Secure application security testing (SAST), endpoint detection and response (EDR) and layer 3 and 4 firewalls offer some protection, he said, but only WAFs, proxies and identity and access management (IAM) and identity providers (IDPs) are essential in securing AIs.
“It’s very similar to the transition between on-prem and cloud a decade or so ago,” Haynes said. “When you’ve got an AI agent exposed online, you’ve got to put a WAF in front.”
Haynes doesn’t discount the promise of agentic AI, but he does counsel keeping a close eye on AI agents.
“Agents will behave a little bit like gremlins,” he said. “They will cheat to achieve an objective and use tools in ways you haven’t thought of.”
In his Tuesday talk, Carson echoed that guarded enthusiasm about AI agents.
“Agentic AI is our digital companion, and we have to train it properly, to tell it what’s real and what’s fake, what’s good and what’s bad,” Carson said. “Train your AI like it’s your best cybersecurity analyst, because one day it will be.”