This is the first part of a two-part series on AI and cybersecurity. Part 2 will look at how AI is being used to counter new cyberthreats.
Cybersecurity experts are warning that generative AI is creating entirely new classes of threats and transforming attack vectors that were once theoretical into terrifying realities.
The new AI-powered threat landscape now includes real-time deepfake video scams, malware that evolves on its own, and advanced prompt injection that fools chatbots into covertly sharing users’ private information with bad actors.
These developments are being driven by generative AI’s ability to lower the barrier to entry for unsophisticated actors, and to act as a powerful force multiplier for sophisticated, state-level attackers.
Evi...
This is the first part of a two-part series on AI and cybersecurity. Part 2 will look at how AI is being used to counter new cyberthreats.
Cybersecurity experts are warning that generative AI is creating entirely new classes of threats and transforming attack vectors that were once theoretical into terrifying realities.
The new AI-powered threat landscape now includes real-time deepfake video scams, malware that evolves on its own, and advanced prompt injection that fools chatbots into covertly sharing users’ private information with bad actors.
These developments are being driven by generative AI’s ability to lower the barrier to entry for unsophisticated actors, and to act as a powerful force multiplier for sophisticated, state-level attackers.
Evidence was on dramatic display in October 2025, when OpenAI unveiled its new agentic AI browser, Atlas. Researchers demonstrated its dangers almost immediately, showing how the browser’s agent mode, which can take over browsing and interact with webpages, is vulnerable to “prompt injection” attacks. This type of attack uses covert instructions hidden on a webpage to make the AI behave in unintended and malicious ways.
For example, one user demonstrated how hidden code on a webpage could trick the Atlas agent into overwriting a user’s clipboard with a malicious link. If the user later pastes, they could be redirected to a phishing site. That led OpenAI’s own Chief Information Security Officer to acknowledge this as a “frontier, unsolved security problem.” And it’s one of many in a new, rapidly expanding threat landscape.
The new face of social engineering
One of AI’s most immediate and tangible impacts is on the sophistication of social engineering.
While AI has been used to refine phishing emails for years, the new generation of generative AI models enables customized mass spear phishing attacks, according to Angelo Huang, CEO of cloud security company Swif. Customized mass spear phishing attacks means that AI can now create “a nearly perfect message tailored to anybody” to entice a click, he said.
This threat now extends convincingly to voice and video. Pratik Singh, a team leader at software development firm Cisin, pointed to the feasibility of real-time voice cloning that can realistically impersonate people in live phone calls and circumvent voice-recognition security systems.
This capability has moved from phishing emails to “real-time deep fake social engineering,” said Ed Fox, Director of Technology at Tarkenton. He mentioned a widely reported 2024 case in which a Hong Kong-based employee was tricked into wiring $25 million after participating in a video call where the company’s CFO and other colleagues were all AI-generated deepfakes.
Beyond simple impersonation, AI allows for the fabrication of synthetic identity, said Singh. An attacker can generate deepfake videos, voice clones, and an entire artificial online trail on social networks, allowing them to build trust, conduct espionage, or commit fraud.
Fox described this as “industrialized synthetic identity and document fraud,” noting that AI can now fabricate realistic invoices, pay slips, and contracts to feed account-opening fraud, create bogus vendors, or perpetrate fake employee scams.
Creating smarter, faster malware
Beyond impersonation, AI is fundamentally changing how malicious code is created and deployed.
Experts warn of adaptive malware evolution, where AI systems develop polymorphic malware that modifies its code dynamically.
This allows the malware to evade detection and enhance its own attack procedures based on feedback from the environment it has infected.
Mackenzie Jackson, Security Advocate at Ghent, Belgium-based software company Aikido Security, agreed, stating that AI can generate polymorphic malware that “changes itself based on the system it is attacking,” making it significantly harder for traditional signature-based scanners to detect.
AI also automates the discovery of new vulnerabilities. Singh noted that AI can scan massive codebases to identify novel security flaws and generate potential exploits, doing in hours what used to take attackers months.
This leads directly to AI-generated code being used in attacks. Attackers no longer have to manually tweak malware code. They can now use an LLM to rewrite entire sections of it with the same effort, helping them better avoid detection.
An army of ‘script kiddies‘
This level of automation has also democratized cyberattacks.
“AI has effectively erased the skill gap,” said Jerry Chen, co-founder of San Jose-based Firewalla. “Unsophisticated actors can now generate working exploit scripts, automate credential-stuffing, and run IoT or router scans with almost no technical background.”
Jackson described this as enabling the next generation of “script kiddies,” or attackers who possess malicious intent but lack deep technical skill.
“They can now generate and debug malware, follow step-by-step instructions, and adapt payloads through simple prompting,” he said. Specially trained offensive AI models, he added, reduce the barrier almost entirely to intent, not ability.
While AI empowers amateurs, it also acts as a force multiplier for attackers who do know what they’re doing, namely sophisticated state-level actors. Chen said these advanced groups use AI for automated vulnerability discovery, rapid adaptation of exploits, and large-scale synthetic spear-phishing operations.
This was on full display recently in a case where AI lab Anthropic identified and combatted what it called the “first reported AI-orchestrated” cyber espionage campaign. AI performed up to 90% of the attack on its own, doing reconnaissance, vulnerability testing, exploitation, and data collection, all at superhuman speed and scale, Anthropic said.
Underrated threats on the horizon
Looking ahead, experts warn of several plausible, high-impact threats that the cybersecurity community may be underestimating.
A primary concern is the rise of fully autonomous systems. Matthew Wright, a professor of cybersecurity at Rochester Institute of Technology, said that fully autonomous malicious agents will come online in the next two or three years.
“They don’t require rest, they will be able to coordinate in teams, and they can process huge amounts of information at lightning speed,” said Wright. “Rather than using a set playbook, they can randomize their operations among the set of effective options. Every system they attack will generate more data for how they can be even better next time.”
Dave Chronister, founder of Parameter Security, pointed to the looming threat of AI-powered botnets. He fears AI will add intelligence and dynamic adaptability to what are already powerful systems, creating “super botnets” that can grow exponentially large while becoming semi- or fully independent and able to launch relentless, machine-speed attacks.
Other experts are concerned about vulnerabilities inadvertently created by technical professionals. Huang worries about the sheer volume of “AI-generated sloppy code” creating a flood of zero-day vulnerabilities that security organizations are already struggling to handle.
Part 2 of this series will explore the AI-powered defenses that are in development or already available to counter new AI cyberthreats.
Logan Kugler is a technology writer specializing in artificial intelligence based in Tampa, FL, USA. He has been a regular contributor to CACM for 15 years and has written for nearly 100 major publications.