Executive Summary
Imagine visiting a webpage that looks perfectly safe. It has no malicious code, no suspicious links. Yet, within seconds, it transforms into a personalized phishing page.
This isn’t merely an illusion. It’s the next frontier of web attacks, where attackers use generative AI (GenAI) to build a threat that loads only after the victim has already visited a seemingly innocuous webpage.
This article demonstrates a novel attack technique in which a seemingly benign webpage uses client-side API calls to trusted large language model (LLM) services to generate malicious JavaScript dynamically, in real time. Attackers could use carefully engineered prompts to bypass AI safety guardrails, tricking the LLM into returning malicious code snippets. The LLM service API returns these snippets, which are then assembled and executed in the victim’s browser at runtime, producing a fully functional phishing page.
This AI-augmented runtime assembly technique is designed to be evasive:
- The phishing page’s code is polymorphic: each visit produces a unique, syntactically different variant
- The malicious content is delivered from a trusted LLM domain, bypassing network-based analysis
- The malicious code is assembled and executed only at runtime, so it never appears in the page’s static source
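To make the runtime-assembly pattern concrete, the following is a minimal, deliberately defanged sketch. It does not reproduce any observed attack code: the "LLM response" is a hard-coded, harmless stand-in (in a real attack this would come back from an LLM service API, different on every visit), and all identifiers such as `llmResponse` and `assembleAndRun` are illustrative names, not taken from the campaign.

```javascript
// Defanged illustration of client-side runtime assembly.
// Stand-in for fragments returned by a trusted LLM service API;
// a real attack would receive a polymorphic variant per visit.
const llmResponse = {
  fragments: [
    "const banner = 'Please verify your account';",
    "return banner.toUpperCase();",
  ],
};

// Runtime assembly: the fragments only become a program once they
// are concatenated and executed in the visitor's browser via the
// Function constructor. Nothing suspicious exists in the page's
// static HTML/JS, which is why static scanners see a benign page.
function assembleAndRun(fragments) {
  const source = fragments.join("\n");
  return new Function(source)();
}

const result = assembleAndRun(llmResponse.fragments);
console.log(result);
```

Because the executable logic exists only transiently in memory, detection has to happen at the point of execution rather than in the delivered page content, which is the motivation for the runtime behavioral defenses discussed below.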
The most effective defense against this new class of threat is runtime behavioral analysis that can detect and block malicious activity at the point of execution, directly within the browser.
Palo Alto Networks customers are better protected through the following products and services:
- Advanced URL Filtering
- Prisma AIRS
- Prisma Browser with Advanced Web Protection
The Unit 42 AI Security Assessment can help empower safe AI use and development across your organization.
If you think you might have been compromised or have an urgent matter, contact the Unit 42 Incident Response team.