The Google Threat Intelligence Group put out a report today showing some genuinely novel applications of AI in malware. There have already been plenty of examples of AI making cyberattacks easier (Anthropic published a report on this back in August). But Google's new report shows how bad actors (many of them state-backed, operating out of places like Russia, North Korea, and China) have moved past just using AI to write the code for viruses: they're building viruses that use AI as a core part of how they work internally, at runtime. One of the most interesting examples is "PROMPTFLUX," which tries to evade antivirus detection by using AI to constantly rewrite its own source code. Dynamic code obfuscation is already a common malware technique, but there's a limit to how much a program can vary its own code without a human programmer's oversight. PROMPTFLUX isn't subject to that limitation, and antivirus software may find it quite challenging to adapt. The malware doesn't actually seem to have been widely deployed yet, but it's a sign of the kind of cybersecurity threat that AI is making possible.