Google’s Threat Intelligence Group (GTIG) has seen several new and noteworthy ways in which malware is leveraging artificial intelligence, going beyond the productivity gains AI already provides attackers.
For some time now, cybercriminals and state-sponsored threat actors have been using AI to develop and enhance malware, plan attacks, and create social engineering lures.
The cybersecurity industry has also observed and demonstrated the potential for malware to utilize AI during execution.
For instance, the PromptLock ransomware, which made headlines a few months ago over its use of AI to generate scripts on the fly and perform various actions on compromised systems, is an experimental proof-of-concept developed by researchers.
However, Google researchers have come across several other pieces of malware that use AI during an attack. While some of them are “experimental threats” in the same vein as PromptLock, others have been used in the wild.
Another experimental AI-powered malware seen by Google is PromptFlux, a dropper that can “regenerate” itself by rewriting its code and saving the new version in the Startup folder for persistence.
“PromptFlux is written in VBScript and interacts with Gemini’s API to request specific VBScript obfuscation and evasion techniques to facilitate ‘just-in-time’ self-modification, likely to evade static signature-based detection,” GTIG researchers explained.
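GTIG’s description does not say which Gemini model or endpoint PromptFlux queries, and the malware itself is VBScript, but the artifact defenders can look for is an outbound call to the public Gemini generateContent API. What follows is a minimal Python sketch of that request shape, offered purely as illustration: the API key, model name, and prompt are placeholders, not indicators from Google’s report.

```python
# Minimal sketch of a Gemini generateContent request, showing the network
# artifact this class of malware produces. Model name, key, and prompt are
# placeholders, not details from GTIG's report.
import requests

API_KEY = "REDACTED"  # hypothetical; real malware would embed its own key
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent"
)

payload = {"contents": [{"parts": [{"text": "<prompt text>"}]}]}

resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=payload, timeout=30)
resp.raise_for_status()

# Generated text sits under candidates -> content -> parts in the response.
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```

In PromptFlux’s case, the returned text would be a rewritten copy of the script itself; here the response is simply printed.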
One of the pieces of malware seen in the wild is FruitShell, a reverse shell written in PowerShell that enables arbitrary command execution on compromised systems. The malware includes hardcoded AI prompts designed to bypass detection and analysis by AI-powered security solutions.
Another malware family highlighted by GTIG is PromptSteal, a Python-based data miner that leverages the Hugging Face API to query the Qwen2.5-Coder-32B-Instruct LLM in order to generate one-line Windows commands for collecting system data and documents from specific folders.
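The Hugging Face API and the Qwen2.5-Coder-32B-Instruct model are named in Google’s findings, but the request below is otherwise a hypothetical reconstruction: the token is a placeholder and the prompt is deliberately benign. Where PromptSteal would execute the command the model returns, this sketch stops at printing it.

```python
# Sketch of a Hugging Face Inference API call against the model GTIG names.
# The token is a placeholder and the prompt is intentionally harmless; the
# generated command is printed, never executed.
import requests

HF_TOKEN = "hf_REDACTED"  # hypothetical token
MODEL_URL = (
    "https://api-inference.huggingface.co/models/"
    "Qwen/Qwen2.5-Coder-32B-Instruct"
)

payload = {
    "inputs": "Write a one-line Windows command that prints the hostname.",
    "parameters": {"max_new_tokens": 64, "return_full_text": False},
}

resp = requests.post(
    MODEL_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()

# The serverless inference API returns a list of generated_text objects.
print(resp.json()[0]["generated_text"])
```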
The last example highlighted by Google is QuietVault, a JavaScript credential stealer designed to collect npm and GitHub tokens. The malware uses an AI prompt and AI command-line interface tools installed on the compromised host to hunt for other secrets on the system.
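Google’s findings, as relayed here, do not include detection guidance, but the trait these families share (hardcoded prompts and calls to public LLM APIs from script files) suggests a crude hunting heuristic. The sketch below is a hypothetical illustration rather than a signature from the report; the endpoint list and prompt patterns are assumptions.

```python
# Hypothetical hunting heuristic (not from Google's report): flag scripts
# that embed public LLM API endpoints or prompt-like strings, the common
# trait of PromptFlux, FruitShell, PromptSteal and QuietVault.
import re
from pathlib import Path

# Illustrative indicators only; a production detection would be far broader.
LLM_ENDPOINTS = [
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face inference
]
PROMPT_MARKERS = re.compile(
    r"(you are an? |ignore (all|any) previous|respond only with)",
    re.IGNORECASE,
)

def scan(root: str, exts=(".ps1", ".vbs", ".js", ".py")) -> None:
    """Print any script under `root` containing an LLM indicator."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in exts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = [ep for ep in LLM_ENDPOINTS if ep in text]
        if PROMPT_MARKERS.search(text):
            hits.append("embedded prompt-like string")
        if hits:
            print(f"{path}: {', '.join(hits)}")

scan(".")  # scan the current directory tree
```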
“While still nascent, this represents a significant step toward more autonomous and adaptive malware,” GTIG researchers said, later adding, “We are only now starting to see this type of activity, but expect it to increase in the future.”
Google’s report also covers other aspects of threat actors’ use of AI. The tech giant has seen threat actors use prompts that amount to ‘social engineering’ of the chatbot itself in order to bypass AI guardrails.
The company also warns that the underground marketplace for AI tools is maturing. Its researchers have seen multifunctional tools designed for malware development, phishing, and vulnerability research.
“While adversaries are certainly trying to use mainstream AI platforms, guardrails have driven many to models available in the criminal underground,” explained Billy Leonard, tech lead at Google Threat Intelligence Group. “Those tools are unrestricted, and can offer a significant advantage to the less advanced. There are several of these available now, and we expect they will lower the barrier to entry for many criminals.”
In addition, nation-state actors linked to China, Iran and North Korea have continued to use Google’s Gemini to enhance reconnaissance, data exfiltration, command and control systems, and other components of their operations.
Related: How Software Development Teams Can Securely and Ethically Deploy AI Tools
Related: Claude AI APIs Can Be Abused for Data Exfiltration
Related: AI Sidebar Spoofing Puts ChatGPT Atlas, Perplexity Comet and Other Browsers at Risk