Credit: Gavin Phillips / MakeUseOf
At this point, everyone knows that generative AI tools have a dark side. I’m not talking about the problems with plagiarism and other forms of content theft, or even the huge amounts of energy and water required to keep data centers up and running.
It’s the use of AI to craft dangerous malware and phishing schemes, making it easier than ever for criminals to build entire scam campaigns from just a few prompts.
Vibescamming, as it's known, is quickly becoming one of the biggest problems created by AI—but there are a few ways you can stay safe.
Vibescamming makes fraud simple
Just like vibe coding, all you need is an AI chatbot
In short, vibescamming is phishing powered by AI. It borrows from “vibe coding,” the process of building software by prompting a generative AI tool until you get what you want.
In the same way, vibescamming lets almost anyone launch a phishing scam or cyberattack by describing it to an AI agent. Even people with no coding skills or hacking experience can generate malicious emails, fake websites, and malware just by prompting an AI chatbot.
So, imagine a would-be scammer wants to steal passwords but has no idea how to code a tool to do the job. Let's also assume they don't head to the dark web and try to buy malware off the shelf. Instead of learning how to program a password stealer or build a phishing campaign to lure people in, they can simply prompt an AI tool to do the work for them.
Vibescamming is wild because it lowers the barrier to entry for cybercrime like never before. In the past, a criminal might need to know how to design websites, write convincing English, or code malware. Now they can have an AI do it all.
Another danger is speed and scale. AI allows scammers to automate tasks and scale up attacks much faster than a human could. For example, an AI can quickly personalize phishing emails for thousands of targets (by scraping public info and having the AI draft custom messages for each person). It can also adapt on the fly. If a phishing page link gets blocked, the scammer can ask the AI to modify the code or text and spin up a new version. This agility means phishing campaigns can evolve rapidly to evade defenses.
It doesn’t work on any old chatbot, mind
Some chatbots are far more cautious than others
Vibe coding works with any generative AI chatbot. Some do it better than others, but generally, most of them will give it a go. Thankfully, the same can’t be said for vibescamming.
Most AI chatbots have safety guardrails designed to protect against predictably dangerous uses. For example, ChatGPT declines prompts like “Help me create a website that looks like a Microsoft login page and an SMS message to get people to click it,” explaining that it’s “fraud/phishing and illegal, and I won’t assist with it.”
It’s a similar story across other chatbots; Opera’s agentic Neon browser classified my request as suspicious, while Grok rejected my request as it “violates safety guidelines against social engineering attacks.”
However, as noted, some chatbots are more susceptible to such requests. 2025 research from Guardio Labs, who coined the term “vibescamming,” found a newer AI tool could be tricked into delivering the goods. Lovable, an app designed to facilitate vibe coding, got straight to work planning and designing a phishing campaign for the researchers, “envisioning a sleek, professional design that resembles Microsoft’s interface.”
It deployed a phishing page complete with a fake URL designed to trick the victim. However, Guardio also notes that it didn’t actually have any specific data collection capabilities, and when prompted to add them, it refused. So, it at least pushed back a little on that aspect of the process. Also note that Lovable has since patched this behavior and no longer attempts to create an outline for the phishing campaign.
Jailbreaking generative AI chatbots to unlock the nasty stuff
Generative AI jailbreaks are specially crafted prompts that push the AI to bypass its guardrails.
In the early days of ChatGPT, there were heaps of jailbreaks designed to help “unlock” its true capabilities. These days, folks keep successful ChatGPT jailbreaks close to their chest, with some managing to sell working generative AI hacks for decent money.
So while it feels like there are no more jailbreaks available for ChatGPT, the truth is that people are much more secretive. It’s the only way to stop companies like OpenAI, Google, Anthropic, and Perplexity from immediately closing down these loopholes.
Jailbreaking is one of the only ways to coax an online generative AI chatbot to perform acts outside its guardrails. Otherwise, the chatbot programmers have done their jobs, and the AI refuses to cooperate—not a bad thing when it comes to vibescamming.
Thankfully, you can avoid being vibescammed
The end product isn’t wholly different from existing phishing scams
Now, even with the barriers to creating malware and phishing scams much lower, you’re still on the lookout for the same scams. Scam emails may be getting smarter, but the tell-tale signs that you’re looking at a phishing email haven’t changed.
In other words, you don’t need to overcorrect your security practices to avoid vibescamming; the usual advice still applies.
- Too-good-to-be-true offers: Scammers often promise impossible results, like guaranteed #1 Google rankings, instant 5-star reviews, or miracle health cures. Real services and medicine don’t make these kinds of guarantees.
- Vague or generic senders: Many scam emails come from free Gmail or Yahoo addresses, even when pretending to be a business. Real agencies and companies will use a professional domain.
- Personalization: Or rather, a lack of it. If the email just says “Hello” or “Hi There” without your name or business details, it’s likely a mass-mailed scam. Genuine outreach usually includes some personal detail. “Hello dear” is one of my personal favorites.
- Emotional triggers: Be cautious if an email asks for account logins, upfront payments, or urges you to click on a suspicious link. Similarly, heaps of scams play on fear or urgency, such as miracle cures for serious illnesses or promises of business “domination.” It’s all the same: pressure to make a rash decision.
- Act fast!: If the message tries to rush you into a decision with lines like “Act now!” or “Reply today,” it’s a red flag. Scammers don’t want you to stop and think.
The phishing email red flags are the same; you just might encounter more of them now that basically anyone can become a scammer.
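If you like to tinker, here’s a minimal, purely illustrative Python sketch that checks a message against a few of the red flags above. The keyword lists and thresholds are made up for demonstration; a real mail filter relies on far more than keyword matching.

```python
import re

# Illustrative keyword lists -- real spam filters use far richer signals.
FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}
URGENCY_PHRASES = ("act now", "reply today", "urgent", "last chance", "immediately")
GENERIC_GREETINGS = ("hello dear", "dear customer", "hi there")

def phishing_red_flags(sender: str, subject: str, body: str) -> list[str]:
    """Return the red flags from the checklist above that this email trips."""
    flags = []
    text = f"{subject}\n{body}".lower()

    # Vague or generic sender: a "business" mailing from a free webmail domain.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL_DOMAINS:
        flags.append(f"sender uses a free webmail domain ({domain})")

    # Lack of personalization: a generic greeting instead of your name.
    if any(greeting in text for greeting in GENERIC_GREETINGS):
        flags.append("generic greeting, no personalization")

    # Pressure to act fast.
    if any(phrase in text for phrase in URGENCY_PHRASES):
        flags.append("urgency / 'act fast' language")

    # Too-good-to-be-true promises.
    if re.search(r"guarantee|miracle|risk[- ]free|100% safe", text):
        flags.append("too-good-to-be-true promise")

    return flags

# Example: a message that trips several of the checks above.
print(phishing_red_flags(
    sender="support-team@gmail.com",
    subject="Act now: your account will be suspended",
    body="Hello dear, we guarantee #1 Google rankings. Reply today!",
))
```

Running this on the sample message flags the free webmail sender, the generic greeting, the urgency language, and the too-good-to-be-true promise—the same judgment calls you’d make by eye.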
AI-developed malware is now a reality
For a while, the idea of AI-developed malware and phishing campaigns was fanciful. AI tools weren’t powerful enough to create anything particularly dangerous, and what they could do wasn’t much different from existing threats.
That changed over the course of 2025, with more frequent sightings of AI-developed malware in the wild and in use in active campaigns. In November 2025, Google’s Threat Intelligence Group reported on two different types of malware developed with various AI tools that actually call back to AI tools for instructions.
Furthermore, in August 2025, AI developer Anthropic found its Claude chatbot being used as part of an enormous malware campaign, with attackers using AI to design and launch attacks that tied back to its platform.
Are these vibescamming attacks? I’d say given their complexity, these are a little more nuanced than vibescamming, but they illustrate just how easy it is for AI to be used for extremely dangerous tasks. And these are general, public AI tools. Imagine what’s going on with the numerous powerful local AI tools that can have their guardrails removed.