Cybersecurity vendors peddling nonsense isn’t new, but lately we have a new dimension — Generative AI. This has allowed vendors — and educators — to peddle cyberslop for profit.
I’m starting a new ongoing series called CyberSlop, to record a cluster of activity.
Activity cluster one — MIT and Safe Security
Earlier this year, MIT released a working paper and made a webpage claiming that around 80% of ransomware attacks involve Generative AI. Here’s a snapshot of their website:
And here’s a snapshot of the paper:
The paper is almost complete nonsense. It’s jaw-droppingly bad. It’s so bad it’s difficult to know where to start.
It’s credited to Michael Siegel and Sander Zeijlemaker at MIT, along with Vidit Baxi and Sharavanan Raajah at Safe Security. It is unclear to me how those involved didn’t realise the problems. Even the central premise of the research doesn’t make sense.
Michael Siegel from MIT sits on the technical advisory board of Safe Security, along with Debayan Gupta from MIT.
I highlighted the paper online, as CISOs kept forwarding it to me, saying I’m wrong about Generative AI not being a major part of ransomware group operations (it isn’t, by the way — I track them). So I read the paper, started posting about it online, and it disappeared:
Before I get started, MIT have talked to a journalist — she reports:
I’m surprised anybody is baffled, given you can see how it was presented above. The authors also presented it at their own conference:
It’s also in multiple MIT PDFs and pages, and in the cybersecurity press.
The PDF itself has been replaced with one claiming you’re accessing the “Early Research Papers” section of the MIT website… which is, well, an angle, and certainly not a good one, given how far this had spread.
MIT’s own website, not calling it an “Early Research Paper”
I’ve noticed the MIT website has been through some rewrites. The Generative AI narrative has been completely removed, and there’s no note anywhere that anything has changed.
So, what’s wrong with the paper? What’s right might be a better question.
I’ll leave Marcus to start:
The paper lumps almost every ransomware group in as using AI, without a source (well, they appear to cite CISA — who don’t say that). It is, of course, just nonsense.
The paper treats things like Emotet as AI-like (total nonsense, and it’s also a historic threat), and claims ransomware groups which disappeared before Generative AI existed were using GenAI… There’s just so much going on here that it’s unbelievable this was put into the public domain like this.
The paper is gone from the internet… although you can find it on Internet Archive forever, if you want a laugh (albeit at the expense of the harm this kind of thing causes downstream defenders):
Even today, the Financial Times cited the piece:
Financial Times piece today
The Financial Times piece links to the MIT webpage from which the claims were silently removed.
There’s really just so much wrong with it that, well, it’s depressing. The authors must have known it was nonsense — as soon as it drew critique online, it was scrubbed. It carried the name of the Director of Cybersecurity for MIT Sloan on the front page, and had been widely circulated online, unchallenged, for months.
So what’s going on here?
CISOs are scared of Generative AI threats because vendors, and now educational institutions that should know better, scare them — not because of the actual level of threat, but for financial motivations.
In this case, MIT Sloan are paid by Safe Security, and the principal researcher on the report sits on Safe Security’s board. Safe Security provide Agentic AI solutions. MIT’s report pitches the need for a new approach, with citations linking to Safe Security webpages. Safe Security cite developing their product with MIT.
The incentives are… not well managed here, and the industry is very sick. Everybody just played along, and the result is CISOs being presented with the wrong information.
The Generative AI craze started in 2022 — we’re over three years in. If you ask any serious cyber incident response company what initial access vectors drive incidents, they all tell you the classics: credential misuse (from info stealers), exploits against unpatched edge devices, and so on.
This isn’t a theory — it comes from the actual incident response data of the people responding to cyber incidents for a living. I do it. Generative AI ransomware is not a thing, and MIT should be deeply ashamed of themselves for claiming they studied the data from 2,800 ransomware incidents and found 80% were related to Generative AI. There’s a reason MIT deleted the PDF when called out.
If you don’t believe me, you can ask the truck load of actual experts:
Or even ask an AI chatbot, if that’s all you’ll believe (irony, I know):
Or go read the yearly vendor reports.
It should be noted that CrowdStrike’s report doesn’t mention Generative AI or GenAI once… but they do summarize at the end that the #1 thing orgs need to do is implement their agentic AI product, as “threat actors adopt AI to strike faster, scale operations, and evade detection” — with no evidence of this offered at all.
The single best thing you can do to counter real world cyber incidents — not LinkedIn GenAI pornography — is to follow through with foundational security.
For example, phishing is just phishing. If your entire org can be crippled by one phishing email, you don’t need GenAI to go bankrupt — you just need somebody to email 100 staff from an AOL account in broken English. Somebody will open it.
The fault is yours for flying a plane where the front may fall off at any moment if a passenger reclines their seat incorrectly — that isn’t the passenger’s fault, and telling passengers not to press the Front Fall Off button isn’t good management.
Nobody will deny Generative AI can be misused. But everybody working real world ransomware incidents will tell you GenAI isn’t the real world problem now — it is, in every case, lack of security foundations.
I spent minutes on this
You need to make your organisation cyber resilient by doing foundational cybersecurity and IT — by which point, you will be equipped to deal with future threats. Generative AI’s potential is more of the same.
Customers doing foundational security doesn’t make as much money for vendors (or any at all, for AI-specific vendors) compared to getting you into an ecosystem of new products and services. Remember that incentive.
Don’t do this. Break the habit.
What can be done about it?
I think we, cybersecurity practitioners in the trenches, need a voice to call nonsense out. The balance of power is tilted incorrectly. CyberSlop, this series, will be a place for that.
Cyberslop, the word, is your power. Let’s be very clear about the need to investigate when things don’t pass the sniff test, and call out organisations peddling cyberslop.
I don’t think MIT should allow researchers to submit papers for organisations whose boards they sit on without any disclosure of the relationship in the papers — they really need to look at the overall picture of what happened here, as it feels wrong. The paper, obviously, needs to disappear and undergo extensive peer review — including by somebody, anybody, who has actually responded to a ransomware incident.
If you look at Safe Security’s website, it’s wave after wave of baffling claims about Generative AI — it’s so nuts it’s difficult to know where to start there, either. Why are MIT Sloan staff sitting on the board of this company?
Emotet isn’t a ransomware family and didn’t use AI, Ryuk never used AI, LockBit didn’t use AI models — the website is just full of junk.
I intend to use CyberSlop to call out indicators of compromise, where people at organisations are using their privilege to harvest wealth through cyberslop. I’m hoping this encourages organisations publishing research to take a healthier view of how they talk about risk, by imposing a cost on bad behaviour.
My belief is cyber defenders can push back against cyberslop by pointing and laughing at it. It’s damaging the ability to defend. We need our voice back. The sighing stops, even if it means the GenAI money train stops.
IoCs
cams.mit.edu 18.4.38.193
safe.security
https://cams.mit.edu/wp-content/uploads/Safe-CAMS-MIT-Article-Final-4-7-2025-Working-Paper.pdf
Michael Siegel Sander Zeijlemaker Vidit Baxi Sharavanan Raajah
TTPs
- Linking random ransomware groups to Generative AI usage, without foundation.
- Linking ransomware groups who predate Generative AI with Generative AI usage.
- Saying trojans are ransomware groups, and linking them to AI incorrectly.
- The lead named researcher sitting on the board of the company paying for the research.
- Running the paper all over the place online, then, when questioned, deleting the PDF, rewording things, and pretending the paper was only ever an early research paper.
MIT’s response
I contacted MIT’s PR people earlier today, but have not heard back. I will update this story if I get a response. I may also run a further story, as I have uncovered additional details.
Call to action for vendors and academic researchers
Please speak to incident response staff. If you get asked to put your name to something you know isn’t true, don’t put your name on it.
Next on CyberSlop
Safe Security are not alone here. This is, sadly, going to be a series until there are signs that cybersecurity vendors slow down wee’ing their cyberslop all over their customers. More clusters of cyberslop activity have been identified, and IoCs for hunting are coming.