Let’s say you left your computer unlocked at a café, and somebody wanted to take advantage of that. How would they do it? Well, they’d methodically search through all your data looking for anything that could be useful. They’d adapt as they go, looking in all sorts of places: reading files in your downloads and documents folders, looking through browser history, emails, cloud storage, and so on. Then they’d take immediate, customized actions based on what they find.
They wouldn’t, for example, just look for a crypto wallet, and then if there isn’t one say, "Ah well, no crypto wallet, I guess there’s nothing useful on this laptop."
But that’s how most computer viruses work. Most viruses go into a device with a fixed goal of, say, looking for a crypto wallet. Because that’s all traditional software can do. It’s not very good at adapting.
AI is changing this. AI is extremely adaptable. It can act like the thief in the café, and look for all sorts of ways to take advantage of your information. It can look at all the files, all the applications. It can extract way more value from a single device than a traditional virus.
There are a couple of barriers here. Cost is a big one. It’s expensive to run AI that’s smart enough to power a virus like this. And if you try, the AI companies might see all your API calls, catch on, and ban you.
But now, hardware companies are shipping chips built for on-device AI, so models can run locally on your computer. And AI companies are releasing models designed to work directly on your device: Google’s Gemma, Microsoft’s Phi.
As on-device AI spreads, it’s going to get a LOT easier to make AI viruses. It’ll be cheaper to do and harder to detect. These sorts of adaptive viruses are going to become much more common.
There are a ton of big cybersecurity failures each year that give bad actors full access to huge numbers of computers. But in general, only a small fraction of the people or companies affected (much less than one percent) end up getting materially exploited. (The people with the crypto wallets, e.g.)
When adaptive viruses can just look for whatever might be useful on every laptop, suddenly these cybersecurity problems can get a lot more dangerous.