Anyone who is remotely tech savvy doesn’t need Google to tell them ‘Phishing is one of the top ways bad actors intrude and steal data.’ We recently witnessed a phishing attack on the npm supply chain. Luckily it was caught before having major repercussions. But it got me thinking: what steps can we take to prevent it from happening again? Or at least reduce the probability. I filed away the thought at the time.
A few days later, I was doing some cookie work and learned about the Public Suffix List. I found it interesting because it admits ‘there was and remains no algorithmic method’ for a software problem: helping browsers figure out how to restrict cookie sharing across domains. It’s not often I see software engineers, myself included, accept algorithmic defeat. I also loved how dead simple the solution is. What could be easier than maintaining a list of suffixes?
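To make the ‘just a list’ idea concrete, here’s a rough sketch of how a suffix list lets software resolve a hostname to its registrable domain. The entries below are a tiny hand-picked subset for illustration; the real PSL has thousands of rules, including wildcards and exceptions that this sketch ignores.

```python
# Toy subset of a public suffix list. The real PSL is far larger and
# includes wildcard and exception rules not handled here.
SUFFIXES = {"com", "co.uk", "github.io"}

def registrable_domain(host: str) -> str:
    """Return the longest matching public suffix plus one more label."""
    labels = host.lower().split(".")
    # Try candidate suffixes from longest to shortest.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in SUFFIXES:
            # One extra label to the left of the suffix, if there is one.
            return ".".join(labels[i - 1:]) if i > 0 else suffix
    return host

print(registrable_domain("foo.example.co.uk"))  # example.co.uk
print(registrable_domain("alice.github.io"))    # alice.github.io
```

Note that `github.io` being on the list is what keeps `alice.github.io` and `bob.github.io` from sharing cookies, even though an algorithm looking only at dots couldn’t tell it apart from `example.co.uk`.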
It brought me back to phishing. What if my browser had a list of domains I’ve whitelisted as safe and trustworthy (ex. npmjs.com)? When I visit a domain that looks similar to, but doesn’t exactly match, one of those domains (ex. npmjs.help), the browser could show me a warning. Not as extreme as the bold red ‘dangerous site’ warning Chrome shows, but something to give pause, with an option to still continue. Perhaps something similar to what I see when I visit a site without a valid SSL certificate.
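A minimal sketch of that check, assuming the browser already has the user-curated list and has reduced the visited hostname to its registrable domain. The heuristic here covers only the pattern from the example, a familiar name under a different suffix; a real implementation would also want edit-distance and homoglyph checks, and the `TRUSTED` entries are just placeholders.

```python
# Hypothetical user-curated whitelist; entries are examples, not real policy.
TRUSTED = {"npmjs.com", "github.com"}

def label(domain: str) -> str:
    """Leftmost label of a registrable domain, e.g. 'npmjs' in 'npmjs.com'."""
    return domain.split(".")[0]

def should_warn(domain: str) -> bool:
    """Warn when a domain mimics a trusted one without matching it exactly."""
    if domain in TRUSTED:
        return False  # exact match: trusted, no warning
    # Same leftmost label as a trusted domain, but a different suffix:
    # the classic lookalike pattern (npmjs.help vs npmjs.com).
    return any(label(domain) == label(t) for t in TRUSTED)

print(should_warn("npmjs.com"))    # False: exact match
print(should_warn("npmjs.help"))   # True: lookalike
print(should_warn("example.org"))  # False: unrelated
```

Exact-label matching is deliberately narrow so it never fires on unrelated sites; broadening it to catch typosquats like `npnjs.com` is where the threshold-tuning (and the false-warning trade-off) would come in.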
The list would be curated by me. An algorithm wouldn’t decide whether a domain is added to my anti-phishing list. So when a warning appears, I’m not taken by surprise. I’ll know the thought process behind it. Browsers may not have enough information to maintain this list algorithmically in the first place: browsing history is often cleared, spread across multiple silos, and incognito mode prevents tracking altogether. The list would also ideally follow me across browsers and devices.
You might be thinking: will people actually spend time whitelisting their frequently visited domains? Most won’t. Just like most people don’t set up 2FA or use password managers. To reduce the burden, a browser could prompt me to add a domain to this list when it sees I visit it often. I would only have to click yes or no. If I click yes too liberally, the worst case is a false warning that I can easily bypass.
If even 1% of the world’s most popular open source maintainers use this feature to reduce their chances of getting phished by 10%, I still think it would add tremendous business value. And like the PSL which inspired this idea, it has no algorithmic component. It’s just a human-made list.