In October, OpenAI launched Atlas, its ChatGPT-powered browser designed to go head-to-head with Google Chrome. Perplexity has Comet. Opera (remember them?) unveiled Neon. Mozilla, which built its entire brand on being the browser you can trust, just announced an "AI Window" for Firefox. Google keeps weaving Gemini deeper into Chrome.
The message from Silicon Valley is unmistakabl…
In October, OpenAI launched Atlas, its ChatGPT-powered browser designed to go head-to-head with Google Chrome. Perplexity has Comet. Opera (remember them?) unveiled Neon. Mozilla, which built its entire brand on being the browser you can trust, just announced an "AI Window" for Firefox. Google keeps weaving Gemini deeper into Chrome.
The message from Silicon Valley is unmistakable. All these companies are promising browsers that don’t just load web pages but actually understand them. Browsers that can shop for you, summarize your emails, book your travel, and handle the tedious stuff while you do something more interesting. The pitch is compelling. The security situation is not.
The security team for the Brave browser (which is also introducing AI features) published a series this fall showing just how vulnerable AI browsers are to prompt injection, a type of attack where hidden instructions manipulate an AI into doing things the user never asked for. In tests with Perplexity’s Comet, researchers embedded invisible commands inside an image on a web page. When a user asked the browser to summarize the page, it instead navigated to the user’s Perplexity account, extracted their email address, and sent that data to an external server. No approval requested, none given.
Another test demonstrated that OpenAI’s Atlas could be manipulated by instructions hidden in ordinary online documents, causing it to change settings without user consent. OpenAI’s chief information security officer acknowledged on X that prompt injection remains "a frontier, unsolved security problem." The company launched Atlas anyway.
So far, the demonstrated attacks have been fairly limited — email addresses, verification codes, browser settings. But the vulnerabilities don’t get smaller as the capabilities get bigger. Google has already announced a payments protocol that lets AI agents buy things on your behalf while you sleep. The same prompt injection tricks that steal an email today could drain a bank account tomorrow.
The gateway to everything is too valuable to wait
So why the rush? Because browsers aren’t just browsers anymore. For three decades they were windows to the web. Now they’re becoming command centers for AI agents that can access your emails, calendars, documents, shopping carts, and bank accounts.
Controlling that interface means controlling the relationship between users and basically everything online. When Perplexity made a $34.5 billion bid for Chrome earlier this year, the company’s chief business officer explained the logic plainly. The browser gives AI companies "a much bigger surface area" and access to far more context about users.
The financial math is straightforward. Google’s Chrome serves roughly 3 billion users and has dominated the market for a decade. OpenAI’s ChatGPT attracts 800 million weekly users, but many of them access it through Chrome. For OpenAI, getting those users into its own browser means capturing data that would otherwise flow to Google, creating new advertising opportunities, and reducing dependence on a competitor’s infrastructure.
The “everything app” has been a Silicon Valley white whale for years, one that’s never materialized in the West no matter how hard companies have tried to recreate China’s WeChat, which lets users message, pay bills, book doctors, order food, and shop without ever leaving the app. But with hundreds of billions flowing into AI, the industry is making another run at it. Browsers are the fastest path to that vision. Even if they’re not ready for prime time.
The experts are worried but the market isn’t listening
In December, Gartner advised enterprise clients to block AI browsers entirely. The research firm warned that default settings in these products prioritize user experience over security, leaving organizations exposed to prompt injection attacks and data leakage. Analysts also flagged a more mundane risk: employees using AI agents to complete mandatory security training on their behalf.
Security researchers who study these vulnerabilities tend to arrive at the same uncomfortable conclusion. Prompt injection isn’t a bug that can be patched. It’s a class of attacks that will exist as long as AI models read text that attackers can influence. The recommended mitigations include limiting what AI agents can do, restricting access to private data, and maintaining constant human oversight.
But constant oversight defeats the whole point. The value proposition of these browsers is that they handle things autonomously while you do something else. The moment you stop watching, you’re trusting an imperfect system with imperfect defenses to make decisions using your credentials and your data.
The AI companies understand this tradeoff. They’ve decided to ship anyway. In the race to become the new gateway to the internet, being first apparently matters more than being safe. For users, the result is a new generation of tools that promise to handle the internet for us, even as they make us more vulnerable to it than ever.