In a tradeoff between performance vs. privacy + fairness, Google picked performance.
In October, Google Chrome announced that they would ship a new feature, “Intent to ship: Cache sharing for extremely-pervasive resources.”
Normally, these “Intent to ship” (I2S) announcements garner only a handful of “LGTM” (Looks Good To Me) responses. This one pulled in 46 responses and 8,000+ words of argument.
I’m going to try to summarize the arguments here as best I can, giving you the background you’d need to understand the discussion.
tl;dr
- Around 2020, Google fixed a privacy/security bug (XS history leaks) that allowed any website to know what other websites you visited
- The fix caused popular shared scripts (Google Analytics, YouTube player, Facebook pixel) to be downloaded and redownloaded over and over again for each site that used them
- Google plans to cache a short list of very popular scripts across all sites in Chrome, the way all scripts were cached back in 2019. (Many of those scripts are Google’s own!)
- This will give those popular scripts a special advantage over competitive upstarts (an advantage they already had, before 2020)
- Google’s plan would make it easier for websites to “track” you across the web…
- …but Google Chrome (uniquely among major browsers) allows all sites to track you freely via third-party cookies. (You can opt-out of third-party cookies in Chrome settings, but almost nobody does.)
- So, Google plans to make popular scripts cacheable only for users who haven’t opted out of third-party cookies. These users have no tracking protection, so adding a new tracking method does no further harm. (?!)
- Overall, this is a tradeoff between performance on the one hand, and privacy + fairness on the other hand. Unsurprisingly, Google prioritized performance in this case.
Links I read so you don’t have to
- Intent to ship: Cache sharing for extremely-pervasive resources (8,000 words)
- Chrome’s design doc for the feature. (2,000 words)
- The list of 2,000+ URLs for which Chrome plans to use single-key caching. (I only skimmed this!)
- Mozilla’s negative position statement on this feature, including replies from Google. (2,000 words)
- Alex Russell’s article, Cache and Prizes. (4,000 words) This is Alex’s attempt to conclusively answer the question, “Why don’t you just put [popular framework] in the browser?”
(“Cache and Prizes” is a great article, but it assumes that you already know what XS history leaks are, what fingerprinting is, what double-key caching is, and what Subresource Integrity is. When you’re done reading my article, you’ll have enough background to read Alex’s article and the other links listed here.)
Background: You don’t want websites to know which web pages are in your browser history (“XS history leaks”)
In the past, privacy bugs in the browser gave websites the ability to detect which other websites you’d visited recently, e.g. which bank sites you use, or whether you’ve visited Pornhub recently.
In the bad old days, websites could deduce this via “cache timing.” It works like this: to tell whether someone has visited Pornhub recently, try requesting a URL from pornhub.com, e.g. the site logo. If that URL returns fast, faster than a typical network request, then the user must have Pornhub’s site logo in their browser cache, so they must have visited Pornhub pretty recently.
It was even possible to figure out which pages the user visited on Pornhub, by timing the cache on those pages (or images linked from those pages).
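To make the attack concrete, here’s a rough sketch of what a cache-timing probe looked like under the old shared cache. The URL and the timing threshold are made-up examples, and real attacks were more careful about network variance, but the core trick really was this simple:

```typescript
// Illustrative sketch of a cache-timing probe against the old single-key cache.
// The URL and threshold below are hypothetical examples.
async function probablyVisited(resourceUrl: string, thresholdMs = 20): Promise<boolean> {
  const start = performance.now();
  // "no-cors" lets a page request a cross-origin resource opaquely: the body
  // is unreadable, but the time the request takes is still observable.
  await fetch(resourceUrl, { mode: "no-cors", cache: "force-cache" });
  const elapsed = performance.now() - start;
  // A response far faster than any plausible network round trip suggests the
  // resource was already sitting in the (shared) HTTP cache.
  return elapsed < thresholdMs;
}

// e.g. probablyVisited("https://www.pornhub.com/favicon.ico")
```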
Using these techniques, it was possible to detect stuff like, “this browser was logged into Bank of America, uses Google credentials, has looked at map tiles in Boise, Idaho and regularly uses the web from 12:00 to 22:00 GMT.” (That’s an actual quote from Google about the privacy impact of XS history leaks.)
This bug stuck around for a surprisingly long time. Google Chrome only got around to fixing it in 2020. Firefox rolled out the fix in 2021. Safari fixed it years before that, at least partially in 2013, and standardized with other browsers on a “double-key caching” solution in 2020.
Browsers fixed this by partitioning the cache by website (“double-key caching”)
The fix was to give each website its own, separate (“partitioned”) cache of all resources, even third-party resources.
Since Chrome 86 (2020), if Medium.com requests a resource from Pornhub (say, the site logo), the browser won’t reuse the copy cached when you visited Pornhub directly; it will download the logo again from scratch, once for each site that requests that same URL.
This is called “double-key caching.” The idea is that when you cache files (“resources” like JavaScript, CSS, images, etc.) you store them in a cache with a “cache key.” In 2019, Chrome used a “single key,” the URL, for all cached files. Since 2020, there are now two keys: the URL, and the website domain that requested that URL.
Google has an article with cute images explaining cache partitioning
(Technically browsers now use an even more refined technique called “triple-key caching,” which includes the key of any website embedded in an iframe. The details aren’t important for this discussion, but you’ll sometimes hear people refer to “triple-key caching.”)
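If it helps to see the idea in code, here’s a conceptual sketch of the cache key. The type names and the function are mine, purely for illustration; they’re not Chrome’s actual data structures:

```typescript
// Conceptual illustration of cache keys, not Chrome's actual implementation.
type SingleKey = { resourceUrl: string };                        // how Chrome cached in 2019
type DoubleKey = { resourceUrl: string; topLevelSite: string };  // Chrome 86+ (2020)
// "Triple-key" adds the embedding iframe's site as a third member of the tuple.
type TripleKey = { resourceUrl: string; topLevelSite: string; frameSite: string };

function doubleKey(resourceUrl: string, topLevelSite: string): DoubleKey {
  return { resourceUrl, topLevelSite };
}

// Same URL, different top-level sites: two distinct cache entries, so each
// site downloads (and stores) its own copy of the same logo.
doubleKey("https://pornhub.com/logo.png", "https://medium.com");
doubleKey("https://pornhub.com/logo.png", "https://example.com");
```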
Today, double-key caching is forcing everyone to download the YouTube player N times for N sites we visit (1MB each time)
The embedded YouTube player is enormous. It’s literally a megabyte of JavaScript, CSS, and images (mostly JS), not including any video.
In the old days of single-key caching, the huge size of YouTube’s player wasn’t necessarily so bad, because you could download it once and then keep using it out of the cache.
But now, with double-key caching, each site that loads the YouTube player downloads its own, separate copy of the player, at 1MB each!
And it’s not just the YouTube player. What about Google Maps? Google ReCAPTCHA? Google Analytics? unpkg.com/lodash? We’re all forced to download these files again and again and again, just in case someone tries to use them to violate our privacy.
Despite this, the performance impact of double-key caching was surprisingly low
https://github.com/shivanigithub/http-cache-partitioning includes data that Google used to decide to enable double-key caching (triple-key caching, actually) instead of a single-key shared cache.
When we try to find something in the cache, but it’s not there, that’s called a “cache miss.” We’d expect that switching from single-key caching to triple-key caching would increase cache miss rates, and it did increase… by only 3.6%.
- Total cache miss rates: +3.6%
- Cache miss rates for 3rd party fonts: +33%
- Cache miss rates for 3rd party javascript files: +16%
- Cache miss rates for 3rd party css files: +13%
It would have been great if double-key caching had persuaded YouTube’s team to slim down their player script, but it didn’t
One hope, as partitioned caching rolled out, was that the teams behind resources used across many sites (YouTube, Maps, etc.) would slim down their JavaScript, making the whole web faster for everyone.
lite-youtube is a drop-in replacement for YouTube’s player script; it’s only 3kb. Surely Google could ship something similar?
But, with the benefit of hindsight, that didn’t happen. The YouTube player is still 1MB today, and there’s no sign that they intend to fix this.
Now: Chrome proposes to use a single-key cache for “extremely pervasive resources” (like Google’s own properties)
This would allow the browser to avoid downloading/redownloading those resources, and only those, because they’re used *everywhere* across the web.
The core idea is that you’re not going to successfully violate anyone’s privacy just by knowing that they’ve downloaded Google Analytics recently or the YouTube player, because everybody’s done that.
Here’s Chrome’s proposed list of 2,000+ URLs to put in a single-key cache. A few of the top items on the list of URLs:
- Google Analytics
- ReCAPTCHA
- Facebook Pixel
- Cloudflare Insights Beacon
- Google Maps
- YouTube player
- unpkg.com/lodash
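Put together with the third-party-cookie gate described later in this article, the proposed lookup logic boils down to something like this. It’s a drastic simplification of the design doc, the real logic lives in Chrome’s C++ network stack, and the two list entries shown are just illustrative stand-ins:

```typescript
// Drastically simplified sketch of the proposed cache-key decision.
// Chrome's real list has 2,000+ entries; these two are illustrative stand-ins.
const PERVASIVE_RESOURCES = new Set<string>([
  "https://www.googletagmanager.com/gtag/js", // stands in for Google Analytics
  "https://www.youtube.com/iframe_api",       // stands in for the YouTube player
]);

function cacheKeyFor(
  resourceUrl: string,
  topLevelSite: string,
  thirdPartyCookiesAllowed: boolean,
): string {
  if (thirdPartyCookiesAllowed && PERVASIVE_RESOURCES.has(resourceUrl)) {
    // Single key: one shared copy, no matter which site embeds the resource.
    return resourceUrl;
  }
  // Double key (the status quo since Chrome 86): a separate copy per top-level site.
  return `${topLevelSite} ${resourceUrl}`;
}

// Same URL, two different sites: one shared cache entry if the URL is on the
// pervasive list and third-party cookies are allowed; two entries otherwise.
cacheKeyFor("https://www.youtube.com/iframe_api", "https://medium.com", true);
cacheKeyFor("https://www.youtube.com/iframe_api", "https://example.com", true);
```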
These URLs are the “winners” Chrome has picked, and now they’ll load faster than other URLs
Were you considering using PostHog instead of Google Analytics? Well, there’s a problem with that: PostHog’s script isn’t on the list of extremely pervasive resources.
As a result, Google Analytics will be basically free to download. PostHog will take extra time and extra disk space.
Were you considering adding the Vimeo embedded player on your site? The user will have to download it. If you use YouTube’s player instead, the user will probably already have it downloaded.
To mitigate some privacy concerns, Chrome will only turn this feature on for users who haven’t opted out of an unrelated privacy setting
This is one of the weirdest parts of this proposal!
Background: Third-party cookies in Chrome can track you from website X to website Y
I mentioned earlier that websites used to be able to forcibly extract your browser history with cache timing. Well, it turns out that cache timing had another use, too: “cross-site tracking.”
“Tracking,” in this sense, means recognizing that the user who just visited website X is the “same user” as the user who visited website Y, where both websites X and Y actively want to do that (i.e. not Pornhub). For example, perhaps website Y is Amazon, and they want to show you ads on website X for products you were recently browsing.
In the old days, cross-site tracking was trivially easy with third-party cookies. Website Y would set a cookie in your browser; when you navigate to website X, it could make a request to website Y. The cookie would be attached, and then Y would know that you’re the same visitor it saw before, and Y could tell X who you are.
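Here’s a minimal sketch of what the tracker endpoint on website Y does. Everything in it (the cookie name, the ID scheme, the port) is hypothetical; the point is that the attached cookie alone is enough to link your visits across sites:

```typescript
// Hypothetical third-party tracker endpoint ("website Y"), using only Node's
// built-in http module. The cookie name and ID scheme are made up.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

createServer((req, res) => {
  // If the browser attached our cookie, this is a visitor we've seen before,
  // no matter which site (website X) embedded this request.
  const existingId = /(?:^|;\s*)uid=([^;]+)/.exec(req.headers.cookie ?? "")?.[1];
  const userId = existingId ?? randomUUID();
  const embeddingSite = req.headers.referer ?? "unknown";

  console.log(`user ${userId} seen on ${embeddingSite}`);

  // In real life this must be served over HTTPS for "SameSite=None; Secure"
  // cookies to be accepted by the browser.
  res.setHeader("Set-Cookie", `uid=${userId}; SameSite=None; Secure; Max-Age=31536000`);
  res.end();
}).listen(8080);
```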
Firefox and Safari have disabled third-party cookies, but Google Chrome hasn’t disabled third-party cookies. Google said they were going to disable third-party cookies (“3PCD”), then backtracked in 2025. They briefly considered prompting all users with a “standalone prompt” asking users whether they wanted third-party cookies, but Google decided not to do that, either, with the goal of “ensuring a sustainable, ad-supported internet.”
You can turn off third-party cookies in Google Chrome today to protect your privacy, but you have to opt in to that choice, and, more importantly, you have to know how to do so.
Single-key caching could be used to track you by “fingerprinting,” even without cookies
“Fingerprinting” is a way of tracking users from website to website by incidental features about them, often probabilistically.
Websites automatically get to know a user’s IP address (which conveys an approximate location) and a “user agent” string, which typically includes the user’s OS version and browser version. Just a little bit more information is often enough to give you a unique fingerprint, an ID that can be used to track you across sites. Perfect protection against fingerprinting is an extremely hard problem (perhaps impossible).
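To get a feel for how little extra signal is needed, here’s a toy passive-fingerprinting function that hashes a handful of values every browser exposes anyway. Real fingerprinting libraries use dozens more signals (and the server also sees your IP address, which isn’t available to client-side code); this is only to illustrate the idea:

```typescript
// Toy example of passive fingerprinting: hash a few signals the browser
// exposes anyway into a stable identifier. Real libraries use far more.
async function naiveFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                   // OS + browser version
    navigator.language,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    String(navigator.hardwareConcurrency),
  ].join("|");
  // SHA-256 the combined signals into a compact hex string.
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
  return Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, "0")).join("");
}
```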
Google’s design doc identifies three ways that single-key caching could make fingerprinting easier, even when restricted to extremely pervasive resources:
- Unique Payload: Google could serve up a unique version of analytics.js for every user, including an ID number
- Direct Fingerprinting: Set a 32-bit user ID by agreeing on 32 unique URLs and having Website Y load just some of those URLs, one per bit. Website X can then run cache timing on those URLs to read back 32 bits of information, more than enough to convey a unique ID. (Sketched after this list.)
- Ephemeral Fingerprinting: Website X could try to observe existing, out-of-date files in the cache.
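Here’s an illustrative sketch of that “Direct Fingerprinting” scheme, reusing the cache-timing probe from earlier in this article. The URLs and the bit layout are made up:

```typescript
// Illustrative sketch of "Direct Fingerprinting": 32 agreed-upon URLs, one per
// bit of the user ID. All URLs here are hypothetical.
declare function probablyVisited(url: string): Promise<boolean>; // the cache-timing probe sketched earlier

const BIT_URLS = Array.from(
  { length: 32 },
  (_, i) => `https://cdn.example-pervasive.com/bit-${i}.js`,
);

// Website Y "writes" an ID by warming the cache for each URL whose bit is 1.
async function writeFingerprint(id: number): Promise<void> {
  await Promise.all(
    BIT_URLS.map((url, i) =>
      (id >>> i) & 1 ? fetch(url, { mode: "no-cors" }) : Promise.resolve(),
    ),
  );
}

// Website X "reads" the ID back by cache-timing each URL.
async function readFingerprint(): Promise<number> {
  let id = 0;
  for (let i = 0; i < BIT_URLS.length; i++) {
    if (await probablyVisited(BIT_URLS[i])) id |= 1 << i;
  }
  return id >>> 0; // treat the result as an unsigned 32-bit value
}
```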
Now: Google plans to enable single-key caching only for users who already allow fingerprinting via third-party cookies
This point is subtle, and it requires following along with everything up until this point.
- Single-key caching allowed both XS history leaks (which are terrible) and cookieless fingerprinting
- Single-key caching “extremely pervasive resources” would not allow XS history leaks, but would re-enable cookieless fingerprinting
- But third-party cookies already support easy fingerprinting, and Chrome already supports third-party cookies by default
- So, if the user allows third-party cookies, then adding a new way to fingerprint is no big deal!
Google’s design doc identifies a few other anti-fingerprinting mitigations, but they concede that those mitigations aren’t as good as double-keyed caching. But, overall, since third-party cookies have left the fingerprinting barn door wide open, it’s hardly any worse to have a new fingerprinting method, amirite?
If Chrome does this, opting out of third-party cookies to increase privacy will make the web slower again
Chrome’s plan is to link this performance feature to users who haven’t opted out of third-party cookies.
Under that plan, disabling third-party cookies (which improves your privacy) will revert you back to a fully partitioned cache, forcing you to download the YouTube player N times for N sites. You’ll be back to today’s status quo.
Mozilla’s negative position statement
Here’s Mozilla’s statement in full. My goal for this article is to give you enough background to read and understand Mozilla’s statement as written.
Mozilla regards this feature as detrimental to the health of the web, so takes a strongly negative position.
We agree that this would likely result in performance gains and cost savings for users, as it means that content is retrieved from cache, rather than fetched.*
The drawbacks of this feature far outweigh any benefits:
Privacy — The proposal includes a bunch of mechanisms that it falsely claims improve privacy. These protections are insufficient. The only viable way for a design like this to protect privacy is to have a uniform, fixed cache across a large population of clients.
However, we note that the proposal is to only enable the feature when users have third-party cookies enabled. That makes any privacy protection unnecessary and performative (note that we think Chrome should stop enabling tracking in that way, it is bad for the web). We do want to point out that this creates a performance penalty for choosing to disable tracking cookies. We cannot condone punishing people for choosing to protect their interests in that way.
Gatekeeping — This establishes browsers as gatekeepers of what can or cannot be cached. This might be mitigated by establishing objective rules for inclusion.
Centralization/Stagnation — This sets up a small number of successful players with a competitive advantage on any upstarts. The prerequisite for inclusion in the list is wide success, but a competitor now has a significant performance disadvantage to overcome in order to reach parity with entrenched players.
Consider the YouTube player, which is more than 1Mb of stuff, mostly script, not including any video. If YouTube has a 1Mb head start, that creates a strong incentive to use their player over any competitor without that advantage. It also makes expanding the size of these resources less costly, letting incumbents pack more features in which competitors can’t reasonably do.
Consider also Google Analytics or Facebook’s pixel, which are near the top of this list. Having scripts available will greatly accelerate the display of ads for Google’s and Meta’s customers, giving a strong advantage over competitors. Ads are a highly performance-sensitive business where the tiniest of advantage allows the winner to take a disproportionate amount of the market. This would give these large incumbents an unfair advantage.
We understand that this is being presented as being outside of standardization: Chrome would be able to ship the feature without affecting interoperability. This is true, and to that extent, Mozilla doesn’t have much right to tell Google what it does with its product. However, given our position that this is strongly detrimental to the Web, we feel we must push back on this idea.
*The claims about disk usage are best addressed by deduplication on disk. However, we doubt that it is worth it: effective deduplication spends compute to save disk, but compute is often more scarce. We’d welcome more research on this, particularly on low-end devices.
Google posted a response that you can read for yourself.
Google’s final word on the matter
Google’s official statement comes from Rick Byers, a technical lead working on Google Chrome. He posted on December 16, two months after the initial “Intent to Ship” announcement, thousands and thousands of words of argument later.
Sorry for my long delay. We’ve been talking about this feature in a number of forums and I’ve been doing some research. After a lot of consideration and debate I’ve decided that I’m, on balance, supportive of shipping, LGTM2.
Since this is a particularly difficult and contentious tradeoff I’ve taken the time to summarize my reasoning here for those who are interested.
The debate in this space is fundamentally a tradeoff among multiple goals, all important to Chromium:
Security
• This feature has a fundamental security cost in terms of potentially enabling a class of history leak attacks. Certainly the threat model would be simpler if we could argue that origins are always perfectly isolated from each other.
• Pat has done a ton to drive the risk to being as low as possible in practice, and has a set of additional mitigations he’s going to add.
• As much as I have been a strong proponent for investing in strong isolation (site isolation, visited link partitioning, etc.), personally I don’t believe that history leaks is something we would ever choose to defend perfectly against. eg. sites can trick users into sharing information they got from one website with another (via clipboard, downloaded files), and will likely always be able to find sites they can attack via various side-channels (measuring performance during a background page load)
• Clearly the right tradeoff isn’t to have an unpartitioned cache (even though we lived in that world for a long time without it being a total disaster). But the question is whether the other extreme of a perfectly partitioned cache all the time is the sweet spot or not.
• While the Chrome security team is not happy about this feature, they are also not arguing that it can’t ship.
Openness (decentralization)
• Mozilla argues that this feature advantages popular 3P resources over unpopular ones. That is a legitimate concern. But also that seems to me to be an inherent property of caching generally and I don’t see anyone arguing that browsers shouldn’t have caches at all so that repeat visits to a website are slowed down to be comparable to the performance of visiting a brand new website, or that CDNs are evil for putting their most popular resources closer to the edge or in RAM where they’ll be found quicker by users.
• What seems really unique about this feature to me is that it allows less-visited websites to share caching benefits with each other. For example, when a user is shopping by visiting a variety of small e-commerce merchants (who all happen to use similar popular tech stacks), a shared cache can allow most of them to benefit from the load of the first, making the user experience a bit more comparable to one where someone does all their shopping on one single aggregator storefront.
• To me, having an architecture which enables a large number of small sites to pool resources is more important to the openness of the ecosystem than trying to slightly offset the effect that caching has on popular resources in general.
• Of course this looks very different for browsers where 3PCs are disabled by default, I would not expect such browsers to see this feature as a good cost-benefit tradeoff. But since Chromium has 3PCs enabled by default, I think it’s reasonable that our cost/benefit tradeoff analysis would be much more positive.
Power of the web
• I spent some time digging into the history of shared resources across applications in various operating systems. It’s definitely the case that modern operating systems have been moving more and more to an isolated-by-default model.
• Nonetheless, every modern OS other than the web has SOME mechanism for cross-app code sharing (often as an exception to the general pattern of isolation, at least on mobile platforms).
  – iOS is most limited to specific cases/UI like file providers used by multiple apps, which you could argue is a different sort of app composition model that the web should model (fenced frames using the top level cache?).
  – But all desktop OSes rely on some form of cross-app code sharing, especially for very popular libraries like the Microsoft Visual C++ runtime (in the WinSxS cache)
• In particular it seems inevitable to me that we’re going to need some mechanisms for sites to be able to share massive AI models with each other — at least on desktop platforms. Perhaps we’ll eventually want to design a purpose-built API for this with security at its core, but this feature could be a useful step in exploring the real-world benefits of such an API and what security mitigations may be appropriate (eg. in registering a model file as “pervasive” somehow).
• I am personally very reluctant to just declare that cross-origin resource sharing is a capability the web should not have. It feels instead like a tradeoff to be managed (like other powerful capabilities), and the sweet spot for tradeoffs is rarely at one extreme of the spectrum.
Performance
• Initial limited experiments show a pretty small positive impact on performance from this feature. Personally I’m a bit underwhelmed.
• But the original performance data from cache partitioning showed a small but non-trivial cost, eg. +1% FCP p99, +4% bytes loaded from network. This ceiling seems meaningful enough to not give up on entirely, but also small enough that it’s unlikely to have a big effect on anyone.
Taken together I’ll admit the tradeoff is still non-obvious — some smallish downsides, some smallish upsides. If this is all we were doing I could honestly go either way.
However I think there’s an important asymmetry here:
• If we don’t approve this feature, in practice it means Chromium will give up on exploring the space of shared caching for the foreseeable future. This is an architectural problem we’ve long debated and this is the best attempt we’ve ever made, being run by one of the world’s top experts on network performance.
• If we do approve this feature, we will continue to invest in managing the tradeoff. If there are security attacks, we will mitigate them or shut the feature off. Sites (like Shopify) will be free to explore the benefits of shared caching more, and we will learn more about this tradeoff space.
• Arguably this feature doesn’t even really need API owner approval because it’s an implementation detail with no real web compat implications (a “two-way door”). Even the upper bound of the performance benefits possible are not big enough to create any meaningful pressure on other browsers or web developers.
In addition, I believe it’s important for API owners to work hard to create a culture of “yes if” reviews: what needs to be true for us to be comfortable empowering someone to ship a feature they’ve invested in? We work hard to resist the temptation of falling back to being gatekeepers thinking only “do I personally prefer if Chromium has this feature or not?”.
• Pat has worked tirelessly iterating on this feature with every improvement and mitigation people have given him. I can’t find anything else reasonable to ask him to do.
• Therefore, in the absence of a clear and compelling reason why this feature must not ship, I must support it.
In a tradeoff between performance vs. privacy and fairness, Google picked performance
It feels maddening to have to download and redownload these scripts, and I see why Google would want to mitigate that.
But security and privacy often force web browsers to compromise on performance. This should be one of those times. (See also Alex Russell’s excellent article explaining “Why don’t you just put [popular framework] in the browser?”)
Depressingly, if you ask Google to decide between performance and privacy + fairness, you know they’re gonna pick performance every time. (Remember Google AMP?)
The permissionless web is a place where, sometimes, we just can’t have nice things.