Author: Aaron P. MacSween
Published: 2025-09-14
I apologize for the mildly clickbaity title. Now that you’re here, let me quickly answer the question of whether we should trust Google:
HAHA, No.
With that issue settled, I should probably provide a little background about why anyone would possibly ask such a question in 2025.
A quick overview
I posted the following on my Mastodon account back on August 15th:
chrome developers: we are thinking of dropping support for rendering RSS feeds as something other than garbage code. does anyone have any reasons not to do this?
developers from many different backgrounds: yes, I rely on normal people being able to understand RSS for my business. dropping support will be disastrous for me because I can’t rely on people to have some random extension installed.
chrome devs: OK well we’re probably going to do it anyway because we can’t be bothered to support web standards. uwu google is only a teensy wee company uwu
It garnered a fair bit of attention, at least by Fediverse standards, and the resulting notifications made my phone almost unusable for a few days. To date it is one of my most popular posts on the network, second only to a shitpost commemorating the untimely death of Henry Kissinger:
The unserious posts always seem to get the most attention.
In any case, after a few days I followed up a bit:
I wasn’t expecting this thread to get so much traction 😅
I’ve learned a bunch from reading comments from people who dug into different aspects of how this has played out.
I’m thinking of compiling the available information into a blog post, starting with a simplified explanation of what this means for those that aren’t familiar with XSLT or even RSS.
This is my third attempt at writing that post because the other two got a bit too long-winded. I’ve decided that it’s probably best to primarily link to other people’s blogs on the topic and provide additional commentary rather than trying to cover the saga in its entirety.
The excellent Molly White wrote a pretty accessible explainer not that long ago on her citation needed blog/newsletter titled “Curate your own newspaper with RSS”.
What if you could take all your favorite newsletters, ditch the data collection, and curate your own newspaper? It could include independent journalists, bloggers, mainstream media, worker-owned media collectives, and just about anyone else who publishes online. Even podcast episodes, videos from your favorite YouTube channels, and online forum posts could slot in, too. Only the stuff you want to see, all in one place, ready to read at your convenience. No email notifications interrupting your peace (unless you want them), no pressure to read articles immediately. Wouldn’t that be nice?
I really recommend reading her article for more detail, but RSS is essentially a simple and well-established way to subscribe to websites by having a type of program called an RSS Reader periodically check it for new updates. If an email newsletter is like getting a newspaper delivered to your house each day, then RSS is like going for a walk to your local newsstand to check what’s been published since yesterday. Some of the time the daily trip won’t yield anything interesting, but as a nice benefit you’ll be up to date on the news without having to tell anyone your address.
I think it was a good call to leave out the deeper technical details, but for the purposes of this article it’s relevant to add that RSS works by downloading a file encoded in a format called XML (eXtensible Markup Language). That file contains the details of a publisher’s recent posts, and your reader app parses out which ones it has seen before, telling you only about the new ones.
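To make that more concrete, here is a minimal sketch of what such a file can look like. The names and addresses below are invented for illustration, and real feeds usually carry more metadata:

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <!-- Details about the publication itself -->
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>A hypothetical publication.</description>
    <!-- One item per post; your reader compares these against what it has already seen -->
    <item>
      <title>A recent post</title>
      <link>https://example.com/posts/a-recent-post</link>
      <pubDate>Sun, 14 Sep 2025 00:00:00 GMT</pubDate>
      <description>A short summary of the post.</description>
    </item>
  </channel>
</rss>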
What is XSLT?
It used to be that if you opened an RSS feed’s raw XML file in your browser you would be prompted to do something meaningful with it. Firefox would ask you how you wanted to handle the feed, giving you the option to subscribe to it natively in the browser or to do so via an external program like Mozilla’s Thunderbird email client, which is what I use to this day.
Unfortunately, Firefox dropped support for this in 2018, and since then it and most other browsers will simply display the plain text contents of the XML file, which looks like an error as far as most people are concerned. Fortunately, there is a technology called XSLT (eXtensible Stylesheet Language Transformations) which provides instructions on how to transform a plain XML into the same structure as a web page that your browser already knows how to present.
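Adopting it is a small change on the publisher’s side: the feed points at a stylesheet with a single processing instruction near the top of the XML file. The stylesheet path below is a made-up placeholder:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Asks the browser to apply the named stylesheet before displaying the document -->
<?xml-stylesheet type="text/xsl" href="/feed-style.xsl"?>
<rss version="2.0">
  <!-- ...the rest of the feed as usual... -->
</rss>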
Darek Kay wrote “Style your RSS feed” which explains it pretty well, gives an overview of how to use it for your own RSS feed if you have one, and links to some nice examples of how other people styled their feeds.
Great post. Short and sweet. Five stars. No notes.
Why are Google engineers proposing to drop support for XSLT?
So, Google Chrome uses a software library called libxslt which handles the details of applying XSL styles to an XML document, and that library itself uses another library called libxml2. In cases like this where Google relies on code written by non-Googlers, it’s pretty common for them to get a specialized team to review that code for any security risks that it might pose to their products. The team that handles that sort of thing is called Project Zero, and they’re pretty widely regarded as being quite good at what they do.
The problem is that what they do is report bugs under the assumption that other people will take care of fixing them, and they follow the industry-standard practice of disclosing reported vulnerabilities after a 90-day embargo, which puts considerable pressure on the people maintaining those projects. Nick Wellnhofer (the primary maintainer of libxml2) got pretty tired of this over the years, rightfully regarding it as exploitation of unpaid labour, and responded by publishing “Triaging security issues reported by third parties”.
I have to spend several hours each week dealing with security issues reported by third parties. Most of these issues aren’t critical but it’s still a lot of work. In the long term, this is unsustainable for an unpaid volunteer like me. I’m thinking about some changes that allow me to continue working on libxml2. The basic idea is to treat security issues like any other bug. They will be made public immediately and fixed whenever maintainers have the time. There will be no deadlines. This policy will probably make some downstream users nervous, but maybe it encourages them to contribute a little more.
You can look up Project Zero’s publicly documented list of issues reported to libxml2 and compare that against the history of contributions to the library and confirm that their reports typically correlate quite directly to work performed by Nick. This announcement was made on May 8th, not long before the Chrome team started looking into simply removing support for XSLT rather than doing something to support Nick or directly fixing the bugs they’d discovered. Of course I’ll let you form your own opinion, dear reader, but to me the two policies seem closely related.
Why doesn’t Google just fix the library?
Mason Freed has made statements on behalf of the Chromium team to justify their decision:
So while we (the Chrome team) do understand the posts suggesting a renewal and improvement instead of a removal, we are strongly convinced that this would not be the right way to spend our limited resources.
Doesn’t Google have more money than god?
I wasn’t able to find any recent statistics on how much money god has, but at the time of writing this Google’s parent company (Alphabet, stock-ticker symbol GOOG) has a market capitalization of 2.92 Trillion USD. According to Wikipedia’s “List of public corporations by market capitalization” they are one of only twelve such companies to have ever been listed at more than 1 Trillion USD.
Some people think it’s really important to clarify that Alphabet Inc. is technically a separate legal entity from Google LLC. While I don’t personally understand why someone might feel inclined to spend their limited time alive on Earth defending monopolists (more on that later) valued at Billions or Trillions of dollars, I will concede that they are technically correct.
Why can’t Google just allocate resources to fix the library?
It is pretty standard practice among the big tech companies to avoid paying for anything that can be leveraged for free. Zed Shaw described the people and companies behind this phenomenon as Beggar Barons.
I believe we are in the era of the Beggar Barons. Just like the Robber Barons before, these are fabulously wealthy companies that built their empires by (directly or indirectly) begging for free labor from open source developers.
The Beggar Barons aren’t stealing this labor though, they’re just using unscrupulous business practices and social manipulation to beg for free labor. Robbing would be more what Amazon does when it outright steals open source without crediting the author, or straight up just steals Elastic Search’s trademark.
No, this begging is particularly different because it capitalizes on the good will of open source developers. Microsoft, Apple, and Google are standing on the internet in their trillion dollar business suits with a sign that reads “Starving and homeless. Any free labor will help.” They aren’t holding people up at gun point. Rather they hold out their Rolex encrusted hand and beg, plead, and shame open source developers until they get free labor.
Presumably Google’s executives are smart enough to realize that if they were to pay the maintainer of libxml2 to address its issues, then word would eventually get out, and more of the other people they regularly exploit would start to expect payment for their labour too.
Why should we care about XML?
Oblomov wrote an article called “Google is killing the open web” which focused quite a bit on a history of neglecting open web standards that depend on XML. I don’t agree with all of their positions, but it is a solid overview packed with some details that I’d never heard of before reading it.
Personally, I know enough about XML and related technologies to use them effectively, but it’s not the first tool I reach for unless I know very well that it’s particularly well suited for a given job. I saw people criticizing the article to that effect, essentially saying that they agree with Google’s attempt to kill off XSLT and other XML-related technologies simply on the basis that they (the critics, not Google) find them ugly or otherwise don’t enjoy using them.
To be clear, I think those making such arguments are deeply unserious people. Fundamentally, I believe that technologists ought to make decisions about technologies on a more rational basis than “I don’t like it”, because decisions about technologies affect other people’s lives.
There’s a solid argument to be made that if things had played out differently a few decades ago then RSS could have been designed to be encoded in some other language. XML was simply a popular option at the time, and popularity is very often arbitrary. With that said, the social coordination required to get people to agree on a standard tends to be far more difficult than designing a proposal for a standard.
Once that work is done, people base their work on top of those standards, and this affords us benefits like how any website based on WordPress includes an RSS feed by default. At the time that I am writing this, WordPress-based websites make up 33.99% of the world’s top 1 million CMS-based sites. That’s at least 339,900 websites to which I can subscribe without handing over my email. Likewise, I can choose to subscribe to those feeds using a variety of different RSS reader apps.
That degree of interoperability is an incredible achievement. Arguing to throw aside standards like that because you don’t like XML is quite frankly absurd. If anyone were to propose such a thing to my face I would unapologetically laugh at them for being such a deeply silly person. Seriously, if this is you then please go sort yourself out.
Does this mean RSS will stop working?
No, RSS will continue to work, but if you open an RSS feed in your browser then you might just see some XML that will probably look like nonsense to you. As mentioned above, that’s already the case in most browsers, but the proposed removal of XSLT would additionally ignore the transformation rules and styles that authors had deliberately included.
I think it’s really important to focus on the matter of intentionality because it genuinely doesn’t seem to be something that Mason Freed understands. In that same Github thread he outlined three apparent use-cases for XSLT, and I’m going to focus on the first:
RSS and Atom Feeds: XSLT is used to make raw RSS and Atom feeds human-readable when viewed directly in a browser. The use case is that a user accidentally clicks on a site’s RSS feed link, rather than pasting that link into their RSS reader, and gets XML rather than something they can read.
➡ A proposal for this use case is to add <link rel="alternate" type="application/rss+xml"> to the (HTML-based) site, rather than an explicit (user-visible) <a href=”something.xml”> that users might accidentally click. This solution allows RSS readers to find the feed if a user pastes in just the website URL, but it also allows human users to see regular HTML content. This also follows the normal web paradigm that HTML is for humans and XML is for machines. Of course this doesn’t solve the case where a user just “has” an RSS link from somewhere, and they paste it into their web browser (rather than their RSS reader).
Let’s zoom in:
The use case is that a user accidentally clicks on a site’s RSS feed link, rather than pasting that link into their RSS reader, and gets XML rather than something they can read.
XSLT is, like so many other web technologies, general purpose. That means that even though there may be several commonly recommended ways to use the technology, it can just as easily be used in ways that its developers or maintainers didn’t anticipate, and those other uses are essentially no less legitimate than the ones intended by the technology’s creators.
So, if someone comes across a link to my RSS feed and happens to open it in their browser because they don’t know how to use it otherwise, my stylesheets will kick in and transform the data into a nice web page that includes instructions on how they could instead use that link to subscribe to my site. That’s great, but XSLT can also be leveraged to transform the RSS feed into a list of all my recent articles along with all their relevant details like their titles, descriptions, publishing date, and so on.
Lots of websites include such a page that lists their recent articles in chronological order. Sure, you could write two separate pages for accidental and deliberate usage, but wouldn’t it be simpler to just make one that handles both cases? If XSLT continues to be supported, then the answer is yes. If Google and other browser vendors opt to drop support because they consider their time supporting XSLT to be worth more than that of all the people who will have to rework their websites, then I guess not.
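For a sense of how little code is involved, here is a simplified sketch of such a dual-purpose stylesheet. It assumes the feed structure from the earlier example and is far more bare-bones than anything a real site (including this one) would actually ship:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- When a browser opens the feed, render the channel as an ordinary web page -->
  <xsl:template match="/rss/channel">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <p>This is an RSS feed. Paste this page's address into your reader to subscribe.</p>
        <!-- List every article in the feed, in the order the publisher provided -->
        <ul>
          <xsl:for-each select="item">
            <li>
              <a href="{link}"><xsl:value-of select="title"/></a>
              (<xsl:value-of select="pubDate"/>)
            </li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>

The same document thereby serves the accidental visitor, the deliberate browser of the archive, and the RSS reader, which ignores the stylesheet entirely.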
That wasn’t the end of Mason’s comment about XSLT for RSS feeds, though, so I’ll address the rest:
A proposal for this use case is to add <link rel="alternate" type="application/rss+xml"> to the (HTML-based) site
Personally I already do this for my websites, as do many other authors. Some RSS readers will scan HTML for metadata like this which indicates where the site’s RSS feed can be found, and then follow that level of indirection to directly load the feed. This is called RSS autodiscovery, and while it works as Mason described in some RSS readers, there are still many which expect the exact address of the feed to be provided. For this reason alone it’s prudent to still provide a link to the feed itself.
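For reference, that autodiscovery metadata sits in the head of a site’s regular HTML pages and looks roughly like the following; the title and feed path are placeholders:

<head>
  <title>Example Blog</title>
  <!-- Readers that support autodiscovery fetch the page and follow this hint to the feed -->
  <link rel="alternate" type="application/rss+xml"
        title="Example Blog RSS feed" href="/feed.xml"/>
</head>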
...rather than an explicit (user-visible) <a href=”something.xml”> that users might accidentally click
Again, the presumption that there are no reasons to intentionally click on such a link demonstrates Mason’s failure to consider XSLT as a general purpose technology that could be used outside his extremely narrow conception of it.
This solution allows RSS readers to find the feed if a user pastes in just the website URL, but it also allows human users to see regular HTML content.
Again, this only works in some readers. Also, I don’t think most human users care even a little bit whether they are seeing HTML content or XML that has been transformed into HTML. They care whether they see meaningful content that they can understand.
This also follows the normal web paradigm that HTML is for humans and XML is for machines.
Both HTML and XML can be human-readable, but only if you know how to read either one. That’s why we have browsers which are able to transform either markup language into a properly rendered web page. Again, people went to a great deal of trouble to see that both usages were standardized, and this is being thrown away because XSLT support is inconvenient for Google.
Of course this doesn’t solve the case where a user just “has” an RSS link from somewhere, and they paste it into their web browser (rather than their RSS reader).
Correct.
Also, Mason completely overlooks the problem of how a person might decide whether or not it might be worthwhile to try subscribing to a page via RSS. He says you should include metadata in the page, but unless you’ve installed an add-on in your browser which notifies you that the current page offers an RSS feed, you will need to rely on the website’s author to inform you of its availability.
So, for instance, if I were to follow Mason’s advice as given, I would presumably have to mention that I have an RSS feed available, but avoid linking to it? I honestly think most people would find this pretty confusing or frustrating, especially people whose reader apps don’t support autodiscovery. I could rectify this by making a page explaining how to subscribe and linking to that, but again, my XSLT rules already produce a page which accomplishes this in addition to neatly solving several other problems.
It’s almost as if the Chrome team didn’t put any serious thought into these proposals, but that couldn’t possibly be the case, right?
What percentage of websites actually use XSLT?
There was some discussion about the appropriate methodology for measuring XSLT usage in that big Github thread. Mason Freed settled on somewhere around 1/8000, citing some publicly available measurements of feature usage:
The current usage for the higher of the two, XSLTProcessor, is 0.012672%, so the exact ratio would be 1/0.00012672, or 1 out of 7891 page loads. I rounded to 8,000.
Let’s not lose track of the point in these details. For every ~8000 people browsing a site, all of them are at risk from XSLT, and 1 of them will have some pain from this migration. I’m not downplaying that - I’d really like to make the pain as “painless” as possible. E.g. I wrote a polyfill. But I’d like to keep the big picture in mind as we have this discussion.
The XSLTProcessor use is clearly seasonal from the graph, tops higher than 0.1%, and shows a distinct growing trend, particularly after 2022, which has been a watershed year for a reversal of the centralization of the web. This goes to support my previous statement about the shift being caused by centralization rather than a general lack of interest.
BTW, more than 0.1% means more than 1 in 1,000, and that’s only from Chrome, which last time I checked counted 3.5 billion users, give or take. I don’t know how many page loads that is, but even if it’s just one page load per user that’s 3.5 million users affected.
all of them are at risk from XSLT,
This is solved by fixing the XSLT processing library or switching to a better one. Removing the feature is detrimental to the open web.
So, clearly you can measure usage in very different ways and arrive at very different conclusions depending on your preferred metric (or ostensibly according to your intended result). In any case, to adopt either metric as a sufficient grounds for making the decision you would have to assume that a purely quantitative method is sufficient grounds for governing the web, and I have two objections to that notion.
First, suppose we were to take that approach in medicine and decide to only train doctors in procedures applied more frequently than 1 in every 8000 appointments. That would probably be bad, right? Lots of people break multiple bones in their lives, or get recurring throat infections, or get some routine vaccinations, so presumably doctors would still be trained in how to treat those. But according to this article from 2002 there were only “about 3 million people worldwide with pacemakers” at the time. Nevertheless, their use was not discontinued, and I assume that most would consider proposals to do so inhumane. Frequency of use is an insufficient metric for the value technologies provide, particularly if we consider whether another technology could be substituted in its place, which the Chrome team admits is not the case.
Second, I think it’s also worthwhile to reconsider the notion that more frequent usage necessarily implies higher value. At least according to my own habits, most RSS feeds as viewed in the browser are useful exactly once. I review the articles in a site’s feed to confirm whether I’d like to subscribe, then plug the feed into my reader, after which point I can access their contents directly in Thunderbird or jump directly to the relevant page in the browser at my discretion. I consider this a success both as someone who engages with other authors’ publications and as one who publishes in this manner, but by Google’s quantitative method it appears as evidence of the technology’s failure.
Did Google unilaterally decide to kill XSLT?
Eric Meyer is an employee of Igalia, which Wikipedia describes as
a private, worker-owned, employee-run cooperative model consultancy focused on open source software
I don’t personally know Eric, but I’m willing to assume good intentions because of the cooperative values Igalia represents. According to their website:
Igalia has been working on all aspects of the open web platform since 2009. Since 2019 we have been the second largest contributor to WebKit (after Apple), the second or third largest contributor to Chromium (after Google) and in the top few largest contributors to Mozilla’s Gecko (#2 in 2024). Today, we are also the #1 contributors to Servo and Wolvic.
The Servo project itself deserves much deeper coverage, and I’ll likely write more about it soon, but for now my point is that Igalia employees have an insider’s perspective on how web browser development works. So, I assume that Eric was trying to share insights from that when he wrote “No, Google Did Not Unilaterally Decide to Kill XSLT”.
I can’t fault the article on any technical points, but I do have some criticisms on its tone and how it fits into the wider context surrounding this issue. This will be easier to follow if I break it down into subsections.
Can Google actually kill XSLT?
The short answer is no. XSLT is a technical standard which describes how different XSLT implementations ought to behave. You might as well ask if Google is capable of destroying ideas.
The longer answer is that people tend to talk about complex issues in simplified terms, and at least some of the time when they say things like “Google is killing XSLT”, what they actually mean is that:
Google is taking concrete steps that will predictably reduce the utility of XSLT as a format for publishing on the web, at least as viewed through browsers over which Google has sufficient influence.
Of course, some tone is lost through text-based media, so it can be hard to tell whether a person writing about Google killing things is being hyperbolic out of ignorance or in order to be emphatic in their disapproval of the monopolistic fashion in which Google tends to operate.
So, we find ourselves in a situation where some people are upset over Google killing XSLT, while others like Eric try to clarify that Google is definitely not killing XSLT, and depending on how charitably you’re willing to interpret either party’s statements they might both be correct. This is very unsatisfying, of course, which is at least partially why I feel so compelled to spend my time analyzing these circumstances and some of the very silly discourse surrounding them.
Can Google reduce the utility of XSLT?
This time the answer is a very clear and resounding yes. Google’s Chrome browser alone has a nearly 70% market share on both mobile and desktop devices. That position alone gives them a lot of power over how the web works. If a niche browser vendor like Brave breaks a feature of the web then that’s Brave’s problem, but if Google breaks a feature of the web then that’s the web’s problem.
This website is structured such that its index of articles is presented by transforming its RSS feed with XSLT. If Google drops support for XSLT from Chrome then around 70% of people will not be able to view that list of articles. Just threatening to do this puts some pressure on me to prepare an alternative for that 70% of hypothetical readers, and once I have such an alternative in place I might as well just serve it for everyone unless I want to be stubborn and spiteful (which I might still choose to be).
Now, people who have been paying attention to browser market share over the last decade or so will know that the situation is actually considerably worse than Google controlling just 70% of browser usage. The proprietary Google Chrome browser is based on an open-source project which Google develops called Chromium. Microsoft used to maintain their own browser engine for their Edge browser but switched in 2020 to a version based off of Chromium. The developers of the Opera browser did the same at some point, as did those behind Vivaldi. Then there’s a variety of projects like Brave and Arc which were based on the same code as Chromium right from their inception, but which nevertheless pitch themselves as viable competitors to Chrome.
Taking into account these browsers which are in some way derived from projects over which Google has authority, they control over 70% of the mobile market and around 85% on desktop. Google only has direct power over their own browser, but their influence over others is undeniable. If they make a change with which the other project maintainers disagree, it becomes their responsibility to undo that change, which takes time and expertise. Even if we assume that these dependent projects have that expertise (though it’s quite clear that many do not) in the vast majority of cases they simply go along with whatever Google decides.
Does the Chrome team have the resources to maintain XSLT support?
According to Eric, no.
Mason mentioned that they didn’t have resources to put toward updating their XSLT code, and got widely derided for it. “Google has trillions of dollars!” people hooted. Google has trillions of dollars. The Chrome team very much does not. They probably get, at best, a tiny fraction of one percent of those dollars. Whether Google should give the Chrome team more money is essentially irrelevant, because that’s not in the Chrome team’s control. They have what they have, in terms of head count and time, and have to decide how those entirely finite resources are best spent.
I’m going to focus on two things here, adding my own emphasis:
Whether Google should give the Chrome team more money is essentially irrelevant, because that’s not in the Chrome team’s control.
I think this sentence really clarifies Eric’s argument. The fact that he believes Google’s trillions of dollars are irrelevant (let’s ignore that those trillions are just a valuation and that they are technically attributed to Alphabet) makes it clear that he’s primarily concerned with the Chrome team as an atomic unit rather than the broader influence Google wields. He’s making the case that Mason Freed is in a really tough position, and being afforded relatively limited resources for the gargantuan task of maintaining a browser used by the majority of web-using humans.
Mason is expected to prioritize between many different choices in a context where someone will very probably be annoyed or angry with him if their preferred feature is neglected. Mason Freed is only human, and his employer is one that routinely lays off thousands of people at a time, most recently with a 35% cut to its managers of small teams. I get it.
“Google has trillions of dollars!” people hooted.
The fact that Eric is characterizing people’s responses as “hooting” seems to reveal at least a little contempt on his part, as though people are foolish for not understanding Mason’s constraints. The thing is that just as how Google’s trillions are irrelevant to whether Mason is a decent person, the quality of Mason’s personal character is irrelevant to whether people should accept Google’s unfair control over the web.
I believe that many of the people doing the hooting to which Eric alluded (myself included) understand the situation perfectly well. Google has not provided the Chrome team with the resources to adequately maintain standard pieces of the web unless they provide material benefit back to Google. That’s our point.
By all accounts this appears to be a deliberate choice, but the executives behind such decisions presumably aren’t going to respond to Github issues holding them accountable, whereas Mason might. Google pays people to maintain their open-source projects at least in part so that they can talk about how much good they are doing for the world and its common resources, even as they steer those common resources away from supporting established standards and towards a vision of the web which is better for Google’s bottom line. It’s a rough job, but this comes along with the role. The considerable unpleasantness associated with working for a terrible company like Google is ostensibly a factor of why those employees who have not yet been laid off are compensated so well when compared to those of other companies.
But Google’s actions aren’t unilateral, right?
Once again, there is a simple answer: no, Google isn’t trying to kill XSLT unilaterally. But that simple answer isn’t particularly useful to anyone but Google.
As a reminder, all of this started because there are known flaws in the most widely used XSLT implementation. Browser vendors had to do something, and while abandoning XSLT support entirely is far from the only viable option, there is some sense in considering it. As Eric mentioned, their initial position on removing XSLT support was basically tentative and exploratory:
First of all, while Mason was the one to open the issue, this was done because the idea was raised in a periodic WHATNOT meeting (call), where someone at Mozilla was actually the one to bring it up, after it had come up in various conversations over the previous few months. After Mason opened the issue, members of the Mozilla and WebKit teams expressed (tentative, mostly) support for the idea of exploring this removal. Basically, none of the vendors are particularly keen on keeping native XSLT support in their codebases, particularly after security flaws were found in XSLT implementations.
...but consider how you might handle yourself in this situation. Suppose that you are a maintainer of one of the many browsers that has only a tiny fraction of Google’s market share. You know that if Google decides to drop support for XSLT, then most of the people who use it for a critical component of their websites will have to come up with an alternative. The majority of people who use Chrome/Chromium or one of its derivatives will not switch to using your browser even if you continue to support the standard, because this just isn’t the sort of feature that will motivate people to make the switch. And, if XSLT does end up being effectively dropped from the web relative to its current level of functionality, then it will likely be Google that takes the majority of the blame.
The only organizations with any meaningful influence here are Apple and Mozilla, and the influence they have is quite frankly pitiful. If we’re going to collectively accept the Chrome team’s argument that they lack the resources to maintain their XSLT implementation, then we ought to also admit that none of the other browser vendors are in much of a position to go against Google. Once we do that, however, we’re back to this looking like either an abuse of market consolidation (itself a result of decades of unfair and illegal business practices) or an abdication from their perceived responsibility as a steward of the web.
Now, you might read this and say that it’s speculative, and I agree, but let’s suppose the opposite were to occur. If Google announced that they would definitely continue to support XSLT, even if the other browser vendors really wanted to drop it, what do you think would happen? The developers of Mozilla’s Firefox and Apple’s Safari would probably reverse course, right?
So, is Google acting unilaterally? No. But is that supposed to be comforting? Is that an indication that the web browser ecosystem is healthy? Arguably not.
Should people take out their frustration with Google on its employees?
Not that long ago I read an interesting framing of a relevant phenomenon, quoted from the Complex Systems podcast and transcribed by Karl Fogel:
“One of the ways the system protects itself is to describe as deviant behavior describing the behavior of the system.”
We see this directly in the Github issue proposing the removal of XSLT from the web, where this comment by Jonathan Hogg was hidden and marked as off-topic:
I know that the browser developers here want to keep this discussion narrowly focused on technical merits and usage statistics, but it is disingenuous to not acknowledge that all decisions are political.
That Chrome’s aging XML support relies upon a piece of open source software which receives limited support, while Google’s resources are spent on developing new browser-based ad tracking technologies is a political decision. I’m not criticising the developers here – this is to be expected when Google makes its money from ads.
However, when a company has expended very significant resources to obtain a market-dominant position you need to understand that you are not making decisions about technical minutiae, but making decisions on what lives or dies on the open web.
“Eh! Just use a polyfill” is a lazy way to avoid acknowledging those decisions.
...after which point the discussion was locked as too heated, with further comments limited to collaborators.

Actions like this, or Eric’s comments about Google’s money being irrelevant, can ostensibly be motivated by benign intentions. Eric wants it to be clear that Mason isn’t to blame, and the collaborators who were permitted to continue the strictly technical discussion of XSLT’s removal presumably wanted to avoid notifications about factors that are beyond their control.
Still, no alternative is given. A politician might placate their constituents by indicating that they don’t have the funding to address a matter, but then still advocate for those constituents by indicating which committee members would theoretically be responsible for allocating that funding.
Google’s employees will readily say that the matter of funding is beyond their purview, but they never make such suggestions, because they aren’t elected and their only real constituents are their managers. Eric advocates for treating browser vendors better presumably because he might easily find himself in the same position as Mason.
Whatever the motivation, the result is the same. People are told that this isn’t the place to voice their concerns, or that they are voicing their concerns with too aggressive a tone, or that this is all just tentative and nothing has been decided yet (notably, WHATWG’s most recent meeting minutes suggest that these tentative decisions are going to go through). The raging mob of peasants is told to put away their torches and pitchforks, and to be reasonable, and to let the adults in the room get on with governance. As the saying goes, “The purpose of a system is what it does”, and this system seems to overwhelmingly favour Google at the expense of all those unruly peasants.
So, I don’t feel particularly great about endorsing people saying bad things about Mason or Eric, but I do not see much of an alternative at present either. If I were in their position, like that of a feudal lord with a mob at their door, then I might try to convince them that their actual enemy is the king.
Does this not seem like some outrageous conspiracy theory?
WebKit reviewer Ryosuke Niwa eventually suggested that commenters stick to the established facts:
While I understand your frustrations and counter points about wanting to preserve XSLT, please refrain from making speculative accusations about companies / organizations having ulterior motives or ill-intent. Refer to our code of conduct in place.
Okay, yes, but just for a moment please put on your best tin-foil hat and hear me out when I say that Google is a company that regularly engages in conspiracies.
Have you heard of that time Google violated the United States’ Wiretap Act by configuring their Street View cars with equipment to collect en masse the unencrypted traffic of wireless devices in their vicinity?
What about when they hired consultants that specialize in union-busting in response to employees’ organization efforts? Or when they claimed in court that they had hired those consultants for other reasons, a claim which was found to be untrue:
“the documents confirm that IRI did not give legal advice but rather was retained to provide antiunion messaging and message amplification strategies tailored to the Respondent’s workforce and the news and social media environment,”
They violate labour law by refusing to bargain with unions.
They illegally surveil and fire employees who engage in activism.
They fire ethicists whenever those ethicists do what they were ostensibly hired to do: raise ethical concerns.
Oh, and it might be worthwhile to look at the “Antitrust” section of the Wikipedia page for litigation against Google. It covers how India found that Google abused its dominant position
“by requiring device manufacturers wishing to pre-install apps to adhere to a compatibility standard on Android”
...or how the US found they violated the Sherman Antitrust Act of 1890
“by illegally monopolizing the search engine and search advertising markets, most notably on Android devices, as well as with Apple and mobile carriers”
...and how a second federal antitrust case in the US ended with a ruling that they had unlawfully monopolized several markets, that they had “substantially harmed” their publishers, and
“ultimately, consumers of information on the open web.”
Or perhaps you’ve heard about how in the proceedings of these antitrust cases it was discovered that the company was so confident in their monopoly that they began explicitly testing how much worse of a service they could afford to provide:
Dahlquist said internal Google documents show that the company, unencumbered by any real competition, began tweaking its ad algorithms to sometimes provide worse search ad results to users if it would increase revenue.
So when someone pleads that we “refrain from making speculative accusations about companies / organizations having ulterior motives or ill-intent”, I really wonder what precise number of antitrust rulings they’d consider an appropriate threshold for making such assumptions. Perhaps they just haven’t heard about these rulings and are speaking purely out of ignorance? Alternatively, perhaps they know all about it and have their own reasons to convince people to continue placing faith in organizations that are very clearly determined to abuse their trust.
Would that really be so outrageous?
Is there any foreseeable end to Google’s monopoly?
Ultimately it seems pretty fair to say that Google will continue doing as much as it can to increase profits so long as they are permitted to get away with it. A little meaningful antitrust enforcement would go a long way, but the latest news in that domain is not great.
As mentioned above, Google was found guilty of monopolizing the search market. Sentencing in cases like this involves proposed remedies to address unfair advantages in the relevant market, and the US Department of Justice suggested a forced divestiture of Chrome. Basically this would mean that Google would need to sell Chrome or spin it off into its own independent company, such that it could no longer be used to unfairly prevent meaningful competition in areas they currently control. They made similar recommendations for Google’s Android operating system.
Breaking up Google might have meant that Chrome would no longer be the default browser on Google’s Android OS, and that their search engine would no longer be tightly integrated into either software. Both these properties reinforce the profitability of their search, the proceeds from which give the Chrome team a significant advantage over similar teams from other browser vendors.
This could have significantly impacted the majority browser share they hold, which might have had downstream effects on admittedly niche issues like XSLT support. Unfortunately, Judge Amit Mehta rejected that proposed remedy, so we’re not going to see any of that play out, even though the role of Chrome in their monopoly was confirmed:
The ruling acknowledges that Chrome’s market position does contribute to Google’s search dominance, but full divestiture can often have unintended consequences. Mehta decided that Google’s use of Chrome as a vehicle for search is not sufficiently linked to anticompetitive conduct to justify forcing a sale. “Plaintiffs overreached in seeking forced divesture (sic) of these key assets, which Google did not use to effect any illegal restraints,” the ruling reads.
In other words, Chrome’s majority position is undeniably a significant factor in Google’s ability to commit crimes, but at least as far as this case is concerned its influence is not severe enough to constitute a crime in itself. Cory Doctorow responded to this ruling by calling it “The worst possible antitrust outcome”, and went on to cite the commentary of many experts on the matter:
It’s impossible to overstate how fucking terrible Mehta’s reasoning in this decision is. The Economic Liberties project calls it “judicial cowardice” and compared the ruling to “finding someone guilty for bank robbery and then sentencing him to write a thank you note”
I know some people object to Cory’s occasional use of coarse language, but I hope my readers can suppress any objections to his tone and judge his arguments on their merit. Mehta’s decisions on this case function as little more than a stern warning for a company that only understands consequences. In my opinion that justifies a little cursing.
Where do we go from here?
For a start, we need to move past debating whether Google and their like are simply being misunderstood. Not only is Google a monopolist, they are one of the most powerful corporations in all of human history. We can’t expect any significant progress to be made without being able to at least agree that there is a problem that needs to be addressed.
I don’t expect XSLT support t