
There are two competing forces in IT, and they are at play during the GenAI era as much as they have ever been during prior eras in the datacenter. One is the desire to provide a completely integrated system, ready for applications, from a single supplier. The other is to use a mix of best of breed components to make a custom system that better fits each unique customer.
Arista Networks has been genius at creating switches and sometimes routers that are best of breed and that are famously used by Microsoft and Meta Platforms in their vast networks to link their homegrown systems at hyperscale. Arista has over 10,000 customers and has sold more than 100 million ports for somewhere around $32.3 billion in revenues as of the end of 2024, which was two decades after it was founded. And in many cases, it has emerged as a “blue box” supplier to the hyperscalers and cloud builders who want its iron and some of its firmware and management tools but not its EOS Linux-based network operating system because they have their own.
But given the advent of rackscale AI infrastructure, which will dominate the systems market for the coming decade and, for Nvidia at least, has NVSwitch for scale up memory interconnects and Spectrum-X for scale out and scale across networks, it is reasonable to wonder how Arista will do more than carve out its niche, but expand it, and make its next $32 billion in half the time – or even less.
The situation is not much different than when Cisco Systems converged serving and switching with its “California” Unified Computing System back in 2009, tightly tying switching with serving in blade server form factors that are largely forgotten today but shaking up the entire server business for a while. Cisco scared the hell out of the server OEMs back then, much as Nvidia’s full stack integration does today – and importantly leaves little room for the OEMs that do resell Nvidia GPU systems to add value and therefore make some profits of their own.
AMD and Meta Platforms have been working with OpenAI on the “Helios” double-wide AI rack design as a counter to Nvidia’s NVL72 form factor, and Arista is one of the potential switch suppliers for the Ethernet Scale Up Network (ESUN) alternative to NVSwitch for memory coherent switching between XPU accelerators used for AI training and inference within the Helios rack. (AMD is opting for a variant of UALink, which is itself based on its own Infinity Fabric coherent memory protocol, that runs atop Ethernet for the memory fabric across the Instinct GPUs used inside of the Helios racks.)
“Andy Bechtolsheim is personally driving – along with the hardware team – a significant number of these racks,” Jayshree Ullal, chief executive officer at Arista Networks, explained on a call with Wall Street analysts going over the company’s financial results for the third quarter. She was referring to the company’s co-founder and chief architect, a luminary in the datacenter, particularly in HPC compute and networking. “I think at any given time, we have five to seven projects with different accelerator options. Obviously, Nvidia is the gold standard today, but we can see four or five accelerators emerging in the next couple of years. Arista is being sought to bring all aspects – the cabling, the co-packaging, the power, the cooling as well as the connection – to different XPU cartridges, if you may, as the network platform of choice in many of these cases. So we are involved in a lot of early designs.”
“I think a lot of these designs will materialize as the standards for Ethernet are getting stronger and stronger. We now have a UEC spec,” Ullal continued, referring to the Ultra Ethernet Consortium 1.0 specification. “You have heard me talk about the Scale-Up Ethernet spec for ESUN where we can bring different work streams onto the same Ethernet headers, transport headers, data link layer, et cetera. So I think a lot of this will be underway in 2026 and really emerge in 2027 as scale up Ethernet becomes a more important part of that.”
It is shaping up to be a throwdown between UALink, ESUN, and NVSwitch for the scale up networks for AI clusters, and maybe HPC clusters, too. Arista Networks will get its fair share of scale out networks to link traditional server nodes into racks and clusters as well as rackscale nodes into distributed systems and it will probably get more than its fair share of scale across networks linking datacenters into even larger humongoscale complexes spanning regions. You have to compute where you have the power, and if the power is distributed, then the network has to be.
In the meantime, Arista Networks has to sell switches and router-ish gear to hyperscalers, cloud builders, and enterprises, and it is doing that just fine despite some very intense competition from Cisco and Nvidia and now with Juniper Networks inside of Hewlett Packard Enterprise trying to get a bigger piece of the datacenter action. Huawei Technologies is doing its own thing inside of China, where it rules.
In the third quarter ended in September, Arista Networks posted $1.91 billion in product sales, up 25.5 percent. Services revenues came in at $396.6 million, up 38.1 percent. Add it all up and Q3 2025 sales were up 27.5 percent to $2.31 billion, better than expected, with services revenues growing 21 percent sequentially – an eyebrow raiser that shows something new is afoot. Perhaps Arista Networks just pushed services extra hard, given that product sales were only up 1.8 percent sequentially.
If you drill down a little deeper, you see that Arista only had $35.1 million in software subscription revenues in the third quarter, up 11 percent sequentially, so that is not the big boom in the services segment, although it did help product revenues grow.
Arista Networks had $978 million in operating income in the quarter, up 24.6 percent year on year but down eight-tenths of a point sequentially, which shows the effect of chasing AI back end and front end network business, we think. Net income came in at $853 million, up 14 percent year on year and down 4 percent sequentially, to which we say, “Ditto.” Still, Arista Networks is bringing 37 percent of revenues to the bottom line, which is – what is the technical term for this? – pretty damned good.
The company ended the quarter with $10.1 billion in cash and equivalents in the bank, $4.69 billion in deferred revenue that it has already been paid for – nearly double from a year ago and rising because demand is exceeding supply for components – and $4.85 billion in product backlog besides that. Again, this is pretty good, and a function of a rising AI datacenter interconnect business and a steady core datacenter switching business.
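As a quick sanity check on the quarter's figures cited above – our own back-of-the-envelope arithmetic, not anything from Arista's filings – the product and services lines do add up to the reported total, and the net margin works out to the 37 percent we flagged:

```python
# Back-of-the-envelope check of Arista's Q3 2025 figures as cited
# in the text (dollars in millions; our arithmetic, not the filing).
product = 1910.0        # product sales, ~$1.91 billion
services = 396.6        # services revenues
net_income = 853.0      # net income

revenue = product + services  # reported total, ~$2.31 billion

print(f"Total revenue: ${revenue / 1000:.2f} billion")
print(f"Net margin: {net_income / revenue:.0%} of revenues to the bottom line")
```

Run it and the total lands at $2.31 billion with a net margin that rounds to 37 percent, matching the figures above.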
If you strip out the edge and campus businesses, which we try to do each quarter because we want to know what is happening in the datacenter, here is what the trendline looks like for the company’s revenues and profits:
After many years of projecting and modeling, and with revenue guidance in hand for both 2025 and 2026, we felt comfortable enough to put together the data behind the chart below, which shows the revenue streams for Arista Networks from 2020, when AI was starting to be material for the company, out to the forecast for 2026. Take a gander:
We realize there is a certain amount of spreadsheet magic in projecting sales for non-AI switching and routing, but the top brass at Arista Networks gave us the forecast for AI networks and for campus networks as well as for overall revenues, so we are already halfway there.
To be specific, Arista Networks expects $1.5 billion of AI back end and front end network revenues for 2025. It used to just talk about back end AI network sales, but it is getting increasingly hard to separate back end networks (the ones linking the GPUs and sometimes the CPUs together inside of a single memory domain) from the upgraded front end networks that feed data into these systems as they do training or run inference. For the past several years, AI back end networks had been driving somewhere slightly north of 5 percent of the company’s revenues. If this year closes as expected, it will comprise 16.5 percent of $8.87 billion in sales for 2025.
And, based on the forecast given for 2026, AI back end and front end network revenues will increase 83.3 percent to $2.75 billion next year, comprising 25.8 percent of overall sales, which are expected to be $10.65 billion, up 21.7 percent from 2025’s revenue levels.
If you take GenAI out of the picture, Arista Networks would only have grown by 11.4 percent to $7.37 billion in 2025 and only by another 7.2 percent to $7.9 billion in 2026.
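The arithmetic behind that AI/non-AI split is simple enough to sketch out – again, our own math on the figures the company gave, not Arista's model:

```python
# Working the AI vs non-AI revenue split from the figures cited in
# the text (dollars in billions; our arithmetic on Arista's guidance).
ai_2025, total_2025 = 1.5, 8.87      # 2025: AI networks, total revenues
ai_2026, total_2026 = 2.75, 10.65    # 2026 forecast

ai_growth = (ai_2026 / ai_2025 - 1) * 100       # AI network growth into 2026
non_ai_2025 = total_2025 - ai_2025              # non-AI revenues, 2025
non_ai_2026 = total_2026 - ai_2026              # non-AI revenues, 2026
non_ai_growth = (non_ai_2026 / non_ai_2025 - 1) * 100

print(f"AI share of 2026 sales: {ai_2026 / total_2026:.1%}")
print(f"AI growth into 2026: {ai_growth:.1f} percent")
print(f"Non-AI: ${non_ai_2025:.2f}b in 2025, ${non_ai_2026:.2f}b in 2026, "
      f"up {non_ai_growth:.1f} percent")
```

The outputs line up with the figures above: AI networking at 25.8 percent of 2026 sales, growing 83.3 percent, against a non-AI business inching from $7.37 billion to $7.90 billion, up 7.2 percent.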
It may only take a year more for AI networking to be the largest product category based on our model, and we think this is precisely what will happen.
Ullal stopped short of making that prediction, but she did say this: “We find ourselves amid an undeniable and explosive AI megatrend. As AI models and tokens grow in size and complexity, Arista is driving network scale of AI XPUs, handling the power and performance. Basically, the tokens must translate to terawatts, teraflops, and terabits. We are experiencing a golden era of networking with an increasing TAM now of over $100 billion in forthcoming years. Our centers of data strategy, ranging from client to branch to campus to datacenter and now cloud and AI centers, is a very consistent mission for the company.”