In the artificial intelligence industry, a simple credo has long held: whoever has the most chips wins. Microsoft CEO Satya Nadella has now unexpectedly pushed back against this logic and pointed to a weakness that has so far received only marginal attention in the public debate: the constraint is not a lack of NVIDIA GPUs or AI accelerators, but a lack of energy, space and infrastructure to run them in the first place. In a conversation on the BG2 podcast, Nadella says openly that Microsoft's current problem is not procuring AI chips but putting them into operation. The warehouses are full, but there is a shortage of “warm shells”, i.e. data center buildings already fitted out with sufficient power supply and cooling, ready for racks to move in.

This statement is far more than a side note; it calls into question the current dynamics of the AI industry. While NVIDIA CEO Jensen Huang continues to claim that there will be no oversupply of computing power for the next two to three years, Nadella clearly disagrees: there may be no “compute glut” in the classic sense, but there is very much an energy crunch. No power means no inference; no cooling means no high-performance AI. In plain language: Microsoft can buy GPUs, but it cannot use them. And what good are chips that gather dust on the shelf?
This bottleneck is no trifle; it is a strategic turning point. The performance gains of recent GPU generations, from NVIDIA Ampere to Hopper and Blackwell, come with steeply rising power requirements. According to internal estimates, rack power requirements for the upcoming Kyber systems will increase tenfold to a hundredfold compared to Ampere. This is not a linear problem but an infrastructural one: the world's power grids, cooling systems and data centers are simply not prepared for this growth. It is no longer a question of individual server rooms, but of industrial mega-projects with power demands on the scale of an entire city.
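To make the order of magnitude tangible, here is a minimal back-of-envelope sketch in Python. The per-accelerator wattages, rack densities and overhead factors are rough ballpark assumptions (the Kyber row in particular is speculative), not vendor-confirmed specifications:

```python
# Rough rack power comparison across NVIDIA generations.
# All numbers are ballpark assumptions for illustration only.

RACKS = {
    # name: (accelerators per rack, approx. watts each, cooling/power overhead)
    "Ampere (A100, air-cooled)":  (32,  400, 1.5),   # ~4 DGX A100 nodes per rack
    "Hopper (H100)":              (32,  700, 1.5),
    "Blackwell (GB200 NVL72)":    (72, 1200, 1.3),   # liquid-cooled, far denser
    "Kyber (Rubin era, rumored)": (576, 2000, 1.2),  # speculative figures
}

baseline_kw = None
for name, (gpus, watts, overhead) in RACKS.items():
    rack_kw = gpus * watts * overhead / 1000  # IT load plus facility overhead
    baseline_kw = baseline_kw or rack_kw      # first entry (Ampere) is the baseline
    print(f"{name:30s} ~{rack_kw:6.0f} kW per rack ({rack_kw / baseline_kw:5.1f}x Ampere)")
```

Under these assumptions, the jump from an Ampere-era rack (around 19 kW) to a rumored Kyber rack lands around 70x, comfortably inside the tenfold-to-hundredfold range cited above.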
The consequences for the industry are far-reaching. Hyperscalers such as Microsoft, Google and Amazon have invested massively in AI hardware in recent years, often speculatively, driven by the fear of being left behind. It is now becoming clear that the real limitation is not the hardware but its sustainable operation. And that means longer amortization periods, inefficient utilization, and delays in integrating AI into business solutions: a kind of “cold investment backlog”, expensive, risky and growth-inhibiting.

For NVIDIA, this development is a double-edged sword. On the one hand, business with AI chips such as the H100, GH200 or the upcoming Blackwell continues to flourish, at least on paper. On the other hand, it is becoming clear that raw performance is no longer enough. Anyone who wants to deliver today also has to think about system integration, energy efficiency and infrastructure solutions. It is no longer enough to have the fastest chip; you also have to be able to operate it economically.
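How much those operating economics hinge on utilization can be made concrete with a small sketch of the capital cost per productive GPU-hour. The purchase price and lifetime below are hypothetical round numbers, not Microsoft's actual figures:

```python
# Illustrative amortization math: what idle capacity does to GPU economics.
# Price and lifetime are hypothetical round numbers, not vendor figures.

def cost_per_gpu_hour(capex: float, useful_life_years: float,
                      utilization: float) -> float:
    """Capital cost per productive GPU-hour at a given utilization."""
    total_hours = useful_life_years * 365 * 24
    return capex / (total_hours * utilization)

CAPEX = 30_000.0  # assumed all-in cost of one high-end accelerator, in USD

for util in (0.9, 0.5, 0.2):
    print(f"utilization {util:.0%}: ${cost_per_gpu_hour(CAPEX, 5.0, util):.2f} per GPU-hour")
```

A chip that sits in a warehouse waiting for power and cooling runs at a utilization near zero, so its effective hourly cost explodes; that is precisely the “cold investment backlog” described above.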
There is also a regulatory component. Data centers are increasingly coming under scrutiny from authorities, citizens' initiatives and energy suppliers. In countries such as Ireland and the Netherlands, moratoriums on new data centers have already been imposed. And this at a time when AI is supposedly on the verge of the “next big leap”. If the power is lacking, or is no longer granted politically, the leap turns into a stumble. Nadella's statements are therefore a warning signal. The race for the best AI systems is not over, but it is shifting: it is no longer just about teraflops and tensor cores, but about power supply, heat dissipation and site planning. Anyone buying a GPU today has to ask: do I have the space, the energy and the money to run it? Or will it end up as an expensive symbol of missed infrastructure policy?
For developers, investors and decision-makers, one thing is clear: the era of unlimited scaling is over. The bottleneck of the future is not chips but electricity. Those who do not rethink will soon be left in the dark, despite full warehouses.
Source: BG2 Pod