During a summer blackout when I was a kid, a neighbor ran an orange extension cord across the street so our freezer wouldn’t thaw. It looked absurd: this thin line humming with borrowed power, keeping the lasagna alive. But it worked. In a pinch, you build your own grid.
OpenAI is doing the grown-up version. Not with cords, but contracts. They’re stringing a private power grid across rival utilities, locking in long-term “compute offtake” so the lights of their AI never flicker.
Look at the map they’ve drawn. They pried open their exclusivity with Microsoft, won the right to buy from any cloud, and immediately signed a seven-year, $38 billion deal with Amazon. Then came data center projects with Oracle, SoftBank, and sovereign partners in the Gulf—$500 billion through the Stargate Project. In parallel, they locked in chip supply with Nvidia, AMD, and Broadcom so the turbines behind the meter actually spin. None of this reads like a software roadmap. It reads like a utility prospectus.
For a decade, the cloud dictated terms. Everyone else took what they could get. Now the script flips. Hyperscalers become suppliers. The leading AI buyer aggregates their capacity. When OpenAI complained it couldn’t get enough compute from Microsoft alone, it wasn’t a feature request. It was a reliability concern.
The numbers tell you where the leverage moved. Last year, Amazon, Google, Meta, and Microsoft spent over $380 billion on infrastructure: more than Finland's entire GDP, from four companies in a single year. OpenAI, meanwhile, remains unprofitable. Yet they're committing $38 billion to Amazon over seven years, roughly $5.4 billion a year. That deal alone rivals Ford's entire market cap.
The traditional calculus would call this a bubble. The company with no profits dictating terms to the most valuable companies on earth. But that misreads what’s happening. OpenAI isn’t betting they’ll be profitable next quarter. They’re betting that guaranteed access to compute becomes the most valuable asset in technology. They’re securing supply before the shortage arrives.
This is what commodity markets look like when everyone realizes the same thing at once. In 2021, car manufacturers couldn't build vehicles because they didn't own chip fabrication; they got outbid by companies that did. Now imagine that dynamic with compute instead of semiconductors, where the stakes aren't empty dealer lots but whether your AI works at all.
The shift isn’t subtle. Strategy used to be about inventing a better model. Now it’s about financing a continent of capacity and keeping it fed. Risk used to be “does it work?” Now it’s “does it arrive on time?” Whoever aggregates demand across utilities starts to look less like a tenant and more like a grid operator.
Consider what that means for the next decade. The breakthrough that matters won’t necessarily be the cleverest algorithm. It will be who locked in supply at 2025 prices before the 2027 shortage. Who secured diversity so a single vendor’s outage doesn’t crater their service. Who convinced a sovereign wealth fund that compute infrastructure is as strategic as oil reserves.
In commodities, advantage compounds quietly. The steel mill that signed iron ore contracts before prices spiked doesn’t celebrate publicly. They just keep running while competitors idle. In AI, we’re approaching the same dynamic. The winners will be the ones who treated compute like the scarce resource it’s becoming, not like the abundant cloud capacity it used to be.
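The forward-contract dynamic is easy to make concrete. Here's a toy sketch, with entirely made-up numbers (the hourly rates, the annual compute need, and the shape of the 2027 spike are all illustrative assumptions, not figures from any real deal), comparing a buyer who locked in a flat rate in 2025 against one paying spot prices through a shortage:

```python
# Toy model (all numbers invented for illustration): cost of a fixed
# annual compute need under a locked-in contract rate vs. spot prices.

def total_cost(gpu_hours_per_year, price_per_hour_by_year):
    """Sum yearly spend for a constant annual compute requirement."""
    return sum(gpu_hours_per_year * price for price in price_per_hour_by_year)

GPU_HOURS = 100_000_000  # hypothetical annual GPU-hours needed

# Buyer A signed a long-term contract in 2025 at a flat $2.00/hour.
locked = total_cost(GPU_HOURS, [2.00, 2.00, 2.00])  # 2025-2027

# Buyer B pays spot, which spikes when the hypothetical 2027 shortage hits.
spot = total_cost(GPU_HOURS, [2.00, 2.60, 4.50])

print(f"locked-in: ${locked / 1e6:,.0f}M, spot: ${spot / 1e6:,.0f}M")
# → locked-in: $600M, spot: $910M
```

The contract holder's edge is simply the spread, and it's paid for by committing capital years before the shortage is visible. That's the quiet compounding the steel-mill analogy points at.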
All this infrastructure raises a different question: what happens to the people? Recent analyses from the St. Louis Fed paint a more complex picture than the standard narrative. Occupations with higher AI exposure have experienced larger unemployment rate increases between 2022 and 2025. Computer and mathematical occupations, among the most AI-exposed at around 80%, saw some of the steepest unemployment rises. Meanwhile, blue-collar jobs and personal service roles, which have limited AI applicability, experienced relatively smaller increases.
But the infrastructure being built suggests different stakes than current conditions reveal. Some economists warn that if systems approach human-like general intelligence within years, wages and work could be jolted in ways our social safety nets weren’t designed to handle. The gap between today’s emerging patterns and tomorrow’s possible disruption is the same gap that existed between early subprime mortgage exposure and full-blown crisis. Not everyone sees the bridge until it’s crossed.
The path forward depends less on what AI can do than on whether we invest in reskilling at the same rate we pour concrete for data centers. Either way, the bottleneck won’t be ideas. It will be throughput.
The premium shifts from model architecture to infrastructure literacy. For engineers, understanding how to optimize for constrained compute becomes more valuable than squeezing another point of accuracy. For companies, strategic advantage flows to those who secure capacity now, even at uncomfortable cost. Waiting for prices to drop assumes supply will meet demand. History suggests otherwise.
The boring bets may matter most. Not the sexiest model, but the companies with the longest runway of guaranteed compute. Not the flashiest demo, but the partnerships that ensure it keeps running under load. And for nations, compute dependency becomes a geopolitical wedge. Countries that built domestic chip fabs after recent shortages are now asking the same questions about AI infrastructure. The grid matters more than the code running on it.
We spent a decade believing software eats the world because it scaled like thought. Marginal costs near zero. Distribution instant. Barriers low. The next decade looks different. AI scales like energy: constrained by physical infrastructure, governed by supply contracts, and bottlenecked by whoever controls the flow.
In that world, brilliance still matters. But the decisive move isn’t elegant. It’s securing the breaker box before the lights go out.