Stream Data Centers

Stuart Lawrence, VP of AI Innovation at Stream Data Centers (Source: Stream Data Centers)
As Artificial Intelligence (AI) and High-Performance Computing (HPC) workloads continue to evolve, deployment needs and design specifications are becoming increasingly difficult to predict. In a technological climate changing this fast — and in a data center development landscape where capacity must be leased years in advance — only a clairvoyant could precisely predict future data hall cooling configurations for emerging AI and traditional use cases. But with so much capital at stake in these decisions, data center customers and developers need to meet in a middle ground that provides accuracy where possible and flexibility where it's not.
This is why a hybrid cooling approach continues to be a priority for design and deployment. It might not be a new concept, but its ability to serve both traditional and high-performance workloads lends important wiggle room as requirements evolve within new and existing customer IT environments. Hybrid cooling's value proposition is that it's a pivot-and-scale solution, a "needs-must" approach — and it's also a smart financial decision.
Still, not all hybrid cooling solutions are created equal. As hyperscalers contract for future-ready capacity and developers look to deliver amid shifting or uncertain demands, the hybrid cooling options that offer the most flexibility (and require the least amount of lead time without sacrificing speed and continuity) will win out.
AI, Cooling and Why Things Are ‘Up in the Air’
Ultimately, the way data center cooling is provisioned is based on what hardware is being deployed and the kinds of applications that will run on it. The appetite for AI is driving demand for faster chips, greater computational power, and denser, more energy-intensive hardware. In fact, Nvidia CEO Jensen Huang said in his keynote at GTC 2025 that we might see racks reaching 600 kilowatts by 2027, and possibly even megawatt levels in the foreseeable future. For these advanced deployments, liquid cooling has become the standard thanks to its more efficient heat transfer properties. Still, there will always be a need for storage and commercial cloud applications that don't come with intense compute requirements, and those systems can run perfectly well on standard air-cooled technology.
The challenge comes when we consider that customers are often leasing capacity 18 months to two years out, yet they may only know what will go into that space around eight months ahead of time — and even then, the playing field keeps shifting as new designs, hardware options and applications arrive. This means customers are often forced to make an educated guess about their data halls' heat transfer equipment months, if not years, before they have concrete knowledge, just so development can stay on track.
Consider a customer asked to estimate their initial heat capture ratio — the split between heat rejected to liquid and heat rejected to air. Without precise data, they might anticipate a 70% liquid, 30% air cooling mix. But to make their planning more bulletproof, they might instead specify that the data hall be capable of handling 100% of either cooling method, covering any potential mix of modalities. So if the hall is designed for 8 megawatts, it should be able to cool 8 megawatts using air, liquid, or any combination in between. More often than not, data center providers solve this by installing separate systems: 8 megawatts of coolant distribution units (CDUs) for liquid cooling and 8 megawatts of fan walls for air cooling, for instance. This approach doubles the equipment footprint and cost, resulting in 16 megawatts of front-end heat capture infrastructure for an 8 megawatt need. It might be better than facing disruption down the line, but it's nowhere near as efficient in time, materials or results as it could be.
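To make that arithmetic concrete, here is a minimal Python sketch using the illustrative figures above. The variable names and the 70/30 forecast are assumptions for demonstration, not any provider's actual sizing methodology:

```python
# Illustrative sketch of the dual-provisioning arithmetic described
# above. All figures come from the article's example.

HALL_CAPACITY_MW = 8.0        # design load of the data hall
FORECAST_LIQUID_RATIO = 0.70  # customer's uncertain forecast: 70% liquid
FORECAST_AIR_RATIO = 0.30     # ...and 30% air

# What the forecast actually calls for:
forecast_liquid_mw = HALL_CAPACITY_MW * FORECAST_LIQUID_RATIO  # 5.6 MW of CDUs
forecast_air_mw = HALL_CAPACITY_MW * FORECAST_AIR_RATIO        # 2.4 MW of fan walls

# What "cover every possible mix" provisioning installs instead:
# full CDU capacity AND full fan-wall capacity.
dual_provisioned_mw = 2 * HALL_CAPACITY_MW  # 16 MW

overbuild_mw = dual_provisioned_mw - HALL_CAPACITY_MW  # 8 MW of excess gear

print(f"Forecast need:  {forecast_liquid_mw:.1f} MW liquid + {forecast_air_mw:.1f} MW air")
print(f"Dual provision: {dual_provisioned_mw:.0f} MW of heat capture equipment")
print(f"Overbuild:      {overbuild_mw:.0f} MW beyond the hall's 8 MW load")
```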
Conversely, consider what happens if a customer doesn't forecast accurately and doesn't cover all their bases in the initial design: they would be looking at stranded resources and sunk costs, and may be forced to deploy liquid-cooled IT in air-cooled data halls using additional apparatus. In these cases, the workaround is often an air-assisted liquid cooling (AALC) device: a side-car liquid-to-air heat exchanger that rejects heat from the direct liquid-cooled portion of the rack into the room's air cooling system. These workarounds are also expensive and inefficient. If the data hall was originally designed for 8 megawatts of air cooling, it must now dedicate roughly 500 kilowatts of that critical power just to run the side-car units. So instead of getting the full 8 megawatts for IT, the customer gets only 7.5 megawatts of usable capacity, with 500 kilowatts of expensive critical power infrastructure consumed by cooling overhead rather than IT.
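The penalty is easy to quantify. A quick sketch, again using the article's round numbers (the percentage is simple arithmetic, not a measured figure):

```python
# Sketch of the AALC retrofit penalty described above.

HALL_CRITICAL_POWER_MW = 8.0  # hall originally designed for 8 MW of air cooling
AALC_OVERHEAD_MW = 0.5        # ~500 kW to power the side-car heat exchangers

usable_it_mw = HALL_CRITICAL_POWER_MW - AALC_OVERHEAD_MW

print(f"Usable IT capacity: {usable_it_mw:.1f} MW of {HALL_CRITICAL_POWER_MW:.0f} MW provisioned")
print(f"Lost to side-cars:  {AALC_OVERHEAD_MW * 1000:.0f} kW "
      f"({AALC_OVERHEAD_MW / HALL_CRITICAL_POWER_MW:.1%} of critical power)")
```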
Fortunately, when hybrid cooling strategies are at their best, these kinds of guessing games and undesirable outcomes can be avoided. With dynamic hybrid cooling designs, and flexible planning on the supplier side to support these deployments, customers and developers are empowered to late-bind heat capture decisions without affecting the ability to source equipment and develop the data center.
The right approach — with a design engineered to factor uncertainty into the equation — lets customers and developers eliminate many of these inefficiencies and gain real peace of mind.
Building the Best Hybrid Cooling Solution
With a configurable, modular cooling solution, developers can work with customers' uncertainty instead of against it: a simple change at the front end can accommodate different heat capture ratios, eliminating unnecessary spend of both time and resources — and cooling inefficiencies to boot.
Furthermore, backing these configurable designs with rock-solid supply chain partnerships means flexibility can be prioritized long before the cooling setup is ever installed in the data hall. If procurement decisions can be deferred to accommodate uncertainty without sacrificing the ability to get the right equipment, hyperscalers can trust that they'll get the best results without having to gamble for them.
At Stream, the heart of this hybrid cooling approach is what we call our Server Thermal Unit (STU): a modular, integrated thermal management system that supports both air and liquid cooling through interchangeable modules. Each STU can support up to 800 kilowatts, and STU modules can easily be reconfigured: as thermal conditions evolve, air modules can be swapped for liquid modules, and unused modules can be removed and repurposed in other data halls, optimizing both costs and resource utilization.
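As a rough sketch of how this kind of modular sizing might work, the Python below splits a hall's load into liquid- and air-configured units of 800 kilowatts each. The 800 kW figure comes from the article; the sizing logic, function name, and ratios are illustrative assumptions, not Stream's actual design tooling:

```python
import math

STU_CAPACITY_KW = 800  # per the article: each STU supports up to 800 kW

def stu_mix(hall_load_kw: float, liquid_ratio: float) -> dict:
    """Split a hall's heat load into liquid- and air-configured STUs."""
    liquid_kw = hall_load_kw * liquid_ratio
    air_kw = hall_load_kw - liquid_kw
    return {
        "liquid_stus": math.ceil(liquid_kw / STU_CAPACITY_KW),
        "air_stus": math.ceil(air_kw / STU_CAPACITY_KW),
    }

# An 8 MW hall at the earlier 70/30 forecast, then re-run after a
# late-stage shift to 90/10: only the module mix changes, not the
# hall design or the total installed capacity.
print(stu_mix(8_000, 0.70))  # {'liquid_stus': 7, 'air_stus': 3}
print(stu_mix(8_000, 0.90))  # {'liquid_stus': 9, 'air_stus': 1}
```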
Stream’s STU hybrid cooling system delivers vital adaptability, but smart operators know flexibility at the business level is just as important as flexibility at the mechanical level. That’s why we adapt to customers’ own solutions too. For instance, if customers have an in-row or in-rack direct liquid cooling (DLC) solution and want Stream to only deploy a specific amount of air, we can dial that in — and assist with the installation of their chosen DLC equipment.
Still, for customers who want to take advantage of our proprietary offering, the STU's hybrid cooling system can adjust to diverse or changing workloads, supporting both air-cooled and liquid-cooled IT equipment at configurable heat capture ratios, enabling even late-stage adjustments and accommodating ratios that vary between data halls. The design allows data halls to be provisioned for up to 100% air or 100% liquid cooling, or any combination in between, and its modularity ensures efficient heat management even when both air-cooled and liquid-cooled systems are deployed simultaneously within the same hall.
This modular design also reduces costs for both the data center operator and its customers: it cuts upfront capital expenditure significantly by eliminating the need to install 100% of both air and liquid cooling infrastructure from the start, letting customers defer spend until actual demand is known. The same reconfigurability enables faster deployment and a lower total cost of ownership, since STUs can be easily adjusted to match evolving cooling ratios.
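A back-of-the-envelope comparison shows where the savings come from. Assuming a normalized unit cost per megawatt of front-end heat capture equipment (a placeholder figure, purely for illustration), the dual-provisioned hall from earlier installs twice the capacity of a modular one:

```python
HALL_MW = 8.0
COST_PER_MW = 1.0  # normalized placeholder; real equipment costs vary widely

# Dual provisioning: 100% air AND 100% liquid installed up front.
dual_capex = 2 * HALL_MW * COST_PER_MW

# Modular provisioning: install only enough modules to cover the hall,
# in whatever air/liquid mix the customer currently needs.
modular_capex = HALL_MW * COST_PER_MW

print(f"Dual provisioning:    {dual_capex:.0f} cost units up front")
print(f"Modular provisioning: {modular_capex:.0f} cost units up front")
print(f"Deferred or avoided:  {dual_capex - modular_capex:.0f} cost units")
```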
Not to mention, Stream’s vertically integrated approach offers further cost advantages by consolidating components and services that would otherwise require coordination across multiple manufacturers. On the supplier side, the equipment needs are such that we can late-bind our procurement decisions to give customers the time they need to get more accurate forecasts.
Innovation in data center cooling technologies never ceases, which means the world's largest data center consumers are trying to hit a moving target from miles away — searching for the bullseye across a lot of time and space. Since none of us can see the future, the best alternative is to build a solution that lets customers take their shot when they're ready, without sacrificing time to market or the quality of their results. Hybrid cooling is great for supporting change — it solves the flexibility part of the equation. But only the most configurable options truly solve for accuracy in the face of change.
By making change easy when it's needed, the strain of hyperscale uncertainty can be dramatically reduced: developers don't need certainty in order to deliver the best results, and customers can trust they're getting the right build on the right timeline, even if they're not yet sure what they'll need.