Note: this report was modified from its original version to fit this Substack. For the full formatted version, including footnotes and additional graphics, please see
“A modern industrial system is the material foundation of a modern country”
Xi Jinping
As you read this report, the screen before you illuminates pixels driven by code, streamed from a server housed in a data center, where meticulously engineered infrastructure ensures the delivery of this digital content to your device to be viewed and absorbed by your occipital lobe. While the digital world is often perceived to contain limitless abundance, it is in fact rooted in physical systems powered by electrons.
As our digital world continues to expand exponentially, it is crucial to understand the physical systems and supply chains that underlie it. The focus on building new infrastructure to meet the demands of digital growth is intense (and overdue) - Jensen Huang stated in his Q2 2025 remarks, “We see a $3 trillion to $4 trillion AI infrastructure spend… by the end of the decade,” up from $600bn today,1 while Morgan Stanley forecasts $2.9tn in domestic datacenter spend through 2028.2 These numbers are so enormous you have to put them in perspective - that annualizes to roughly $900bn of capex on datacenters per year by 2028, compared to the entire S&P 500’s 2024 capex of $950bn. Indeed, this infrastructure build dwarfs prior domestic infrastructure investments by orders of magnitude.
To those sitting on the sidelines: sad! It’s a marvel to contemplate how far the industry has come - to think that within the last decade a large build involved 5-10 megawatts, and we now see headlines about gigawatt+ clusters in development almost constantly. Over the past year, we have written about the domestic datacenter and grid infrastructure build from the angles we’re focused on and investing in at Crucible - the power deficit and the urgent need for nuclear fuel, the hurdles facing the power grid and the role of software, and the opportunities for crypto to transform the energy-to-compute value chain.
And yet, while we spend our days speaking with datacenter developers, operators and compute consumers, we still receive constant questions from landowners and capital allocators alike on how the very crux of this buildout works. As we at Crucible advise on select datacenter builds alongside our investing practice, we figured there’s no better time than now to put pen to paper - so here’s the guide to datacenter development that you never knew you needed.
Let’s start with the raw materials: land, power, water, and fiber.
Land: You’re gonna need space, but not just anywhere! Ground stability, wind, heat, and proximity to infrastructure (e.g. fiber) are all important. Space for future expansions is even better. Greenfield sites require no demolition but need basic infrastructure such as perimeter security, roadway access, and foundational elements. You’ll need to permit your site with a municipality or city, depending on zoning. The amount of space you’ll need will depend on a variety of factors in your data center design.
Power: With the latest rack-scale systems commanding 600 kW/rack, reliable baseload power with redundancies is essential. As you know from our prior reports, the domestic grid’s power deficit is severe, to the tune of a 60 GW shortfall by 2030 as forecasted by Morgan Stanley.3 Don’t want to spend a year securing grid interconnection and negotiating with utilities? You may have some options: various iterations of behind-the-meter power, off-grid, or even newer instances of “be your own utility” - all of which we’ll get to in a bit.
Water: An on-premise water reserve is essential to serve the cooling systems that dissipate the heat generated by servers. Water availability ensures efficient temperature control, reduces energy costs, and prevents hardware failures in increasingly high-density datacenters. A 100 MW datacenter typically requires 2 to 4 million liters/day for direct cooling, with indirect water use adding 10 to 50 million liters/day depending on the energy mix.4
Fiber: High-speed fiber optic networks are crucial for low-latency data transmission, connecting the datacenter to the internet’s backbone and enabling seamless communication with users and other facilities. Dark fiber will be an increasingly important solution as lit fiber optimizes for cost rather than performance.
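The resource figures in this section can be roughed out programmatically. Here is a sketch that linearly scales the report’s 100 MW water numbers to other capacities - linear scaling is our simplifying assumption, as real usage depends on cooling design and local climate:

```python
def water_demand_liters_per_day(capacity_mw,
                                direct_per_100mw=(2e6, 4e6),
                                indirect_per_100mw=(10e6, 50e6)):
    """Scale the report's 100 MW water figures linearly to a given capacity.

    Defaults are the report's ranges: 2-4M liters/day direct cooling and
    10-50M liters/day indirect use per 100 MW. Linear scaling is a
    simplifying assumption, not a claim from the report.
    """
    scale = capacity_mw / 100.0
    direct = tuple(v * scale for v in direct_per_100mw)
    indirect = tuple(v * scale for v in indirect_per_100mw)
    return {"direct": direct, "indirect": indirect}

# A 250 MW campus would need roughly 5-10M liters/day of direct cooling water.
print(water_demand_liters_per_day(250)["direct"])  # (5000000.0, 10000000.0)
```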
With the necessities outlined, we’ll get into the nitty gritty and discuss how to procure and develop these necessities into their optimal states, processes we’ll categorize into pre-development and development stages of the site build. Note this report covers the *bones* of the datacenter but not the *guts* (GPU procurement and monetization, constructing rack scale systems, PDUs, cooling systems, software tools, and more). We’ll cover these topics in subsequent reports. We love reports.
During pre-development, you’ll need to permit and provision feasibility and load studies on site & ensure you have robust fiber connectivity.
Securing permits is your first critical step. No permits, no datacenter, no flops for you. Your governing bodies need to ensure compliance with zoning laws, environmental regulations, and safety standards - best to start seeking these approvals before all else.
Local Zoning and Land Use Permits: City and county planning boards assess zoning and land use. Takes 3-12 months depending on jurisdiction and community opposition; in some cases, up to 36 months for complex projects.
Environmental Permits: Assessments for air, water, and noise impacts; air permitting challenges can add further delays. State environmental departments and the EPA review EIAs. Takes 6-18 months.
Generally speaking, certain areas of the country suffer from NIMBY (not in my backyard) mentality and thus create unnecessary frictions in the permitting and zoning process. According to Data Center Watch, $64 billion worth of projects were blocked across 28 states due to opposition between May 2024 and March 2025.5
While you secure permits, you’ll concurrently submit a load study to the governing utility in your jurisdiction and begin speccing out your procurement options. A load study is an analytical report created by your friendly regional ISO/RTO (independent system operator / regional transmission organization) to predict power needs and ensure the system can meet demand safely and reliably under a variety of operating conditions. The grid is a complex, dynamic beast and the load study ensures that infrastructure planning, grid reliability, and regulatory compliance are in place.
This information is crucial for ensuring that electrical panels are not overloaded, identifying potential problems before they cause outages, and optimizing energy usage. Load studies are also used to properly size electrical equipment, such as transformers and generators, and to plan for future capacity needs. In advance of submitting your load study to the utility, you can do your homework to understand how stressed the grid in your surrounding region is - how many substations are within range, and on what voltage transmission lines? How much transmission capacity appears to be deployed out of the nearby substations? If the substations look stressed, you can game the utility in your front-of-meter load request by letting them know you’ll build your own substation for your load (but you’ll need to overcome a steep transformer bottleneck to do so - more on this at the end of the section, and on how Giga, a Texas-based team, is filling the bottleneck).
While the above process covers front-of-meter requests for power, note that procurement comes in two forms: front-of-meter (FOM, grid-supplied) or behind-the-meter (BTM, on-site generation). With FOM requests taking 6 months to 1 year - and oftentimes more - for utility approvals depending on the circumstances, BTM solutions are increasingly attractive to bypass interconnect delays. These require more regulatory clarity in many states, and come with the tradeoff of isolating your site from the grid, limiting your ability to curtail or tap grid resources as needed.
Regardless of the route you take to power, you’ll want to submit your load request to ensure you have the option to interact with the utility and grid at some point in the future, critical for hyperscale tier tenants who will want both hedged inputs and optionality to sell power back to the grid.
Timelines
Load Study: 1-3 months, conducted by engineering firms or utilities.
FOM Load Request: 6-24 months for <75 MW; 2-5 years for >75 MW due to grid capacity constraints and interconnection queues (see: 411,600 MW pending in ERCOT as of April 2025)
BTM Power Development: 2-5 years depending on power source, including permitting, equipment procurement, and construction.
Process: Request power from the utility, interfacing with regional grid operators (e.g., PJM, ERCOT). Large loads (>75 MW) face delays due to interconnection queues (411,600 MW pending in ERCOT as of April 2025).6
Pros: Lower upfront capital costs as utilities handle infrastructure (substations and transmission, unless you bring your own substation for faster interconnection). Access to existing grid resources.
**Cons:** Long lead times (up to 5 years for large loads). Exposure to grid price volatility and reliability issues. Limited control over decarbonization goals for concerned parties like hyperscalers. Likely higher power prices than BTM.
Notable Deals:
Microsoft’s Texas PPAs: Signed two 15-year Power Purchase Agreements (PPAs) with RWE for 446 MW from wind projects in Texas.7
X Supercluster: Relies on FOM power from the Tennessee Valley Authority (TVA), with 1.5 GW secured through a multi-year PPA, leveraging TVA’s nuclear and hydro assets; supplemented with on-site gas via VoltaGrid.8
BTM power involves bringing your own power supply to colocate with your datacenter. The tradeoff of pursuing BTM vs. FOM concerns the time and cost to build your power source vs. the time to interconnect and contract a long-term Power Purchase Agreement (PPA) from your governing utility or Independent Power Producer (IPP). Every Independent System Operator (ISO) prices power differently, so there isn’t a straightforward, blanket answer to determine what makes sense. Because of course!
It’s worth noting that location matters. To name a few examples, Ohio and Texas are BTM-friendly states; Texas recently passed SB6 to clarify how to bring BTM online in ERCOT,9 while Virginia is also emerging as BTM-friendly with Green Energy Partners’ 641-acre SMR project.10 Here’s a breakdown of the primary power sources you’d consider when bringing your own power:
Four dimensional chess, the data center edition.
Process: Build on-site gas-fired power plants, requiring permits from state environmental agencies and turbine procurement (e.g., GE Vernova’s LM2500XPRESS). Lead times for turbines are 2 to 5 years due to supply chain constraints. Fuel supply contracts and infrastructure for gas delivery are also critical. EQT Corporation is the largest supplier of natural gas in the US, vertically integrated with midstream pipelines to deliver natty directly to various endpoints.
Cost per kW: Approximately $722-$1,677/kW for construction, with combined-cycle plants (CCGTs) pricing at $722/kW and internal combustion engines (ICEs) at the upper bound of $1,677/kW.11 CCGTs come at a lower cost per kW and higher thermal efficiency (reducing fuel consumption), fit for large baseload power demand - while ICEs suffer from lower thermal efficiency but can be built in modular fashion for smaller datacenters. Operating costs range from $26-$50/MWh depending on fuel prices.12
Acreage required for 100 MW: Approximately 5-10 acres, depending on the plant design (e.g., simple-cycle vs. combined-cycle) and auxiliary infrastructure like fuel storage and cooling systems.13 Compact gas turbines require minimal land, but safety setbacks and noise mitigation may increase the footprint.
Pros: Faster deployment than grid upgrades (2-5 years), cost predictability, reliable baseload power, independence from grid constraints.
Cons: Permitting challenges and community opposition due to environmental impact. Turbine backlog can delay deployment.
Core Suppliers:
GE Vernova: Provides LM2500XPRESS gas turbines widely used for data center power.
Caterpillar: Supplies gas-fired generators for backup and primary power.
Cummins: Offers high-efficiency gas gensets for industrial applications.
Notable Deals:
Stargate (Abilene, Texas): Permitted 360 MW of gas generation in 2025 with plans for 4,500 MW additional capacity.14
Sailfish (Tolar, Texas): Proposed a 5,000 MW data center cluster with on-site gas plants, bypassing ERCOT grid delays.15
X Supercluster (Memphis, Tennessee): Uses natural gas gensets as backup, with 500 MW capacity for peak demand and hybrid FOM/BTM strategies; initially deployed up to 35 turbines generating 422 MW, though later reduced amid controversy.16
Meta Socrates Station (Ohio, 2025): 200-MW gas-fired plant powering adjacent data center off-grid.17
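Putting the gas figures above together, here is a back-of-envelope sketch of on-site gas economics. The capex and opex inputs are the report’s ranges ($722/kW CCGT to $1,677/kW ICE; $26-$50/MWh operating cost); the 90% capacity factor is our illustrative assumption, not a figure from the report:

```python
def gas_plant_cost(capacity_mw, capex_per_kw, opex_per_mwh,
                   capacity_factor=0.9):
    """Rough capex and annual operating cost for an on-site gas plant.

    capacity_factor=0.9 is an illustrative assumption for a baseload
    datacenter plant; actual dispatch will vary.
    """
    capex = capacity_mw * 1_000 * capex_per_kw          # kW times $/kW
    annual_mwh = capacity_mw * 8_760 * capacity_factor  # 8,760 hours/year
    annual_opex = annual_mwh * opex_per_mwh
    return capex, annual_opex

# 100 MW CCGT at the report's low-end figures:
capex, opex = gas_plant_cost(100, capex_per_kw=722, opex_per_mwh=26)
print(f"${capex/1e6:.0f}M capex, ${opex/1e6:.1f}M/yr opex")
# -> $72M capex, $20.5M/yr opex
```

The same function brackets the ICE upper bound: `gas_plant_cost(100, 1677, 50)` gives roughly $168M of capex, illustrating why CCGTs fit large baseload demand while modular ICEs suit smaller sites.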
Process: Contract with an SMR developer for on-site nuclear power. SMRs are expected to enter commercial production post-2027, requiring regulatory approvals from the Nuclear Regulatory Commission (NRC) and extensive safety testing in the meantime.
Cost per kW: Estimated at $4,000–$6,000/kW for initial deployment,18 with high upfront capital costs but lower operating costs of $33–$45/MWh.19 Upfront capital expenditure is expected to decrease as technology matures.
Acreage for 100 MW: Approximately 10-20 acres,20 including reactor units, cooling systems, safety exclusion zones, and auxiliary facilities. SMRs are compact compared to traditional nuclear plants, but regulatory requirements for setbacks increase land use.
Pros: Carbon-free, reliable baseload power with high uptime. Scalable (20-300 MW per unit). Potential for long-term cost savings on plant life extensions and low opex.
Cons: High initial costs and complex regulatory hurdles. Technology not commercially available until late 2020s.
Core Suppliers:
Oklo: Develops fast reactors for 15-50 MW SMRs, targeting data center applications.
NuScale Power: Offers 77 MW modules, scalable via multi-unit deployments.
X-energy: Provides high-temperature gas-cooled reactors for large-scale projects.
Kairos Power: Focuses on fluoride salt-cooled reactors for modular deployment.
Notable Deals:
Equinix-Oklo (2024): $25M prepayment for 500 MW of SMR power by 2030; expanded partnerships with Vertiv and Liberty in 2025.21
AWS-X-energy (2024): Investment for 5 GW of SMR projects by 2039.22
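Given the module sizes above (Oklo at 15-50 MW, NuScale at 77 MW), sizing a multi-unit SMR deployment for a given load is a ceiling division. The sketch below ignores redundancy - an N+1 design would add one more unit:

```python
import math

def smr_modules_needed(load_mw, module_mw):
    """Number of SMR modules required to cover a load, rounding up.

    Module sizes from the report: Oklo targets 15-50 MW units,
    NuScale offers 77 MW modules. No redundancy margin included.
    """
    return math.ceil(load_mw / module_mw)

# A 300 MW campus on NuScale-class 77 MW modules:
print(smr_modules_needed(300, 77))  # 4
```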
**Process:** Deploy on-site solar photovoltaic (PV) panels, wind turbines, and battery energy storage systems (BESS). Solar is cheap but requires land or rooftop space, with permitting for grid interconnection or off-grid setups. Wind turbines need sufficient land and wind resources, often facing zoning challenges. Batteries store excess energy to ensure 24/7 power as a backup to solar and wind, requiring integration with energy management systems. Lead times for solar and batteries span 1-3 years,23 while wind can take 3-5 years due to turbine supply and permitting.
Cost per kW:
Solar PV: $1,529-$1,788/kW for construction,24 with levelized costs of $24-$39/MWh due to no fuel costs.25
Wind (Onshore): $1,428-$1,806/kW for construction,26 with levelized costs of $33-$46/MWh.27
Batteries: $150-$350/kWh for storage capacity,28 translating to $600–$1,400/kW for a 4-hour system, with levelized costs of $104/MWh for 24/7 solar in sunny regions like Las Vegas.
Pros: Carbon-free. Solar and wind have low operating costs (no fuel). Batteries enable 24/7 power, mitigating intermittency. Declining costs make renewables competitive with fossil fuels. BESS-as-a-service models reduce upfront costs.
Cons: Intermittency requires oversized solar/wind capacity and large battery systems, increasing land and capital needs. Permitting and zoning for wind can be complex. Battery costs remain high for long-duration storage, and supply chain constraints persist.
Core Suppliers:
Solar: First Solar (Cadmium telluride panels), JinkoSolar (high-efficiency PV modules), Canadian Solar (integrated solar + storage solutions).
Wind: Vestas (onshore turbines), Siemens Gamesa (high-capacity turbines), GE Renewable Energy (Haliade-X for large-scale projects).
Batteries: Tesla (Megapack for grid-scale storage), Enersys (Synova Sync charger and NexSys BESS), Natron Energy (sodium-ion batteries for high-density applications).
Notable Deals:
Google-Intersect Power (2024): $20B project co-locating data centers with solar, wind, and battery storage for 24/7 renewable power.29
Stargate (2025): Includes 360 MW of solar power alongside gas, with battery storage for reliability.30
Energy Vault-RackScale Data Centers (2024): Deployed 2 GW battery storage system to support large-scale data center sites with solar and wind integration.31
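The battery conversion cited above - from storage cost in $/kWh to power-basis cost in $/kW - is just multiplication by storage duration:

```python
def battery_cost_per_kw(cost_per_kwh, duration_hours):
    """Convert storage cost ($/kWh) to power-basis cost ($/kW).

    $/kW = $/kWh * hours of storage. The report's $150-$350/kWh
    maps to $600-$1,400/kW for a 4-hour system.
    """
    return cost_per_kwh * duration_hours

print(battery_cost_per_kw(150, 4))  # 600
print(battery_cost_per_kw(350, 4))  # 1400
```

This is also why long-duration storage stays expensive: an 8-hour system at the same $/kWh doubles the $/kW figure.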
A critical note on redundancy. Data center power redundancy refers to the implementation of backup systems and multiple power paths to ensure uninterrupted operation during utility failures, maintenance, or faults. This includes components like uninterruptible power supplies (UPS), generators, and dual utility feeds, minimizing downtime for critical IT infrastructure.
The redundancy rating system colloquially referenced among industry players (you probably hear Tier 3 and Tier 4 thrown around a lot) is the Uptime Institute’s Tier Classification, which categorizes data centers into four levels based on infrastructure resilience, including power systems:
Tier I: Basic capacity with no redundancy; single power path susceptible to outages, offering about 99.671% annual uptime (less than 28.8 hours of downtime per year).
Tier II: Partial redundancy with backup components (e.g., N+1 for generators and UPS), allowing some maintenance without shutdown; around 99.741% uptime (less than 22 hours downtime annually).
Tier III: Concurrently maintainable with multiple independent power distribution paths (N+1 overall), supporting planned maintenance without disruption; 99.982% uptime (less than 1.6 hours downtime per year).
Tier IV: Fault-tolerant with fully redundant systems (2N or greater), including dual-powered equipment and automatic failover for any single failure; 99.995% uptime (less than 0.4 hours downtime annually).
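The tier uptime percentages map directly to allowed downtime: downtime per year = (1 − uptime) × 8,760 hours. A quick sketch reproducing the figures above:

```python
def annual_downtime_hours(uptime_pct, hours_per_year=8760):
    """Convert an uptime percentage to allowed downtime hours per year."""
    return (1 - uptime_pct / 100) * hours_per_year

# Uptime Institute tier uptimes from the classification above:
for tier, pct in [("I", 99.671), ("II", 99.741),
                  ("III", 99.982), ("IV", 99.995)]:
    print(f"Tier {tier}: {annual_downtime_hours(pct):.1f} h/yr downtime")
```

Tier III’s 99.982% works out to about 1.6 hours per year, and Tier IV’s 99.995% to about 0.4 hours, matching the figures cited above.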
Yes, Tier IV is superior to Tier I, we acknowledge that’s confusing. We don’t make the rules.
The Uptime Institute determines and certifies these ratings through a rigorous process involving design reviews, on-site assessments, and operational sustainability evaluations. As Steve Zissou once said, “just do what you gotta do to cover your ass.”
So we covered permitting and powering your site so that you can transform electrons into flops, but how will you export these flops from your GPUs to the rest of the world? Fiber! WE ARE GOING TO NEED MORE FIBERRRRRRRRRRRR.
Screening for fiber optic connectivity involves assessing the availability, capacity, latency, and proximity of fiber infrastructure at your proposed data center site. Data centers require high-bandwidth, low-latency connections for reliable operations, often involving multiple carriers for redundancy. In the US, you can use a combination of public maps, commercial tools, and direct inquiries to source fiber connectivity.
Use the FCC National Broadband Map: Start with the free FCC National Broadband Map32 to check broadband availability, including fiber, at a specific address or location. Enter your site’s address to view reported internet services from ISPs, including technology types (e.g., fiber), maximum speeds, and providers. It shows fixed broadband data submitted biannually by ISPs, helping identify if fiber is deployable or already present. Note limitations: It’s based on self-reported data, may not detail dark fiber (unused but installed fiber), and focuses more on residential/commercial broadband than hyperscale data center needs - cross-verify for enterprise-grade capacity.
**Consult Aggregator Platforms and Fiber Maps:** Tools like FiberLocator33 provide detailed maps of carrier networks, lit buildings (buildings with active fiber), data centers, and on-net locations across hundreds of carriers. You can search by address or ZIP code to see proximity to fiber routes, available bandwidth, and carriers. Other options include Rextag,34 which covers over 500,000 miles of fiber data and 3,000 data centers, or DataCenterMap.com, which lists 3,939 US data centers with connectivity details. These are subscription-based but offer free trials for initial screening.
**Check Provider-Specific Interactive Maps:** Major fiber providers offer online tools to verify coverage. For example:
Lumen (formerly CenturyLink) has network maps showing fiber routes, data centers, and edge connectivity.
US Signal provides an interactive map for fiber availability and data center interconnections.
Other key providers include AT&T, Verizon, Zayo, and Lightpath - visit their sites and use “fiber availability” search tools by entering your address. These often detail enterprise options like dark fiber leasing.
**Evaluate Site-Specific Factors:** Beyond maps, consider latency to internet exchange points (IXPs), subsea cable landings (e.g., in markets like Northern Virginia or Florida for global connectivity), and carrier diversity. Tools like PVcase35 or LandGate36 can provide fiber data overlays for site selection, including ownership and capacity. For data centers, prioritize “carrier-neutral” locations with multiple providers to avoid single points of failure.
**Contact Providers and Consultants:** If maps indicate potential, reach out directly to ISPs for a site survey or quote. For complex needs, hire consultants certified by the Fiber Optic Association (FOA) or telecom engineers to assess dark fiber options or perform field audits.
Top US fiber providers for data centers in 2025 include Lumen, AT&T, Verizon, Zayo, Crown Castle, and emerging players like Lightpath for metro areas. Markets like Northern Virginia, Dallas, and Silicon Valley have the densest fiber due to hyperscale demand.37
If screening shows no existing fiber, eat some Wheaties! Just kidding - building fiber involves extending infrastructure from the nearest point, which can be costly (on average approx $100k per mile, but dependent on terrain)38 and time-consuming (6 to 24 months). This is common for remote or greenfield data centers.
Assess Feasibility and Proximity: Use the screening tools above (e.g., FiberLocator or Rextag) to find the closest fiber route. If fiber is within 1-5 miles, extension is viable; beyond that, costs escalate. This is bad. Conduct a site survey to evaluate terrain, existing conduits/poles, and environmental impacts. Consider dark fiber: lease unused strands from providers and “light” them with your own equipment for control.
Partner with Carriers or Contractors: Don’t build alone! Collaborate with major providers (e.g., AT&T, Lumen, Zayo) for “fiber-to-the-premises” extensions. They handle construction for a fee or long-term contract. For custom builds, hire specialized contractors certified by the FOA. If in a rural area, explore federal grants via the FCC’s Broadband Equity, Access, and Deployment (BEAD) program.39
Obtain Permits and Rights-of-Way: Secure approvals from local governments, utilities, and the FCC if crossing public lands. This includes environmental assessments (e.g., NEPA compliance), zoning, and utility locates (call 811 before digging!).40 Aerial installations (using poles) are faster/cheaper; buried (trenching or directional boring) is more reliable but disruptive.
Design and Install the Infrastructure:
Design: Plan outside plant (route from source to site) and inside plant (within the data center). Use single-mode fiber for long distances/high speeds. Include redundancy (e.g., diverse routes).
Materials and Equipment: Procure fiber cables, splicers, connectors (e.g., LC/SC), transceivers, and testing tools (OTDR for verification). For data centers, ensure high-density setups with MPO connectors for scalability.
**Installation Methods:** Aerial (poles), buried (trenches), or micro-trenching for urban areas. Test for loss and certify post-install.
Timeline: Design (1-3 months), permits (3-6 months), build (3-12 months).
Maintenance and Activation: After build, light the fiber with optics (e.g., DWDM for multiplexing). Implement monitoring for faults. For ongoing operations, use managed services from providers.
Costs vary: $20-$100 per foot for cable, plus labor/permits. For a one-mile extension, expect up to $2 million in capex. Consider alternatives like microwave or satellite if fiber is impractical, but fiber is preferred for data centers due to bandwidth.
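At the quoted $20-$100 per foot, the cable-cost range for an extension works out as follows; labor and permitting, which push a one-mile build toward the $2M all-in figure, are excluded:

```python
def fiber_extension_cost(miles, cost_per_foot_low=20, cost_per_foot_high=100):
    """Cable-only cost range for a fiber extension at the report's
    $20-$100/foot figure. Excludes labor, permitting, and make-ready.
    """
    feet = miles * 5280  # feet per mile
    return feet * cost_per_foot_low, feet * cost_per_foot_high

low, high = fiber_extension_cost(1)
print(f"${low:,.0f} - ${high:,.0f}")  # $105,600 - $528,000
```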
Most organizations run on lit fiber which the above processes concern. Lit fiber is easy to access but shared, congested, and built for cost, not for performance. The real edge for high performance is in dark fiber: private, dedicated capacity that delivers low latency.
Crucible is an investor in DoubleZero, a protocol that aggregates dark and private fiber from global contributors and optimizes routes with hardware acceleration, delivering 30 to 70% lower latency than the public internet. The opportunity is exciting as dark fiber actually outweighs lit fiber as a percentage of installed fiber: data is scarce, but a 2007 FCC report suggests approximately 66% of installed fiber optic cable in the US was dark, totaling 48 million kilometers out of 73.4 million kilometers.41
DoubleZero eliminates the internet’s performance bottlenecks that hold back high-performance distributed systems by aggregating underutilized private links into a global, contributor-powered network. By mainnet launch, the network will operate 70+ dedicated fiber links, with many upgraded to deliver 10× more capacity. With coverage expanding from 8 to 26 cities across 16 countries, DoubleZero transforms fragmented dark fiber into a global, contributor-powered backbone purpose-built to increase bandwidth and reduce latency for the next era of high-performance networking.
“For the next decade, the real constraint for infrastructure builders won’t be compute, it will be bandwidth. Powering AI, blockchain, and other high-performance systems requires more than racks of servers; it requires a network that can actually keep up. At DoubleZero, we see bandwidth as the new foundation layer for scale, and we’re building the roads that let tomorrow’s infrastructure run at full speed.” Austin Federa, DoubleZero
Pre-development complete! Let’s BUILD - errr, contract a developer or construction partner to execute the build. This involves issuing an RFP (request for proposal) to qualified firms, evaluating bids based on cost, timeline, expertise, and alignment with your specifications (e.g., tier level, capacity, sustainability). The developer handles detailed design, permitting refinements, supply chain management, construction, and commissioning, often under an EPC (engineering, procurement, and construction) contract to ensure accountability. Timelines typically range from 12 to 24 months for traditional builds, but modular approaches can reduce this to 6 to 12 months. Key considerations include the developer’s track record in hyperscale vs. edge deployments, integration of AI-ready infrastructure, and adherence to Uptime Institute standards for redundancy.
Data center developers range from large-scale colocation providers offering end-to-end custom builds to specialized modular builders focused on prefabricated, scalable solutions. Large players excel in hyperscale projects with robust infrastructure, while modular firms prioritize speed, flexibility, and edge deployments.
The average cost of a powered shell - all of the beautiful bones discussed in this report - from a large developer is $8 to $12 million for a Tier 3 data center and $11 to $15 million for a Tier 4 data center. A handful of the best known legacy data center developers are showcased below:
These firms among many others (HPE, Schneider Electric) specialize in prefabricated modular data centers, assembled in factories and deployed on-site for faster, more flexible builds. They differ from large players by emphasizing edge and hybrid deployments, with containerized or rack-integrated designs.
AHEM. How are you going to pay for everything we just talked about? The financing landscape for datacenters is evolving rapidly - hyperscalers historically relied on robust free cash flow to self-fund capex and stood as the primary equity investors in their own sites. Cumulative capex has almost 10x’d since 2018, and the group is signaling continuous upward revisions in their capex commitments with each quarterly earnings cycle:
But, even while FCF is flowing - with cloud revenue growing 17-35% year-over-year across the above cohort as of Q2 2025 - the MAGs would be remiss not to take advantage of the capital markets hungry to finance builds, and are just now beginning to tap private credit markets in a meaningful way. Meta’s recently publicized $29 billion hybrid financing deal comprised of $26 billion in debt led by PIMCO and $3 billion in equity from Blue Owl Capital to support a massive project in Louisiana marked a pivotal turning point in hyperscalers turning to capital markets rather than paying out of FCF - in addition to marking the largest financing to date in the sector.42
It’s a secret to no one that investor appetite to participate in datacenters is rampant across the capital structure: global PE deals in data centers and relevant sectors have almost doubled over the last four years, climbing to $107.7 billion in the four years through Sept. 16, from $49.9 billion invested in the four years prior, according to PitchBook.43 Indeed, most PE firms now own datacenter developers.
Equity returns in datacenter development range from 12-19+% levered IRR. Moving up the capital structure, top private credit firms have stepped in as primary lenders, offering flexible debt solutions tailored to high-capex projects. CoreWeave, perhaps the poster child of leveraged datacenter development, offers a helpful glimpse into market financing terms as a publicly listed company:
The above is a generalization, as each loan has nuanced components such as collateral (H100s have been cited as collateral in earlier cases), draw/paydown schedule, covenants, and additional fees - but it illustrates financing rates between 9.25%-14.12% depending on the terms. Practically speaking, letters of intent from compute offtake partners (i.e. how you will monetize your datacenter) will enable you to raise equity and debt capital at the most favorable terms.
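To see what rates in the 9.25%-14.12% range mean in dollar terms, here is a standard amortizing-loan annuity sketch. The $500M principal and 5-year tenor are illustrative assumptions, not terms from any cited deal:

```python
def annual_debt_service(principal, annual_rate, years):
    """Level annual payment on fully amortizing debt (annuity formula):
    payment = P * r / (1 - (1 + r)^-n)
    """
    r = annual_rate
    return principal * r / (1 - (1 + r) ** -years)

# $500M over 5 years at the low and high ends of the quoted rate range:
for rate in (0.0925, 0.1412):
    pmt = annual_debt_service(500e6, rate, 5)
    print(f"{rate:.2%}: ${pmt/1e6:.1f}M/yr")
```

The spread between the two rates is worth roughly $17M per year on this hypothetical loan, which is why offtake letters of intent that compress your borrowing rate matter so much.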
Crucible partners with Hydra Host to place compute capacity for our sites in order to finance most effectively. Hydra is a top 10 NVIDIA Cloud Partner empowering investors and site operators to achieve outsized returns monetizing GPUs. This turnkey solution provides the equipment, software, customers, and financing needed to rapidly scale a GPU rental operation. This model combines three core services:
Hydra’s proprietary Brokkr GPU management platform
Hydra’s sales team, which monetizes equipment for over 50 data centers across the US and the world
Hydra’s cluster design team, which ensures that clusters deployed are in line with the most desirable customers’ demands
Hydra is the only way to harvest the economics taking place inside unicorn GPU clouds while enjoying the benefits of depreciation - particularly interesting post-OBBBA, given the bill’s bonus depreciation provisions.
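Why bonus depreciation moves the needle: expensing the full equipment cost in year one pulls the tax shield forward, which is worth more in present-value terms than spreading deductions over a recovery period. The figures below (equipment cost, tax rate, discount rate, and a straight-line comparison schedule) are assumptions for illustration only, not tax advice:

```python
# Hypothetical illustration of the bonus depreciation benefit on GPU
# equipment. Cost, tax rate, and discount rate are assumptions.

cost = 10.0   # $10M of GPU equipment
tax = 0.21    # assumed federal corporate tax rate
disc = 0.10   # assumed discount rate for present value

# 100% bonus depreciation: the entire deduction lands in year 1.
bonus_shield = cost * tax / (1 + disc)

# Straight-line over a 5-year recovery period, for comparison.
sl_shield = sum((cost / 5) * tax / (1 + disc) ** t for t in range(1, 6))

print(f"PV of tax shield, 100% bonus:    ${bonus_shield:.2f}M")
print(f"PV of tax shield, straight-line: ${sl_shield:.2f}M")
```

Under these assumptions the bonus-depreciation shield is worth roughly $0.3M more in present value on $10M of equipment - the same deduction, just accelerated.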
Hydra leaves nothing to chance. Every part of the platform, from cluster design to colocation to rental and eventual liquidation of equipment, exists to de-risk GPU deployments and maximize ROI. Meanwhile, our financing partners underwrite based on real-time usage data from our multi-channel distribution platform, not generic asset models. This means faster approvals, lower risk, and capital that matches the opportunity.
*“Hydra removes the barriers to what should be a simple, turnkey business model. Launching and running an AI factory is a complex, multi-phased project with countless moving parts. You need a trusted partner by your side who understands everything from architecture design to GPU asset monetization. We have the right blueprints and know-how to build and operate AI factories, making your investment as active or as passive as you desire.”* - Kai Golden, Director of Hydra Capital
If you made it this far, congrats and thank you. The above captures the steps you need to take to orchestrate the nuts and bolts of datacenter pre-development, development, and financing. There is an entire subsequent report to be written on GPU procurement, orchestration of rack-scale systems and cooling systems, power management, compute monetization, and datacenter infrastructure management (DCIM) software to optimize costs and output - stay tuned for more from us, and do reach out if you’d like to collaborate. In the meantime, Crucible will continue to actively invest in and finance companies in the sector, in tandem with developing two sites with focused landowners. Our door is always open for collaboration.
https://www.investing.com/news/transcripts/earnings-call-transcript-nvidia-q2-2025-strong-earnings-beat-drives-stock-uptick-93CH-4213615
Morgan Stanley. Credit Markets and the AI Financing Gap
Morgan Stanley. 2025 Outlook: Chasing Growth
https://iea.blob.core.windows.net/assets/34eac603-ecf1-464f-b813-2ecceb8f81c2/EnergyandAI.pdf
https://static1.squarespace.com/static/67819031da098341c45ac84a/t/6849bcfe640a951f79e00715/1749662975141/Data%2BCenter%2BWatch%2BReport%2B.pdf
https://www.ercot.com/files/docs/2025/05/15/ERCOT-Monthly-Operational-Overview-April-2025.pdf
https://www.rwe.com/en/press/rwe-ag/2024-05-23-rwe-signs-ppa-with-microsoft/
https://www.tomshardware.com/tech-industry/artificial-intelligence/elon-musks-new-worlds-fastest-ai-data-center-is-powered-by-massive-portable-power-generators-to-sidestep-electricity-supply-constraints
https://www.bakerbotts.com/thought-leadership/publications/2025/july/texas-senate-bill-6-understanding-the-impacts-to-large-loads-and-co-located-generation
https://www.datacenterdynamics.com/en/news/nuclear-powered-data-center-campus-in-surry-virginia-gets-rezoning-approval/
https://www.eia.gov/todayinenergy/detail.php?id=63485
https://iea-etsap.org/E-TechDS/PDF/E02-gas_fired_power-GS-AD-gct.pdf
https://www.ngsa.org/wp-content/uploads/sites/3/2020/08/Final-Land-Footprint-March-2017.pdf
https://www.datacenterdynamics.com/en/news/natural-gas-plant-planned-for-stargate-ai-data-center-campus-report/
https://www.datacenterdynamics.com/en/news/sailfish-plans-multi-gigawatt-data-center-park-outside-dfw-texas/
https://www.datacenterdynamics.com/en/news/xai-removes-some-of-controversial-gas-turbines-from-memphis-data-center/
https://www.powermag.com/new-200-mw-gas-fired-plant-in-ohio-will-power-meta-data-center/
https://small-modular-reactors.org/smr-cost-estimates/
https://www.nei.org/CorporateSite/media/filefolder/advanced/SMR-Start-Economic-Analysis-2021-(APPROVED-2021-03-22).pdf
https://www.energy.gov/sites/prod/files/2016/01/f28/SITINGTask4Report20140925_0.pdf
https://www.datacenterdynamics.com/en/news/equinix-signs-deal-to-procure-up-to-500mw-of-nuclear-power-from-oklo-smrs-makes-25m-pre-payment/
https://x-energy.com/media/news-releases/amazon-invests-in-x-energy-to-support-advanced-small-modular-nuclear-reactors-and-expand-carbon-free-power
https://sinovoltaics.com/energy-storage/storage/key-contractual-considerations-for-bess-procurement/
https://www.eia.gov/todayinenergy/detail.php?id=63485
https://www.pv-magazine.com/2023/04/14/average-solar-lcoe-increases-for-first-time-this-year/
https://www.eia.gov/todayinenergy/detail.php?id=63485
https://docs.nrel.gov/docs/fy25osti/91775.pdf
https://bslbatt.com/blogs/current-average-energy-storage-cost-2025/
https://www.utilitydive.com/news/google-intersect-power-co-located-energy-park-data-center-ferc/735198/
https://www.datacenterdynamics.com/en/news/solar-power-and-batteries-planned-for-openais-stargate-ai-data-center-campus-report/
https://www.utilitydive.com/news/energy-vault-rackscale-partner-data-center-battery-storage/735796/
https://broadbandmap.fcc.gov
https://www.fiberlocator.com
https://www.landgate.com
https://www.datacenterfrontier.com
https://fiberbroadband.org/resources/2024-fiber-deployment-cost-annual-report/
https://www.fcc.gov/broadband-equity-access-and-deployment-bead-program
https://811beforeyoudig.com/
FCC’s “Statistics of Communications Common Carriers” (SOCC), 2007
https://www.reuters.com/business/meta-taps-pimco-blue-owl-29-billion-data-center-expansion-project-source-says-2025-08-08/
https://pitchbook.com/news/articles/meet-the-10-most-active-data-center-investors