Today, Pennsylvania is landing some of the largest AI infrastructure commitments in the US. Recent projects illustrate the shift: (1/3) 🧵
⚆ The Homer City Energy Campus, redeveloping a former coal plant into a gas-powered datacenter hub with up to 4.5 GW of new generation
⚆ TECfusions’ Keystone Connect, a 1,395-acre campus designed to scale toward ~3 GW of IT capacity using a hybrid grid + on-site generation model
⚆ Multiple additional multi-hundred-MW and gigawatt-scale campuses announced across western and central Pennsylvania
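As a rough sanity check on how generation figures like these relate to IT capacity, here is a minimal back-of-envelope sketch (not from the thread); the PUE value is an illustrative assumption, not a number reported for these projects.

```python
# Back-of-envelope sketch: how on-site generation capacity maps to supportable
# IT load. PUE of 1.3 is an illustrative assumption, not a project figure.

def supportable_it_gw(generation_gw: float, pue: float = 1.3) -> float:
    """Facility power = IT power * PUE, so IT power = generation / PUE."""
    return generation_gw / pue

if __name__ == "__main__":
    gen_gw = 4.5  # e.g., Homer City's "up to 4.5 GW" of new generation
    print(f"{gen_gw} GW of generation supports roughly "
          f"{supportable_it_gw(gen_gw):.1f} GW of IT load at PUE 1.3")
```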
Pennsylvania offers a rare combination of: (2/3)
⚆ Abundant, low-cost energy, anchored by natural gas, nuclear, and legacy generation infrastructure
⚆ Brownfield megasites (retired coal and industrial plants) that already have transmission access, water rights, and industrial zoning
⚆ State-level coordination, where permitting, utility alignment, and local approvals are increasingly moving in parallel rather than sequentially
⚆ Geographic proximity to Northern Virginia without Northern Virginia’s congestion, pricing, or political friction
⚆ Execution certainty, as projects are anchored to power plants, substations, and real permits rather than speculative land options.
For datacenter developers, Pennsylvania removes the hardest constraint in AI infrastructure: delivering power at real scale, on real timelines. When a market can do that, momentum compounds quickly. Pennsylvania is no longer just an alternative; it is becoming one of the places where the next generation of AI infrastructure actually gets built. (3/3)
More from @SemiAnalysis_
Dec 23, 2025
If you want to power a datacenter off the grid, a gas turbine is the "obvious" choice. But it might not be the best option! Many developers select reciprocating engines for a reason. (1/4)🧵
A recip is more modular than a turbine, happier at partial loads, and simpler to maintain. You’re mostly changing lubricants, whereas a turbine requires no maintenance...until it needs a massive overhaul. (2/4)
More importantly, recips are less exotic technology. They rely less on rare alloys and critical minerals, and there are WAY more vendors to choose from, which shortens lead times and strengthens purchaser power. (3/4)
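A small illustrative sketch of the modularity point above: smaller generator blocks waste far less capacity when you carry an N+1 spare. The unit sizes and site load below are rough, hypothetical values, not figures from the thread.

```python
# Illustrative only: N+1 redundancy with small recip blocks vs. large turbine
# blocks. Unit sizes and the 300 MW site load are hypothetical assumptions.
import math

def n_plus_one(site_mw: float, unit_mw: float) -> tuple[int, float]:
    """Units needed to cover the load plus one spare, and the resulting spare MW."""
    base_units = math.ceil(site_mw / unit_mw)
    total_units = base_units + 1          # one redundant unit
    spare_mw = total_units * unit_mw - site_mw
    return total_units, spare_mw

site_mw = 300.0
for name, unit_mw in [("recip (~20 MW/unit)", 20.0), ("turbine (~250 MW/unit)", 250.0)]:
    units, spare = n_plus_one(site_mw, unit_mw)
    print(f"{name}: {units} units, {spare:.0f} MW of spare capacity for N+1")
```

At these assumed sizes, the recip fleet carries only ~20 MW of idle redundancy versus ~450 MW for the turbine blocks, which is one way the modularity advantage shows up economically.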
Dec 4, 2025
Massive IT load growth. A transforming electric grid. Five-year lead times for turbines. Why not build more of them?
Well, GE and Siemens have seen this story before. (1/8)🧵
Back in the ’90s, parts of the American electric grid were "deregulating." These reforms gave us commodity markets for electricity, aka ISOs and RTOs. INDEPENDENT POWER PRODUCERS (IPPs), often utilities from other states, could build and run their own power plants and make money on these new electricity markets. Their generator of choice? The COMBINED CYCLE GAS PLANT (CCGT), particularly the then-new F-CLASS. (2/8)
Then, in 1999, Mark Mills and Peter Huber released a report called "The Internet Begins with Coal," which claimed that rising electric loads from these hot new computer things would overwhelm the existing electric grid. They concluded that by 2020, 30-50% of the electric grid would go towards powering the digital economy. (3/8)
Dec 3, 2025
A semiconductor is a material whose electrical conductivity lies between that of a conductor and an insulator. To achieve this property, doping is applied to a silicon wafer to adjust its electrical characteristics. (1/7)🧵
Before the 1970s, doping was performed through thermal diffusion in high-temperature furnaces. Process steps:
⚆ Pre-deposition: An oxide-based dopant film is deposited on the wafer surface.
⚆ Oxidation: The dopant oxide is driven into the growing silicon dioxide layer.
⚆ Doped region formation: The doped area forms and reaches the desired concentration and depth.
⚆ Wet etching: The oxide layer is removed using a wet etching process. (2/7)
Driven by research related to atomic weapons, technologies involving high-energy ion beams began to develop, leading to the introduction of ion implantation machines in the mid to late 1970s. Ion implantation technology offers four major advantages:
⚆ Dopant concentration can be controlled by adjusting the ion beam current and exposure time.
⚆ Doping depth can be precisely controlled by tuning the ion energy.
⚆ The anisotropic characteristic of ion implantation makes it easier to precisely define doped regions.
⚆ The process can be performed at room temperature, unlike traditional diffusion, which requires high temperatures of 800–1000 °C. (3/7)
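The dose-control point in the first bullet follows from a simple relationship: for singly charged ions, implanted dose is beam current times time divided by charge and area. A minimal sketch, with purely illustrative numbers (1 mA beam, 10 s, 300 mm wafer):

```python
# Dose control in ion implantation: dose = (beam current * time) / (q * area),
# assuming singly charged ions. Numbers below are illustrative, not from the thread.

Q_E = 1.602e-19  # elementary charge, C

def implant_dose(beam_current_a: float, time_s: float, area_cm2: float) -> float:
    """Implanted dose in ions/cm^2 for singly charged ions."""
    return (beam_current_a * time_s) / (Q_E * area_cm2)

# Example: 1 mA beam for 10 s over a 300 mm wafer (~707 cm^2)
print(f"{implant_dose(1e-3, 10.0, 707.0):.2e} ions/cm^2")
```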
Nov 14, 2025
The economics of AI has been a big question mark in many investors’ minds: What does the value chain look like? How do you model out the ROIC of AI? What would the ROIC look like?
We built up an end-to-end economics stack to answer this question: how we go from a chip’s silicon cost, through full system integration, all the way down to the dollar cost per million inference tokens. (1/4)🧵
At the top of the stack, our accelerator analysis starts with the semiconductor bill of materials (transistors, packaging, HBM, and yield assumptions) to determine GPU provider content. From there, our BoM and ODM modeling breaks down every component inside the server. The network topology model then maps how these servers interconnect.(2/4)
When you roll this all up, illustratively for H200s, that gives us a capital cost of roughly $1.06 per GPU-hour, to which we add electricity and colocation costs for a complete TCO of $1.41 per GPU-hour. That’s the economic foundation: the cost to own and operate the hardware. A neocloud might rent that same GPU for roughly $2 per hour, leaving a modest gross margin. But that’s where most analysis has stopped until now: at TCO per hour. (3/4)
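The last step of the stack, going from $/GPU-hour to $/million tokens, is simple arithmetic once you fix a throughput. A minimal sketch using the $1.41/hr TCO from the thread; the tokens-per-second figure is an assumption that varies heavily with model, batch size, and context length:

```python
# Convert a per-GPU-hour cost into a cost per million inference tokens.
# $1.41/hr TCO is from the thread; 1000 tok/s per GPU is an illustrative assumption.

def cost_per_million_tokens(cost_per_gpu_hour: float, tokens_per_second: float) -> float:
    tokens_per_hour = tokens_per_second * 3600
    return cost_per_gpu_hour / tokens_per_hour * 1e6

tco_per_hour = 1.41   # H200 TCO, $/GPU-hour (from the thread)
assumed_tps = 1000.0  # hypothetical aggregate throughput per GPU
print(f"${cost_per_million_tokens(tco_per_hour, assumed_tps):.2f} per million tokens")
```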
Nov 4, 2025
Qualcomm and MediaTek are in a race to reduce their dependency on the mature smartphone market. Both are still managing to outgrow smartphone unit growth, but that won’t last long. Investors are looking for progress in their non-smartphone businesses. Qualcomm’s non-smartphone chip business hit a $10B+ annual run-rate, versus MediaTek’s $8B+. (1/7) 🧵
Both have increased their investments to capture more revenue in consumer, networking, industrial and computing markets. Non-smartphones account for 30% of Qualcomm’s semiconductor revenue and 48% of MediaTek’s. Qualcomm has a target of $22B non-smartphone chip revenue by FY29 at a 5-year CAGR of 21%. Qualcomm built a strong moat in autos but made mixed progress in IoT (a collection of end markets including PC, consumer, networking and infrastructure). (2/7)
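A quick arithmetic sketch of the growth math above: a 21% CAGR sustained for five years multiplies revenue by roughly 2.6x, and the base implied by a $22B target follows directly. The fiscal-year alignment is not specified here and is left as an assumption.

```python
# Sanity-check the 21% CAGR / $22B FY29 target arithmetic from the tweet above.
# Which fiscal year serves as the base is an assumption, not stated here.

cagr, years, target_b = 0.21, 5, 22.0
multiplier = (1 + cagr) ** years
print(f"5-year multiplier at 21% CAGR: {multiplier:.2f}x")
print(f"Implied starting revenue for a ${target_b:.0f}B target: ${target_b / multiplier:.1f}B")
```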
After sitting on the sidelines for a long time, both Qualcomm and MediaTek have now firmed up their AI datacenter chip plans. MediaTek appears to have struck gold with its AI ASIC business, claiming $1B of revenue in ’26, multiple billions in ’27, and up to $5B-$7.5B in ’28 and beyond (10-15% share of a $50B TAM), growing faster than its flagship smartphone chip business, which will slow down from CY26. (3/7)
Oct 30, 2025
AI workloads are characterized by elephant flows that arise when all of the GPUs in a cluster exchange data through collective communication operations to synchronize distributed workloads. These flows can often lead to congestion and load balancing issues. (1/6)🧵
To solve this problem, Meta turned to the use of Disaggregated Scheduled Fabrics (DSFs). Being “Scheduled” means that a credit-based system is used to control flows and prevent congestion – before a node can send packets across the network, it must first send a credit request towards the receiving node to make sure that the receiving end has enough buffer to receive the packet. These packets also travel over a fabric that cellifies them, breaking each packet into smaller cells and spreading them across multiple routes in the fabric. (2/6)
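A toy sketch of the two mechanisms just described, credit-based flow control and cellification; this is not Meta's or Arista's implementation, and the cell size, buffer size, and path names are arbitrary illustrative values.

```python
# Toy model of a scheduled fabric: request credit before sending, then split
# the packet into cells sprayed across multiple fabric paths. All sizes and
# path names are illustrative assumptions.
from itertools import cycle

CELL_BYTES = 256  # illustrative cell size

class Receiver:
    def __init__(self, buffer_bytes: int = 4096):
        self.free_buffer = buffer_bytes

    def request_credit(self, packet_bytes: int) -> bool:
        """Grant the send only if there is buffer to absorb the whole packet."""
        if packet_bytes <= self.free_buffer:
            self.free_buffer -= packet_bytes
            return True
        return False

def cellify(packet: bytes) -> list[bytes]:
    """Split a packet into fixed-size cells."""
    return [packet[i:i + CELL_BYTES] for i in range(0, len(packet), CELL_BYTES)]

def send(packet: bytes, receiver: Receiver, fabric_paths: list[str]) -> bool:
    # 1) Ask the receiver for credit before putting anything on the wire.
    if not receiver.request_credit(len(packet)):
        return False  # back off instead of pushing congestion into the fabric
    # 2) Break the packet into cells and spray them across the fabric paths.
    for cell, path in zip(cellify(packet), cycle(fabric_paths)):
        print(f"{len(cell)}-byte cell -> {path}")
    return True

send(b"x" * 1000, Receiver(), ["plane-0", "plane-1", "plane-2", "plane-3"])
```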
Arista’s 7800R series of “Big Boy” Chassis switches provide such a scheduled fabric, as well as what is effectively a very high-radix switch, but the downside is that all the ports are in one physical location. (3/6)