The last few weeks have been about building the network model properly. We’ve talked about it for months, but there’s a difference between sketching architecture in Miro and actually loading real network data, connecting it together, and trying to represent how power really flows.
We began, as we always have, with the LTDS tables. They’re imperfect, but they’re what is available today. And every time we asked ourselves whether we should just wait for CIM files, the same answer emerged: CIM will be useful eventually, but waiting for it means not learning anything now. So we decided to build with what we have, but build in a way that lets us swap in CIM later.
We’re almost certainly going to end up using the CIM files at some point. But this initial release, due at the end of this month (November 2025), isn’t enough of an improvement on the LTDS tables to justify wrangling 100x more data. It’s what they call the EQ profile: essentially a list of all network assets and how they relate to each other. We’re also really familiar with the LTDS tables at this point. Better the devil you know.
The improvements in the CIM format we’re actually excited about come in May and November of next year. In May we get the capacity information within the CIM files, as well as planned upgrades, reinforcements and reconfigurations for the next five years. In November, we get geospatial information on top of that.
So we started writing code to load and link everything: substations, circuits, and transformers at each level of the network. It’s not complicated in principle, but the data has a way of revealing gaps. Some parts of the network are clearly documented and rated. Others are implied rather than stated: they show up in single-line diagrams but disappear from the tabular structure.
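Those gaps tend to surface the moment you try to join the tables. A minimal sketch of the kind of linking we do, using pandas — the table and column names here are illustrative placeholders, not the actual LTDS schema:

```python
import pandas as pd

# Toy stand-ins for two LTDS-style tables (hypothetical columns).
substations = pd.DataFrame({
    "substation_id": ["S1", "S2", "S3"],
    "name": ["Alpha BSP", "Beta Primary", "Gamma Primary"],
})
circuits = pd.DataFrame({
    "circuit_id": ["C1", "C2", "C3"],
    "from_substation": ["S1", "S2", "S4"],  # S4 has no matching record
    "to_substation": ["S2", "S3", "S1"],
})

# Left-join circuits onto substations; the merge indicator flags
# circuit ends that reference a substation we never loaded.
linked = circuits.merge(
    substations,
    left_on="from_substation",
    right_on="substation_id",
    how="left",
    indicator=True,
)
gaps = linked[linked["_merge"] == "left_only"]
print(gaps[["circuit_id", "from_substation"]])
```

Rows that come back `left_only` are exactly the “implied rather than stated” assets: referenced by a circuit but absent from the substation table.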
The super grid transformers at GSPs are notoriously difficult to identify, as they fall into the gap between the distribution and transmission networks. The task became hunting them down, confirming that they physically exist, and adding logic to catch and place them correctly in the network graph. This is made harder by the transmission and distribution networks identifying substations with different sets of names and codes.
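One way to bridge those naming schemes is fuzzy matching after stripping the tokens each side adds. A sketch using only the standard library — the normalisation rules and name pairs below are our assumptions for illustration, not a production matcher:

```python
from difflib import SequenceMatcher

# Hypothetical example: the same substation named differently by the
# transmission operator and the DNO.
transmission_name = "CREYKE BECK 400KV"
distribution_names = ["Creyke Beck GSP", "Norton Grid Supply Point"]

def normalise(name: str) -> str:
    """Lower-case and drop voltage/suffix tokens before comparing."""
    drop = {"400kv", "275kv", "132kv", "gsp", "grid", "supply", "point"}
    tokens = [t for t in name.lower().split() if t not in drop]
    return " ".join(tokens)

def best_match(target: str, candidates: list[str]) -> tuple[str, float]:
    """Return the candidate with the highest similarity ratio."""
    scored = [
        (c, SequenceMatcher(None, normalise(target), normalise(c)).ratio())
        for c in candidates
    ]
    return max(scored, key=lambda pair: pair[1])

match, score = best_match(transmission_name, distribution_names)
print(match, round(score, 2))
```

In practice the score threshold matters a lot: accept too low and you link the wrong GSP, too high and the genuinely-renamed sites slip through, so borderline matches still get a human check.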
Comparing datasets across DNOs was interesting. For example, NGED provides a more detailed set of circuit elements, including all their switches and circuit breakers, but they don’t show switch states. Without knowing which breakers are normally open or closed, it’s hard to determine the “as-operated” network configuration.
In contrast, UKPN’s dataset contains less electrical detail, but does show the normal configuration, which means we can model how their network is actually run day-to-day. It highlighted something important: completeness is not the same as usability. It’s better to know which parts of the network are energised than to know every parameter but lack context.
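Deriving the as-operated picture from switch states is essentially a graph traversal over closed switches only. A minimal sketch with made-up node names and states:

```python
from collections import defaultdict, deque

# Tiny switch-state model (all names and states hypothetical):
# (node_a, node_b, closed?) — False marks a normally-open point.
switches = [
    ("GSP", "Primary-1", True),
    ("Primary-1", "Primary-2", True),
    ("Primary-2", "Primary-3", False),  # normally-open point
    ("Primary-3", "Primary-4", True),   # fed from elsewhere in reality
]

def energised_from(source: str) -> set[str]:
    """Nodes reachable from the source through *closed* switches only."""
    adj = defaultdict(list)
    for a, b, closed in switches:
        if closed:
            adj[a].append(b)
            adj[b].append(a)
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(energised_from("GSP")))
```

With NGED-style data the `closed?` flag is exactly what’s missing, so every normally-open point has to be assumed or inferred before a traversal like this says anything about real operation.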
After getting a basic section of the network stitched together, we moved to power flow analysis. We chose part of NPG’s network where the topology is clean and the loads are modest. The goal wasn’t to get perfect numbers; it was to learn how to set up models in Pandapower, run flows, and interpret results.
Once we had flows running, the real learning was in the sensitivity behaviour. Rather than modelling the entire network all at once, we experimented with injecting power at different nodes and observing how line loadings changed. This matched what we’ve heard from both UKPN and NGED engineers: most operational decision-making is done by testing the response to changes, not by reading static capacity tables.
This led naturally to a question we weren’t expecting to ask so soon: what does “headroom” actually mean once you start working at node level on a sensitivity basis? We’ve talked about headroom for a long time as if it’s a property of a substation, or a circuit, or even an entire grid supply point.
But what we saw this week is that the available capacity depends heavily on where the power enters and how it flows. The same network can handle 5 MW injected easily at one place and struggle with 1 MW from somewhere else. And what really matters is the reinforcement that the new load triggers: upgrading a short stretch of 33 kV overhead circuit is very different from replacing two transformers at the BSP.
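For a purely radial network the location-dependence has a simple form: new load at a node flows through every line between that node and the supply point, so the node’s headroom is the smallest remaining margin along that path. A worked toy (made-up ratings and flows, ignoring voltage constraints and meshed operation):

```python
# line feeding each node: node -> (upstream_node, rating_mw, present_flow_mw)
lines = {
    "BSP":       ("GSP", 60.0, 52.0),   # nearly full
    "Primary-1": ("BSP", 20.0, 9.0),
    "Primary-2": ("BSP", 20.0, 18.5),   # nearly full
}

def headroom(node: str) -> float:
    """Smallest (rating - flow) margin on the path from node to the GSP."""
    margin = float("inf")
    while node != "GSP":
        upstream, rating, flow = lines[node]
        margin = min(margin, rating - flow)
        node = upstream
    return margin

for node in ["Primary-1", "Primary-2"]:
    print(node, headroom(node), "MW")
```

Two nodes on the same network, two very different answers — and on a meshed network the path isn’t even unique, which is why the sensitivity runs matter.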
So the week ended with both progress and reframing. We can now represent a network and run flows through it. We can calculate constraint changes in response to new load. But we also now understand that a single headroom figure may not describe the system in a meaningful way anymore.
Next we need to try this on a larger, more complex section of network. We’ve already started modelling an entire UKPN GSP, and we’re hoping to start running the sensitivity analysis at the start of next week. The challenge is deciding how to communicate this in the product without overwhelming people, and without pretending the network is simpler than it is.