Many years ago, in another lifetime, I was presenting our team’s work to a rather senior politician. Here’s how I remember it:
“We want to provide value for money,” I said, “so we propose running five small pilots of [thing I still can’t talk about]. We know there are multiple technologies which could work. But we don’t know which one will work best.”
“How will running something five times save the taxpayer money?” They asked, quite reasonably.
I replied, somewhat smugly, “Big technology projects often fail because they get very far along before a critical flaw is discovered. If we run some pilot programmes, we hope to discover those problems before we go too far down the wrong path.”
“But running five pilots will cost more money?” They replied, with a smugness born of a thousand encounters like this.
I had the uneasy feeling I knew where this was going. “Yes, in the short term, it will cost more.”
“Why don’t we just run the pilot with the technology which will work best?” They asked earnestly.
I had one of those “Pray Mr Babbage” moments and took a moment to compose myself.
I gently explained that we wouldn’t know in advance the results of the experiment and, without going too far into The Structure of Scientific Revolutions, falsifiable hypotheses were probably the best way to discover the truth.
Apparently their PPE degree was worthwhile because they accepted my arguments, albeit only with funding for three pilots.
From their point of view, it was perfectly rational to reject experimentation. Each failed experiment is a waste of taxpayers’ hard-earned money. How do you look your constituents in the eye and say “80% of our budget was spent on failure”? It is political suicide.
Which leads me on to this brilliant blog post by Mark Sewards MP. In it, the MP describes the process of setting up an “AI” counterpart to answer his constituents’ questions.
So far, so zeitgeisty. But rather than just slap a label on an LLM and call it a day, the MP for Leeds South West and Morley actually spent time thinking about what he and his team wanted out of this experiment. They didn’t just launch and bugger off; they tested and refined.
The experiment was a success. Not because it reduced his case-load and allowed a tech company to profit from misery, but because it taught him (and others) the limitations of the technology. It showed exactly what doesn’t work. If a person can’t understand where the boundaries are, they’ll never master anything.
As Mark said:
What didn’t it do? It didn’t save any time. I read every single transcript to ensure we didn’t miss any questions from constituents. I can see this technology working alongside a casework team, but it needs a lot of refinement. I took this leap to understand what AI might be capable of and what it isn’t yet. I understand why some dismissed the model out of hand, but I think the potential is real, even if that’s all it is for now – potential.
Experimentation is hard because it leaves us vulnerable. It shows that we don’t know everything and that humbles us. We need to loudly celebrate politicians who try something new and are honest about where it goes wrong.
There is so much more to be learned from failure than success.