If you’ve been working on software for any reasonable length of time, you might have had an experience like this: You’re looking at a feature, and you have no idea how to build it. You have the context and understanding you need. But it’s all pointing in one direction — the system was never designed with this feature in mind.
You start thinking of how you’re going to build it, but you throw out the ideas as fast as you come up with them. This one scales awfully, you don’t have the data you need for that one, the API is missing what you’d need for another one. Finally, you come to the terrible realization that whatever you’re trying to do might not even be possible. Unfortunately, being impossible sometimes isn’t a good enough reason not to build the feature. And besides that, I don’t like to let the computer win.
The last time I found myself in this situation was near the start of this year, as I started building the (now released) Aha! Develop TestRail extension. It was my first time working with extensions, and I quickly ran into problems. Our system didn’t support the security controls we’d need to safely communicate with the TestRail API. However, the extension was a big priority for my team due to all the customers who’d been asking for test management support. No matter the difficulty, I had to build it.
These are the steps I took to push through the challenges I faced — but nothing about them is unique to me. Keep them in mind the next time you find yourself staring down your own impossible problem.
Step 1: Change the constraints
A challenge I faced early on was that the extension needed to make dozens or hundreds of API calls to fetch all the data it needed, all while running on a backend lambda with a fixed lifetime of 10 seconds. The volume could easily exceed TestRail’s rate limits, and a single API call could take up to the 60-second timeout to complete. Figuring out how to fit 60 seconds into 10 seemed impossible. Because it was.
But problems like that are very rarely a single problem — there are many constraints, and it’s the intersection of those that makes the greater problem impossible. The first step is figuring out which constraints you can control and which you can’t. I can’t change someone else’s API, and the need to run on the backend with that timeout was a security requirement. Removing those from consideration helps you home in on the constraints that are in your power to change, and changing them can often turn an impossible problem into an easy one. In this case, there was no need to use a single lambda. By spreading the API calls across invocations so that each instance handled only one, I not only reduced the risk of hitting rate limits, but also made handling that case much easier for myself.
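As a rough sketch of that fan-out, here’s what the shape of the code might look like in TypeScript. Everything here is hypothetical: `callBackend`, the endpoint names, and the types are stand-ins for the real extension plumbing. The point is only that each backend invocation handles a single TestRail request, so no one invocation comes anywhere near the 10-second limit.

```typescript
// Hypothetical sketch: fan the work out so each backend invocation makes
// exactly one TestRail request instead of all of them.
type RunSummary = { id: number; name: string };

async function callBackend<T>(endpoint: string, params: Record<string, unknown>): Promise<T> {
  // Stand-in for however the extension frontend reaches its backend lambda.
  const response = await fetch(`/backend/${endpoint}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
  return (await response.json()) as T;
}

async function loadRuns(suiteId: number): Promise<RunSummary[]> {
  // One short-lived invocation returns just the list of runs.
  const runs = await callBackend<RunSummary[]>("listRuns", { suiteId });

  // Each detail fetch is its own invocation, issued in small batches so we
  // also stay comfortably under the rate limit.
  const batchSize = 5;
  for (let i = 0; i < runs.length; i += batchSize) {
    const batch = runs.slice(i, i + batchSize);
    await Promise.all(batch.map((run) => callBackend("loadRunDetails", { runId: run.id })));
  }
  return runs;
}
```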
Step 2: Don’t do the hard part
Sometimes, there’s a particular thorn making a problem harder than it has to be — some edge case or constraint that you can’t just change. The next step up from changing a constraint is to drop it entirely if it’s possible to do so. Look for cases that are far from standard use or scaling issues that won’t apply until you are much larger. Failing that, figure out what the bare minimum is, then add more until you get to the part that’s making everything hard. (We have an entire blog post about how shipping decent fast is better than perfect slow.)
In almost every instance in TestRail, a test run for a given suite will contain test results for all the test cases in that suite. However, it is possible (though rare) to deliberately start a run on a subset of tests. This became relevant when I needed to show the user every run that contained a test for a particular test case. To exclude runs that didn’t match, I loaded all the tests for every run, then checked each one for a match against the test case. That unfortunately meant one network request per run, which slowed everything down. It seemed like an unavoidable issue. But the only thing making it hard was having to support that one edge case. The “fix” was to stop loading tests until a user selected a run. Then, we loaded just the tests for that run and looked for a match. If we didn’t find one, we showed an error message letting the user know to pick a different run. A mild inconvenience for the edge case that completely removed the scaling issue.
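Here’s a minimal sketch of that load-on-selection approach, again with hypothetical names: `fetchTestsForRun`, `showError`, and the endpoint are illustrative, not the extension’s actual code.

```typescript
// Hypothetical sketch: fetch tests only for the run the user picked, rather
// than fetching tests for every run up front.
type Test = { id: number; caseId: number; title: string };

async function fetchTestsForRun(runId: number): Promise<Test[]> {
  // Illustrative endpoint; the real TestRail API path and authentication differ.
  const response = await fetch(`/api/v2/get_tests/${runId}`);
  return (await response.json()) as Test[];
}

async function onRunSelected(runId: number, caseId: number): Promise<Test | null> {
  // One request for the selected run only.
  const tests = await fetchTestsForRun(runId);
  const match = tests.find((test) => test.caseId === caseId);

  if (!match) {
    // The rare partial-run edge case: this run doesn't include the selected
    // test case, so ask the user to pick a different run.
    showError("This run doesn't include the selected test case. Please pick another run.");
    return null;
  }
  return match;
}

function showError(message: string): void {
  // Stand-in for however the extension surfaces errors in its UI.
  console.error(message);
}
```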
Your product manager is your best friend here. You should be talking to them often regardless, but I guarantee they’ll never be upset to see you coming to them with a way to ship functionality faster. Nine times out of 10, if you ask whether they’d prefer a quick fix now or a perfect one later, they’ll pick the quick fix. That last one time out of 10 is important too — checking in often is how you make sure the thing you’re binning is truly a nice-to-have and not a must-have for your customers.
As an example of what not to do, early on I built a UI that required users to input IDs to select records. That made everything much easier for me to hook up in the backend. Had I checked with my product manager first, I would have found out that being able to search records by name was a key requirement and saved myself a lot of time.
Step 3: Change your solution
I like to describe my problem-solving style as “depth-first.” I look at the problem, spend a token amount of time planning it out, come up with a solution, and start building. Much like real depth-first search, this is great at heading toward a solution quickly, but it’s prone to running into local maxima.
In other words, I sometimes find the thing I’ve built isn’t quite good enough to solve the problem, but any tinkering I do around the edges just makes it worse. It’s easy to think that I might have an impossible problem. But really, the problem is that I’ve built myself into a corner. That’s when you should take a step back, look at your requirements, and start thinking of completely different ways to solve the problem.
This is a great time to use your lifelines. Hash out the problem with a colleague who might have some ideas of their own. Ask (but don’t trust) the AI. Talk to someone who has no context and won’t have any ideas, because talking about it is often all you need to jog your brain. Take a snack break, go for a walk — think about the problem while not staring at the code you’ve already written. Inspiration might strike quickly, but other times it’s a slog.
Step 4: Stop and regroup
So far, we’ve been working under the assumption that the problem isn’t impossible. It’s just hard enough to seem that way. But if you’ve tried the other three steps — changing and pruning and redoing — and still have not made headway, the problem might actually be impossible. Or at least impossible right now. This is the time to stop, let the rest of the team know you are stuck, and decide together what to do next.
Maybe the problem is just much bigger than anyone expected, and it’s important to reprioritize based on that. Maybe the next version of a library you’re using will get the functionality you want, and the smart move is to wait. Maybe the problem requires expertise you don’t have and can’t easily get. Maybe you could get it done eventually, but there are a dozen other things you could build in the same amount of time for more value.
If the team does decide to stop, it’s crucial to write up everything you know at this step and record it in an accessible place. This will either help the next person pick up the work or remind everyone why it’s still not worth picking up. Deprioritized work doesn’t always stay that way. And you don’t want to find yourself a year later staring at the same ticket having forgotten everything you learned the hard way.
In 2022, before I was in my current role, Aha! Develop teammates started looking into building an extension for TestRail. After a spike to investigate the feasibility of various approaches, they determined that the work required was too great to continue. They wrote up their conclusions, including features for some of the required changes, and moved on to other work. The situation had changed when I picked up the feature in 2025. Not only was there growing demand, but we had the benefit of several years of improvements to our extensions. The notes they left were a vital resource, letting me solve some of the trickiest challenges in a fraction of the time.
Step 5: Keep going
So you talked to the team about stopping, but decided to continue. No matter how long it takes, this problem is the best thing for you to be working on. All that you have left in front of you is actually doing it — no tricks, just turning requirements into reality the hard way. The challenge at this point is continuing on while keeping the problem-solving parts of your brain firing on all cylinders.
As always, communication is important. Keep teammates up to date with your progress, and let them know your estimates for how much is left (no matter how wildly inaccurate they might be). Share the technical problems you’re having with the wider engineering team, because someone there might have the insight you need. Celebrate your wins where you can. And if you’re not progressing on your current problem, find an easier one for a quick break. Knocking out a quick win in a half-hour can be the boost your brain needs to keep churning for another week.
At every opportunity, see if you can apply any of the first three steps to break the problem down. Always look for ways to break the big problem up into smaller problems — even if they’re all equally hard, at least they’re smaller. Most of all, keep believing the problem will be solved eventually. Take a break and revisit the previous section if you ever stop believing that.
My path to building the TestRail extension was not a straight line. My first iteration was woefully unfit for purpose. My second build took three minutes to load 30 records. My third approach crashed the browser when I tried to test it. My fourth and fifth designs worked fine locally, but folded once we let users test them with real data. Even after the initial successful release, I came back a few months later to fix more performance issues once I’d figured out how to solve them. All in all, I spent the better part of half a year working on the extension.
But I did finish it, and I get to enjoy the satisfaction that comes with a problem well solved — right up until I find my next impossible problem.
Our team is happy — and hiring engineers who enjoy digging into seemingly impossible problems. Join us.