DRY Dilemma — Part 1
TLDR: When faced with a tight deadline, I had to implement a data model workaround to ensure the project was delivered on time.
Image credit — Unsplash
All software becomes legacy as soon as it’s written.
― Andrew Hunt, *The Pragmatic Programmer*
Finish the project in 2 months
In the summer of 2025, I was leading a Java backend project with an extremely tight two-month timeline. The project was complex enough that four months would have been reasonable.
I proposed extending the timeline, but leadership insisted on the original two-month deadline. Adding more engineers would not help either, as the onboarding overhead would only slow us down.
My team and I were working on the project day and night. We were making solid progress. Then, three weeks in, I encountered a dependency issue that kept me up for several nights.
Our backend service makes API calls to a downstream service for core operations. To build the API request payload, we used a data model JAR published by the contract owner, as illustrated below.
<dependencies>
    <!-- Data model classes from downstream service contract - used for building API payloads -->
    <dependency>
        <groupId>com.company.services</groupId>
        <artifactId>downstream-contract</artifactId>
        <version>2.15.2</version>
    </dependency>
</dependencies>
The data model JAR provided straightforward data model classes that we used throughout our service:
package com.company.services.downstream.contract;

public class ApiContract {
    private Item item;

    public Item getItem() {
        return item;
    }

    public void setItem(Item item) {
        this.item = item;
    }
}
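The Item class referenced above came from the same JAR. As a simplified sketch for illustration (the field shown here is hypothetical; the real contract class had many more):

package com.company.services.downstream.contract;

public class Item {
    // Hypothetical field for illustration; the real contract class had many more.
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}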
For my project, I needed to update the dependency to the latest version so that the API payload could contain the new required fields.
A major challenge was that this dependency had not been updated for three years. Unsurprisingly, the latest version had evolved dramatically, and it was completely incompatible with our current codebase.
Updating the dependency wasn’t just a version bump. It required extensive refactoring across multiple modules in *a sea of unknowns* and re-testing everything that touched these models. Even if I dedicated the entire two months to this effort, it still would not have been enough.
Doing it the right way would consume the entire two-month timeline and still leave the project unfinished.
I could already picture the cascade of escalation emails asking why the project wasn’t delivered on time.
I needed to find another solution.
The solution is not clean, but it works
I carefully analyzed how the new fields would be used. It turned out that only my specific flow required these fields in the downstream API calls. All other flows remained unchanged since they didn’t use these fields.
Damn! This gave me an idea. All I needed was the API request payload to contain these new fields in my flow.
Here’s how I implemented it. I could subclass the outdated models and add the new fields there.
Since I could not modify the classes from the data model JAR, I created a subclass ItemV2 that extends Item and added the new fields:
import lombok.Data;

// Lombok @Data generates the getter and setter for newField
@Data
public class ItemV2 extends Item {
    private String newField;
}
Next, I created ApiContractV2, which extends ApiContract and adds an overloaded setItem that takes ItemV2 as a parameter:
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonInclude;

@JsonIgnoreProperties(ignoreUnknown = true)
@JsonInclude(JsonInclude.Include.NON_NULL)
public class ApiContractV2 extends ApiContract {
    @JsonIgnore
    public void setItem(ItemV2 itemV2) {
        // item is private in ApiContract, so delegate to the parent setter
        super.setItem(itemV2);
    }
}
The @JsonIgnore annotation is crucial. Without it, Jackson would see two one-argument setItem methods for the item property, the parent’s and the subclass’s overload, and fail with a conflicting-setter error during deserialization.
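As a minimal sketch of the difference the annotation makes (the class name and JSON string are illustrative, assuming Jackson’s default bean introspection):

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonIgnoreDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        String json = "{\"item\":{}}"; // illustrative payload

        // Without @JsonIgnore on setItem(ItemV2), Jackson would find two
        // one-argument setItem candidates for "item" and reject the class
        // as ambiguous. With the annotation, only the inherited
        // setItem(Item) is considered, so this deserializes cleanly:
        ApiContractV2 contract = mapper.readValue(json, ApiContractV2.class);
    }
}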
This solution works because of Java’s runtime polymorphism: the item field is declared as Item, but Jackson serializes the actual runtime object, an ItemV2, so the new field ends up in the payload. Many online resources explain this concept in detail.
Finally, I used ApiContractV2 to build the API payload containing the new fields for my flow, while all other flows continued using ApiContract as before.
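A minimal sketch of what that looks like in practice (the class name, field value, and ObjectMapper setup are illustrative, not the production code):

import com.fasterxml.jackson.databind.ObjectMapper;

public class PayloadExample {
    public static void main(String[] args) throws Exception {
        ItemV2 item = new ItemV2();
        item.setNewField("value-required-downstream"); // hypothetical value

        ApiContractV2 contract = new ApiContractV2();
        contract.setItem(item); // stores an ItemV2 in the inherited Item field

        // Jackson serializes the runtime type (ItemV2), so newField is
        // included even though the declared field type is Item.
        String payload = new ObjectMapper().writeValueAsString(contract);
        System.out.println(payload); // e.g. {"item":{"newField":"value-required-downstream"}}
    }
}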
Problem solved! It is not a clean solution, but it works.
Now I needed buy-in from my tech lead and manager. This was going to be the hard part.
The next day
I decided to discuss the problem and my workaround solution with my manager first.
My manager had joined the team about a year earlier. He understood the system at a high level, but he certainly did not know the nitty-gritty details.
I regularly updated him on the project’s progress, and he recognized we were making strong headway under pressure.
I found him at his desk after I got to the office.
“Do you have a moment? I wanted to get your thoughts on something quickly,” I asked politely.
“Sure,” he replied. I grabbed a chair and sat next to him.
“We are making good progress,” I started, “but I’ve run into a problem. We need to add a new field to the downstream API request, but the dependency we are using is outdated.”
“Updating the dependency isn’t simple. The latest version is not compatible with our code base. I do have a workaround, though,” I continued.
My manager responded the moment he heard the word “workaround”. “Let’s do it the right way. You should upgrade the dependency. I don’t want to accumulate technical debt.”
I did not want to argue with him because his tone was assertive.
“Okay, got it. I will see what I can do,” I responded begrudgingly. I didn’t even get a chance to explain my workaround.
I wondered whether I should at least explain how complicated and risky the upgrade would be.
Never mind. My past experience had taught me that the only thing he cared about was results.
The day wasn’t going as I’d hoped.
Later that day, I talked with my tech lead. I gave him the context and walked him through my workaround solution.
He had been with the team for more than five years and knew the system inside and out. He immediately recognized the risks of refactoring a significant portion of the legacy codebase.
“I see. I don’t think we can easily upgrade the dependency. Rushing to update it is too risky,” he said.
“Our service is tier-1. Breaking it in production would be catastrophic for the entire organization,” he continued.
“Exactly!” I nodded, agreeing with him completely.
“We only have four weeks left to deliver the project. You can add comments explaining why we did it this way for now. Let’s plan the dependency upgrade for the next quarter,” he suggested.
The fate of data model dependencies
Was my solution hacky? Absolutely. I won’t gloss over that, but I had made peace with the decision because it was the best option given the time and resources available.
It reminded me of what President Theodore Roosevelt once said.
Do what you can, with what you have, where you are.
— Theodore Roosevelt
Over the years, I’ve watched teams struggle with data model dependencies. In my experience, these dependencies usually lead to one of three outcomes:
- The ideal: Teams keep the dependency up to date as the contract evolves. However, this rarely happens in practice.
- The fork: Teams fork the contract repo and maintain their own version.
- The workaround: Teams either duplicate the data model classes in their repo or extend them, as I did. Both approaches result in multiple versions of the same data model coexisting within a single codebase.
The original intention behind data model JARs was to ensure consistency and reduce maintenance overhead. In practice, teams defer dependency updates whenever possible. The upgrade eventually becomes so expensive that most teams either fork the repository or fall back on workarounds instead.
Of the three outcomes, the third is the worst. It significantly increases complexity, making the codebase more hazardous to maintain. Future developers discover different implementations of the same data model across flows, causing confusion and increasing the risk of bugs.
When I first started my career, I strongly advocated for shared data models because I believed duplication violated the DRY principle. After years of experience, I’ve realized the true meaning of DRY.
Part 2 — DRY Is Not About Code Duplication.