Five years ago, I started working on json-tool out of necessity. I needed a JSON formatting tool I could trust with sensitive data: one that wouldn’t send my information to third-party servers filled with ads and tracking scripts. What began as a personal weekend project has quietly grown into something used by 3,000 active users. This milestone made me pause and reflect on what this journey has taught me about building and maintaining open source software.
3k users?: The number of weekly active users comes from the Snapcraft channel distribution. I suspect the real figure is higher once the web version is taken into account, but I have no way to know: there are no trackers.
Privacy First
In 2021, my workflow involved frequent JSON manipulation: debugging API responses, integrating with third-party services, and inspecting data structures. Like many developers, I defaulted to googling "JSON prettier" and using whatever website appeared first. Tools like JSON Formatter and JSON Pretty Print worked fine, but they came with a cost: ads everywhere, no transparency about data handling, and zero guarantee that my JSON strings weren’t being logged somewhere.
For personal projects, this was mildly annoying. For work involving production data or sensitive information, it felt irresponsible. ThoughtWorks Tech Radar volume 27 later highlighted this exact concern, warning developers about formatting tools that don’t comply with data jurisdiction requirements.
I wanted something different: a tool that collected no data, showed no ads, and remained transparent through open source code. No complex feature set, just reliable JSON formatting that respected user privacy. That’s how json-tool was born.
The initial implementation was straightforward: an Electron app built with React, published to snapcraft.io. I applied outside-in TDD using Cypress and React Testing Library (a practice I’ve written about) to ensure the core functionality worked reliably from the start.
When Users Broke Things
The first version worked well for typical use cases. Developers could paste JSON from logs, validate it, format it with custom spacing, and copy the result back to their clipboard. The privacy-first approach meant enterprise developers working with sensitive data could use it without corporate firewall concerns; most importantly, once installed via Snapcraft, it runs entirely offline.
Then reality hit: users started pasting JSON strings larger than 1MB.
The application froze. Completely. The user interface became unresponsive, buttons stopped working, and the whole experience fell apart. Through benchmarking with console.time() and console.timeEnd(), I traced the bottleneck to the JSON parsing and formatting happening on the main thread. JavaScript’s single-threaded nature, combined with the event loop execution model, meant any blocking operation would freeze the entire UI.
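As a rough illustration of that benchmarking approach (a sketch, not json-tool’s actual code; the payload generator is made up for the example), wrapping the blocking parse-and-format step in console.time makes the cost visible:

```typescript
// Hypothetical benchmark: build a large JSON string, then time the
// blocking parse + format step. On a browser's main thread, the UI is
// frozen for the entire duration printed by console.timeEnd.
function buildLargePayload(entries: number): string {
  const obj: Record<string, string> = {};
  for (let i = 0; i < entries; i++) {
    obj[`key_${i}`] = `value_${i}`;
  }
  return JSON.stringify(obj);
}

function formatWithTiming(json: string, spacing: number): string {
  console.time('format');
  const parsed = JSON.parse(json);                      // blocking
  const result = JSON.stringify(parsed, null, spacing); // blocking
  console.timeEnd('format');                            // prints elapsed time
  return result;
}

const payload = buildLargePayload(100_000);
const formatted = formatWithTiming(payload, 2);
```

Anything consistently over a frame budget (about 16ms) drops frames; well past 100ms it becomes the visible freeze described above.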
This is where web workers entered the picture.
Moving to Web Workers
According to Ido Green’s book "Web Workers: Multithreaded Programs in JavaScript", if your web application needs to complete a task that takes more than 150 milliseconds, you should consider using web workers. Green specifically lists "encoding/decoding a large string" as a primary use case, which is exactly what json-tool was doing.
The architecture shift was significant. What was previously synchronous code:
const onJsonChange = useCallback(async (value: string) => {
  setError('');
  if (!spacing) return;

  try {
    if (value) {
      JSON.parse(value);
    }
  } catch (e: any) {
    setError('invalid json');
  }

  let format = new Formatter(value, 2);
  const parseSpacing = parseInt(spacing);
  if (!isNaN(parseSpacing)) {
    format = new Formatter(value, parseSpacing);
  }

  const result = await format.format();
  setOriginalResult(value);
  setResult(result);
}, [spacing]);
Became an event-driven system where the main thread delegates heavy computation to a dedicated worker:
const onChange = (eventValue: string, eventSpacing: string) => {
  if (worker.current) {
    worker.current.postMessage({
      jsonAsString: eventValue,
      spacing: eventSpacing
    });
  }
  setOriginalResult(eventValue);
  setInProgress(true);
};
The worker itself handles the actual parsing and formatting:
if ('function' === typeof importScripts) {
  importScripts('https://unpkg.com/format-to-json@2.1.2/fmt2json.min.js');

  addEventListener('message', async (event) => {
    const value = event.data.jsonAsString;
    const spacing = event.data.spacing;

    if (value) {
      const format = await fmt2json(value, {
        expand: true,
        escape: false,
        indent: parseInt(spacing)
      });

      try {
        JSON.parse(value);
      } catch (e) {
        postMessage({
          error: true,
          originalJson: value,
          result: format.result
        });
        return;
      }

      postMessage({
        error: false,
        originalJson: value,
        result: format.result
      });
    }
  });
}
Did this improve performance? No. The actual computation time remained the same. But the user experience was transformed. Developers could continue interacting with the application while JSON processing happened in the background. I documented this technical journey in detail in Web Workers to the Rescue, where I explored the architecture changes and performance implications.
Web Workers and TDD
One of my non-negotiables for json-tool was maintaining TDD discipline. I had written about using outside-in TDD from the beginning, and I wasn’t going to compromise on that.
Web workers introduced testing complexity I hadn’t anticipated. The Worker object exists in the browser under the window scope, but in Jest’s jsdom environment, it’s undefined. My test suite broke immediately after introducing workers.
The solution came from Jason Miller’s jsdom-worker library, which provides a Worker API implementation for jsdom. It doesn’t create actual threads; it mocks the behavior. The setup required two steps:
- Install: npm i jsdom-worker
- Import in setupTests.ts: import 'jsdom-worker'
The library had limitations: it doesn’t support shared workers, and it required using the Blob pattern instead of separate script files. But it allowed me to maintain my existing tests without coupling them to implementation details:
it('should format json from uploaded file', async () => {
  const file = new File(
    ['{"a":"b"}'],
    'hello.json',
    { type: 'application/json' }
  );

  const { getByTestId } = render(<App />);

  await act(async () => {
    await userEvent.upload(getByTestId('upload-json'), file);
  });

  await waitFor(() => {
    expect(getByTestId('raw-result')).toHaveValue(`{
  "a": "b"
}`);
  });
});
The test describes behavior, not implementation. It doesn’t care whether a web worker handles the formatting; it just verifies that uploaded JSON gets formatted correctly. This became a valuable lesson: web workers are implementation details, and tests should focus on user-facing behavior.
I later explored this testing approach further during a live coding session with web workers, where we applied TDD in a coding dojo setting and demonstrated that you can refactor synchronous code to use workers while maintaining test coverage.
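For reference, the Blob pattern that jsdom-worker requires looks roughly like this (a hedged sketch; json-tool’s actual worker source differs, and the guard around the Worker global is my addition so the snippet degrades gracefully outside a browser):

```typescript
// The worker source lives in a string instead of a separate script file,
// so jsdom-worker can intercept the Blob URL during tests.
const workerSource = `
  addEventListener('message', (event) => {
    const { jsonAsString, spacing } = event.data;
    try {
      const result = JSON.stringify(JSON.parse(jsonAsString), null, Number(spacing));
      postMessage({ error: false, result });
    } catch (e) {
      postMessage({ error: true, result: jsonAsString });
    }
  });
`;

// Guarded construction: browsers (and jsdom-worker in tests) expose a
// Worker global; plain Node does not, so this returns null there.
function createFormatterWorker(): unknown | null {
  const WorkerCtor = (globalThis as any).Worker;
  const BlobCtor = (globalThis as any).Blob;
  if (typeof WorkerCtor !== 'function' || typeof BlobCtor !== 'function') {
    return null;
  }
  const blob = new BlobCtor([workerSource], { type: 'application/javascript' });
  return new WorkerCtor(URL.createObjectURL(blob));
}
```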
Challenges and Learnings
Performance vs. Simplicity
The web worker implementation solved the UI freezing problem but introduced complexity. The codebase went from straightforward synchronous code to an event-driven architecture. New contributors would need to understand worker lifecycle management, message passing, and the implications of moving code off the main thread.
I balanced this by keeping the worker implementation isolated. The rest of the application remains simple: React components, straightforward state management, minimal abstractions. The complexity is contained, documented, and necessary for the user experience.
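One way to picture that isolation (a sketch with hypothetical names, not json-tool’s actual structure): every lifecycle and messaging concern sits behind one small wrapper, and the rest of the app only ever calls three methods. The factory is injectable so tests can pass a stub instead of a real worker:

```typescript
// Structural type so the sketch does not depend on browser typings;
// real Worker instances and test stubs both satisfy it.
interface WorkerLike {
  postMessage(message: unknown): void;
  addEventListener(type: string, listener: (event: { data: any }) => void): void;
  terminate(): void;
}

type ReplyListener = (reply: { error: boolean; result: string }) => void;

class FormatterClient {
  private worker: WorkerLike | null = null;

  constructor(private readonly workerFactory: () => WorkerLike) {}

  start(onReply: ReplyListener): void {
    this.worker = this.workerFactory();
    this.worker.addEventListener('message', (event) => onReply(event.data));
  }

  format(jsonAsString: string, spacing: string): void {
    this.worker?.postMessage({ jsonAsString, spacing });
  }

  stop(): void {
    this.worker?.terminate(); // avoid leaking the thread on unmount
    this.worker = null;
  }
}
```

With this shape, a React component only needs `start` in a mount effect, `stop` in its cleanup, and `format` in the change handler; the message protocol never leaks into JSX.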
Maintenance Reality
Open source maintenance is time-consuming. Dependencies need updates. Security vulnerabilities require patches. Browser APIs change. Testing libraries evolve.
For json-tool, I aimed for sustainability over rapid development. The tech stack (React, TypeScript, Tailwind, CodeMirror) consists of stable, well-maintained projects with large communities. I avoid cutting-edge dependencies that might break or get abandoned.
Updates happen in batches, typically when I’m actively using the tool and notice something that needs improvement. This isn’t a business with support SLAs; it’s a community tool maintained alongside other responsibilities.
The Open Source Philosophy
Building json-tool taught me what open source means beyond "putting code on GitHub":
- Transparency builds trust. Users can audit the entire codebase. They can verify no tracking happens, no data gets sent anywhere, no ads get injected. Trust isn’t demanded; it’s proven through openness.
- Focused tools solve problems better. json-tool doesn’t try to be everything to everyone. It does one thing well: formatting JSON with privacy guarantees. This focus makes it maintainable and useful.
- Test-driven development pays long-term dividends. The initial TDD investment enabled the web worker refactoring without breaking existing functionality. Tests served as a safety net, allowing confident changes years after initial development.
Looking Forward
json-tool will continue as it is: a focused, privacy-first tool for developers who need trustworthy JSON formatting. I have no plans to monetize it, add analytics, or transform it into something bigger. The value lies in its simplicity and reliability. However, sponsorships are welcome via GitHub Sponsors.
Future improvements will likely focus on:
- Performance optimization for extremely large JSON files (>100MB)
- Better accessibility (screen reader support, keyboard navigation improvements)
- Editor synchronization so both scroll simultaneously
Nothing revolutionary. Just steady improvements that serve the existing user base better: for example, updating to the latest version of CodeMirror for its performance and accessibility enhancements, along with React and TypeScript upgrades.
The most rewarding part of this journey wasn’t reaching 3,000 users; it was building something that solves a real problem without compromising on principles. No tracking. No ads. No data collection. Just a tool that does what it promises and respects its users.
The source code for json-tool remains available at github.com/marabesi/json-tool. The tool itself is available at marabesi.github.io/json-tool. It’s been five years. Here’s to many more of building software that respects privacy and serves developers well.