Why this took a founder to build
For years, I wanted to build what eventually became Proxylity.
Not because it was novel, or because the technology didn’t exist in pieces already, but because I kept running into the same constraint across different organizations: systems that relied on UDP and other non-HTTP protocols were stuck on virtual machines long after everything else had moved on.
Those teams weren’t wrong. They had customers, revenue, and roadmaps that demanded their full attention. Migrating a UDP service off VMs—especially to something unproven—was hard to justify when the existing system mostly worked. Even when everyone agreed the architecture was holding them back, there was always something more urgent to ship.
Escaping VM gravity was clearly necessary, but never urgent enough.
Building a serverless UDP platform wasn’t something that could be done incrementally on the side. It required sustained focus, careful integration with cloud primitives, and a willingness to rethink long-held assumptions about how network services are deployed and operated. Inside those companies, devoting that level of time and expense simply didn’t make sense.
As a founder, it finally did. One year ago today I started Proxylity.
Proxylity exists because solving this one problem well needed a company that could devote 100% of its resources to it.
VM gravity and the cost of "good enough" networking
UDP-backed services have a habit of lingering.
Even in organizations that aggressively modernize their application stacks—adopting managed databases, serverless compute, and fully automated CI/CD—anything that speaks UDP often remains anchored to long-lived VMs or appliances. DNS, RADIUS, TFTP, telemetry collectors, real-time control planes: these systems are critical, sensitive to latency, and deeply entwined with the network. The safest place for them has traditionally been "the box that already works."
That decision is usually rational.
VM-based UDP services are well understood. They’re debuggable with familiar tools. They give operators full control over the packet path. And once they’re stable, they fade into the background—until they need to change.
That’s where the cost shows up.
Deployments are slow and risky. Scaling requires capacity planning rather than demand-driven elasticity. Small changes require coordination across infrastructure, networking, and operations teams. Over time, these services become the hardest part of the system to evolve, even as everything around them accelerates.
The result is a quiet mismatch: modern teams moving quickly everywhere except where they can least afford to be slow.
Proxylity started from the belief that this mismatch wasn’t inevitable—that the same cloud primitives that transformed HTTP workloads could be applied to UDP as well, if they were treated as first-class citizens rather than edge cases.
The founding bet: serverless should work for UDP too
The core bet behind Proxylity was simple to state and risky to make:
Serverless infrastructure should be just as effective for UDP services as it is for HTTP APIs.
That meant accepting a few constraints up front. UDP is connectionless, timing-sensitive, and often stateful in ways that don’t map cleanly to request–response models. Cloud platforms, meanwhile, are optimized for scale-out, ephemeral compute, and managed control planes.
The bet wasn’t that these differences didn’t matter—it was that they could be reconciled without sacrificing either side.
If that bet paid off, the outcome would look familiar to any modern cloud team:
- UDP services updated and deployed in minutes, not hours or days
- Changes rolled out frequently and safely
- Operations reduced to configuration and code, not pets and hand-tuned instances
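To make that concrete, here is a minimal, illustrative sketch of what handling UDP in a serverless function could look like. The event and response shapes (`Messages`, `Data`, `Tag`, `Replies`) are hypothetical stand-ins, not Proxylity's actual payload format; the sketch only assumes that a gateway batches inbound datagrams, base64-encodes their payloads, and relays whatever replies the handler returns.

```python
import base64

def handler(event, context):
    """Hypothetical handler for a batch of UDP datagrams delivered by a gateway.

    The event and response shapes are illustrative, not Proxylity's actual
    format: each message is assumed to carry a base64-encoded payload and a
    tag used to route any reply back to the original sender.
    """
    replies = []
    for msg in event.get("Messages", []):
        payload = base64.b64decode(msg["Data"])  # raw datagram bytes
        # Real business logic (parse RADIUS, telemetry, etc.) would go here;
        # this sketch simply acknowledges each datagram.
        replies.append({
            "Tag": msg.get("Tag"),
            "Data": base64.b64encode(b"ACK " + payload).decode("ascii"),
        })
    return {"Replies": replies}
```

The point is the shape of the workflow: datagrams in, function invocation, optional replies out, with no long-lived process to keep healthy in between.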
The first year of Proxylity was largely about finding out whether that was actually true.
Early lessons: when "easy" isn’t actually easier
One of the earliest design decisions I made was around permissions.
My initial instinct was to optimize for simplicity: a single, account-level IAM role that customers could configure once and then forget about. Fewer moving parts. Less setup. Less to think about. It even gave me pause at the time—but "simple is better" felt like the right instinct to trust.
It turned out not to be.
The first signal wasn’t a hard objection. It was more subtle. Early customers would raise an eyebrow, ask a clarifying question, and move on. Nothing overt—but enough to register. Then the security questionnaires started coming back, and a particular phrase kept jumping off the page: least privilege.
In isolation, the account-level role was defensible. In practice, it didn’t fit how teams actually manage infrastructure. Permissions that live above the service are harder to reason about, harder to audit, and harder to maintain over time—especially in environments where infrastructure is defined and reviewed as code.
When I proposed an alternative—scoping permissions to individual destinations instead—the response from early adopters was immediate and unambiguous: "yes, please."
That change happened in the first quarter of 2025. Proxylity moved to destination-level roles, aligning permissions directly with the resources they governed. It was a rollback of sorts, but also a clarification. Optimizing for long-term maintainability and least privilege turned out to be far simpler than optimizing for a quick initial setup.
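For illustration only, a destination-scoped role might look like the sketch below. The account ID, external ID, and ARNs are placeholders rather than Proxylity's real values; what matters is that the role grants exactly what one destination needs (here, invoking a single Lambda function) instead of account-wide access.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: who may assume the role. The principal account and external ID
# are placeholders for illustration, not Proxylity's real values.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "example-destination-id"}},
    }],
}

# Permissions policy: only what this one destination needs -- invoking a
# single Lambda function -- rather than broad, account-level access.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "lambda:InvokeFunction",
        "Resource": "arn:aws:lambda:us-west-2:222222222222:function:my-udp-handler",
    }],
}

iam.create_role(
    RoleName="udp-destination-my-udp-handler",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="udp-destination-my-udp-handler",
    PolicyName="invoke-one-function",
    PolicyDocument=json.dumps(permissions_policy),
)
```

Reviewed as infrastructure-as-code, a role like this is easy to audit: the blast radius of any single destination is visible in a handful of lines.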
Where VM gravity shows up in the real world
The pull of VM-based architectures isn’t abstract. I’d seen it repeatedly in very different environments.
One example was RADIUS infrastructure at a central authentication provider handling roughly six thousand requests per second. The system was built on legacy server software—NPS and FreeRADIUS—running on long-lived instances. Deployments took hours. Failures were rare, but when they happened, they were loud and long-lived. The architecture worked, but it was brittle, and every change carried risk.
Another example came from a legal discovery company with a high-velocity ingest-only API. The system needed to accept data as fast as possible, guarantee idempotency, and support auditing and asynchronous responses. The HTTP-on-VM model they used imposed friction where none was required. The protocol fit the problem, but the deployment model slowed everything down.
In both cases, the problem wasn’t a lack of engineering skill or awareness. It was that the safest place for these systems—organizationally and operationally—was still a VM. The cost of moving them was clear; the benefit, harder to quantify.
Proxylity was built for exactly these kinds of systems: critical, latency-sensitive, and held back less by technology than by the weight of their own stability.
Positioning correction: cost savings aren’t the point
I initially positioned UDP Gateway as a way to save money.
That wasn’t wrong—moving off always-on infrastructure and paying only for what you use does reduce costs. But it was the wrong thing to lead with.
Saving money is a surprisingly weak motivator for architectural change. It’s abstract, delayed, and often invisible to the teams doing the work. What actually drives change is removing friction—especially the kind that slows teams down every day.
The real value of UDP Gateway isn’t that it’s cheaper. It’s that it lets teams move faster. It reduces deployment time from hours to minutes. It makes iteration safe and routine instead of exceptional. It lets engineers treat UDP services with the same expectations they already have for modern cloud workloads.
Once I reframed the story around velocity instead of savings, the conversations changed. Teams didn’t need convincing that the old way was expensive—they already knew it was slow.
What held up: developer experience and AWS integration
One of the bets that did hold up was a focus on developer experience and deep integration with AWS.
Early on, while building examples and internal tooling, something unexpected happened. Creating and updating global UDP Gateway listeners and destinations was consistently faster than provisioning the surrounding infrastructure—IAM roles, DynamoDB tables, and other supporting resources. What I used to think of as "fast" had become the slow path.
The contrast was hard to miss, and it was a good sign.
Customer conversations reinforced it. More than once, the reaction was simply: "It’s very fast. I didn’t expect that." Five-minute deploys weren’t an aspirational goal—they were happening in practice.
That speed changed behavior. Teams deployed more often. They experimented. They treated UDP services like normal parts of their stack instead of special cases that needed to be protected from change.
I anticipated the need to support our customers’ compliance efforts from the start. What I underestimated was how well a genuinely customer-friendly model would land.
Proxylity can emit metrics and logs, but they’re created directly in the customer’s AWS account and remain fully under their control. We don’t store logs ourselves, which means retention, access, and archiving work the way teams already expect. It wasn’t something customers asked for — but once they saw it, the response was overwhelmingly positive. Letting go of control over customer data turned out to be a powerful way to earn trust.
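One small example of what "fully under their control" means in practice: because the log group lives in the customer's account, retention is set with the same CloudWatch Logs call teams already use. The log group name below is a placeholder, not a name Proxylity guarantees.

```python
import boto3

logs = boto3.client("logs")

# The log group name is a placeholder; whatever group the gateway writes to
# lives in the customer's account, so standard retention controls apply.
logs.put_retention_policy(
    logGroupName="/example/udp-gateway/listener-logs",
    retentionInDays=30,
)
```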
Operating reality: a year in production
By the end of the first year, Proxylity had seen just over six thousand production deploys.
Those deploys weren’t evenly distributed—some quarters were far busier than others—but the overall pattern was consistent: frequent iteration without reliability degradation. Over the year, uptime landed at 99.997%, with no sustained outages and no quarter showing systemic instability.
The point of tracking these numbers wasn’t to chase a marketing metric. It was to validate a premise: that high deployment velocity and operational stability don’t have to be in tension, even for network-facing, UDP-based services.
So far, that premise has held.
Open questions: usage-based pricing and predictability
One area that’s still evolving is pricing.
I believe usage-based pricing is fundamentally fair. Paying in proportion to actual work done aligns incentives and removes a lot of guesswork. It’s encouraging to see broader acceptance of this model across the industry.
At the same time, I understand the need for predictability. Some teams need to know, with confidence, what a service will cost month to month. Balancing those needs—fairness, simplicity, and predictability—is something I’m continuing to think carefully about.
There isn’t a final answer yet, and that’s okay.
Looking forward
The first year of Proxylity was about proving that this approach could work: that serverless infrastructure could support serious UDP workloads, and that doing so could materially improve how teams build and operate these systems.
The next year is about doubling down on that progress—removing more friction, supporting more real-world protocols, and continuing to push toward a model where non-HTTP services move just as fast as everything else in the cloud.
Escaping VM gravity turned out to be possible. Making it routine is the next step.