Application Performance Monitoring (APM) means many things to many people. At its core, it enables developers to diagnose why their applications are slow and helps them provide a better experience to their users. Traditionally, this is accomplished by collecting a lot of data and displaying it in the form of dashboards and request traces. The problems you’re trying to solve are generally known up front.
For example, N+1 queries are a common issue in many web applications, so many APMs offer purpose-built tools to address them. Third-party HTTP requests are another common culprit, and so they provide instrumentation to track slow external API calls, timeouts, and retry patterns that slow down your response times. This cookie-cutter approach is common across legacy APM solutions. The data they collect and the interfaces they provide are glued together to solve a specific problem: application performance.
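To make the N+1 pattern concrete, here is a minimal, hypothetical sketch using a fake in-memory database layer that counts queries (the names `fetch_posts`, `fetch_comments`, and `fetch_comments_bulk` are invented for illustration, not from any real ORM):

```python
# Fake tables and a query counter standing in for a real database.
POSTS = {1: "Hello", 2: "World"}
COMMENTS = {1: ["nice"], 2: ["+1", "agreed"]}
query_count = 0

def fetch_posts():
    global query_count
    query_count += 1  # SELECT * FROM posts
    return list(POSTS)

def fetch_comments(post_id):
    global query_count
    query_count += 1  # SELECT * FROM comments WHERE post_id = ?
    return COMMENTS[post_id]

def fetch_comments_bulk(post_ids):
    global query_count
    query_count += 1  # SELECT * FROM comments WHERE post_id IN (?)
    return {pid: COMMENTS[pid] for pid in post_ids}

# N+1: one query for the posts, then one more per post (2 posts -> 3 queries).
query_count = 0
for pid in fetch_posts():
    fetch_comments(pid)
n_plus_one = query_count

# Batched: two queries total, no matter how many posts there are.
query_count = 0
fetch_comments_bulk(fetch_posts())
batched = query_count

print(n_plus_one, batched)  # 3 2
```

The batched version stays at two queries as the post count grows, which is exactly the shape of fix an APM's N+1 detector nudges you toward.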
Too much data, too few answers
Modern applications fail in ways that developers never anticipated. Your payment processor returns a status code you’ve never seen. A customer finds an edge case that only happens on Tuesdays at midnight. Your app is down, but all your performance metrics look fine.
Traditional APMs, built to solve yesterday’s known problems, leave you in the dark about today’s unknown unknowns—and drowning in expensive data that doesn’t help when you need it most.
The irony is that legacy APMs manage to be both too much and too little at the same time. They overwhelm you with dashboards, metrics, and terabytes of data you’ll never use, yet still can’t answer the questions that matter when production breaks. You’re paying for complexity that doesn’t translate to capability. This is the APM paradox: maximum cost, maximum complexity, minimum flexibility.
Observability 1.0: metrics, logs, and traces
Most APMs use the concept of a distributed trace, with nested spans for each event that happened during a request, background job, or similar transaction. The canonical UI is a flame graph. For many development teams—especially those on-call for production systems—request traces are not enough. They need detailed logs and metrics to understand the overall state of their systems and troubleshoot issues outside their application code.
Observability attempts to solve these problems by combining these different data points into the three pillars of metrics, logs, and traces. Observability has increased in popularity over the past decade, especially with the rise of OpenTelemetry, which seeks to standardize instrumentation and data formats across vendors.
But to be truly observable, you must be able to understand the state of your system based on its outputs—and not just for problems you can anticipate. You need to be able to deal with unknown unknowns.
Observability 2.0 and wide events
In the beginning, observability repackaged the discrete concepts of metrics, logs, and traces with a set of new language and practices (what some have called Observability 1.0). The true innovation came later, when the industry shifted towards an event-based architecture around wide events.
Wide events are essentially structured logs that can include many additional properties, such as system state, correlation IDs, and contextual metadata. Instead of nested traces, you end up with a flat list of related events that tell a story about your system. You can derive metrics from wide events at query time, and unlike text-based logs, structured events can be queried and analyzed in new ways.
For example, instead of separate logs, metrics, and traces for a checkout flow, you’d have a single wide event:
```json
{
  "event": "checkout",
  "duration_ms": 642,
  "user_id": 57239,
  "cart_value": 99.50,
  "payment_method": "stripe",
  "items_count": 3,
  "error": null,
  "request_id": "b1de636a-7bf8-4f44-8917-0980c279e817",
  "timestamp": "2025-10-15T10:30:00Z"
}
```
One event tells the whole story—performance, business metrics, and debugging context. To dig deeper, you might have similar events for an HTTP call to Stripe, a database query, or an error if one occurred. Related events are correlated by `request_id`, which gives you the complete picture of what happened during checkout.
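Because wide events are structured, metrics can be derived at query time instead of being pre-aggregated. A minimal sketch over an in-memory list of events (field names follow the checkout example above; the data itself is made up):

```python
from statistics import mean

# A handful of wide events, shaped like the checkout event above.
events = [
    {"event": "checkout", "duration_ms": 642, "payment_method": "stripe", "error": None},
    {"event": "checkout", "duration_ms": 918, "payment_method": "stripe", "error": "card_declined"},
    {"event": "checkout", "duration_ms": 301, "payment_method": "paypal", "error": None},
]

checkouts = [e for e in events if e["event"] == "checkout"]

# Derive metrics at query time -- no pre-aggregation, no separate metrics pipeline.
avg_duration = mean(e["duration_ms"] for e in checkouts)
error_rate = sum(e["error"] is not None for e in checkouts) / len(checkouts)
by_method = {
    m: sum(1 for e in checkouts if e["payment_method"] == m)
    for m in {e["payment_method"] for e in checkouts}
}

print(avg_duration, error_rate, by_method)
```

The same raw events answer a performance question (average duration), a reliability question (error rate), and a business question (payment-method mix)—which is the point of the wide-event model.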
This isn’t just about debugging—it’s also about understanding your business. Wide events can answer “Why did signups drop?” as easily as “Why is the API slow?” Traditional APMs segregate performance from product analytics, but your application doesn’t care about that distinction.
Observability vendors face their own problem, though—one shared by many APMs: cost. The proliferation of data—which costs a lot to transfer, store, and process—has led many software teams to rethink their monitoring approach, either moving to more cost-effective vendors or hosting their own monitoring infrastructure.
Many observability vendors are also less approachable to developers, requiring extensive setup before they deliver value. Developers use Datadog and New Relic because they have to—not because they want to.
What do you really need?
If you’re a small-to-mid-sized development team on-call for your own code, you know this pain. You don’t have a dedicated observability team. You don’t have time to instrument your app and build 50 dashboards. You definitely don’t have the budget for enterprise APM pricing that scales with your success.
You need monitoring that respects both your time and your budget. Something that works out of the box but doesn’t box you in when you need to dig deeper.
The real problem with APMs
Legacy APM tools have a problem with the observability paradigm: they were initially built to solve a known issue (performance). But what about everything else? What about the unknown unknowns? They already had traces, and so the answer was often to bolt on the other two pillars: metrics and logs. But while they were playing catch-up to Observability 1.0, the industry had already moved on to wide events.
The result is the worst of both worlds: rigid UIs and siloed data. If you need to diagnose a performance problem, APMs are good at that. But if you need to dig deeper—like troubleshooting a complex bug, answering a business question, or understanding why your app is down—you’re stuck with pre-aggregated metrics and grepping through logs.
And of course, you’re paying for all that extra data.
Where we go from here
The answer to this mess is to send more useful data and less data that is stored but never analyzed. Observability with wide events is one answer to the APM problem, and it’s what we’ve focused on at Honeybadger. Honeybadger started as an error tracking service, which is a similar concept: instead of storing every log in your application, what if we proactively enrich and alert you about the logs you really care about?
Our most recent addition, Honeybadger Insights, is a structured logging service that automatically instruments your application framework for wide events—with tools to further enrich your logs with context from your users, infrastructure, and business. Events are correlated, giving you the ability to see what happened during a particular request, but also to query logs, aggregate metrics, and chart any data point without adding specialized instrumentation.
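Under the hood, event correlation comes down to stamping every event from a request with the same identifier. A vendor-agnostic sketch (this is not Honeybadger’s actual API; the `emit` helper and field names are invented for illustration):

```python
import json
import time
import uuid

def emit(event, request_id, **fields):
    """Emit one wide event as a structured JSON log line."""
    record = {
        "event": event,
        "request_id": request_id,  # shared ID correlates all events in a request
        "timestamp": time.time(),
        **fields,
    }
    print(json.dumps(record))
    return record

# Every event emitted during a request carries the same request_id,
# so a query for that ID reconstructs the full story of the request.
request_id = str(uuid.uuid4())
emit("checkout", request_id, duration_ms=642, cart_value=99.50)
emit("stripe.charge", request_id, duration_ms=410, status=200)
```

Framework auto-instrumentation does this stamping for you; the enrichment step is just adding more key-value pairs (user, infrastructure, business context) to each record before it’s written.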
On the UI side, we provide a set of automatic dashboard and widget templates that work great out of the box, but are easy to customize. We recently added an intelligent project overview dashboard that includes the most critical information for developers around application performance, errors, and uptime monitoring—to see what’s happening with your application at a glance.
We’re calling our approach “Just enough APM.”
Not because less is more, but because enough is enough. Not because we’re anti-features, but because we’re pro-developer. Every dashboard should earn its place. Every metric should answer a real question.
APMs are popular with developers because they understand and anticipate their needs. Honeybadger is not a monolithic APM—we’re more of an observability toolbox for developers—but our mission is similar: to help developers fix fundamental problems with a great out-of-the-box experience that “just works.” The difference is that when you need to break the mold, Honeybadger is flexible enough to adapt to your needs.
If you want to fix more issues, unlock modern observability, and reduce your costs, give Honeybadger a try.