How does one take two poorly defined terms and put them into a single title? Well, I guess I just did. That means it’s on me to explain what I mean by both AI and Edge. Let’s start with Edge computing, because it’s far more than the mysterious, windswept outpost people imagine.
And speaking of making sense of the edge, one company that’s been in this space long before “AI at the edge” became the trendy phrase of 2024 is ZEDEDA. I’ve followed them since my analyst days at GigaOm, and they’ve shaped quite a bit of how I think about what “edge” actually is.
Defining the Edge
When folks talk about the edge, they’re usually picturing some uniquely situated system in a far-flung location: an oil derrick, a cruise ship, or the Mines of Moria. You get the idea. But the edge isn’t a location—not really. It’s a set of constraints that can pop up just about anywhere.
In my mind, the edge is defined by four pillars:
- Limited connectivity — low bandwidth, intermittent connections, or full-blown isolation.
- Limited infrastructure — space, power, and cooling are all at a premium.
- Limited resources — compute and storage are constrained, sometimes severely.
- Limited local skills — even when humans are nearby, they may not have the skills to service the system.
If you’re contending with the edge, you’re wrestling with at least one of these limits, and typically several at once. Edge isn’t confined to remote oil rigs; it shows up in retail stores, factories, gaming venues, farms, and cars. Modern computing has infiltrated everything, and nearly any industry now has an “edge component.”
These constraints have driven the emergence of specialized technologies and companies: projects like LF Edge’s EVE, lightweight orchestrators like K3s, and commercial vendors such as OnLogic and ZEDEDA. ZEDEDA in particular has built a full ecosystem around orchestrating and managing edge deployments at scale, and they were shaping this category long before AI took center stage.
AI for the Edge
AI has become the watchword of the last few years—and for good reason. Massive advancements in LLMs that power the likes of ChatGPT and Claude have kicked off a gold rush where every company is racing to sprinkle AI into their ad copy and, occasionally, into their products. The results have ranged from genuinely transformative to… well, aspirational.
But AI at the edge? That’s nothing new. If anything, the edge is your best friend’s cool older brother, the one who knew all the indie bands years before they hit Spotify. Remote, mildly aloof, and absolutely aware of what’s up.
Many edge use cases were already using AI long before LLMs grabbed headlines.
Manufacturing has leveraged vision AI for assembly line optimization and automated QA for years. The oil and gas industry uses AI to identify new deposits, predict well efficiency, and support safety operations on offshore rigs. ZEDEDA’s Sachin Vasudeva talked with me at KubeCon about this very use case.
Why AI Belongs at the Edge
We are connecting billions of devices, all generating enormous volumes of data. The idea of shipping all that information back to a centralized datacenter for analysis simply doesn’t hold up: economically, technically, or from a latency standpoint.
Edge AI allows you to:
- Process data in real time
- Avoid saturating expensive network links
- Preserve privacy by keeping sensitive information local
- Retrain or adapt models close to the source
- Operate even when connectivity is flaky or nonexistent
All of this is why companies like ZEDEDA have doubled down on enabling practical, scalable edge AI deployments. Their technology essentially gives organizations a way to operate like a cloud, but where the cloud can’t reasonably go.
Why AI at the Edge Is Hard
Bringing AI to the edge means managing environments where:
- Hardware varies wildly from site to site
- Upgrading equipment requires physically rolling a truck—sometimes thousands of times
- Local compute is limited
- Connectivity is intermittent
- Security requirements are strict and often regulated
- Scale stretches into thousands or tens of thousands of nodes
Try pushing an updated model—or even a patch—to 1,000 train signal huts and you’ll understand the challenge immediately.
You also need a way to orchestrate and govern applications across fleets of devices that look nothing like a tidy, climate-controlled datacenter. Traditional datacenter management tools weren’t designed for this. On-prem tools weren’t either. Expectations around latency, reliability, and bandwidth simply do not match what edge locations can realistically deliver.
This is where edge-native orchestration platforms matter. ZEDEDA, for instance, handles the lifecycle of edge nodes end-to-end: provisioning the OS, configuring networking, building clusters, and deploying workloads via Docker Compose or Kubernetes—whichever best fits the application footprint. Their platform was built to embrace distributed, asynchronous, unreliable environments, not fight against them.
A Closer Look: ZEDEDA’s Approach
ZEDEDA’s work in edge orchestration is deeply aligned with the realities I’ve described. A few highlights from their recent presentation with Tech Field Day:
- Their architecture is built on LF Edge’s EVE virtualization engine—a lightweight, secure, immutable OS layer designed specifically for edge nodes.
- ZEDEDA Cloud provides a centralized orchestration and management plane that can run as SaaS or on-prem.
- They support a broad range of workloads: VMs, containers, K8s, and mixed environments.
- Partnerships with companies like OnLogic allow for zero-touch provisioning using pre-certified hardware.
- Their marketplace model simplifies deploying everything from Docker Compose runtimes to full AI stacks.
And this isn’t theoretical—they have case studies across shipping, automotive, retail, and oil & gas. These industries aren’t dabbling in the edge; they’re operating at massive scale and require orchestration platforms that can actually keep up.
Kubernetes, Docker Compose, and Reality at the Edge
One of the most pragmatic parts of ZEDEDA’s story is that they don’t force every edge deployment into Kubernetes. They support it—but only where it’s the right tool.
Docker Compose remains extremely common (see the sketch after this list) because:
- Developers already use it locally
- It aligns with simple application footprints
- New models and services are constantly arriving
- It’s lightweight and fast to deploy
But when you need to orchestrate a distributed AI pipeline, or when multiple nodes must act as a coordinated cluster, Kubernetes enters the picture. ZEDEDA handles both, abstracting away the operational burden and giving teams a consistent control plane for everything from a single ruggedized node to a multi-site deployment.
Managing Edge AI in the Real World
Deploying AI at the edge requires more than running a model server. You need:
- Model versioning and lifecycle management
- Hardware-aware orchestration (CPU/GPU scheduling; see the sketch after this list)
- Support for CI/CD pipelines
- Lightweight Kubernetes distributions like K3s
- Storage, networking, and security models tuned for the edge
- Resilience in environments where connectivity drops unexpectedly
ZEDEDA’s platform handles provisioning, monitoring, and orchestrating these K8s clusters, while allowing edge nodes to span clusters or operate independently depending on the workload.
Bringing It All Together
AI and the edge have been on a collision course for years, and we’re finally seeing the infrastructure mature enough to support large-scale deployments outside the datacenter. The constraints that define the edge aren’t going away, but our ability to work within them is improving.
If you’re curious about how to operationalize AI at the edge—or how to manage thousands of distributed nodes across unpredictable environments—I highly recommend checking out ZEDEDA’s Tech Field Day presentations or my conversation with Sachin Vasudeva at KubeCon. They’ve been solving these problems long before “AI at the edge” became a buzzword, and their perspective is invaluable.