Uptime Kuma is a fantastic software package that you can use to monitor your self-hosted services, but what if you could build your own monitoring software using an ESP32, instead? The ESP32 uses very little power, but is more than powerful enough to craft HTTP requests, send ICMP packets to query whether a host is responding or not to ping requests, and host a small web server to view the results. As a challenge, I turned my ESP32-S3 into a full-fledged uptime monitoring device, and it works surprisingly well.
What started as a silly idea, wondering if I could build something like Uptime Kuma on a microcontroller, turned into one of my favorite little home-lab gadgets. The whole thing is very power efficient, boots in seconds, and keeps all service definitions inside a LittleFS volume on-device. Plus, it gives me a neat web dashboard showing whether my Home Assistant instance, Jellyfin server, or any other service is up or down. It’s simple, fast, and surprisingly reliable.
There’s also a benefit to running what is essentially an immutable uptime monitor in a form factor like this. Unlike web-based alternatives you typically deploy on a full server, this one is its own little box sitting on your network. If your Proxmox cluster goes down, your Home Assistant instance crashes, or your NAS locks up, the ESP32 will still be awake to tell you, because it isn’t running on the same hardware that just fell over. It’s a tiny, single-purpose appliance, but one you can use and expand into a core part of your self-hosted monitoring system.
Why build your own uptime monitoring system?
It lives separate from your infrastructure
There are countless ways to check whether your services are up: Uptime Kuma, Grafana/Loki dashboards, Prometheus exporters, Nagios scripts, and more. But all of them require a device more powerful than a microcontroller, and most assume your wider infrastructure is intact. If the server it’s running on crashes, your monitoring system might be taken down along with it. And that kind of misses the point of an uptime monitoring system in the first place. Obviously, individual LXCs and VMs can crash on an otherwise-working Proxmox host, but if the entire host goes down, then your monitoring software that runs on it will, too.
An ESP32 solves that problem beautifully. It’s a separate device solely dedicated to monitoring, and it doesn’t run on your server or depend on Docker, Proxmox, or any virtual machine to be intact. It’s just a tiny board with Wi-Fi and enough horsepower to run basic monitoring logic, and it’s isolated from the rest of your devices, all while being affordable, too. If something in your home lab hard-crashes, the ESP32 is still sitting there, checking services and serving a little webpage showing what failed.
I’ll be honest, the other motivation behind this project was pure curiosity. I wanted to see how far I could push a microcontroller as a “proper” network appliance, and with asynchronous HTTP, LittleFS, JSON, and a frontend, there’s quite a lot you can do. Given that I already had extensive experience building various services with the ESP32, I wasn’t too surprised that it was possible, but I was surprised by just how well it worked. The use cases for a project like this are pretty evident, too, especially when all you need is a microcontroller costing $5 or so.
How my ESP32 Uptime Monitor works
You define a service, then poll for that service
At boot, the ESP32 connects to Wi-Fi, mounts its LittleFS filesystem, loads any previously saved services from a JSON file, and launches an asynchronous web server. All configuration is done through the web UI, meaning that you don’t need to reflash just to add or remove a service. Once you’ve flashed it with your Wi-Fi SSID and password, you shouldn’t have to reflash it again unless you’re adding new features or changing the underlying code.
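To picture that boot flow, here’s a minimal sketch of what the startup sequence could look like. It assumes the common ESPAsyncWebServer library and a /services.json file, which may not match the project’s exact code:

```cpp
#include <WiFi.h>
#include <LittleFS.h>
#include <ESPAsyncWebServer.h>  // asynchronous web server library (assumed)

AsyncWebServer server(80);

void setup() {
  Serial.begin(115200);

  // Connect to Wi-Fi using the credentials baked in at flash time
  WiFi.begin("YOUR_SSID", "YOUR_PASSWORD");
  while (WiFi.status() != WL_CONNECTED) {
    delay(250);
  }

  // Mount the LittleFS volume, formatting it on first boot if needed
  LittleFS.begin(true);

  // Load previously saved services from JSON (e.g. /services.json),
  // register the web routes, then start serving the dashboard
  server.begin();
}

void loop() {
  // Periodic service checks run from here (sketched later on)
}
```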
A “service” object is defined like so, with a rough struct sketch after the list:
- A name
- A service type (Home Assistant, Jellyfin, GET request, Ping)
- Host/IP address
- Port
- Optional HTTP path (if a GET request should contact a specific endpoint)
- Optional expected response substring
- A check interval in seconds
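Those fields map naturally onto a small struct held in memory. The names below are my own illustration of what such a definition might look like, not the project’s actual code:

```cpp
// One monitored service, roughly mirroring the fields listed above.
struct Service {
  String name;           // e.g. "Home Assistant"
  String type;           // "homeassistant", "jellyfin", "get", or "ping"
  String host;           // hostname or IP address
  uint16_t port;         // pre-filled from the service type, but editable
  String path;           // optional HTTP path for GET checks
  String expect;         // optional substring the response must contain
  uint32_t intervalSec;  // per-service check interval in seconds

  // Runtime state tracked between checks
  bool up = false;
  unsigned long lastChecked = 0;  // millis() timestamp of the last check
  String lastError;
};
```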
Every five seconds, the ESP32 iterates through the list and decides whether each service is due for a check. Because check intervals are defined per service, you can poll important services frequently and low-priority ones less often. When a service goes down, the front-end updates to show it, too.
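That five-second sweep with per-service intervals is easy to sketch. Assuming the Service struct above and a checkService() helper (sketched further down), the main loop might look something like this:

```cpp
#include <vector>

std::vector<Service> services;    // populated from the saved JSON at boot
unsigned long lastSweep = 0;

bool checkService(Service &svc);  // dispatches on svc.type (sketched below)

void loop() {
  // Sweep the list every five seconds; each service decides for itself
  // whether enough time has passed since its last check.
  if (millis() - lastSweep >= 5000) {
    lastSweep = millis();
    for (Service &svc : services) {
      if (millis() - svc.lastChecked >= svc.intervalSec * 1000UL) {
        bool wasUp = svc.up;
        svc.up = checkService(svc);
        svc.lastChecked = millis();
        if (svc.up != wasUp) {
          // Status changed; the front-end picks this up on its next poll
        }
      }
    }
  }
}
```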
As for service types, the design is extensible: each type defines its own check behavior and pre-fills a sensible default port. Jellyfin demonstrates a dedicated /health API path, whereas Home Assistant shows how any response at all can still prove a service is up. Here’s what all the service types do and expect, with a small check-function sketch after the list:
- Home Assistant: It makes an HTTP request to /api/ and treats any valid HTTP response as a sign that HA is responding.
  - This one could be expanded to check for an actually valid Home Assistant endpoint, such as /api/states. It’s a rather quick check, as /api/ returns a 404, but we treat that as the service being alive.
- Jellyfin: It queries the /health endpoint and expects a 200 OK.
- GET Request: It fetches the specified path and checks for a user-defined substring in the response.
- Ping: It sends ICMP echo packets using ESP32Ping and listens for a response.
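To show how those types translate into code, here’s a rough version of the dispatch. The HTTPClient and ESP32Ping calls are real library APIs from the Arduino ESP32 ecosystem, but the structure around them is my own sketch rather than the project’s implementation:

```cpp
#include <HTTPClient.h>
#include <ESP32Ping.h>

// Returns true if the service answered the way its type expects.
bool checkService(Service &svc) {
  if (svc.type == "ping") {
    // ICMP echo: any reply counts as up
    bool ok = Ping.ping(svc.host.c_str(), 3);
    if (!ok) svc.lastError = "No ICMP reply";
    return ok;
  }

  // Every other type is an HTTP check against host:port + path
  HTTPClient http;
  String url = "http://" + svc.host + ":" + String(svc.port) + svc.path;
  http.begin(url);
  int code = http.GET();

  bool ok;
  if (svc.type == "homeassistant") {
    ok = (code > 0);        // any HTTP response at all means HA is alive
  } else if (svc.type == "jellyfin") {
    ok = (code == 200);     // /health should return 200 OK
  } else {                  // generic GET request
    ok = (code == 200) && (svc.expect.length() == 0 ||
                           http.getString().indexOf(svc.expect) >= 0);
  }

  if (!ok) svc.lastError = "HTTP code " + String(code);
  http.end();
  return ok;
}
```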
Each result is stored in memory and exposed through a JSON API that the front-end polls every few seconds to display the latest fetched result. Finally, we track the last checked time, the last error message, and whether the service’s status changed. It’s a near-realtime view of what’s happening with your self-hosted services, and it makes it easy to track what’s currently working and what isn’t.
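Exposing those in-memory results as JSON only takes a handful of lines. This sketch assumes ESPAsyncWebServer and ArduinoJson (version 7), which may differ from what the project actually uses, but it shows the shape of the endpoint the dashboard would poll:

```cpp
#include <ESPAsyncWebServer.h>
#include <ArduinoJson.h>

// 'services' is the global std::vector<Service> from the earlier sketch.
// Serve the latest result for every service as a JSON array that the
// front-end polls every few seconds.
void setupStatusRoute(AsyncWebServer &server) {
  server.on("/api/status", HTTP_GET, [](AsyncWebServerRequest *request) {
    JsonDocument doc;                   // ArduinoJson 7 style
    JsonArray arr = doc.to<JsonArray>();
    for (Service &svc : services) {
      JsonObject o = arr.add<JsonObject>();
      o["name"] = svc.name;
      o["up"] = svc.up;
      o["lastChecked"] = svc.lastChecked;
      o["lastError"] = svc.lastError;
    }
    String out;
    serializeJson(doc, out);
    request->send(200, "application/json", out);
  });
}
```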
I absolutely love this project
And it uses barely any power
The charm of this project isn’t that it replaces Uptime Kuma feature-for-feature; it doesn’t, and it isn’t meant to. Instead, it’s perfect because of how simple it is. It gives you the basics: HTTP checks, ping checks, and simple validation. It runs on nearly no power, it’s completely silent, and it’s immune to most of the problems that take down your main servers, because it runs as a completely separate, external device. I built it to be extensible, too; because the check functions are built individually and service types are defined separately, you could add your own services and build in support for MQTT or another notification service on top of it all. Even Telegram or Discord support is possible, so you can get notifications when services go down in a way that doesn’t rely on your infrastructure (aside from an internet connection, of course).
If you want to try out this project, I published it over on GitHub. I might make some changes over time to improve it slightly, as I do want to add Telegram or Discord support at some point, along with support for POST requests, too. It’s a very basic project, though, and it uses very little power, and hopefully it can serve as the basis of your next monitoring project, too!