Before learning about containers, I used to dedicate a single virtual machine to every service I wanted to run, which is hugely inefficient. Compared to fully-fledged virtual machines, containers are lighter, more specialized, flexible, and easier to set up.
Still, I struggle to choose between my main PC and my Raspberry Pi 4B whenever a new self-hosting project lands on my desk. The Raspberry Pi, a single-board computer (SBC), is sufficient for most of my lighter self-hosting tasks. So, I listed the pros and cons of each to remind myself for next time. Most of my experiments so far have been with Docker and the Raspberry Pi, though the same ideas apply to other products.
Also, while cloud hosting is always an option, it defeats the point of self-hosting, so that’s off the table for the scope of this article.
Performance
Is it right for the project?
This VM contains the Jellyfin server I use to try out new stuff and research my articles.
For a simple self-hosted media server with only a few users, either containers or an SBC fits the bill. All my services run happily inside either virtual machines or containers on my main PC, which has an Intel Core i9-12900K and 32GB of RAM. But for logging, analytics, timing-sensitive control setups, or larger media servers with dozens of users, the choice of host matters more.
The more container services run on a single device, the more its resources are shared between them. Stretch the resources too thin, and everything slows to a crawl. Dedicating a single SBC to a single service mitigates the issue somewhat, since the board devotes all of its resources to one task. Still, it’s susceptible to slowdowns if the project grows complex enough, as the SBC’s strengths (running low-powered hardware on a budget and sipping power) also limit its performance.
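If everything does end up on one host, the container runtime can at least stop a single hungry service from starving the rest. Here is a minimal sketch using Docker’s --cpus and --memory flags; the image and limits are illustrative, not a recommendation:

```bash
# Cap a media server container at 2 CPU cores and 1GB of RAM so it
# can't crowd out the other services sharing the host
# (image name and limits are illustrative; tune them to your workload)
docker run -d \
  --name jellyfin \
  --cpus="2.0" \
  --memory="1g" \
  jellyfin/jellyfin
```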
Thankfully, both setups can scale. With SBCs, you can cluster multiple boards to distribute the load. However, adding more boards increases complexity and really only benefits parallelizable workloads. It’s also difficult to scale individual components, like memory, in SBC clusters. For single-host systems, scaling is limited by component compatibility and how many parts the motherboard can physically accept.
The silver lining is that nothing is permanent. If one setup doesn’t work out, there are many ways to migrate between them. Containers are made to be portable, so moving them between machines with the same architecture should be smooth (otherwise, the image will need to be emulated or rebuilt for the new architecture, as sketched below). And if dedicating a single SBC to a service leaves too much performance headroom, consolidating the lighter services onto a central host should be simple, too.
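On the architecture point, Docker’s buildx can rebuild the same image for a different target, which is one way to move a service from an x86 desktop to an ARM board like the Pi. This is only a sketch under a few assumptions: the registry and image name are placeholders, and multi-platform builds may first need a builder created with docker buildx create --use.

```bash
# Rebuild one Dockerfile for both x86-64 and ARM64 and push the result,
# so either the desktop or the Raspberry Pi can pull and run it
# (registry/image name is a placeholder)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myregistry/myservice:latest \
  --push .
```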
Points of failure
Centralized convenience or distributed redundancy?
In a multi-container setup, if the central host machine goes down, so do all its services. That’s less of an issue with a discrete SBC setup. If one board fails, the failure is confined to a single device and service, not the entire setup. Setting up physical SBC compute clusters is also a good way to both improve performance and add redundancy. If one member device of the cluster breaks, others can fill in without any downtime. And if the cluster needs more performance, just add more boards.
Management
Both are easy to handle
A Docker Swarm speedtest tracker service page.
Management is straightforward either way. You can administer an SBC cluster from one of its nodes, and container setups can be orchestrated with tools like Docker Swarm or managed directly from the central host.
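For a rough idea of what that orchestration looks like in practice, here’s a hedged sketch of bootstrapping a small Swarm and deploying a replicated service; the IP address, token, service name, and image are all placeholders, and the replicas also double as the redundancy mentioned earlier.

```bash
# On the board acting as manager (address is a placeholder)
docker swarm init --advertise-addr 192.168.1.10

# On each additional board, run the join command that init prints,
# which looks roughly like this
docker swarm join --token <worker-token> 192.168.1.10:2377

# Deploy a service with two replicas so a surviving node can keep it running
docker service create --name speedtest --replicas 2 --publish 8080:80 <tracker-image>
```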
Space and setup
Tiny boards, tidy spaces
The Raspberry Pi 4B is barely larger than a credit card.
SBCs are tiny. Compared to laptops and even NAS units, they’re far easier to hide away. Their low power profile (the Raspberry Pi 5 draws around 12W under load) also means they can stay on permanently without raising the power bill. Better yet, for low-power tasks, they can run fanless (though they’ll still need extra cooling for sustained high loads), staying completely silent. Once they’re set up, they can run headless (without a monitor or input devices), saving even more space.
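Running headless mostly comes down to enabling remote access before the first boot. On Raspberry Pi OS, for instance, an empty file named ssh on the flashed card’s boot partition turns SSH on (Raspberry Pi Imager can preconfigure this too); the mount point below is an assumption and will differ from system to system.

```bash
# Enable SSH on a freshly flashed Raspberry Pi OS card
# (the /media/$USER/bootfs path is an assumption; adjust it to wherever
#  the card's boot partition is mounted on your machine)
touch /media/$USER/bootfs/ssh
```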
Having a central host is a little different. Certain operating systems (looking at you, Windows 11) need a periodic restart to work properly, which adds downtime. Also, desktop processors need stronger coolers and power supplies, leading to higher noise and power consumption. However, hosting containers inside existing machines also saves space that would otherwise be occupied by SBCs.
Cost
No clear winner
“Balance” is a recurring theme here, and cost might be the trickiest factor of all. It’s natural to assume that containers would be cheaper, and for smaller home labs, that’s often true. But as workloads increase, you’ll eventually hit a performance ceiling, and upgrading components can quickly get expensive.
SBCs, on the other hand, are inexpensive. The Raspberry Pi 4B costs between $35 and $75, and that price includes the board, CPU, and memory. If dedicating one board per service feels excessive, clustering them is always an option with a bit of patience. Besides the boards themselves, the main investments are storage and setup time.
Repairability is another factor. While you can’t replace individual components on an SBC, the boards themselves are cheap enough that replacing one doesn’t hurt the wallet as much as PC hardware.
Does it need to be a choice?
Just like everything else in technology, the answer is never clear-cut. Choosing between a centralized host and distributing services across several discrete devices is a challenge faced every day by humble home labs and massive organizations alike.
Thankfully, we aren’t dealing with infrastructure worth millions of dollars, so we have the liberty to experiment. So far, I’ve been learning how to set up different services inside containers and, if I ever want to, set them up again on an SBC. I understand that some won’t want to go through the trouble of doing the same thing twice, but for me, it’s a necessary step in learning. Besides, it’s just plain fun.
And maybe there’s no need to choose. After all, containers can be hosted on anything, including an SBC or SBC cluster. There are plenty of great tutorials on setting up a cluster of my own, and I’m pretty excited to build one in the near future. Finally, a NAS is a third great option. To find out why, read this article by Ayush Pande.