My self-hosting journey started with a single TrueNAS machine cobbled together from old computer parts, starting more as a curiosity experiment than anything serious. I already had experience with Docker, virtual machines, and Linux system administration, and I was renting a couple of VPS instances, so the concept wasn’t totally foreign to me. Still, it was a foray into something new. Before long, I had Nextcloud and Jellyfin running, automated my backups, and began to use NAS storage for workflows I hadn’t even considered before.
Since then, I’ve scaled my setup considerably, and part of what’s allowed me to do that has been splitting my NAS and self-hosted services across multiple devices. Rather than one power-hungry machine running everything, I have three, each with a specific purpose.
My original NAS was a power-hungry beast
And it wasn’t that powerful, either
That first TrueNAS machine was built from my old gaming PC parts: a Ryzen 7 3700X, 24 GB of DDR4 RAM, and a GTX 1070 Ti. It had two HDDs, ran 24/7, and drew between 100W and 140W of power. Unsurprisingly, it was slow to start up, noisy from spinning fans, and wasteful in terms of energy. Around this time, I began experimenting with Proxmox and deployed it on an AMD Ryzen 5 5600U mini PC. It consumed far less power, was practically silent, and had more than enough performance to handle most of my services.
I migrated the bulk of my containers and virtual machines there, but I still relied on the TrueNAS box for tasks like Nextcloud and Jellyfin that benefited from direct access to the HDDs. It worked for a while, but I wanted something with more power on demand, without the constant energy cost. Around this point, I introduced the Ugreen DXP4800 Plus running OPNsense and HexOS to my home lab. With 12TB of shared storage, it became my new central storage and networking hub. I knew I’d eventually move the rest of my data away from TrueNAS entirely and go all-in on Proxmox.
That’s when it hit me: what if I could build a server out of parts I already had at home? I had an AMD Radeon RX 7900 XTX and an Intel Core i7-14700K, plus the motherboard left over from when I gave up on Intel and moved to the AMD Ryzen 7 9800X3D in my main PC. All I needed was RAM, storage, and a PSU. Once those arrived, I assembled the new server and migrated my TrueNAS instance into a Proxmox virtual machine, passing through the SATA controller to keep my existing storage operational. Despite being a massive upgrade in raw power, this new server consumed less energy at idle than my original NAS.
With that in place, I started thinking about how to make my entire system smarter and more power-efficient. What if the mini PC, idling at 10W, managed when the main server powered on and off? I could run Home Assistant on the mini PC as the always-on machine, use the Ugreen unit running OPNsense and HexOS as my main storage, and only wake the big server as and when I needed it. Using Home Assistant, I set up automations that send Wake-on-LAN packets to the big server on demand and shut it down automatically afterward. The server idles at a fairly consistent 90W, so eliminating that from my 24/7 power draw saves over two kilowatt-hours per day. At up to €0.52 per kWh, that’s a pretty meaningful saving over time.
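Home Assistant handles this with its built-in Wake-on-LAN integration, but the underlying mechanism is simple enough to sketch in a few lines of Python. The MAC address below is a placeholder, and the savings math just reproduces the arithmetic above:

```python
import socket

# Placeholder MAC address -- replace with your server's actual NIC MAC.
SERVER_MAC = "AA:BB:CC:DD:EE:FF"

def build_magic_packet(mac: str) -> bytes:
    """A Wake-on-LAN magic packet is 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast on the local network."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Rough idle-power savings from not leaving the big server on 24/7:
idle_watts = 90
kwh_per_day = idle_watts * 24 / 1000   # 2.16 kWh per day
cost_per_day = kwh_per_day * 0.52      # ~1.12 EUR per day at 0.52 EUR/kWh
```

The target machine needs Wake-on-LAN enabled in its BIOS/UEFI and network adapter settings for the packet to actually do anything.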
Now, the mini PC handles automations and low-power workloads, while the Ugreen DXP4800 Plus manages my network and shared storage. When I need extra horsepower, be it for running a local LLM, generating images in ComfyUI, or testing resource-heavy services, I can wake the main server instantly. It’s efficient, quiet, and perfectly tuned to exactly what I need, rather than drawing power for no reason.
It was an easy process
I was genuinely surprised
Migrating my self-hosted services from one machine to another turned out to be far easier than I expected, and I’m incredibly glad I did it. All I did was back up my VMs and LXCs, transfer the backup files to the other Proxmox node using scp, and restore them there. Everything continued running without a hitch, and I was honestly surprised I didn’t run into any issues. The two machines have completely different hardware, but that doesn’t really matter for containers or virtual machines anyway.
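The whole move boils down to three commands per guest. A rough sketch using Proxmox’s standard tools, assuming default dump paths; the guest ID 100 and the node hostname pve2 are placeholders:

```shell
# 1. Back up the guest on the old node. vzdump handles both VMs and LXCs;
#    --mode stop gives a consistent image of a running guest.
vzdump 100 --mode stop --compress zstd --dumpdir /var/lib/vz/dump

# 2. Copy the resulting archive to the new Proxmox node over SSH.
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst root@pve2:/var/lib/vz/dump/

# 3. On the new node, restore the backup under the same (or a new) ID.
#    Use "pct restore" instead for LXC container archives.
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 100
```

Because the guest carries its own kernel (VM) or shares the host’s (LXC), the underlying hardware differences between the two nodes mostly don’t matter, which is why the restored guests just worked.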
This kind of gradual home lab scaling isn’t exactly uncommon, and if you’re thinking of taking the next step from your first machine to something more, consider the pathway I took here as one way to do it. It’s not necessarily the “best” way, but honestly, there’s rarely a “best” way to do anything when it comes to home labbing. What works for you is what works for you, and as long as you can justify your setup, there’s nothing else you really need to do.
Electricity costs aren’t as big a problem in many parts of the world as they are here, so you may find that splitting up your servers like this isn’t necessary. Even so, reserving a more powerful machine for demanding workloads is still a valid approach, and you may benefit from adding smaller micro servers instead, just as I’ve done here.