Even with the free version of ESXi reinstated earlier this year, Proxmox is one of the most popular options for home labbers. However, it’s far from the only option, and you can build a reliable server with TrueNAS, Unraid, Harvester, and a bunch of other alternatives. Powered by the Xen hypervisor, XCP-ng is another Proxmox rival that has been making the rounds for a while now.
So, I figured I could try rebuilding my home lab by centering it around XCP-ng instead of Proxmox as my next project. And now that I’ve finished the experiment, I’ll probably move back to Proxmox. It’s not that XCP-ng isn’t good enough, but PVE simply ships with a ton of features and massive community support that are borderline essential for my home lab. If you’re curious, here’s my detailed log for the experiment.
The setup process is interesting, to say the least
But XOA ends up hogging some resources
Depending on its interface, installing a home server distribution can be extremely simple or overly convoluted. Although XCP-ng isn’t all that complicated to set up, it does need a handful of extra steps to work. But before I discuss those, I’ll quickly go over the specs of my testing rig. Since I wanted to test XCP-ng’s utility in a conventional home lab, I went with an old PC consisting of a Ryzen 5 1600, 16GB memory, and a GTX 1080 instead of my Xeon server rig. It’s the same PC that’s a part of my experimental Proxmox cluster, so I had a solid baseline for comparing performance.
Anyway, I followed the age-old procedure of creating a bootable drive with XCP-ng and using the BIOS to boot into the installation wizard. Similar to TrueNAS, XCP-ng has a rather old-school, menu-based setup interface, though it’s pretty easy to navigate. Once I’d selected a 128GB SSD as the boot drive, I opted to use my 500GB NVMe SSD to house the VM files before configuring a couple of language, timezone, and network settings (where I left everything up to DHCP).
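For anyone following along, writing the XCP-ng installer to a USB drive is a standard dd job on Linux. The ISO filename and device path below are placeholders – check yours with lsblk first, since dd will happily wipe the wrong disk.

```shell
# Identify the USB drive first (it might appear as /dev/sdX; verify with lsblk)
lsblk

# Write the XCP-ng installer ISO to the drive
# WARNING: this destroys all data on the target device
sudo dd if=xcp-ng-installer.iso of=/dev/sdX bs=4M status=progress conv=fsync
```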
The installation procedure didn’t take that long either, but my work was far from done. After the server rebooted, I used the IP address shown on the terminal to access XO Lite. Here’s where the weird part starts to kick in: although XO Lite includes a couple of settings for managing XCP-ng, it’s still very limited in its functionality. That’s why I had to deploy Xen Orchestra Appliance, a management interface that runs inside a VM.
Setting up XOA isn’t really a problem, as all I had to do was press a single button. However, the XOA virtual machine requires 2 vCPUs and 2GB of memory to run, while most server distros (including Proxmox) consume a fraction of those resources for their management UI. It’s not that big a deal on hardcore Xeon/Epyc servers, but on low-power devices and consumer-grade hardware, allocating 2 vCPUs and 2GB of RAM to the control interface alone can be a problem.
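If you’d rather skip the XO Lite button entirely, Vates also documents a quick-deploy script that can be run from the host’s shell. The URL below comes from their docs at the time of writing, so verify it before piping anything into bash.

```shell
# Run on the XCP-ng host (e.g. over SSH): fetches and executes
# the official XOA deployment script, which creates the XOA VM
bash -c "$(wget -qO- https://xoa.io/deploy)"
```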
XCP-ng doesn’t falter on the performance front
It can even run Windows 11 VMs!
Aside from that nitpick, XCP-ng’s performance is nothing to scoff at. I’ve previously heard statements calling the Xen hypervisor “dead in the water,” and after using XCP-ng to run a handful of VMs, let me tell you that’s far from the truth. Of course, I found KVM more responsive than Xen, but the latter is just as useful for server tasks, and I daresay it’s worth checking out if you’re looking for a cool platform that isn’t powered by KVM.
The latest version of XCP-ng works well with Windows 11 VMs thanks to built-in TPM 2.0 and Secure Boot emulation. The latter required a couple of certificates, which I downloaded on the host by SSHing into it and running the secureboot-certs install command. With Citrix drivers installed, even Windows 11 worked well – provided I allocated enough resources to it.
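For reference, the certificate step looked roughly like this on my end; the hostname is a placeholder, and the command comes from XCP-ng’s Secure Boot documentation.

```shell
# From another machine, SSH into the XCP-ng host (replace with your host's IP)
ssh root@xcp-host.example.lan

# Then, on the host: download and install the default UEFI certificates
# (PK, KEK, db, dbx) so guest VMs can boot with Secure Boot enabled
secureboot-certs install
```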
Linux-based virtual machines, including Debian, Pop!_OS, and Artix Linux, were a lot simpler to configure, and the same holds true for FreeBSD-based systems like GhostBSD. With XCP-ng version 8.3 overhauling the USB passthrough provisions, adding external I/O devices was a cakewalk.
I miss built-in support for LXCs, though
Despite its solid performance in VM workloads, I really wish XCP-ng included some containerization provisions. Well, technically, it does support Kubernetes via Hub Recipe, but it’s not the same as running lightweight containers directly on the host. Nor is the Kubernetes implementation as simple as pasting scripts from cool repos and watching LXCs spin up in a couple of minutes.
Sure, I could create a dedicated virtual machine for Docker, Podman, and LXC environments, but doing so would add some performance overhead from the VM itself. Combine that with XOA’s resource consumption, and I can see ancient machines and budget-friendly mini-PCs buckling under the extra load on XCP-ng – even though these same devices work fine as LXC-hosting workstations on Proxmox.
The official Xen Orchestra Appliance paywalls essential features
So, you’ll want to build XO from the source files
Keen-eyed readers may have noticed that I didn’t mention anything about backups, network proxies, or automation tasks. You see, clicking on any of these settings would reveal a dialog box asking me to start a trial of XOA’s premium license. Here’s the fun part: it’s possible to access all these services by ditching XOA for Xen Orchestra.
Unlike XOA, which is the official management platform with features locked behind a paywall, Xen Orchestra lets you control XCP-ng without these restrictions. But rather than letting you deploy the server with a single button, Xen Orchestra has to be compiled manually from its source repo. There is technically a neat script that takes away some of the hassle, but it’s still a lot more annoying than, say, deploying Proxmox and using a single web UI to manage everything.
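In case you’re curious what the manual route involves, a from-source build looks roughly like this. I’m assuming a Debian-based VM with git, Node.js, and Yarn already installed; the repo and steps follow the vatesfr/xen-orchestra project’s own build instructions, so treat this as a sketch rather than gospel.

```shell
# Grab the Xen Orchestra sources (vatesfr/xen-orchestra on GitHub)
git clone -b master https://github.com/vatesfr/xen-orchestra
cd xen-orchestra

# Install dependencies and build all packages in the monorepo
yarn
yarn build

# Start the server component, which serves the web UI
cd packages/xo-server
yarn start
```

The community-maintained installer script the article alludes to essentially automates these steps plus the prerequisite packages.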
Now, I appreciate that XO can be deployed for free – especially since I’ve dealt with the restrictions on the free versions of ESXi and Windows Server 2019. But I must admit that I’m slightly miffed about XOA locking something as essential as backups behind premium licenses – ones that require regular subscription fees, no less.
XCP-ng is great, but I still prefer Proxmox
Call me a Proxmox fanboy if you must, but I still prefer PVE over XCP-ng. Don’t get me wrong: Xen is still a viable hypervisor in 2025, and you can unlock most of the common home lab facilities without spending a dime if you compile XO from scratch. UI-wise, XOA and XO have the upper hand, as they’re cleaner than PVE’s interface without burying essential facilities under a bunch of menus.
However, given XCP-ng’s lack of native support for containers, the extra overhead of running a full management UI outside XO Lite, and the heavily paywalled nature of XOA, Proxmox remains my top choice among home lab platforms. I’ll probably move my XCP-ng instance to an i5-125U system just so I can continue tinkering with it in my spare time. But when it comes to everyday home server and self-hosting tasks, I’ll stick with good ol’ Proxmox instead.