Home labs have plenty of neat perks, but being able to use any old PC, laptop, or other gizmos as servers is one of my favorites. Plus, there’s a lot of versatility in the OS you can choose for your experimental workstation, as you can pick anything from the all-powerful Proxmox to the somewhat frowned upon (but still fairly useful) Windows Server 2019.
That said, there are certain features you’ll want to enable on practically every server node, regardless of its underlying hardware and operating system. Take the BIOS settings, for instance. While some of them may sound useless to the average gamer, they’re borderline essential for hardcore home labbers – even more so when you love building wacky projects as much as I do.
CPU virtualization
Kind of a no-brainer for running VM-heavy servers
This may sound like common knowledge when you’ve even remotely tinkered with virtual machines, but it’s easy to forget CPU virtualization when you’ve got a shiny new rig to experiment with. Heck, I’ve had my fair share of moments where I had to re-enter the BIOS after installing a NAS operating system just because I forgot to enable SVM, Intel VT-x, AMD-V, or whatever other name the mobo manufacturer uses when referring to virtualization extensions.
If you’re a Linux-heavy user who runs home lab experiments in containers, you can even get by without toggling CPU virtualization. But I love tinkering with virtual machines, and since CPUs rely on virtualization extensions to run guest operating systems, it’s something I always enable on my rigs.
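If you want to confirm the toggle actually took effect before installing a hypervisor, a quick script like the sketch below works from any Linux environment. It only scans /proc/cpuinfo for the vmx (Intel VT-x) or svm (AMD-V) flag, so treat it as a rough sanity check rather than anything definitive.

```python
"""Rough sanity check: is hardware virtualization visible to the Linux kernel?"""
from pathlib import Path

# Collect every CPU flag reported in /proc/cpuinfo
flags = set()
for line in Path("/proc/cpuinfo").read_text().splitlines():
    if line.startswith("flags"):
        flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x is enabled and exposed to the OS")
elif "svm" in flags:
    print("AMD-V (SVM) is enabled and exposed to the OS")
else:
    print("No vmx/svm flag found - re-check the virtualization setting in the BIOS")
```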
IOMMU
Essential for GPU passthrough
The IOMMU, or Input-Output Memory Management Unit, may appear to be an unassuming setting in the CPU section of the BIOS, but you’ll need it when passing NICs, GPUs, or other PCIe devices to your virtual machines. For the most part, the IOMMU is responsible for translating the virtual addresses used by your IO devices (including PCIe peripherals) into physical RAM addresses.
However, its real utility comes to light when you start working with VMs: the same IO-mapping machinery gives PCIe components controlled, direct access to system memory, which in turn lets you hand individual devices over to virtual machines. For example, I’ve passed my spare Arc A750 to my Windows 11 dev VM, which houses everything from my coding documents to machine-learning projects. I’ve got a bare-metal NAS, but when I had a virtualized file-sharing server on my Proxmox node, I relied heavily on the IOMMU to pass the HBA adapter and NIC to it.
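If you’re curious whether the BIOS toggle (plus the intel_iommu=on or amd_iommu=on kernel flag, where needed) actually produced usable groups, a short sketch like this one lists whatever Linux exposes under /sys/kernel/iommu_groups. Devices that share a group generally have to be passed through together, so it doubles as a planning tool for passthrough.

```python
"""List IOMMU groups and the PCI devices inside them (Linux)."""
from pathlib import Path

groups_root = Path("/sys/kernel/iommu_groups")
if not groups_root.exists() or not any(groups_root.iterdir()):
    print("No IOMMU groups found - the IOMMU is likely disabled in the BIOS or kernel")
else:
    for group in sorted(groups_root.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"Group {group.name}: {', '.join(devices)}")
```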
SR-IOV
To share PCIe cards between multiple VMs
Between their low resource consumption and solid performance, containers offer some neat advantages over virtual machines. But the most underrated one is the ability to share the same PCIe device with multiple containers, as virtual machines typically require an entire component dedicated to them. The keyword here is “typically,” as SR-IOV (Single Root IO Virtualization) is a neat workaround to the hardware-hogging tendencies of virtual machines.
For those unfamiliar with the term, SR-IOV makes the hypervisor and IOMMU treat a single IO device as multiple peripherals (so-called virtual functions), essentially allowing you to share one card across several virtual machines. This makes SR-IOV especially useful for graphics cards, as you can combine their processing capabilities with the superior isolation of virtual machines without buying multiple GPUs.
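On Linux, carving a supported card into virtual functions comes down to a couple of sysfs attributes. The sketch below uses a placeholder PCI address (0000:03:00.0 – substitute your own card) and needs root: it reads sriov_totalvfs to see what the device supports and writes to sriov_numvfs to spawn virtual functions that VMs can then claim.

```python
"""Enable a few SR-IOV virtual functions on a capable PCIe device (Linux, run as root)."""
from pathlib import Path

device = Path("/sys/bus/pci/devices/0000:03:00.0")  # placeholder address - substitute your own card

total_vfs = int((device / "sriov_totalvfs").read_text())
print(f"Device supports up to {total_vfs} virtual functions")

wanted = min(4, total_vfs)  # carve out a handful of VFs for guests
(device / "sriov_numvfs").write_text(str(wanted))
print(f"Enabled {wanted} virtual functions - they now show up as separate PCI devices")
```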
C-States
Perfect for lowering my energy bills
Aside from sounding like jet engines and heating up your room like a sauna, home servers can consume a lot of energy – even more so when you use server-grade hardware like I do. That’s a lesson I learned the hard way after getting smacked by terrifying electricity bills just two months after grabbing said hardware.
Enabling C-States is a solid way to improve your processor’s energy efficiency, though there are certain caveats to it. You see, each C-State denotes a different power-saving mode of your processor, with C0 representing a CPU running no-holds-barred and anything past C3 representing a deep-sleep state. Although overclocking enthusiasts and hardcore gamers tend to disable C-States entirely for better performance, enabling them results in better power efficiency. Considering that I’ve got two NAS units (one that runs off-site) and a cluster running 24/7 alongside my server system, I’d probably go destitute if I turned C-States off.
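After flipping the BIOS switch, it’s worth checking which idle states the kernel actually exposes and uses. The sketch below reads Linux’s cpuidle interface for CPU0; the time files report residency in microseconds, so a healthy amount of time in the deeper states is a good sign the setting stuck.

```python
"""Show the C-states exposed for CPU0 and how long it has spent in each (Linux)."""
from pathlib import Path

cpuidle = Path("/sys/devices/system/cpu/cpu0/cpuidle")
for state in sorted(cpuidle.glob("state*")):
    name = (state / "name").read_text().strip()
    residency_us = int((state / "time").read_text())
    print(f"{state.name}: {name:<12} residency: {residency_us / 1_000_000:.1f} s")
```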
Some more tweaks to level up your home lab
Aside from the BIOS settings I’ve mentioned so far, there are a couple of other niche settings I use in my home lab. If you’ve got a semi-decent system and share my fondness for tinkering with different virtualization platforms, nested virtualization is worth checking out. While it adds some processing overhead, I’ve enabled it on my Windows 11 dev machine so I can provision containers for my coding projects. I’ve also got distinct VLANs for insecure virtual machines and containers, especially ones connected to my smart home paraphernalia.
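If you’re on a KVM-based host (Proxmox included) and want to verify nested virtualization before spinning up a hypervisor inside a VM, the kvm_intel and kvm_amd modules expose a nested parameter. A minimal check, assuming that kind of Linux/KVM setup:

```python
"""Report whether nested virtualization is enabled for KVM (Linux)."""
from pathlib import Path

for module in ("kvm_intel", "kvm_amd"):
    param = Path(f"/sys/module/{module}/parameters/nested")
    if param.exists():
        enabled = param.read_text().strip() in ("Y", "1")
        print(f"{module}: nested virtualization {'enabled' if enabled else 'disabled'}")
        break
else:
    print("Neither KVM module is loaded - nested status unknown")
```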