Published Jan 20, 2026, 7:00 PM EST
Anurag is an experienced journalist and author who’s been covering tech for the past 5 years, with a focus on Windows, Android, and Apple. He’s written for sites like Android Police, Neowin, Dexerto, and MakeTechEasier. Anurag’s always pumped about tech and loves getting his hands on the latest gadgets. When he’s not procrastinating, you’ll probably find him catching the newest movies in theaters or scrolling through Twitter from his bed.
Enterprise high availability usually means buying expensive rack servers with redundant power supplies and IPMI management cards. The hardware alone costs thousands before you even think about software. But what if you could build a proper HA cluster using old laptops collecting dust in your closet? That is exactly what I set out to do, and not to boast, but I seem to have achieved that feat (to some extent, at least).
I put together two aging laptops and ran Proxmox Virtual Environment to form a high-availability cluster that survives node failures and keeps virtual machines running. Mind you, this is not a toy homelab that crashes when you sneeze near it. This is a real HA architecture with quorum voting and automatic failovers. The laptops just happen to come with a built-in UPS (the battery) and an integrated console (the screen and keyboard), features that servers charge extra for.
Preparing the hardware
And understanding the storage issues
The biggest mistake people make when repurposing laptops as servers is forgetting that laptops are designed to disappear and reappear. They are meant to sleep, hibernate, roam between networks, and aggressively save power, and every one of those behaviors is catastrophic in a cluster. So before doing anything else, I turned both laptops into something that behaves like boring infrastructure.
I started by installing Proxmox VE directly on bare metal on both machines, rather than installing it inside another operating system or virtualizing it. Proxmox expects direct access to disks, networking, and power states, and adding another layer underneath only introduces unnecessary failure points.
Next, I assigned static IP addresses to both laptops on the same subnet. Proxmox cluster communication depends on predictable addressing. DHCP renewals or IP changes can cause nodes to temporarily lose visibility of each other, which looks exactly like a failure from the cluster’s point of view.
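On a stock Proxmox install, the node's address lives on the vmbr0 bridge in /etc/network/interfaces. A minimal sketch of what that looks like (the interface name and addresses below are placeholders for my LAN, not values you should copy verbatim):

```
auto lo
iface lo inet loopback

iface enp3s0 inet manual

# Proxmox bridges VM traffic through vmbr0; give it the node's static address
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
```

It's also worth making sure each node's hostname resolves to its static IP in /etc/hosts, since cluster traffic depends on names resolving consistently.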
I also disabled sleep, suspend, hibernation, and any vendor-specific power management features in the BIOS. One laptop also had a lid close behavior that triggered suspend, so that had to go as well. Once deployed, these machines should never decide on their own to change power state.
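On the Debian side of Proxmox, the same behavior can be locked down at the systemd level. A sketch of the standard (not Proxmox-specific) commands I mean:

```shell
# Prevent systemd from ever entering any sleep state
systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

# Ignore the lid switch entirely by setting, in /etc/systemd/logind.conf:
#   HandleLidSwitch=ignore
#   HandleLidSwitchExternalPower=ignore
#   HandleLidSwitchDocked=ignore
# then restart logind to apply the change
systemctl restart systemd-logind
```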
Inside Proxmox, I double-checked power settings to make sure no automatic suspend or power-saving profiles were active. The cluster assumes nodes are either alive or dead. A node that randomly pauses is worse than one that cleanly fails.
There is one genuine upside to using laptops here, and that is the batteries. Each node effectively has a built-in UPS. Short power cuts do not immediately kill the cluster, and Proxmox does not even notice. That said, there are trade-offs you have to accept.
The biggest one is storage. Proxmox HA is fundamentally a restart mechanism. If a node fails, the cluster attempts to restart affected VMs elsewhere. That only works if the VM’s disk is accessible on another node. With laptops, your default storage is local. Each VM disk physically exists on one machine. If that machine dies, the disk dies with it.
If you want functional HA, you have three real options, with ZFS replication being the most accessible. VM disks are replicated between nodes on a schedule. In a failure, the replica is promoted, and the VM starts from the most recent snapshot. This is not zero data loss, but for labs and many services, it is good enough.
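Replication jobs can be set up from the web UI or with the pvesr tool. A hedged sketch, assuming a VM with ID 100 on ZFS storage and a second node named pve2 with a matching pool:

```shell
# Replicate VM 100's disks to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Check when each job last ran and whether it succeeded
pvesr status
```

The schedule is the trade-off dial here: a tighter interval means less data lost on failover, at the cost of more replication traffic.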
Shared storage is the traditional approach. NFS or iSCSI hosted on a third machine allows both nodes to see the same disks. This works well, but it introduces another potential single point of failure unless that storage is also redundant.
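For reference, attaching an NFS export as cluster-wide storage is a one-liner with pvesm (the server address, export path, and storage name below are placeholders):

```shell
# Register an NFS export as shared VM storage visible to every node
pvesm add nfs shared-nfs --server 192.168.1.50 \
    --export /srv/proxmox --content images,rootdir
```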
The third option is to accept the limitations and only place stateless or disposable workloads under HA. Since this setup is purely experimental, I was comfortable sticking with local storage and living with those constraints.
Creating a two-node cluster
Also adding a quorum mechanism through QDevice
Once the laptops were ready, I created the cluster on the first laptop from the Proxmox web UI (the CLI works just as well). This node becomes the initial authority. I then joined the second laptop to the cluster. At this point, both nodes can see each other, and cluster services are active.
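From the CLI, those two steps look roughly like this (the cluster name and the first node's IP are placeholders):

```shell
# On the first laptop: create the cluster
pvecm create laptop-cluster

# On the second laptop: join it, pointing at the first node's IP
pvecm add 192.168.1.11

# On either node: confirm both members are visible
pvecm status
```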
This is where things get confusing for most people. With only two nodes, the quorum is fragile by definition. Proxmox uses a voting system where each node gets one vote. To make decisions, the cluster needs a majority. Two nodes mean two votes. If one node disappears, only one vote remains. That is not a majority, so the cluster intentionally freezes HA actions.
This is not Proxmox being difficult. This is how distributed systems prevent split-brain scenarios, where two nodes both believe they are in charge and start corrupting data. However, this can be easily fixed using a QDevice. A QDevice is a quorum-only participant, which does not run VMs or store data. It exists purely to vote.
In my setup, the QDevice runs inside a tiny Linux VM hosted on a Mac. I created a minimal Ubuntu Server VM with very modest resources, keeping CPU and RAM usage negligible, and storage usage minimal. The VM must use bridged networking so it sits on the same LAN as the Proxmox nodes. NAT adds unnecessary complexity and can break connectivity during network changes.
Inside the VM, install and enable qnetd. This service listens for quorum traffic and participates in vote arbitration. Once it is running, make sure both Proxmox nodes can reliably reach the QDevice. Packet loss here defeats the entire purpose. Back on the Proxmox cluster, run the QDevice setup command. Proxmox exchanges certificates and integrates the QDevice automatically. After this, check the cluster status. You should see three votes in total: one for each laptop and one from the QDevice.
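The moving parts, roughly, assuming the QDevice VM answers at 192.168.1.20 on the LAN:

```shell
# Inside the Ubuntu VM: install the vote daemon
apt install corosync-qnetd

# On BOTH Proxmox nodes: install the qdevice client
apt install corosync-qdevice

# On one Proxmox node: register the QDevice with the cluster
pvecm qdevice setup 192.168.1.20

# Verify: "Expected votes: 3" with the Qdevice listed in the membership
pvecm status
```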
Now the math works. If one laptop fails, the remaining laptop plus the QDevice still has two out of three votes. Quorum is maintained, and HA stays active. There is one warning that is non-negotiable. If the Mac sleeps or shuts down, your QDevice disappears. From the cluster’s perspective, that looks exactly like a node failure. You’d want to keep the Mac awake or move the QDevice to something that is always on later, maybe a Raspberry Pi or cheap cloud VM.
Enabling and testing HA
With the quorum stable, I enabled HA in the Datacenter settings. This turns on the services responsible for monitoring nodes and restarting workloads when something goes wrong. You can create HA groups to prefer certain nodes, but with only two laptops, this is mostly academic. I then added a VM to HA. I deliberately picked something small and non-critical for testing.
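Adding a guest to HA from the shell is a single command, assuming a VM with ID 100 (HA groups for node preferences are configured separately with `ha-manager groupadd`):

```shell
# Put VM 100 under HA management, requesting the "started" state
ha-manager add vm:100 --state started

# See which node each HA resource is currently running on
ha-manager status
```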
After that, I tested it the right way. I did not gracefully shut down a node. That does not simulate a real failure. Instead, I hard-powered off one laptop. I pulled the plug and watched what the cluster did.
If the VM’s storage is available on the remaining node, it comes back automatically. If it is not, the failure is explicit and clearly logged. Either outcome is useful because both tell you exactly how your setup behaves under failure.
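While the plug is pulled, the recovery (or the explicit failure) is visible from the surviving node. A sketch of where to look:

```shell
# Watch the HA stack react: the dead node is fenced,
# then the VM is recovered onto the surviving node
watch ha-manager status

# Detailed reasoning from the HA cluster and local resource managers
journalctl -u pve-ha-crm -u pve-ha-lrm -f
```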