Running your home or small business network through OPNsense is arguably one of the smartest moves any home labber can make... but running OPNsense on Proxmox? That’s the real power play. I’ve tried countless network setups over the last year, from bare-metal installs to dedicated router appliances, but none have come close to the stability, flexibility, and control that come with virtualizing OPNsense inside a Proxmox cluster.
It’s not just about performance or convenience (though you get plenty of both); it’s about being able to do things you simply can’t achieve with a single, traditional bare-metal install. From live snapshots and instant backups to high availability without additional complexity, OPNsense can do it all. Plus, as a PPPoE user, you’ll even get better throughput overall.
In short, OPNsense on Proxmox is the best way to run your network... and yes, I will die on this hill, especially with pfSense out of the picture. I say all of this slightly tongue-in-cheek, and realistically, it comes down to personal preference. With that said, the benefits are substantial.
Virtualization solves FreeBSD’s biggest problem
Its network driver support can be lacking
To be honest, I have one major nitpick when it comes to OPNsense and FreeBSD, and it’s one of the biggest reasons I went with Proxmox in the first place: driver support. FreeBSD’s driver support, or lack thereof, isn’t exactly fantastic.
Taking a step back, have you ever tried to run OPNsense on newer or niche network interface cards? You might discover how much of a pain it is, as outside of the “major” options, there’s really nothing. Case in point, the Ugreen NAS DXP4800 Plus that I use has an Intel I225-V NIC alongside an AQC107 10GbE NIC. The Intel performs fine no matter what, but the Aquantia, while working without issue under Windows and Linux, doesn’t work on FreeBSD at all. Depending on your hardware, driver support is hit or miss, and you might find yourself stuck at 1GbE or with no link at all.
By running OPNsense as a Proxmox virtual machine, I can attach that same NIC to the VM through a virtualized adapter like VirtIO, which *is* supported by FreeBSD. This means that the host handles the actual hardware interaction (where drivers actually work under Linux), and OPNsense only ever sees a virtual device. As a result, I get full access to the card’s performance and stability without worrying about FreeBSD’s (arguably) spotty compatibility.
This one change alone made OPNsense on Proxmox feel like cheating: suddenly, every NIC just works. You’re beholden to Linux drivers at that point, but those are almost always better than their FreeBSD counterparts. And since FreeBSD supports VirtIO Ethernet drivers natively, the host handles the hardware interfacing while FreeBSD only has to deal with the virtualized adapter.
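For anyone curious what that looks like in practice, attaching VirtIO adapters is a one-liner per interface on the Proxmox host. This is a minimal sketch, not my exact configuration; the VM ID (100) and bridge names (vmbr0/vmbr1) are assumptions you’d replace with your own.

```shell
# Sketch: give the OPNsense VM (assumed ID 100) two VirtIO NICs.
# vmbr0/vmbr1 are Linux bridges on the Proxmox host; the host's own
# Linux driver talks to the physical NIC, and OPNsense only ever
# sees a VirtIO device.
qm set 100 --net0 virtio,bridge=vmbr0   # LAN bridge
qm set 100 --net1 virtio,bridge=vmbr1   # WAN bridge (physical NIC attached)
```

Inside OPNsense, these simply show up as vtnet0 and vtnet1, regardless of what physical hardware sits underneath.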
Snapshots make it feel safer
And I can experiment, too
If you’ve ever managed a firewall or router, you probably know how terrifying it can be to tweak a setting that could instantly nuke your connection. I’ve already done it; when OPNsense 25.7 Visionary Viper came out, I elected to install the update without realizing my VM had run out of space. The update half-completed, and the VM failed to boot outside of safe mode. While OPNsense has built-in backups, recovering that way would still require downloading a new ISO, setting everything up to a basic configuration, then restoring a backup inside OPNsense to get everything back to where it was.
However, I had another plan. Whenever I’m doing anything risky in OPNsense, I open a tab to the Proxmox host that OPNsense runs on and take a snapshot beforehand. If OPNsense goes down, my PC still remembers the route to that machine, and if I keep the tab open, it’ll stay connected even if I shut down the OPNsense VM entirely. I was able to restore my snapshot, go back to 25.1, and be up and running again in about two minutes.
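As a rough sketch of that workflow (the VM ID and snapshot name here are assumptions, not values from my setup), the snapshot and rollback are each a single command on the host:

```shell
# Before a risky change: snapshot the OPNsense VM (assumed ID 100)
qm snapshot 100 pre-update --description "before 25.7 upgrade"

# If the change breaks the VM: roll it back and start it again
qm rollback 100 pre-update
qm start 100
```

The same operations are available in the Proxmox web UI under the VM’s Snapshots tab, which is what I actually keep open in that second tab.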
I have an automated backup of my OPNsense install that runs every day at 5 AM, handled by Proxmox Backup Server and saved to another machine on my network. I also keep one daily backup on the same machine, so if OPNsense goes down and needs a restore (and I don’t notice for a few hours, losing access to Proxmox Backup Server), I can still restore that day’s local backup to at least get a working configuration. I can also manually copy the files over from the other machine if I really need to (for example, if the most recent backup is still problematic), but the local copy is there for convenience.
Compared to relying on OPNsense’s own configuration backups, it’s a pretty massive upgrade. And that’s not a knock on OPNsense; it’s just that an entire system backup is significantly better than a backed-up configuration file that requires a working system to restore it to.
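The scheduled jobs themselves are set up through the Proxmox GUI, but the ad-hoc equivalents look roughly like this; the storage names and VM ID are assumptions for illustration:

```shell
# Off-host backup to a Proxmox Backup Server datastore (assumed name "pbs")
vzdump 100 --storage pbs --mode snapshot

# Local fallback copy kept on the same machine (assumed storage "local")
vzdump 100 --storage local --mode snapshot
```

Snapshot mode backs up the running VM without downtime, which is exactly what you want for a router that the rest of the network depends on.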
Proxmox High Availability has advantages over CARP
CARP requires a lot more than HA
High availability (HA) is one of OPNsense’s best features... at least in theory. The CARP (Common Address Redundancy Protocol) system allows you to mirror two OPNsense routers and switch over instantly in case one fails. In practice, though, CARP comes with baggage, even if it’s technically the best option.
Firstly, you need three IP addresses from your ISP: one for each router, and one for CARP. For most people, that’s just not realistic. Some ISPs don’t even offer multiple public IPs without a business plan, and even if they do, the complexity can be ridiculous. If you don’t mind a one-to-two-minute failover time, Proxmox’s High Availability is a significantly better option. With a properly configured cluster, Proxmox automatically restarts your OPNsense VM on another node if the host goes down, and it’s quite straightforward to set up, especially if you’re just using virtual adapters like I am.
There are some workarounds for CARP’s public IP requirements, such as putting another router in front of both OPNsense nodes or using scripts to switch WAN ports dynamically, but both add unnecessary fragility. If your goal is reliability and simplicity, Proxmox HA wins out here, and it’s yet another reason that virtualizing OPNsense is fantastic.
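Assuming you already have a quorate cluster with the VM’s disk on shared or replicated storage, enrolling OPNsense into Proxmox HA is a sketch like this (the VM ID is an assumption):

```shell
# Register the OPNsense VM (assumed ID 100) as an HA resource,
# so Proxmox restarts it on another node if its current host fails
ha-manager add vm:100 --state started

# Check the HA resource and quorum state
ha-manager status
```

Compare that to CARP, which needs a second fully configured OPNsense box, synchronized state, and those extra public IPs before it does anything at all.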
For PPPoE, it’s a no-brainer
Better performance without the tuning
One of the less obvious perks of running OPNsense on Proxmox is improved Point-to-Point Protocol over Ethernet (PPPoE) performance. Very basically, PPPoE is a networking protocol that encapsulates PPP frames within Ethernet frames, allowing ISPs to authenticate users with a username and password before granting internet access. It’s a pretty CPU-intensive protocol, though, as it requires the router’s CPU to handle the authentication and packet encapsulation, and it’s inherently single-threaded. Without some additional work, a bare-metal OPNsense install will simply chuck the entire incoming and outgoing streams onto a single core.
With all of that said, things have gotten a lot better on bare metal over the last year at the very least, and it’s now possible to achieve a gigabit connection over PPPoE on a bare-metal install, though that wasn’t always the case. If you have a gigabit line, there’s an even easier way, though. By virtualizing OPNsense, the Linux-based host takes those PPPoE frames and forwards them through the virtual bridge to your OPNsense instance, and the VM can process the incoming packets across all cores. That means better performance and better overall throughput.
If you’re on PPPoE, the single-core bottleneck is one of those drawbacks not many people expect, and I was one of those people. It makes total sense once you realize how PPPoE works, but if you’re not too clued in on networking protocols, you can be in for a shock when you first deploy OPNsense.
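One related tweak worth knowing about: VirtIO network adapters support multiple queues, which lets the guest spread packet processing across its vCPUs. This is a hedged sketch rather than a required step, and the VM ID, bridge name, and queue count are assumptions:

```shell
# Enable four VirtIO multiqueue pairs on the WAN adapter of the
# OPNsense VM (assumed ID 100), matching a guest with 4 vCPUs
qm set 100 --net1 virtio,bridge=vmbr1,queues=4
```

The general rule of thumb is to match the queue count to the number of vCPUs assigned to the VM.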
A router that doubles as a NAS?
Yes please
Here’s one of the other underrated perks of virtualizing OPNsense: you can share your hardware for other purposes. As I mentioned, my main OPNsense system runs on a Ugreen DXP4800 Plus, which comes with four 4TB HDDs. On a bare-metal setup, those drives would mostly go to waste, especially given that OPNsense doesn’t need anywhere near that much storage.
But since OPNsense is just a VM in Proxmox, I can pass through the SATA controller to another virtual machine or container. In my case, the entire SATA controller is passed through to a HexOS (based on TrueNAS) virtual machine, which gets complete hardware control over the drives, while OPNsense runs alongside it, handling my routing, VLANs, and firewall duties. The rest of the hardware effectively acts as a NAS platform for shared storage and media.
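For reference, passing a whole controller through requires IOMMU (Intel VT-d or AMD-Vi) enabled on the host; the PCI address and VM ID below are placeholder assumptions you’d replace with your own values:

```shell
# Find the SATA controller's PCI address on the Proxmox host
lspci -nn | grep -i sata

# Hand the entire controller (assumed 0000:00:17.0) to the NAS VM (assumed ID 101)
qm set 101 --hostpci0 0000:00:17.0
```

Because the guest owns the controller outright, TrueNAS gets direct SMART data and full control of the disks, rather than virtual block devices mediated by the hypervisor.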
In other words, I’ve effectively turned a Ugreen NAS into a NAS-router hybrid, all on the one device. OPNsense uses only a few gigabytes of storage, leaving the rest of the included 128GB SSD to TrueNAS and the Proxmox host. I don’t want to run too much on the one system, and a single OPNsense instance and a HexOS instance are more than enough.
It’s a pretty efficient use of hardware. I’m getting the performance of a dedicated router, the redundancy of a proper NAS, and the flexibility of virtualization, all without needing separate machines to do it. On that note, too, my entire setup is hardware-agnostic. A virtual machine’s configuration lives independently of the underlying machine, so you can migrate it between hosts, chop and change hardware... or do anything, really, without touching a single setting inside OPNsense. It’s pretty close to a truly modular network infrastructure.
Flexibility is freedom
And Proxmox gives a lot of flexibility
Having OPNsense run in a virtual machine on Proxmox is incredibly freeing. It’s easy to fix anything that breaks, I can mess around with it knowing that I have a configuration I can restore in less than two minutes, and I can migrate it to another device if I really need to. Even the performance benefits when it comes to PPPoE negate the argument that virtualizing costs you performance; in my case, it’s better.
Of course, virtualizing OPNsense isn’t for everyone, and while I’ll die on the hill that it’s the best way to run it, I say that somewhat tongue-in-cheek. A hypervisor adds complexity, and technically speaking, outside traffic hits Proxmox before it hits your firewall with a configuration like this. It’s not the worst setup, but it’s technically less secure, even if you could mitigate it by passing your WAN NIC through to OPNsense. After all, any compromise of the hypervisor is then a compromise of your network.
For a home network, it really comes down to user preference. If you were running a business on OPNsense, I’d be more inclined to lean towards a bare-metal install, especially because automated backups like the ones I’ve set up here are still possible with a bare-metal setup, just with a bit more work. Neither approach is intrinsically wrong, and there are benefits and tradeoffs to both. If a problem affects the hypervisor, for example, then you have two problems, because your network goes down, too.
With all of that said, I wrote this merely to convey the reasoning for virtualizing OPNsense, as a newcomer who’s unsure is more likely to find people defending a bare-metal install than a virtualized one. Both are valid ways to run it, and at the end of the day, as long as it works and is secure, either way you go is completely fine.