Internet
|
v
+-------------------------------+
| OVH VPS |
| - Public IP: vps.example.com |
| - SSH on port 6968 |
| - Caddy (reverse proxy + SSL)|
| - WireGuard client |
| - Internal IP: 192.168.6.1 |
+--------------+----------------+
| WireGuard tunnel
| (encrypted, locked down)
v
+-------------------------------+
| Home Network (UniFi) |
| WireGuard server: 192.168.6.x|
| |
| +--- DMZ VLAN (isolated) --+ |
| | home-server | |
| | 192.168.6.10 | |
| | - Docker containers | |
| | - App on port 3000 | |
| | - Jellyfin on 8096 | |
| +---------------------------+|
| |
| [Main LAN - NOT reachable] |
| [IoT VLAN - NOT reachable] |
+-------------------------------+
I’ve got a bunch of side projects. Many of them need decent compute and RAM just to sit there in the background. Things like AI experiments, media servers, personal tools. They might get three requests a day. Paying $50/month to DigitalOcean for something that sits idle 99% of the time? No thanks.
My home server has 32GB of RAM. It already exists, it’s already running, it’s already paid for. The marginal cost of spinning up another container is basically zero: containers are nearly free, with little overhead beyond process isolation. Sure, VMs would be safer, but they eat RAM. For side projects, containers are good enough.
Cloud makes sense for production traffic. For hobby projects? Not so much.
I like Cloudflare. I use it for plenty of things. But not everything needs to go through it. Jellyfin is the perfect example: Cloudflare’s ToS explicitly prohibits proxying "disproportionate" video content on free and pro plans (Section 2.8 of the Self-Serve Agreement). They can and do terminate accounts for this. This setup lets me expose my media server safely without worrying about ToS violations. Full bandwidth, no arbitrary limits. My traffic is my traffic.
The technical benefits are solid too: no vendor lock-in, full control over traffic routing and SSL termination, zero exposed ports on my home network, and no Cloudflare sitting in the middle of my TLS traffic.
The OVH VPS costs about $3-5/month and acts as a cheap, throwaway bastion host. It’s the only public-facing entry point. It runs Caddy for reverse proxying and WireGuard as a client. If this box gets compromised, my home network is still protected. The attacker would need the WireGuard keys to go any further.
Caddy handles HTTPS automatically via Let’s Encrypt. The config is dead simple (just a Caddyfile), and it reverse proxies to services on the WireGuard network. You can add rate limiting, custom headers, or basic WAF rules if needed. You can also do some basic ‘edge caching’ here, which’ll save a ~20ms hop and alleviate a small amount of strain on your home server.
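For reference, here’s roughly what those extras look like in a site block. Treat it as a sketch: the header values are just examples, and rate limiting and response caching each require a third-party Caddy plugin, so this isn’t a drop-in config.

# Illustrative hardening extras for one site (values are examples)
app.example.com {
    encode zstd gzip                # compress responses at the edge
    header {
        Strict-Transport-Security "max-age=31536000"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
    }
    # rate limiting and caching need plugins, e.g. mholt/caddy-ratelimit
    # and caddyserver/cache-handler
    reverse_proxy 192.168.6.10:3000
}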
WireGuard is the VPN tunnel between the VPS and my home network. It’s a modern, fast protocol with a minimal attack surface (~4k lines of code). It’s stateless, so there are no connections to track. My UniFi router at home runs the WireGuard server, and the VPS connects as a client.
The home server runs Proxmox (I’m a big fan). I run primarily unprivileged LXC containers because they’re more resource-efficient and make things like shared GPU access for Jellyfin easier (though sharing a kernel is a bad idea for publicly facing applications; I hope this doesn’t come back to bite me).
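Spinning one of these up can be a one-liner on the Proxmox host. A sketch, with the VMID, template, storage, and bridge names as placeholders:

# Unprivileged Debian container with modest defaults (IDs and names are placeholders)
pct create 110 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname side-project \
    --unprivileged 1 \
    --cores 2 --memory 2048 --swap 512 \
    --rootfs local-lvm:16 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --features nesting=1            # typically needed to run Docker inside the LXC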
Reliability
This sounds janky, but it’s actually not bad. I’ve got two Proxmox nodes in an HA cluster. Containers don’t do live-failover like VMs would, but these are low-priority deployments anyway. If a node goes down, the container restarts on the other node. Most workloads are back up in 10 to 20 seconds. The i5-10500 is plenty fast.
I’ve also set resource limits on my LXC containers, so a runaway process can’t take down the whole node. Everything sits on a UPS. For a home deployment, it’s relatively solid.
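The limits themselves are ordinary pct settings, along these lines (container IDs are placeholders):

# Cap RAM, swap and CPU so a runaway container can't starve the node
pct set 110 --memory 2048 --swap 512 --cores 2 --cpulimit 2
pct set 111 --memory 4096 --cores 4    # Jellyfin gets a bit more headroom
pct config 110                         # verify the applied limits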
My home network has only a non-standard WireGuard port open; everything goes through that tunnel, and I don’t even have SSH exposed. All traffic is encrypted end-to-end via WireGuard. The VPS only forwards specific services; it’s a whitelist model. SSH access to my home network requires a ProxyJump through the VPS (or WireGuard). Even if someone compromises the VPS, they can only access my DMZ VLANs.
Separate VPNs and VLAN Isolation
This is the key to making this actually safe. The WireGuard config for public services is completely separate from my personal "VPN into home" setup. My personal VPN gives me full home network access (trusted devices only). The public-facing VPN is locked to DMZ VLANs only.
My UniFi router handles the firewall rules: the WireGuard interface for public services can only reach the DMZ VLAN. It cannot route to my main LAN, IoT devices, or any other VLANs. Even if an attacker compromises the VPS and gets the WireGuard keys, they’re stuck in the DMZ.
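UniFi expresses this as firewall rules in the UI, but in plain packet-filter terms it boils down to two rules on the public-services WireGuard interface (interface and subnet names below are illustrative, roughly matching the addressing in the diagram):

# Traffic arriving from the public-services tunnel may only reach the DMZ
iptables -A FORWARD -i wg-public -d 192.168.6.0/24 -j ACCEPT
# Everything else (main LAN, IoT VLAN, any other VLAN) is dropped
iptables -A FORWARD -i wg-public -j DROP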
The DMZ VLAN contains only public-facing services: Jellyfin, web apps, that kind of thing. Defense in depth: VPS → WireGuard → Firewall → DMZ VLAN → Container.
Setting up SSH to jump through the VPS is straightforward:
# ~/.ssh/config
Host vps
    User debian
    Hostname vps.yourdomain.com
    Port 6968

Host home-server
    User root
    ProxyJump vps
    Hostname 192.168.6.10
Now ssh home-server transparently jumps through the VPS. No direct SSH exposure on the home network. Add an IdentityFile directive for key-based auth.
Caddy’s config is refreshingly simple:
# /etc/caddy/Caddyfile
app.example.com {
    reverse_proxy 192.168.6.10:3000
}

api.example.com {
    reverse_proxy 192.168.6.10:8080
}

# Optional: basic auth for admin panels
admin.example.com {
    basicauth {
        admin $2a$14$...
    }
    reverse_proxy 192.168.6.10:9000
}

# Jellyfin - can't proxy this through Cloudflare (ToS)
jellyfin.example.com {
    reverse_proxy 192.168.6.10:8096
}
Caddy automatically obtains and renews SSL certificates. Each subdomain routes to a different service on the WireGuard network.
Deployments work through the jumpbox:
1. GitHub Actions builds the Docker container.
2. Pushes the image to GHCR.
3. Kamal SSHs through the jumpbox (using ProxyJump).
4. Pulls and deploys the container on the home server.
5. Caddy routes traffic to the new container.
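A minimal Kamal config for this looks something like the sketch below. The service name, image path, and registry details are placeholders, and I’m assuming Kamal’s ssh proxy option for the jump; depending on your setup, the non-standard SSH port may need to come from the Host vps entry in ~/.ssh/config or an explicit proxy command.

# config/deploy.yml (sketch; names and registry details are placeholders)
service: my-app
image: ghcr.io/yourname/my-app

servers:
  web:
    - 192.168.6.10              # home server, only reachable over the tunnel

registry:
  server: ghcr.io
  username: yourname
  password:
    - KAMAL_REGISTRY_PASSWORD   # read from the environment at deploy time

ssh:
  user: root
  proxy: debian@vps.example.com # jump through the VPS, same idea as ProxyJump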
Compared to Cloudflare Tunnel:
(+) No vendor lock-in
(+) Full control over SSL/traffic
(+) No Cloudflare seeing your plaintext
(-) Need to maintain a VPS (~$4/mo)
(-) No free DDoS protection (though OVH includes basic anti-DDoS)
(-) DIY cert management (though Caddy makes this trivial)
Compared to exposing ports directly:
(+) Massively reduced attack surface
(+) VPS acts as a sacrificial buffer
(+) Easy to add WAF/rate limiting
(-) Extra hop adds ~10-20ms latency
(-) More moving parts
Let’s be honest about the failure modes.
App-level Exploit
If someone finds an RCE in one of my apps running in Docker, I’m fucked. The main mitigation is routing the DMZ VLAN’s outbound traffic through the OVH server. That way, the attacker’s C2 traffic exits via the VPS, not my home IP. If something goes wrong, OVH gets shut down, not my home internet.
Even better: my UniFi router lets me disable internet access to a VLAN entirely. If a webapp only talks to a local Postgres database, why does it need outbound internet? Disable outbound, only allow inbound routing via Caddy. Now an RCE means the attacker can’t phone home. No C2, severely limited options. This isn’t realistic for all deployments, but it’s worth considering.
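In the same rough packet-filter terms as before, "no outbound internet for the DMZ" comes down to two rules: let replies to inbound connections (the ones arriving via Caddy over the tunnel) flow back out, then drop anything the DMZ itself initiates toward non-local addresses (subnet is illustrative):

# Replies to inbound connections are still allowed
iptables -A FORWARD -s 192.168.6.0/24 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# New connections from the DMZ to anything outside the local 192.168.0.0/16 range are dropped
iptables -A FORWARD -s 192.168.6.0/24 ! -d 192.168.0.0/16 -j DROP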
Container Escape
If someone escapes both the Docker container and the LXC container it’s running in, I’m fucked. VMs would provide stronger isolation. I’m accepting this risk for side projects. The LXC containers are unprivileged and have resource limits set, so at least a runaway process can’t take down the whole node.
Home Network Goes Down
ISP outage, power outage beyond the UPS (my UPS estimates 50min runtime, which seems reasonable), router blows up? Fucked. It’s a home setup, not a datacenter.
House Floods
I live in a flood plain. If that happens, I’m fucked, but I’ve got bigger fish to fry at that point.
DDoS on the VPS
If someone DDoSes my “Cloudflare at home,” I’m half-fucked. OVH claims to have DDoS protection. Worst case, the VPS goes down but my home network stays fine. I’ve also added a Caddy WAF, and fail2ban is watching Caddy’s logs, so there’s a bit of layer-7 DDoS protection as well.
No CDN
Yeah, that sucks. OVH has a CDN, but judging by benchmarks, it also sucks. Not the end of the world for side projects. Cloudflare already handles my DNS, so all it takes is switching the orange cloud on for deployments that need a CDN.
Jumpbox Compromise
If someone gets into the VPS, I’m half-fucked. The WireGuard tunnel only reaches the DMZ VLANs, so my house is safe. But all the domains pointing to this box get owned. Mitigations are standard server hardening: UFW, non-standard SSH port, fail2ban, pubkey-only auth. Not much else you can do.
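For completeness, that hardening amounts to a handful of standard knobs; the values below just mirror what’s used elsewhere in this post:

# Firewall: only SSH (non-standard port) and Caddy are reachable
ufw default deny incoming
ufw default allow outgoing
ufw allow 6968/tcp      # SSH
ufw allow 80,443/tcp    # Caddy (ACME HTTP challenge + HTTPS); add 443/udp for HTTP/3
ufw enable

# /etc/ssh/sshd_config (relevant lines)
Port 6968
PasswordAuthentication no
PermitRootLogin no

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = 6968
maxretry = 5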
On the home side, wg0.conf defines the WireGuard server (ListenPort plus peer configs); the VPS runs as a client, with its Endpoint pointing to my home IP. If your home IP is dynamic, use DDNS or WireGuard’s persistent keepalive feature.
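A stripped-down version of the two configs, with keys, the listen port, and the DDNS hostname as placeholders (tunnel addressing follows the diagram above):

# Home side - /etc/wireguard/wg0.conf (server)
[Interface]
Address    = 192.168.6.254/24       # tunnel address for the home end (placeholder)
ListenPort = 51821                  # the single non-standard port opened at home
PrivateKey = <home-private-key>

[Peer]                              # the VPS
PublicKey  = <vps-public-key>
AllowedIPs = 192.168.6.1/32         # only the VPS's tunnel IP may source traffic

# VPS side - /etc/wireguard/wg0.conf (client)
[Interface]
Address    = 192.168.6.1/24
PrivateKey = <vps-private-key>

[Peer]                              # home network
PublicKey  = <home-public-key>
Endpoint   = home.example.org:51821 # DDNS name if the home IP is dynamic
AllowedIPs = 192.168.6.0/24         # only the DMZ/tunnel subnet routes via the tunnel
PersistentKeepalive = 25            # keeps the tunnel warm across NAT/IP churn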