(Cover image: AI generated with Nano Banana Pro)
Debugging cron jobs and stitching together Python scripts into the wee hours of the morning is a pain most developers know all too well. There’s a unique type of dread that comes from looking at your server logs and realizing your entire workflow architecture is actually just held together by duct tape and hope.
That’s why I migrated to n8n. It’s an awesome open-source tool, essentially a self-hosted, free Zapier. Running it on a single VPS started to feel risky, and I wanted my automations to survive a server crash, handle spikes in traffic, and deploy without any downtime.
So, I bit the bullet and moved my n8n setup to Kubernetes. To save you the headache of figuring it out from scratch, I built n8n-on-kubernetes, a repo designed to get you production-ready without needing a PhD in YAML.
Why I Ditched the Single Server
Running n8n via Docker Compose is fine for testing. But once I started running critical workflows, the cracks started showing:
- The SQLite Bottleneck: By default, n8n uses SQLite, which chokes under heavy concurrency.
- The “Bus Factor”: If that one server dies, everything stops.
- Scaling: I couldn’t easily add more workers to process heavy jobs.
Moving to K8s wasn’t just about being fancy. It was about using PostgreSQL for a real backend and Redis for queue management so jobs don’t get dropped.
The Short Version of the Setup
I’m assuming you have a cluster (EKS, GKE, or even Minikube).
**1. The Database Matters:** Don’t use the built-in database for production. You need Postgres. If you don’t have an external DB, you can spin one up in the cluster:
helm install my-postgres bitnami/postgresql --set auth.database=n8n,auth.username=n8n
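The install above assumes the Bitnami chart repo is already registered with Helm. If it complains that it can’t find bitnami/postgresql, add the repo once and retry:

```bash
# Register the Bitnami chart repo (one-time step) and refresh the index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```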
Note: You definitely also want Redis (Queue Mode) if you plan on scaling beyond one pod.
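The same Bitnami approach works for Redis. This is a minimal sketch: auth is disabled purely to keep it short (don’t do that outside a sandbox), and the release name my-redis is what gives you the my-redis-master service that the values file below points at.

```bash
# Minimal in-cluster Redis for queue mode (auth disabled only for testing)
helm install my-redis bitnami/redis --set auth.enabled=false
```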
**2. Install the Chart:** Clone my repo and run the install.
git clone https://github.com/mysticrenji/n8n-on-kubernetes.git
cd n8n-on-kubernetes
helm install n8n ./helm-chart -f values.yaml
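Once the release is in, a quick sanity check saves a lot of guessing later (pod and label names depend on how the chart renders things, so adjust if nothing shows up):

```bash
# Confirm the release deployed and watch the pods come up
helm status n8n
kubectl get pods -w
```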
**3. Configuration (The Important Part):** The values.yaml file is where you control the beast. Here is a snippet of what a production-ready config looks like.
**values.yaml**
# Don't use 'latest' in prod, pin your version!
image:
  repository: n8nio/n8n
  tag: "1.25.1"

replicaCount: 2 # Now we are scaling!

env:
  # Connect to the Postgres you set up earlier
  - name: DB_TYPE
    value: "postgresdb"
  - name: DB_POSTGRESDB_HOST
    value: "my-postgres-postgresql"
  - name: DB_POSTGRESDB_DATABASE
    value: "n8n"
  - name: DB_POSTGRESDB_USER
    value: "n8n"
  # CRITICAL: Enable Queue Mode (requires Redis)
  - name: EXECUTIONS_MODE
    value: "queue"
  - name: QUEUE_BULL_REDIS_HOST
    value: "my-redis-master"
  # If you don't set this, your webhooks will point to localhost and fail
  - name: WEBHOOK_URL
    value: "https://n8n.your-domain.com/"
  # Stop your emails from sending at 3 AM because of timezone diffs
  - name: GENERIC_TIMEZONE
    value: "Europe/Amsterdam"

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: n8n.your-domain.com
      paths:
        - path: /
          pathType: Prefix

persistence:
  enabled: true
  size: 10Gi # Keep your workflow history safe
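One thing I deliberately keep out of values.yaml is the database password. I put it in a Kubernetes Secret and wire it into the pod via the chart; exactly how you reference it depends on the chart’s env handling, so treat the secret name here as a placeholder. Whenever you change the values file, roll it out with an upgrade rather than a reinstall:

```bash
# Keep credentials out of version control (secret name is just an example)
kubectl create secret generic n8n-db-credentials \
  --from-literal=DB_POSTGRESDB_PASSWORD='change-me'

# Apply values.yaml changes as a rolling update of the existing release
helm upgrade n8n ./helm-chart -f values.yaml
```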
Lessons Learned (So You Don’t Have To)
- Queue Mode is Mandatory: If you don’t use Redis (Queue Mode), n8n runs monolithically. It works, but you can’t scale it horizontally. Just enable Redis from day one.
- Volume Mounts: n8n writes a lot to /home/node/.n8n. Make sure your Persistent Volume Claim (PVC) is set up correctly, or you’ll lose your workflow history every time a pod restarts (there’s a quick check after this list).
- Timezones: Set the GENERIC_TIMEZONE variable. Otherwise, your “9 AM” email blast will go out at 9 AM UTC, which is probably not what you want.
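To verify the persistence point actually stuck, check that the claim is bound and that the data directory is backed by the volume rather than the container layer (the deployment name is whatever your release calls it; mine is n8n):

```bash
# The claim should show STATUS "Bound"
kubectl get pvc

# The n8n data directory should be mounted from the persistent volume
kubectl exec deploy/n8n -- df -h /home/node/.n8n
```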
Wrap Up
This project was born out of frustration, but now my workflows just run. If a node dies, K8s spins up a new one. If traffic spikes, HPA handles it.
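If you want that autoscaling behaviour without writing a full HPA manifest, the imperative shortcut is a reasonable starting point (this assumes the chart created a Deployment named n8n and that CPU requests are set on the pods):

```bash
# Scale between 2 and 5 replicas based on average CPU utilization
kubectl autoscale deployment n8n --min=2 --max=5 --cpu-percent=70
```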
Grab the code, fork it, and let me know if it breaks. Happy automating.
If you have any questions, feel free to connect with me via the social links below.