

If you’ve been anywhere near Hacker News in the past few weeks, you’ve probably seen OpenClaw. The project, originally named Clawdbot, then briefly Moltbot before settling on its current name, has exploded in popularity since late last year. It crossed 100,000 GitHub stars and attracted around 2 million visitors in a single week. The hype got so intense that Cloudflare’s stock jumped 14% because people were using their tunnel services to host the tool. Cloudflare even used the opportunity to come up with Moltworker, a proof of concept that runs OpenClaw on their Developer Platform using Sandboxes, Browser Rendering, and R2 storage.
What makes OpenClaw interesting isn’t just that it’s another AI assistant. It connects to your messaging platforms (WhatsApp, Telegram, Signal, Slack, Discord, and more) and does things autonomously. It’s less chatbot, more autonomous agent. Give it a task, let it run, wake up to results.
Security Considerations
OpenClaw’s power comes with real security risks. The tool has shell access, reads your files, and processes untrusted input from emails and web content. Security researchers have identified several concerns:
- Palo Alto Networks called it a "lethal trifecta": access to private data, exposure to untrusted content, and external communication ability
- Cisco found that 26% of third-party skills contain vulnerabilities, including data exfiltration
- Vectra AI documented prompt injection attacks that can achieve remote code execution
- The Register reported hundreds of exposed instances found via Shodan, some with no authentication
Running OpenClaw in Kubernetes provides meaningful isolation compared to running it directly on your workstation. Container isolation, network policies, and resource limits contain the blast radius if something goes wrong. Your host filesystem, credentials, and other workloads remain protected behind namespace boundaries. For anyone considering OpenClaw in a work context or handling sensitive data, Kubernetes is the safer option.
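To make the network-policy point concrete, here is one possible deny-by-default policy that only permits DNS and outbound HTTPS. This is a sketch, not part of the chart: the namespace and the blanket pod selector are assumptions, and you would still need to allow ingress from whatever gateway fronts the web UI.

```yaml
# Hypothetical example: lock down egress for everything in the
# openclaw namespace to DNS and HTTPS only. Adjust selectors and
# add ingress rules for your gateway before using this.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: openclaw-restrict-egress
  namespace: openclaw
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to: []               # any destination, but only these ports
      ports:
        - protocol: UDP
          port: 53         # DNS resolution
        - protocol: TCP
          port: 443        # model APIs, messaging platforms
```

Note that NetworkPolicy only takes effect if your CNI plugin enforces it (Cilium, Calico, and similar do; some simpler CNIs silently ignore policies).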
I run Kubernetes for everything in my homelab. When I saw OpenClaw gaining traction, I realized there wasn’t a Helm chart available for deploying it. So I built one. Everything in my homelab is declarative, version-controlled in Git, and managed through ArgoCD; that’s just how I operate.
This post walks through deploying OpenClaw on Kubernetes using my Helm chart.
Deployment Architecture
The setup is straightforward:
- OpenClaw runs as a single-replica Deployment (it cannot scale horizontally by design)
- Sidecars and init containers include a Chromium browser for automation and an init-skills container for declaratively installing skills and runtime dependencies
- Configuration is stored in a ConfigMap using JSON5 format
- Persistent storage keeps workspace data, sessions, and application state
- Secrets are managed externally (Vault recommended, but optional)
The Helm chart is built on the bjw-s app-template.
GitOps Behavior
This is important: any configuration done via the OpenClaw web UI is ephemeral. When the pod restarts, UI changes are wiped. The source of truth is your Helm values and manifests in Git. This is intentional. If you want persistent configuration changes, commit them to your repository.
Quick Start with Helm
If you just want to test the deployment quickly, you can install directly with Helm.
Add the repository and grab the default values:
helm repo add openclaw https://serhanekicii.github.io/openclaw-helm
helm repo update
helm show values openclaw/openclaw > values.yaml
Edit values.yaml to set your trusted proxies, model provider, timezone, and channels. At minimum, you’ll want to configure the trustedProxies list and your messaging channel.
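As a rough sketch of what that edit might look like, something like the following could go in values.yaml. The key names here are illustrative assumptions based on the options described in this post; use the keys from the `helm show values` output rather than copying this verbatim.

```yaml
# Illustrative only: consult the generated values.yaml for the
# actual key names in your chart version.
openclaw:
  json:
    trustedProxies:
      - 10.0.0.0/8          # CIDR of your ingress/gateway
    timezone: "Europe/Istanbul"
    channels:
      telegram:
        enabled: true       # enable whichever messaging channel you use
```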
Create a namespace and secret for your API keys:
kubectl create namespace openclaw
kubectl create secret generic openclaw-env-secret \
-n openclaw \
--from-literal=ANTHROPIC_API_KEY=your-api-key \
--from-literal=GATEWAY_TOKEN=your-gateway-token
Add the secret reference to your values.yaml:
app-template:
controllers:
main:
containers:
main:
envFrom:
- secretRef:
name: openclaw-env-secret
Install and verify:
helm install openclaw openclaw/openclaw -n openclaw -f values.yaml
kubectl get pods -n openclaw
kubectl logs -n openclaw deployment/openclaw
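Before setting up any ingress, you can sanity-check the web UI with a port-forward. This assumes the release name openclaw produces a Service of the same name, and uses the chart's service port 18789:

```shell
# Forward the OpenClaw service to localhost for a quick smoke test
kubectl port-forward -n openclaw svc/openclaw 18789:18789
# then open http://localhost:18789 in a browser
```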
To uninstall:
helm uninstall openclaw -n openclaw
GitOps Setup with ArgoCD
If you’re new to ArgoCD, start with their getting started guide. The basic idea: you define your desired state in a Git repository, and ArgoCD continuously reconciles your cluster to match.
Umbrella Charts
For managing applications in a GitOps repository, I use the umbrella chart pattern. Instead of pointing ArgoCD directly at a remote Helm repository, you create a local chart that wraps the upstream chart as a dependency. This gives you a clean structure where each application lives in its own directory.
Start by creating the directory structure:
mkdir -p workloads/my-cluster/openclaw/crds
cd workloads/my-cluster/openclaw
workloads/
└── my-cluster/
└── openclaw/
├── Chart.yaml
├── values.yaml
└── crds/
└── vault-secret.yaml
Create Chart.yaml to declare the upstream chart as a dependency:
apiVersion: v2
name: openclaw
description: OpenClaw deployment for my-cluster
type: application
version: 1.0.0
appVersion: "2026.1.30"
dependencies:
- name: openclaw
version: 1.3.0
repository: https://serhanekicii.github.io/openclaw-helm
Grab the default values from the upstream chart:
helm repo add openclaw https://serhanekicii.github.io/openclaw-helm
helm repo update
helm show values openclaw/openclaw > values.yaml
Since this is an umbrella chart, all values need to be nested under the dependency name. Edit values.yaml and wrap everything under openclaw:
openclaw:
app-template:
controllers:
main:
containers:
main:
envFrom:
- secretRef:
name: openclaw-env-secret
# ... rest of your configuration
Update the openclaw.json section with your settings—trusted proxies, model provider, timezone, channels, etc.
Before deploying, verify your chart renders correctly:
helm dependency build
helm template openclaw . --debug
Create an ArgoCD Application manifest:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: openclaw
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/your-org/your-gitops-repo.git
targetRevision: HEAD
path: workloads/my-cluster/openclaw
destination:
server: https://kubernetes.default.svc
namespace: openclaw
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
With automated sync enabled, ArgoCD will detect changes and deploy automatically. Any drift gets corrected.
This pattern works well with ApplicationSets too. You can have ArgoCD automatically discover and deploy any chart in the workloads/ directory based on path patterns. For more on ArgoCD with Helm charts, see their Helm documentation.
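For reference, an ApplicationSet using the git directory generator might look like the following. The repo URL and paths are placeholders; this sketch assumes each directory under workloads/my-cluster/ is a self-contained umbrella chart whose name matches its namespace.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-cluster-workloads
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/your-org/your-gitops-repo.git
        revision: HEAD
        directories:
          # one Application per chart directory
          - path: workloads/my-cluster/*
  template:
    metadata:
      name: "{{path.basename}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/your-org/your-gitops-repo.git
        targetRevision: HEAD
        path: "{{path}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{path.basename}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```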
Secrets Management
OpenClaw needs API keys and tokens to function. How you manage these depends on your setup.
If you run HashiCorp Vault (which I do), the Vault Secrets Operator handles syncing secrets to Kubernetes. Store your credentials in Vault at a path like secret/openclaw/env, then create a VaultStaticSecret. In the umbrella chart pattern, I keep these manifests in a crds/ directory alongside the chart. ArgoCD will apply them before the Helm release:
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: openclaw-env-secret
namespace: openclaw
spec:
vaultAuthRef: default
mount: secret
path: my-cluster/openclaw/env
type: kv-v2
destination:
name: openclaw-env-secret
create: true
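To populate the path that the VaultStaticSecret reads from, something like this should work with an authenticated vault CLI. The mount and path must match the spec above:

```shell
# Write the keys OpenClaw expects into the KV v2 mount
vault kv put secret/my-cluster/openclaw/env \
  ANTHROPIC_API_KEY=your-api-key \
  GATEWAY_TOKEN=your-gateway-token
```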
If you’re not running Vault, a native Kubernetes Secret works. You can put it in the same crds/ directory:
apiVersion: v1
kind: Secret
metadata:
name: openclaw-env-secret
namespace: openclaw
type: Opaque
stringData:
ANTHROPIC_API_KEY: "your-api-key-here"
GATEWAY_TOKEN: "your-gateway-token-here"
Keep in mind that plain Kubernetes Secrets are only base64-encoded; they are not encrypted at rest unless you’ve enabled encryption at rest for etcd.
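To see why that caveat matters: anyone who can read the Secret object can recover the value with a single decode, no key required.

```shell
# base64 is an encoding, not encryption
encoded=$(printf 'your-api-key-here' | base64)
printf '%s' "$encoded" | base64 -d   # recovers the original value
```

This is why RBAC on Secret reads, and etcd encryption at rest, matter even for "internal" clusters.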
Networking and Access
OpenClaw needs to be reachable for its web UI and potentially for webhook integrations with messaging platforms.
For external access, I recommend Cloudflare Tunnel. The cloudflared daemon runs inside your cluster as a Deployment, and no inbound ports need to be opened on your firewall.
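A minimal cloudflared config for this setup might look like the following. The tunnel ID and hostname are placeholders, and the in-cluster service DNS name assumes a release named openclaw in the openclaw namespace:

```yaml
# config.yml mounted into the cloudflared Deployment
tunnel: <TUNNEL_ID>
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  # Route the public hostname to the in-cluster OpenClaw service
  - hostname: openclaw.example.com
    service: http://openclaw.openclaw.svc.cluster.local:18789
  # Catch-all rule required by cloudflared
  - service: http_status:404
```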
For homelab use where you don’t need external access, keep OpenClaw internal. Access it via your local network or VPN. Don’t expose services unnecessarily.
If you’re exposing services within your cluster, Gateway API is the way forward. With the retirement of ingress-nginx announced last year, now is a good time to migrate. Here’s a minimal HTTPRoute example:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: openclaw
namespace: openclaw
spec:
parentRefs:
- name: main-gateway
namespace: gateway-system
hostnames:
- "openclaw.example.internal"
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: openclaw
port: 18789
TLS termination and DNS automation (with tools like cert-manager and external-dns) deserve their own post. For now, just know that the Helm chart exposes the service on port 18789; wire it up however fits your setup.
Post-Installation: Device Pairing
Once the pod is running, you need to pair your device with OpenClaw.
Access the web UI at https://openclaw.example.internal/ and enter your Gateway Token in Settings. If you’re using Vault, this is the value stored at your configured path.
Click "Connect" to initiate a device pairing request. The request needs to be approved from within the pod:
# List pending devices
kubectl exec -n openclaw deployment/openclaw \
--context my-cluster \
-- node dist/index.js devices list
# Approve the device
kubectl exec -n openclaw deployment/openclaw \
--context my-cluster \
-- node dist/index.js devices approve <REQUEST_ID>
Reconnect in the UI and you should now have an active session.
Congrats, now you have a functional OpenClaw deployment in your Kubernetes setup.