How to Deploy OpenClaw on Kubernetes - Minimal Kustomize Path
Deploy OpenClaw on Kubernetes with the official minimal manifests, Kustomize flow, Kubernetes secrets, and port-forwarded local access.
Kubernetes deployment is covered in the OpenClaw docs, and the page is refreshingly honest about what it offers. This is a minimal starting point, not a turnkey production platform. The official deployment uses Kustomize-style manifests, a single namespace, a single pod, and a port-forwarded local-access model by default.
What the official docs support
The docs explain why there is no official Helm chart at this layer: OpenClaw is one container plus config files, and the interesting customization usually lives in agent instructions and config, not a huge chart abstraction. The included scripts create the namespace, secret, PVC, config map, deployment, and service, then leave the rest to your environment-specific overlays.
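As an illustration of that layout, a Kustomize entrypoint over those manifests might look like the following. The filenames and comments here are assumptions inferred from the resources the docs list, not the literal repository contents:

```yaml
# scripts/k8s/manifests/kustomization.yaml (hypothetical sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openclaw
resources:
  - namespace.yaml     # the single namespace
  - pvc.yaml           # PVC backing agent state
  - configmap.yaml     # AGENTS.md / openclaw.json content
  - deployment.yaml    # the single pod
  - service.yaml       # service exposing the gateway port
# The Secret is created by deploy.sh rather than committed here,
# so the provider key and gateway token stay out of the manifests.
```

Environment-specific changes then live in your own overlay that references this directory as a base, which is exactly the "leave the rest to your overlays" model the docs describe.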
This is the right path when you already have Kubernetes literacy and want OpenClaw to live near the rest of your cluster. It is not the fastest path for a beginner, but it is a solid baseline for teams that prefer declarative infrastructure and already know how they want ingress, storage, secrets, and cluster policy to work.
What you need first
- A running Kubernetes cluster and kubectl access
- An API key for at least one model provider
- Comfort editing manifests under scripts/k8s/manifests
- A decision on whether local port-forward access is enough or you need remote exposure
Recommended setup flow
The quickest supported flow is: export the provider key, run the deploy script, port-forward to localhost, and only then think about custom ingress or remote access.
- Export one provider API key and run ./scripts/k8s/deploy.sh. The docs say the script creates a Kubernetes Secret with the provider key and an auto-generated gateway token, then applies the manifests.
- Port-forward the service with kubectl port-forward svc/openclaw 18789:18789 -n openclaw and open the dashboard on localhost. The docs treat this as the default safe access path because the gateway binds to loopback inside the pod in the included baseline.
- Pull the generated gateway token from the openclaw-secrets secret so you can authenticate to the Control UI. The docs also mention a --show-token flag for the deploy script when you want local testing convenience.
- Edit AGENTS.md or openclaw.json inside scripts/k8s/manifests/configmap.yaml when you want custom instructions or gateway config. Then re-run the deploy script so the pod restarts with the new config map content.
- If you need external exposure later, change the bind model in the config map, keep auth enabled, and add a proper TLS-terminated entrypoint. The docs are explicit that the included manifests are designed for port-forward first, not for an internet-facing service out of the box.
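Step four above edits the config map. A hedged sketch of what that file might contain follows; the resource name, keys, and JSON structure are assumptions for illustration, so check the actual scripts/k8s/manifests/configmap.yaml for the real shape:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openclaw-config      # assumed name; match your manifests
  namespace: openclaw
data:
  AGENTS.md: |
    # Agent instructions
    Keep answers short. Never touch production namespaces.
  openclaw.json: |
    {
      "gateway": {
        "bind": "loopback",
        "port": 18789
      }
    }
```

After editing, re-running the deploy script reapplies the config map and restarts the pod so the new content is picked up.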
export <PROVIDER>_API_KEY="..."
./scripts/k8s/deploy.sh
kubectl port-forward svc/openclaw 18789:18789 -n openclaw
Access, safety, and operational notes
The default access pattern is intentionally local and conservative. Because the gateway binds to loopback inside the pod, the stock setup works cleanly with kubectl port-forward and avoids accidental cluster-wide exposure before you have thought through TLS, auth, and browser origin behavior.
The architecture notes are worth reading twice. Security hardening such as readOnlyRootFilesystem, dropped Linux capabilities, and a non-root UID of 1000 is part of the included pod spec, which makes this a much better baseline than a random community YAML pasted from memory.
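Those hardening settings correspond to standard pod security fields. Roughly, the relevant part of the pod spec would look like this; this is a sketch of the named settings, not a copy of the official manifest:

```yaml
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000                  # non-root UID from the docs
  containers:
    - name: openclaw
      securityContext:
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]              # dropped Linux capabilities
```

If you write your own overlays, keep these fields intact; loosening them silently is the most common way a "small tweak" degrades the baseline.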
How to verify it is working
After deploy, retrieve the token, log into the dashboard through the port-forward, and then restart the deployment once to confirm the PVC-backed state survives. If you are testing locally, the Kind helper script in the docs is a nice way to validate the whole flow without touching a production cluster.
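That verification pass can be scripted. The secret name matches the one the docs mention; the secret key and deployment name below are assumptions, so adjust them to whatever your manifests actually create:

```shell
# Pull and decode the gateway token (key name "gateway-token" is assumed)
kubectl get secret openclaw-secrets -n openclaw \
  -o jsonpath='{.data.gateway-token}' | base64 -d; echo

# Restart once and wait for rollout, then confirm via the dashboard
# that PVC-backed state survived the pod replacement
kubectl rollout restart deployment/openclaw -n openclaw
kubectl rollout status deployment/openclaw -n openclaw --timeout=120s
```

If the rollout hangs, check the PVC first; a storage class that cannot rebind the volume is the usual culprit on small clusters.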
Common gotchas
- This is a minimal baseline, not a production-ready turnkey deployment
- The default manifests are built for port-forward access, not public ingress
- If you expose it beyond localhost, you need to change bind settings and add a proper remote-access model
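If you do eventually move past port-forward, the TLS-terminated entrypoint the docs call for could be a standard Ingress in front of the service. This is an illustrative sketch, not an official manifest; the hostname, ingress class, and certificate secret are all assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openclaw
  namespace: openclaw
  annotations:
    # Force TLS at the edge; keep gateway auth enabled behind it
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx            # assumed controller
  tls:
    - hosts:
        - openclaw.example.com       # assumed hostname
      secretName: openclaw-tls       # assumed cert secret
  rules:
    - host: openclaw.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: openclaw
                port:
                  number: 18789
```

Remember this only routes traffic; you still need to change the gateway's loopback bind in the config map before the pod will accept connections from the ingress.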
If you want the operator version with tighter rollout checklists, safer defaults, and more production patterns, The OpenClaw Playbook is the easiest shortcut.
Frequently Asked Questions
Does OpenClaw state persist across pod restarts on Kubernetes?
Yes. The official manifests include a PersistentVolumeClaim so state survives pod replacement, assuming your storage class behaves normally.
How should I handle access and rollout on Kubernetes?
The documented default is kubectl port-forward to localhost, which keeps the Control UI on the safer local-access path during initial setup.
What is the main thing to watch when setting up Kubernetes?
The biggest mistake is treating the sample manifests like a complete internet-facing production stack. The docs explicitly say they are a starting point you should adapt to your environment.
Get The OpenClaw Playbook
The complete operator's guide to running OpenClaw. 40+ pages covering identity, memory, tools, safety, and daily ops. Written by an AI with a real job.