How to Configure OpenClaw with Ollama
Run OpenClaw with Ollama cloud, local, or hybrid models using native API URLs, model discovery, and vision support.
Ollama is the OpenClaw route for local, self-hosted, and Ollama Cloud models. The most important docs warning is simple: do not point OpenClaw at an OpenAI-compatible /v1 URL for remote Ollama. Use the native Ollama API base URL instead, such as http://host:11434 without /v1. That keeps tool calling and model behavior aligned with the provider OpenClaw actually supports.
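A quick way to see the difference, assuming a stand-in host address: the native API answers under /api/, and that root is what baseUrl should point at.

```bash
# Stand-in LAN host; Ollama listens on port 11434 by default.
OLLAMA_HOST="http://192.168.1.20:11434"

# Correct for OpenClaw: the bare host root as baseUrl. The native API lives here:
curl -s "$OLLAMA_HOST/api/tags"    # lists installed models if the URL is right

# Wrong for OpenClaw: "$OLLAMA_HOST/v1" (the OpenAI-compatible route).
# Per the docs, pointing baseUrl there breaks tool calling.
```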
30-second answer
Run openclaw onboard, choose Ollama, and pick Cloud + Local, Cloud only, or Local only. Cloud only uses https://ollama.com with OLLAMA_API_KEY. Local and LAN hosts can use the ollama-local marker. For manual setup, pull a local model with ollama pull, set OLLAMA_API_KEY only if a cloud path is involved, list models, then set a default model such as ollama/gemma3.
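The same steps as commands, as a sketch: openclaw onboard and openclaw models list come from this guide; the model name is only an example.

```bash
# Guided path: pick Cloud + Local, Cloud only, or Local only at the prompt.
openclaw onboard

# Manual local path: pull a model, then confirm both sides see it.
ollama pull gemma3
ollama list              # Ollama's view of installed models
openclaw models list     # OpenClaw's view after discovery

# Cloud paths only: export the Ollama Cloud key. Local-only setups skip this.
export OLLAMA_API_KEY="your-cloud-key"
```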
Where it fits
Use Local only when you want private, zero-cost inference and can accept local model limitations. Use Cloud only when you want hosted Ollama models without running a local server. Use Cloud + Local when a reachable Ollama host is your control point for both local and cloud models. The right choice depends on latency, privacy, model quality, and operations effort.
Docs-grounded facts
- Ollama uses the native /api/chat route.
- Remote Ollama should not use /v1 OpenAI-compatible URLs.
- baseUrl is the canonical provider config key.
- Modes include Cloud + Local, Cloud only, and Local only.
- Local model discovery queries /api/tags.
- If models.providers.ollama is explicit, auto-discovery is skipped.
Set it up deliberately
Ollama provider config uses baseUrl as the canonical key; baseURL is accepted only for compatibility. Local discovery queries /api/tags, then makes best-effort /api/show lookups to read the context window (num_ctx), capabilities such as vision support, reasoning heuristics, token limits, and cost defaults. If you define models.providers.ollama explicitly, auto-discovery is skipped and you must define every model manually.
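To see what discovery reads, you can hit the same native endpoints by hand; the response fields vary by model and Ollama version.

```bash
OLLAMA_HOST="http://localhost:11434"

# Step 1 of discovery: /api/tags enumerates locally installed models.
curl -s "$OLLAMA_HOST/api/tags"

# Step 2, best-effort: /api/show returns per-model metadata, including the
# context window and, on recent Ollama versions, a capabilities list
# (e.g. "completion", "tools", "vision").
curl -s "$OLLAMA_HOST/api/show" -d '{"model": "gemma3"}'
```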
Use it safely
Do not send a real OLLAMA_API_KEY to private hosts unless that is intended. The docs say provider-level keys and memory embedding keys are scoped to the host where declared, while a pure OLLAMA_API_KEY env value is treated as the Ollama Cloud convention and is not sent to local or self-hosted hosts by default. That host scoping matters in mixed cloud/local setups.
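A minimal sketch of the two key placements; the config file shape and the apiKey field name are assumptions for illustration, not confirmed schema.

```bash
# Provider-level key (hypothetical field names): scoped to the host it is
# declared under, so a cloud host can hold a real key without leaking it.
cat <<'EOF'
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "https://ollama.com",
        "apiKey": "your-cloud-key"
      }
    }
  }
}
EOF

# A bare env value, by contrast, follows the Ollama Cloud convention and is
# not sent to local or self-hosted hosts by default.
export OLLAMA_API_KEY="your-cloud-key"
```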
Common mistakes
The common mistake is using http://host:11434/v1 because it looks OpenAI-compatible. OpenClaw’s docs explicitly say that breaks tool calling and can make models output raw tool JSON as text. Another mistake is assuming a newly pulled model appears in a custom explicit provider. Auto-discovery only applies when you have not overridden the provider catalog manually.
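If you do override the catalog, "define models manually" looks roughly like this; the models array shape here is an assumption for illustration.

```bash
# Once models.providers.ollama exists explicitly, /api/tags discovery is
# skipped, so every model must be listed by hand (hypothetical schema):
cat <<'EOF'
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://192.168.1.20:11434",
        "models": [
          { "id": "gemma3", "name": "Gemma 3" }
        ]
      }
    }
  }
}
EOF
# A freshly pulled model will not show up here until you add it yourself.
```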
Verification checklist
Run ollama list, then openclaw models list. Send a tool-using test prompt, not just a chat prompt, so you can catch the /v1 mistake early. For vision models, pull a model with image support and test media understanding separately. For cloud paths, confirm the host is signed in or the API key is present before assigning the model to agents.
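One way to run that tool-using test directly against the host, over Ollama's native chat route; llama3.1 stands in for any tools-capable model.

```bash
OLLAMA_HOST="http://localhost:11434"

# A healthy setup returns message.tool_calls in the response. If an agent
# instead prints raw tool JSON as plain text, suspect a /v1 base URL.
curl -s "$OLLAMA_HOST/api/chat" -d '{
  "model": "llama3.1",
  "stream": false,
  "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}'
```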
Playbook angle
The OpenClaw Playbook’s Ollama pattern is to document the mode first: local, cloud, or hybrid. Once the route is clear, model discovery and auth make sense. Without that, Ollama setups drift into confusing half-local, half-cloud behavior.
Operator note
Configuring OpenClaw with Ollama works best when the result is written into a small runbook instead of left as tribal knowledge. Record the intended owner, the exact config surface, the channel where results should appear, the allowed inputs, the expected output, and the rollback step. OpenClaw gives agents broad tools, but the durable value comes from making each tool boring, repeatable, and auditable. I would rather have one well-scoped Ollama config workflow that survives a restart than five clever demos nobody can safely run next week. If the runbook cannot explain when not to use it, keep refining before automation becomes the default.
Frequently Asked Questions
Should remote Ollama use the /v1 URL?
No. The docs warn not to use the OpenAI-compatible /v1 URL because it breaks tool calling.
Do local Ollama hosts need a real bearer token?
No. Local and LAN hosts can use the ollama-local marker for supported private or loopback base URLs.
How does OpenClaw discover Ollama models?
It queries the native /api/tags endpoint and uses /api/show lookups for capabilities when applicable.