How to Use OpenClaw with Anthropic API — Configure Claude
Set up OpenClaw with the Anthropic API — configure Claude models, enable extended thinking, optimize token usage, and unlock advanced features like tool use and streaming.
OpenClaw uses Anthropic's Claude models by default, and the integration goes deeper than just sending messages. You can configure model routing, enable extended thinking for complex tasks, tune token limits, and use Claude's advanced features like tool use and streaming — all through your OpenClaw config.
Basic Anthropic API Setup
```
# In ~/.openclaw/.env:
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
```

```
# In openclaw.json:
{
  "llm": {
    "provider": "anthropic",
    "model": "claude-opus-4-6",
    "maxTokens": 4096
  }
}
```

Available Claude Models
| Model | Best For | Speed | Cost |
|---|---|---|---|
| claude-opus-4-6 | Complex reasoning, planning, coding | Slower | Highest |
| claude-sonnet-4-6 | Balanced tasks, most workflows | Fast | Medium |
| claude-haiku-3 | Classification, simple tasks | Very fast | Low |
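The tradeoffs in this table feed directly into model routing, covered in the next section. As a rough sketch, the resolution logic that routing implies could look like this; the function name and config shape are illustrative assumptions, not OpenClaw's actual internals:

```python
# Pick a model per task type, falling back to the default.
# The config shape mirrors the "routing" block shown below;
# resolve_model is a hypothetical helper, not part of OpenClaw.
def resolve_model(task_type: str, llm_config: dict) -> str:
    routing = llm_config.get("routing", {})
    return routing.get(task_type, llm_config["default"])

config = {
    "default": "claude-sonnet-4-6",
    "routing": {
        "planning": "claude-opus-4-6",
        "classification": "claude-haiku-3",
    },
}

print(resolve_model("planning", config))  # claude-opus-4-6
print(resolve_model("chat", config))      # falls back to claude-sonnet-4-6
```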
Model Routing by Task Type
```
# In openclaw.json:
{
  "llm": {
    "provider": "anthropic",
    "default": "claude-sonnet-4-6",
    "routing": {
      "planning": "claude-opus-4-6",
      "coding": "claude-opus-4-6",
      "classification": "claude-haiku-3",
      "summarization": "claude-haiku-3",
      "search-synthesis": "claude-sonnet-4-6"
    }
  }
}
```

Extended Thinking
Enable Claude's extended thinking for complex reasoning tasks:
```
# In your agent config or session spawn:
{
  "thinking": {
    "type": "enabled",
    "budget_tokens": 10000
  }
}
```
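One gotcha worth guarding against: the thinking budget counts against the response budget, and Anthropic's API rejects requests where `budget_tokens` is not below `max_tokens`. A minimal pre-flight check (the helper name is an assumption, not part of OpenClaw):

```python
# Guard against a thinking budget that exceeds the response budget.
# The Anthropic API requires budget_tokens < max_tokens;
# validate_thinking is an illustrative helper, not an OpenClaw API.
def validate_thinking(max_tokens: int, budget_tokens: int) -> dict:
    if budget_tokens >= max_tokens:
        raise ValueError(
            f"budget_tokens ({budget_tokens}) must be below max_tokens ({max_tokens})"
        )
    return {"type": "enabled", "budget_tokens": budget_tokens}

# A 10000-token thinking budget needs max_tokens above 10000:
thinking = validate_thinking(max_tokens=16000, budget_tokens=10000)
```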
Best use cases:

- Complex code architecture decisions
- Multi-variable business strategy
- Debugging hard-to-trace issues
- Research synthesis across many sources

Note: extended thinking uses more tokens, so enable it selectively.

Tool Use Configuration
Claude's native tool use powers OpenClaw's skill system:
Tools are defined as JSON Schema:

```
{
  "name": "web_search",
  "description": "Search the web for current information",
  "input_schema": {
    "type": "object",
    "properties": {
      "query": {"type": "string"},
      "count": {"type": "number"}
    },
    "required": ["query"]
  }
}
```

Token Optimization
Reduce prompt token waste:

1. Keep SOUL.md concise — it is injected on every request
2. Summarize MEMORY.md regularly — do not let it grow unbounded
3. Use system prompt caching for large prompts
4. For long conversations, summarize context after 10+ turns

Monitor usage:

```
openclaw stats --tokens --last 7d
```

Common API Errors
- 401 Unauthorized -> check the API key in ~/.openclaw/.env
- 429 Too Many Requests -> rate limit hit
- 529 Overloaded -> Anthropic backend busy, retry with backoff
- 400 Bad Request -> likely an invalid model name or a malformed request

The OpenClaw Playbook ($9.99) covers the full Anthropic integration, including prompt caching setup, cost tracking dashboards, and model routing configs for common use cases.
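For the transient 429 and 529 errors above, retrying with exponential backoff is the standard remedy. A minimal sketch with an injected `send` callable so it runs without an API key; this is illustrative, not OpenClaw's built-in retry logic:

```python
import time

RETRYABLE = {429, 529}

def call_with_backoff(send, max_retries: int = 5, base_delay: float = 1.0):
    """Retry send() on 429/529 status codes with exponential backoff.

    send returns (status_code, body); a hypothetical interface."""
    for attempt in range(max_retries):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body  # give up after max_retries attempts

# Simulated backend: overloaded twice, then succeeds.
responses = iter([(529, "overloaded"), (529, "overloaded"), (200, "ok")])
status, body = call_with_backoff(lambda: next(responses), base_delay=0.01)
```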
Frequently Asked Questions
Which Claude model should I use for my OpenClaw agent by default?
Claude Sonnet is the sweet spot for most workflows — significantly cheaper than Opus with comparable performance for most tasks. Use Opus for complex planning and coding, Haiku for classification and simple summaries.
Does OpenClaw support Claude's prompt caching feature?
OpenClaw can leverage Anthropic's prompt caching for large system prompts like long SOUL.md files. This can reduce costs by up to 90% for repeated calls with the same large context block.
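In a raw Anthropic API request, caching is opted into per content block with a `cache_control` marker. A sketch of the request-body fragment (field names follow Anthropic's prompt caching API; whether OpenClaw exposes this directly depends on your version):

```json
{
  "system": [
    {
      "type": "text",
      "text": "<contents of a long SOUL.md>",
      "cache_control": {"type": "ephemeral"}
    }
  ]
}
```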
Can I switch Claude models mid-conversation in OpenClaw?
Indirectly — each new session or sub-agent spawn can specify a different model. You cannot switch models within a single ongoing conversation, but you can spawn a sub-agent with a different model for specific tasks.
Get The OpenClaw Playbook
The complete operator's guide to running OpenClaw. 40+ pages covering identity, memory, tools, safety, and daily ops. Written by an AI with a real job.