🔀 How model routing works
Model references use the format `provider/model`, e.g. `anthropic/claude-sonnet-4-5` or `ollama/llama3.3`.
Resolution priority for each request:
- Per-job override (cron job or slash command) — highest
- Per-agent config (`agents.list[].model`)
- Default model (`agents.defaults.model.primary`)
- Fallback chain — tried in order on rate limits (429s)
```shell
# CLI helpers
openclaw models list                             # Show available models
openclaw models set anthropic/claude-sonnet-4-5  # Set primary
openclaw onboard                                 # Full guided setup
```
⚡ Quick start
```shell
# Option 1: Interactive wizard (recommended)
openclaw onboard

# Option 2: CLI one-liner
openclaw models set anthropic/claude-sonnet-4-5

# Option 3: Edit config directly
nano ~/.openclaw/openclaw.json
```

```json
{
  "env": {
    "ANTHROPIC_API_KEY": "sk-ant-..."
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5"
      }
    }
  }
}
```
After any change: `openclaw gateway restart`
📋 Provider catalog
Built-in providers (no `models.providers` config needed — just set the API key):
| Provider | Top models | Env var |
|---|---|---|
| Anthropic | Opus 4.6, Sonnet 4.5, Haiku 4.5 | ANTHROPIC_API_KEY |
| OpenAI | GPT-5.2, GPT-5, GPT-5-mini | OPENAI_API_KEY |
| Google | Gemini 3 Pro, Gemini 3 Flash | GOOGLE_GENERATIVE_AI_API_KEY |
| OpenRouter | Any model via routing | OPENROUTER_API_KEY |
| Groq | Llama models on Groq hardware | GROQ_API_KEY |
| xAI | Grok models | XAI_API_KEY |
| Mistral | Mistral Large | MISTRAL_API_KEY |
| DeepSeek | DeepSeek Chat, DeepSeek R1 | DEEPSEEK_API_KEY |
| Ollama | Any local model | Auto-detected at :11434 |
| Cerebras | GLM-4.7, GLM-4.6 | CEREBRAS_API_KEY |
| MiniMax | M2.5, M2.5 Free | MINIMAX_API_KEY |
| Moonshot | Kimi K2.5 | MOONSHOT_API_KEY |
Auto-priority when multiple keys are set: Anthropic → OpenAI → OpenRouter → Gemini → Groq → Mistral → and so on.
💡 Key rotation: Set multiple keys per provider with OPENAI_API_KEY_1, OPENAI_API_KEY_2, etc. OpenClaw rotates on rate limits (429s). Non-rate-limit errors fail immediately.
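For example, two rotating keys for OpenAI (placeholder values):

```shell
# Numbered keys for the same provider; OpenClaw moves to the next
# key when the current one returns a 429 rate limit
export OPENAI_API_KEY_1="sk-first-key"
export OPENAI_API_KEY_2="sk-second-key"
```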
🔄 Fallback chains
If your primary model hits a rate limit or outage, OpenClaw automatically tries the next model in your fallback list:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5",
        "fallbacks": [
          "openrouter/google/gemini-3-flash-preview",
          "openrouter/openai/gpt-5-mini",
          "ollama/llama3.3"
        ]
      }
    }
  }
}
```
Fallbacks are tried in order. Put your preferred models first, cheapest/local last.
🎯 Per-task model overrides
The most impactful cost optimization: use expensive models only where they matter.
| Task | Recommended model | Why |
|---|---|---|
| Main conversation | Claude Sonnet 4.5 | Best balance of quality + speed + cost |
| Heartbeat checks | GPT-5 Nano / Gemini Flash | 95% end in HEARTBEAT_OK — don't waste premium tokens |
| Complex reasoning | Claude Opus 4.6 | Best for deep analysis, code review |
| Simple cron jobs | Haiku 4.5 / GPT-5 Mini | Fast, cheap, good enough for status checks |
| Overnight coding | Claude Opus 4.6 + thinking:high | Quality matters more than speed at midnight |
```
# Heartbeat on cheap model
{ "heartbeat": { "model": "openrouter/openai/gpt-5-nano" } }

# Cron job on Opus
openclaw cron add --name "Deep work" --model "anthropic/claude-opus-4-6" --thinking high ...
```
🌐 OpenRouter
OpenRouter is a universal adapter that gives you access to 200+ models through one API key. Built-in support — no `models.providers` config needed.
```shell
# Setup
openclaw onboard --auth-choice openrouter-api-key

# Or set manually
export OPENROUTER_API_KEY="sk-or-..."
openclaw models set openrouter/anthropic/claude-sonnet-4-5
```
Auto Model
openrouter/openrouter/auto automatically selects the most cost-effective model based on your prompt complexity. Simple tasks route to cheap models, complex ones to capable models.
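To make the auto router your primary, use the same `models set` pattern as above:

```shell
openclaw models set openrouter/openrouter/auto
```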
✅ Best for beginners: One API key, access to everything, automatic model selection. Start with OpenRouter Auto, then switch to direct provider keys as your usage grows and you know which models you prefer.
🏠 Ollama — free local models
Ollama runs models locally — zero API costs, full privacy, works offline. OpenClaw auto-detects it at http://127.0.0.1:11434.
```shell
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.3

# It just works — OpenClaw auto-detects
openclaw models list   # Should show ollama/llama3.3
```
Manual config (if auto-detect doesn't work)
```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama",
        "api": "openai-responses"
      }
    }
  }
}
```
⚠️ Model quality matters for security. Smaller/local models are weaker at resisting prompt injection. If using Ollama for group-facing bots, pair it with strict sandboxing and minimal tool profiles. See the Security Guide.
🔧 Custom / OpenAI-compatible providers
Any provider with an OpenAI-compatible API can be added:
```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com/v1",
        "apiKey": "${DEEPSEEK_API_KEY}",
        "api": "openai-completions",
        "models": [{
          "id": "deepseek/deepseek-chat",
          "name": "DeepSeek Chat",
          "contextWindow": 128000
        }]
      }
    }
  }
}
```
Works with: LM Studio, vLLM, Together AI, Fireworks, any OpenAI-compatible endpoint.
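For example, a local LM Studio endpoint might look like this (a sketch: LM Studio's local server defaults to port 1234, and the model entry is illustrative — set `id`, `name`, and `contextWindow` to match whatever model you have loaded):

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "lmstudio": {
        "baseUrl": "http://127.0.0.1:1234/v1",
        "apiKey": "lmstudio",
        "api": "openai-completions",
        "models": [{
          "id": "lmstudio/qwen2.5-coder-7b",
          "name": "Qwen 2.5 Coder 7B",
          "contextWindow": 32768
        }]
      }
    }
  }
}
```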
💰 Cost-smart model strategy
The key insight: don't use one model for everything. Match model capability to task complexity.
Recommended starter config:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5",
        "fallbacks": ["openrouter/google/gemini-3-flash-preview"]
      },
      "heartbeat": { "model": "openrouter/openai/gpt-5-nano" }
    }
  }
}
```
Then for specific cron jobs, override with `--model` as needed. Use our Cost Calculator to estimate monthly spend for your usage pattern.
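For example, pinning a lightweight nightly job to a cheap model (the job name is illustrative; remaining flags as in your cron setup):

```shell
openclaw cron add --name "Nightly status" --model "anthropic/claude-haiku-4-5" ...
```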
🔧 Troubleshooting
| Problem | Fix |
|---|---|
| Model doesn't change after config edit | `openclaw gateway restart`, then `/new` in chat for a fresh session |
| 0 tokens, no response (Ollama) | Add `"api": "openai-responses"` to the Ollama provider config |
| Rate limit errors | Add fallback models. Set multiple API keys for rotation. |
| "Model not found" | `openclaw models list` — check the exact ID. Use `provider/model` syntax. |
| Ollama not detected | Ensure Ollama is running: `ollama list`. Check the URL: `http://127.0.0.1:11434` |
| Config validation error | `openclaw config validate` checks JSON/schema before restart; `openclaw doctor --fix` runs broader checks. |