
🤖 Models & Providers

OpenClaw is model-agnostic — it works with Claude, GPT, Gemini, Ollama, DeepSeek, and dozens more. This guide covers how to set up providers, build fallback chains, assign different models per task, and keep costs under control.

20+ providers · Fallback chains · Updated for v2026.3.23

🔀 How model routing works

Model references use the format provider/model. Example: anthropic/claude-sonnet-4-5 or ollama/llama3.3.

Resolution priority for each request:

  1. Per-job override (cron job or slash command) — highest
  2. Per-agent config (agents.list[].model)
  3. Default model (agents.defaults.model.primary)
  4. Fallback chain — tried in order on rate limits (429s)
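As a sketch of how levels 2 and 3 interact, here is a config where one agent overrides the shared default (the agent id "research" and its key name are illustrative; the source only guarantees the agents.list[].model path):

```json
{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-sonnet-4-5" }
    },
    "list": [
      {
        "id": "research",
        "model": "anthropic/claude-opus-4-6"
      }
    ]
  }
}
```

With this in place, the "research" agent uses Opus while every other agent falls through to the Sonnet default.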
# CLI helpers
openclaw models list          # Show available models
openclaw models set anthropic/claude-sonnet-4-5  # Set primary
openclaw onboard              # Full guided setup

⚡ Quick start

# Option 1: Interactive wizard (recommended)
openclaw onboard

# Option 2: CLI one-liner
openclaw models set anthropic/claude-sonnet-4-5

# Option 3: Edit config directly
nano ~/.openclaw/openclaw.json
{
  "env": {
    "ANTHROPIC_API_KEY": "sk-ant-..."
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5"
      }
    }
  }
}

After any change: openclaw gateway restart

📋 Provider catalog

Built-in providers (no models.providers config needed — just set the API key):

| Provider   | Top models                        | Env var                      |
|------------|-----------------------------------|------------------------------|
| Anthropic  | Opus 4.6, Sonnet 4.5, Haiku 4.5   | ANTHROPIC_API_KEY            |
| OpenAI     | GPT-5.2, GPT-5, GPT-5-mini        | OPENAI_API_KEY               |
| Google     | Gemini 3 Pro, Gemini 3 Flash      | GOOGLE_GENERATIVE_AI_API_KEY |
| OpenRouter | Any model via routing             | OPENROUTER_API_KEY           |
| Groq       | Llama models on Groq hardware     | GROQ_API_KEY                 |
| xAI        | Grok models                       | XAI_API_KEY                  |
| Mistral    | Mistral Large                     | MISTRAL_API_KEY              |
| DeepSeek   | DeepSeek Chat, DeepSeek R1        | DEEPSEEK_API_KEY             |
| Ollama     | Any local model                   | Auto-detected at :11434      |
| Cerebras   | GLM-4.7, GLM-4.6                  | CEREBRAS_API_KEY             |
| MiniMax    | M2.5, M2.5 Free                   | MINIMAX_API_KEY              |
| Moonshot   | Kimi K2.5                         | MOONSHOT_API_KEY             |

Auto-priority when multiple keys are set: Anthropic → OpenAI → OpenRouter → Gemini → Groq → Mistral, and so on down the catalog.

💡 Key rotation: Set multiple keys per provider with OPENAI_API_KEY_1, OPENAI_API_KEY_2, etc. OpenClaw rotates on rate limits (429s). Non-rate-limit errors fail immediately.
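In the env block of ~/.openclaw/openclaw.json, rotation keys look like this (the key values are placeholders):

```json
{
  "env": {
    "OPENAI_API_KEY_1": "sk-...",
    "OPENAI_API_KEY_2": "sk-..."
  }
}
```

When the active key returns a 429, OpenClaw moves to the next numbered key before falling back to another model.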

🔄 Fallback chains

If your primary model hits a rate limit or outage, OpenClaw automatically tries the next model in your fallback list:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5",
        "fallbacks": [
          "openrouter/google/gemini-3-flash-preview",
          "openrouter/openai/gpt-5-mini",
          "ollama/llama3.3"
        ]
      }
    }
  }
}

Fallbacks are tried in order. Put your preferred models first, cheapest/local last.

🎯 Per-task model overrides

The most impactful cost optimization: use expensive models only where they matter.

| Task              | Recommended model               | Why                                                    |
|-------------------|---------------------------------|--------------------------------------------------------|
| Main conversation | Claude Sonnet 4.5               | Best balance of quality, speed, and cost               |
| Heartbeat checks  | GPT-5 Nano / Gemini Flash       | 95% end in HEARTBEAT_OK — don't waste premium tokens   |
| Complex reasoning | Claude Opus 4.6                 | Best for deep analysis and code review                 |
| Simple cron jobs  | Haiku 4.5 / GPT-5 Mini          | Fast, cheap, good enough for status checks             |
| Overnight coding  | Claude Opus 4.6 + thinking:high | Quality matters more than speed at midnight            |

# Heartbeat on cheap model
{ "heartbeat": { "model": "openrouter/openai/gpt-5-nano" } }

# Cron job on Opus
openclaw cron add --name "Deep work" --model "anthropic/claude-opus-4-6" --thinking high ...

🌐 OpenRouter

OpenRouter is a universal adapter that gives you access to 200+ models through one API key. Built-in support — no models.providers config needed.

# Setup
openclaw onboard --auth-choice openrouter-api-key

# Or set manually
export OPENROUTER_API_KEY="sk-or-..."
openclaw models set openrouter/anthropic/claude-sonnet-4-5

Auto Model

openrouter/openrouter/auto automatically selects the most cost-effective model based on your prompt complexity. Simple tasks route to cheap models, complex ones to capable models.

✅ Best for beginners: One API key, access to everything, automatic model selection. Start with OpenRouter Auto, then switch to direct provider keys as your usage grows and you know which models you prefer.
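A minimal starter config in that spirit, with Auto as the primary and a local model as the last resort (this sketch assumes you have already pulled llama3.3 in Ollama):

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/openrouter/auto",
        "fallbacks": ["ollama/llama3.3"]
      }
    }
  }
}
```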

🏠 Ollama — free local models

Ollama runs models locally — zero API costs, full privacy, works offline. OpenClaw auto-detects it at http://127.0.0.1:11434.

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.3

# It just works — OpenClaw auto-detects
openclaw models list  # Should show ollama/llama3.3

Manual config (if auto-detect doesn't work)

{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama",
        "api": "openai-responses"
      }
    }
  }
}

⚠️ Model quality matters for security. Smaller/local models are weaker at resisting prompt injection. If using Ollama for group-facing bots, pair it with strict sandboxing and minimal tool profiles. See the Security Guide.

🔧 Custom / OpenAI-compatible providers

Any provider with an OpenAI-compatible API can be added:

{
  "models": {
    "mode": "merge",
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com/v1",
        "apiKey": "${DEEPSEEK_API_KEY}",
        "api": "openai-completions",
        "models": [{
          "id": "deepseek/deepseek-chat",
          "name": "DeepSeek Chat",
          "contextWindow": 128000
        }]
      }
    }
  }
}

Works with: LM Studio, vLLM, Together AI, Fireworks, any OpenAI-compatible endpoint.
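For instance, LM Studio's local server exposes an OpenAI-compatible endpoint on port 1234 by default; a sketch following the same shape as the DeepSeek example above (the model id and context window here are illustrative — use whatever model you have loaded):

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "lmstudio": {
        "baseUrl": "http://127.0.0.1:1234/v1",
        "apiKey": "lmstudio",
        "api": "openai-completions",
        "models": [{
          "id": "lmstudio/qwen2.5-coder",
          "name": "Qwen 2.5 Coder (LM Studio)",
          "contextWindow": 32768
        }]
      }
    }
  }
}
```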

💰 Cost-smart model strategy

The key insight: don't use one model for everything. Match model capability to task complexity.

# Recommended starter config
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5",
        "fallbacks": ["openrouter/google/gemini-3-flash-preview"]
      },
      "heartbeat": { "model": "openrouter/openai/gpt-5-nano" }
    }
  }
}

Then for specific cron jobs, override with --model as needed. Use our Cost Calculator to estimate monthly spend for your usage pattern.

🔧 Troubleshooting

| Problem | Fix |
|---------|-----|
| Model doesn't change after config edit | openclaw gateway restart, then /new in chat for a fresh session |
| 0 tokens, no response (Ollama) | Add "api": "openai-responses" to the Ollama provider config |
| Rate limit errors | Add fallback models; set multiple API keys for rotation |
| "Model not found" | openclaw models list — check the exact ID and use provider/model syntax |
| Ollama not detected | Ensure Ollama is running (ollama list) and check the URL: http://127.0.0.1:11434 |
| Config validation error | openclaw config validate to check JSON/schema before restart, or openclaw doctor --fix for broader checks |