# Ollama for memory embeddings
You can now use local Ollama for memory search — no API calls, no cloud, fully private.
## Configuration
```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "ollama",
        "fallback": "ollama"
      }
    }
  }
}
```
Ollama reuses your existing `models.providers.ollama` settings. Make sure an embedding model is available locally (e.g. `nomic-embed-text`).
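If you haven't pulled an embedding model yet, the standard Ollama CLI can fetch one (model name shown here matches the example above):

```shell
# Download the embedding model so memory search can use it
ollama pull nomic-embed-text

# Confirm it is installed
ollama list
```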
## Why use Ollama for memory?
- **Privacy** — your memories never leave your machine
- **Cost** — no embedding API fees
- **Offline** — works without internet
- **Speed** — local inference, no network latency
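The privacy and offline points follow from how embedding-based memory search works: text is mapped to a vector locally, and recall is a nearest-neighbor lookup over stored vectors. A minimal sketch, with a toy `embed` function standing in for a real Ollama embedding call (real models like `nomic-embed-text` return high-dimensional floats):

```python
import math

def embed(text: str) -> list[float]:
    # Toy character-frequency "embedding" so this sketch is self-contained;
    # in practice this would be a call to a local Ollama embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity; 0.0 when either vector is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(memories: list[str], query: str, k: int = 2) -> list[str]:
    # Rank stored memories by similarity to the query embedding.
    qv = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(embed(m), qv), reverse=True)
    return ranked[:k]

memories = [
    "user prefers dark mode",
    "deploy runs on Fridays",
    "user timezone is UTC+2",
]
print(search(memories, "dark mode", k=1))
```

Because every step of this pipeline runs in-process against local vectors, there is no network hop and nothing to send to a cloud API.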
## Fallback
If you set `fallback: "ollama"`, OpenClaw tries the primary provider first and falls back to Ollama on failure.
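For example, a config that prefers a remote provider but degrades to local Ollama when it is unavailable. The provider name `"openai"` here is illustrative; substitute whichever remote provider you have configured:

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai",
        "fallback": "ollama"
      }
    }
  }
}
```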