Codex/gpt-* routes use the bundled Codex provider with native threads, discovery, and compaction.
Bundled Codex provider
OpenClaw adds a bundled Codex provider backed by a plugin-owned app-server harness. Models under codex/gpt-* now use Codex-managed auth, native threads, model discovery, and compaction on the Codex path.
openai/gpt-* models remain on the standard OpenAI provider route; keeping the two paths separate ensures OAuth, tool schemas, and streaming behavior stay aligned with each stack.
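The split described above amounts to prefix-based routing on the model id. The sketch below illustrates the idea only; `route_model` and the provider labels are hypothetical names for illustration, not OpenClaw's actual API.

```python
def route_model(model_id: str) -> str:
    """Pick a provider route from the model id's prefix (illustrative sketch)."""
    if model_id.startswith("codex/gpt-"):
        # Bundled Codex provider: Codex-managed auth, native threads,
        # model discovery, and compaction.
        return "codex-bundled"
    if model_id.startswith("openai/gpt-"):
        # Standard OpenAI provider route.
        return "openai"
    return "default"

print(route_model("codex/gpt-5"))    # codex-bundled
print(route_model("openai/gpt-4o"))  # openai
```

The point is that the prefix alone decides the stack, so the two routes never share auth or streaming code paths.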
See the upstream release for OAuth scope requirements and compatibility notes.