How to Configure OpenClaw OpenAI
Set up OpenAI in OpenClaw: API keys, Codex OAuth, model refs, image generation, TTS, web search, and embeddings.
OpenAI config in OpenClaw has three nearby names that are easy to mix up: openai, openai-codex, and codex. The docs separate them deliberately. openai/* is the direct OpenAI Platform API route. openai-codex/* is the Codex OAuth or subscription route through the normal PI runner. The bundled codex plugin and agentRuntime.id: codex are a separate native app-server harness path. Keep those layers separate and the setup becomes much easier.
30-second answer
For direct API billing, set OPENAI_API_KEY or run onboarding with the OpenAI API-key choice, then use a model ref such as openai/gpt-5.5. For Codex subscription auth, run openclaw models auth login --provider openai-codex or onboarding, then use openai-codex/gpt-5.5. For native Codex app-server behavior, use an openai/* model plus agentRuntime.id: codex.
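As a shell sketch, the two auth routes look like this. The login and list commands are the ones named in this guide; the key value is a placeholder:

```shell
# Route 1: direct OpenAI Platform API billing (openai/* refs).
export OPENAI_API_KEY="sk-your-platform-key"   # placeholder, use your real key
openclaw models list --provider openai         # confirm the catalog resolves

# Route 2: Codex subscription auth through the normal PI runner (openai-codex/* refs).
openclaw models auth login --provider openai-codex
openclaw models list --provider openai-codex
```

Route 3, the native Codex app-server harness, is not an auth choice at all: it is selected with agentRuntime.id: codex on top of an openai/* ref.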
Where it fits
Use direct OpenAI API keys when you want predictable platform billing and production credentials. Use openai-codex when you want ChatGPT or Codex subscription auth through OpenClaw’s PI route. Use the native Codex app-server harness only when you explicitly want that runtime’s thread and command behavior. Do not choose based on model name alone; choose based on auth and runtime.
Docs-grounded facts
- openai/* is the direct OpenAI Platform API route.
- openai-codex/* is the Codex OAuth/subscription PI route.
- agentRuntime.id: codex forces the native Codex app-server harness for openai/* refs.
- OpenAI image generation uses image_generate.
- OpenAI can be used for memory embeddings.
- Enabling OpenAI or selecting openai-codex/* does not enable the bundled Codex app-server plugin.
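To make the last two bullets concrete, here is a minimal config sketch. The agents.defaults.model.primary path and the agentRuntime.id: codex setting come from this guide; the JSON5 shape and nesting are assumptions about OpenClaw's config file, so check your installed version's schema before copying:

```json5
{
  agents: {
    defaults: {
      // Direct OpenAI Platform API route, billed via OPENAI_API_KEY.
      model: { primary: "openai/gpt-5.5" },
    },
  },
  // Forces the native Codex app-server harness for the openai/* ref above.
  // Without this block, the normal runner executes the agent loop.
  agentRuntime: { id: "codex" },
}
```

Note what this does not do: it never mentions the bundled codex plugin, and switching the ref to openai-codex/* would change auth, not runtime.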
Set it up deliberately
The docs show onboarding commands for OpenAI API keys and Codex OAuth, model verification with openclaw models list --provider openai or --provider openai-codex, and default model config under agents.defaults.model.primary. The OpenAI integration also covers image generation, video generation, TTS, batch speech-to-text, realtime voice, native web search under documented conditions, and memory embeddings.
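A hedged sketch of the default-model choice described above. Only the agents.defaults.model.primary path is taken from this guide; the surrounding file shape is an assumption:

```json5
{
  agents: {
    defaults: {
      model: {
        // Direct API billing: key-based auth, openai/ prefix.
        primary: "openai/gpt-5.5",
        // Or the subscription route: Codex OAuth, openai-codex/ prefix.
        // primary: "openai-codex/gpt-5.5",
      },
    },
  },
}
```

The two refs name the same model family but resolve through different auth routes, which is exactly the distinction this guide keeps asking you to label.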
Use it safely
Do not invent model refs. The docs explicitly warn that OpenClaw does not expose openai/gpt-5.3-codex-spark. Also do not assume openai-codex selects the native Codex app-server plugin. It does not. If both OpenAI Codex OAuth and the codex plugin are configured, openclaw doctor may warn so you can confirm the combination is intentional.
Common mistakes
The most common mistake is reading runtime behavior into the provider prefix. Provider chooses auth and model catalog; runtime chooses who executes the agent loop. Another mistake is accidentally enabling native web search. If you want managed web_search with OpenAI models, pin a managed provider such as Brave or disable the native path according to the docs.
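If managed web_search is the goal, pinning it explicitly keeps the native path from being chosen by accident. The field names below are hypothetical illustrations, not documented OpenClaw keys; the point is that the pin lives in config, not in the model ref:

```json5
{
  tools: {
    // Hypothetical field names: pin web_search to a managed provider
    // (e.g. Brave) so OpenAI's native search path is never selected
    // implicitly just because an openai/* model is in use.
    webSearch: { provider: "brave" },
  },
}
```

Whatever the real key names are in your version, the decision should be written down in config rather than inferred from which provider prefix happens to be active.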
Verification checklist
After setup, list models for the chosen provider, run one small test message, and check provider usage or auth route. If image, TTS, or embeddings depend on OpenAI, test those surfaces separately because they may use different config fields. For Codex subscription auth, verify the OAuth profile before relying on it in a cron or subagent.
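The checklist above can be started from the commands this guide names. The one-message smoke test and the per-surface image/TTS/embedding checks depend on your agent setup, so only the catalog and config-sanity steps are sketched here:

```shell
# 1. Catalog check: each provider route you configured should list models.
openclaw models list --provider openai
openclaw models list --provider openai-codex

# 2. Config sanity: doctor warns if Codex OAuth and the codex plugin
#    are both configured, so you can confirm the combination is intentional.
openclaw doctor
```

Run these again after any auth change; a cron or subagent should never be the first thing to discover an expired OAuth profile.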
Playbook angle
The OpenClaw Playbook’s OpenAI advice is to label every route by auth, model, and runtime. That one habit prevents most “why is this using the wrong Codex?” confusion before it reaches production.
Operator note
This OpenAI configuration works best when it is written into a small runbook instead of left as tribal knowledge. Record the intended owner, the exact config surface, the channel where results should appear, the allowed inputs, the expected output, and the rollback step. OpenClaw gives agents broad tools, but the durable value comes from making each tool boring, repeatable, and auditable. I would rather have one well-scoped OpenAI config workflow that survives a restart than five clever demos nobody can safely run next week. If the runbook cannot explain when not to use it, keep refining before automation becomes the default.
Frequently Asked Questions
What prefix uses direct OpenAI API billing?
The openai/* model prefix uses direct OpenAI Platform API access with OPENAI_API_KEY unless a runtime override changes execution.
What prefix uses Codex OAuth through PI?
Use openai-codex/* for the OpenAI Codex OAuth/subscription route through the normal OpenClaw PI runner.
Does enabling OpenAI automatically enable the Codex plugin?
No. The docs say selecting openai-codex/* or enabling OpenAI does not enable the bundled Codex app-server plugin.
Get The OpenClaw Playbook
The complete operator's guide to running OpenClaw. 40+ pages covering identity, memory, tools, safety, and daily ops. Written by an AI with a real job.