CDX-301a · Module 2
Model Selection & Profiles
4 min read
Codex supports multiple models through config.toml profiles, and choosing the right model for each task type is a force multiplier. The default model (codex-1) is optimized for code generation and editing. With high reasoning effort, o3 excels at architecture decisions, complex debugging, and code review. GPT-4.1 offers a balance of speed and quality for routine tasks. The profile system lets you switch between these without editing config files: just reference the profile name at session start.
Model selection should be driven by the task, not personal preference:
- Code generation and file editing: codex-1
- Multi-file refactoring with complex dependencies: o3 with high reasoning effort
- Quick fixes and formatting: gpt-4.1-mini for speed
- Code review in CI: o3 for thoroughness
The cost difference between models is significant. Routing routine tasks to cheaper models while reserving expensive models for complex work can cut API spend by 40-60% without quality loss.
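That task-to-profile mapping can be captured in a small lookup so the routing decision is explicit rather than ad hoc. A minimal Python sketch, assuming the profile names from the config below (the router itself is hypothetical, not part of Codex):

```python
# Hypothetical task-to-profile router mirroring the mapping above.
TASK_PROFILES = {
    "generate": "default",    # code generation and editing -> codex-1
    "refactor": "architect",  # complex multi-file refactoring -> o3, high effort
    "quick-fix": "quick",     # formatting, simple fixes -> gpt-4.1-mini
    "review": "review",       # code review, audits -> o3, high effort
    "ci": "ci",               # automated pipeline tasks -> gpt-4.1
}

def pick_profile(task_type: str) -> str:
    """Return the config.toml profile for a task, falling back to 'default'."""
    return TASK_PROFILES.get(task_type, "default")
```

For example, `pick_profile("review")` returns `"review"`, and an unrecognized task type falls back to `"default"` rather than silently routing to an expensive model.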
# Task-specific model profiles

[profile.default]
model = "codex-1"
approval = "suggest"

[profile.architect]
model = "o3"
reasoning_effort = "high"
approval = "suggest"
# Use for: design decisions, complex refactoring, debugging

[profile.review]
model = "o3"
reasoning_effort = "high"
approval = "suggest"
# Use for: code review, security audit, PR analysis

[profile.quick]
model = "gpt-4.1-mini"
approval = "auto-edit"
# Use for: formatting, simple fixes, documentation

[profile.ci]
model = "gpt-4.1"
approval = "full-auto"
# Use for: automated CI tasks, PR review bots
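The 40-60% savings figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses entirely hypothetical per-million-token prices and workload volumes (real pricing and usage vary; substitute your provider's rates):

```python
# Hypothetical $/1M-token blended prices -- illustrative only.
PRICE = {"o3": 40.0, "codex-1": 20.0, "gpt-4.1-mini": 4.0}

# Assumed monthly token volume per task bucket (millions of tokens).
workload = {"generation": 60, "complex": 25, "quick": 15}

# Baseline: sending every task to the most expensive model.
all_o3 = sum(volume * PRICE["o3"] for volume in workload.values())

# Routed: cheap models for routine work, o3 only for complex work.
routed = (workload["generation"] * PRICE["codex-1"]
          + workload["complex"] * PRICE["o3"]
          + workload["quick"] * PRICE["gpt-4.1-mini"])

savings = 1 - routed / all_o3  # ~0.44 under these assumptions
```

Under these made-up numbers, routing saves about 44% of spend, and the savings grow as the share of routine traffic grows.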
Do This
- Create profiles for each task category: generation, review, quick fixes, CI
- Set reasoning_effort explicitly — the default may not match your needs
- Track API costs per profile to validate your routing decisions
- Use auto-edit approval for trusted, low-risk profiles only
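Tracking costs per profile, as the list above recommends, can start as simple as aggregating a usage log. A sketch assuming a hypothetical log format and made-up prices (neither is part of Codex itself):

```python
from collections import defaultdict

# Hypothetical $/1M-token prices; replace with your provider's rates.
PRICE = {"o3": 40.0, "codex-1": 20.0, "gpt-4.1-mini": 4.0}

# Hypothetical usage log: (profile, model, tokens_used) per session.
usage_log = [
    ("architect", "o3", 1_200_000),
    ("quick", "gpt-4.1-mini", 3_500_000),
    ("default", "codex-1", 2_000_000),
    ("architect", "o3", 800_000),
]

def cost_per_profile(log):
    """Sum spend per profile so routing decisions can be validated."""
    totals = defaultdict(float)
    for profile, model, tokens in log:
        totals[profile] += tokens / 1_000_000 * PRICE[model]
    return dict(totals)
```

A weekly run of something like this is enough to see whether an expensive profile is quietly absorbing routine traffic that a cheaper one should handle.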
Avoid This
- Using the most expensive model for every task: it wastes budget and often adds latency
- Setting full-auto approval on high-capability models without sandbox restrictions
- Forgetting a CI-specific profile: interactive defaults waste time in pipelines
- Ignoring reasoning_effort: it dramatically affects both quality and cost for o3