CDX-101 · Module 2

Validation & Model Selection

3 min read

Codex supports multiple models, and choosing the right one for the task matters for both quality and cost. You can switch models mid-session with /model or set a default in your config.toml profile. The key models available are GPT-4.1 (fast, general-purpose), o3 and o4-mini (reasoning-focused), and the Codex-series models (optimized for software engineering with native compaction and tool use).

# Switch model mid-session
/model codex-1

# Set reasoning effort (for o3/o4-mini)
# Options: low, medium, high, xhigh
codex --reasoning-effort high "architect a caching layer"

# Check session status
/status

Reasoning effort levels control how much "thinking" the model does before responding. Low effort is fast and cheap, good for simple edits and one-line fixes. High and xhigh effort activate deeper reasoning chains, better for architectural decisions, complex debugging, and multi-step refactors. Reasoning effort is only available on o-series models (o3, o4-mini).
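In practice, the same flag scales from quick fixes to deep work. The prompts below are illustrative, not prescribed:

# Quick one-line fix: low effort is enough
codex --reasoning-effort low "fix the off-by-one in the pagination loop"

# Tricky concurrency bug: spend more thinking time
codex --reasoning-effort high "debug the intermittent race in the job queue"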

Validation is where the real value lives. After Codex makes changes, use /diff to see exactly what changed, and /review to have Codex analyze its own output for potential issues. The /review command is particularly powerful — it acts as a self-check, catching logical errors, missed edge cases, and convention violations that slipped through the initial generation.
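The full loop, sketched as a session (the prompt shown is illustrative):

# 1. Ask for the change
fix the null check in the auth middleware

# 2. Inspect exactly what was edited
/diff

# 3. Have Codex critique its own diff before you commit
/review

# 4. Confirm context and token usage are still healthy
/status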

You can also configure profiles in config.toml to set default models, reasoning effort, and approval modes per project or context.

# Default profile
[profile.default]
model = "codex-1"
approval = "suggest"

# Fast profile for simple tasks
[profile.fast]
model = "gpt-4.1"
approval = "auto-edit"

# Deep reasoning profile
[profile.architect]
model = "o3"
reasoning_effort = "high"
approval = "suggest"
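Profiles are then selected at launch. A minimal sketch, assuming the CLI accepts a --profile flag matching the profile names defined in config.toml above:

# Cheap, fast edits
codex --profile fast "rename getUserData to fetchUserData"

# Heavyweight design work
codex --profile architect "propose a sharding strategy for the events table"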

Do This

  • Use /diff after every change to verify what was modified
  • Run /review before committing to catch self-inflicted issues
  • Set up profiles for common workflows (fast edits, deep reasoning, full-auto tasks)
  • Match model to task complexity — do not use o3-xhigh for renaming a variable

Avoid This

  • Using the most expensive model for everything "just in case"
  • Skipping /review on multi-file changes, which is exactly when you need it most
  • Ignoring /status; context and token usage directly affect output quality