Claude Opus 4.7 vs GPT-5 — which frontier model wins?

| | 🧠 Claude Opus 4.7 | 🌌 GPT-5 |
|---|---|---|
| Vendor | Anthropic | OpenAI |
| Price (per 1M tokens) | $15 in · $75 out | $10 in · $30 out |
| Context window | 200K | 400K |
| Cached input | $1.50/M | $2.50/M |

Price. At $10/$30 per million tokens, GPT-5 is 60% cheaper per output token than Opus 4.7's $15/$75. For any workload dominated by output cost, GPT-5 starts ahead.
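To see how the list prices translate into per-request dollars, here's a minimal sketch using the rates quoted above; the model keys and token counts are illustrative, not real API identifiers:

```python
# $/1M-token list prices from the comparison above.
PRICES = {
    "opus-4.7": {"in": 15.00, "out": 75.00},
    "gpt-5":    {"in": 10.00, "out": 30.00},
}

def request_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    """Dollar cost of one request at list prices (no caching)."""
    p = PRICES[model]
    return (in_tokens * p["in"] + out_tokens * p["out"]) / 1_000_000

# An output-heavy request: 2K tokens in, 8K out.
opus = request_cost("opus-4.7", 2_000, 8_000)  # 0.03 + 0.60 = $0.63
gpt5 = request_cost("gpt-5", 2_000, 8_000)     # 0.02 + 0.24 = $0.26
```

The more output-heavy the request, the wider the gap, since the $75-vs-$30 output rate dominates.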

Coding. Opus 4.7 leads SWE-bench Verified (~75%) against GPT-5 (~70%). For coding agents that need the last 5% of capability, Opus is worth it. For most dev work, the gap isn't visible.

Reasoning. GPT-5 leads on GPQA-hard and hard math. OpenAI's training investment in reasoning chains shows up here.

Context. GPT-5 has a 400K window; Opus has 200K. On documents beyond 200K tokens, your options are GPT-5 or Gemini.

Caching. Anthropic's 10% cached-input rate is aggressive — on cache-heavy workloads, Opus's effective input cost drops toward $1.50/M, below GPT-5's $2.50/M cached rate.
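The caching claim is easy to check with a blended-rate formula — a sketch assuming the cached and base rates above, with the hit rate as a free parameter:

```python
def effective_input_rate(base: float, cached: float, hit_rate: float) -> float:
    """Blended $/1M input tokens given a cache hit rate in [0, 1]."""
    return hit_rate * cached + (1 - hit_rate) * base

# At a 90% cache hit rate (rates from the comparison above):
opus = effective_input_rate(15.00, 1.50, 0.9)  # 1.35 + 1.50 = $2.85/M
gpt5 = effective_input_rate(10.00, 2.50, 0.9)  # 2.25 + 1.00 = $3.25/M
```

At a 90% hit rate Opus's blended input rate actually undercuts GPT-5's, despite the higher base price — which is why cache hit rate, not the sticker price, decides input cost for prompt-heavy apps.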

Practical verdict: If your app is cache-heavy and coding-focused, Opus 4.7. If it's one-shot reasoning with moderate context, GPT-5. If you just want the cheaper frontier flag at default settings, GPT-5.
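The verdict above can be encoded as a toy routing function; the model keys are hypothetical labels, and the 200K cutoff comes from the context windows quoted earlier:

```python
def pick_model(cache_heavy: bool, coding_focused: bool, context_tokens: int) -> str:
    """Toy router encoding the practical verdict; thresholds are assumptions."""
    if context_tokens > 200_000:
        return "gpt-5"  # exceeds Opus 4.7's 200K context window
    if cache_heavy and coding_focused:
        return "opus-4.7"
    # One-shot reasoning, moderate context, or "cheapest frontier by default".
    return "gpt-5"
```

In practice you'd route per request, not per app — a long-document summarization call and a cached coding-agent call in the same product can land on different models.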