OpenAI: GPT-5.4 Mini vs Anthropic: Claude Sonnet 4.6
Side-by-side comparison of specs, pricing, benchmark scores, and task rankings. Updated 2026-05-14.
| Spec | OpenAI: GPT-5.4 Mini | Anthropic: Claude Sonnet 4.6 |
|---|---|---|
| Vendor | OpenAI | Anthropic |
| Quality Score | 100 | 100 |
| Input Price | $0.75 / 1M tokens | $3.00 / 1M tokens |
| Output Price | $4.50 / 1M tokens | $15.00 / 1M tokens |
| Context Window | 400,000 | 1,000,000 |
| Max Output | 128,000 | 128,000 |
| Tool Calling | ✓ | ✓ |
| Structured Output | ✓ | ✓ |
| Reasoning Mode | ✓ | ✓ |
| Vision | ✓ | ✓ |
| Audio | - | - |
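The pricing rows above translate directly into per-request cost. A minimal sketch, using only the per-million-token prices from the table (the model names are labels for this example, not API identifiers):

```python
# USD per 1M tokens, taken from the comparison table above.
PRICES = {
    "GPT-5.4 Mini": {"input": 0.75, "output": 4.50},
    "Claude Sonnet 4.6": {"input": 3.00, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request: tokens times price, scaled per million."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10k input tokens and 1k output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
# GPT-5.4 Mini: $0.0120
# Claude Sonnet 4.6: $0.0450
```

At these list prices, Sonnet 4.6 costs roughly 3-4x more per request for the same token mix, which is the main trade-off against its larger 1M-token context window.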
Who wins by task?
| Task | OpenAI: GPT-5.4 Mini | Anthropic: Claude Sonnet 4.6 |
|---|---|---|
| SQL Generation | 133 | 132 |
| Code Completion | 130 | 116 |
| Code Documentation | 130 | 128 |
| Regex Writing | 119 | 116 |
| CSV / Spreadsheet Cleanup | 133 | 132 |
| JSON Extraction | 130 | 120 |
| Bulk Data Labeling | 128 | 116 |
| Long-Document Summarization | 137 | 136 |
| Short-Form Summarization | 123 | 112 |
| Blog Post Writing | 121 | 120 |
Each task score blends capability match, benchmark data, and pricing. See the methodology page for details.
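The page does not publish the exact formula, only that each score combines capability match, benchmark data, and pricing. A hypothetical sketch of such a blend, with illustrative weights that are an assumption, not the site's actual methodology:

```python
# Illustrative only: the real methodology and weights are not published here.
def blended_score(capability: float, benchmark: float, price_value: float,
                  weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """Weighted sum of three 0-100 component scores.

    capability  -- how well the model's features fit the task
    benchmark   -- normalized benchmark performance for the task
    price_value -- cost-effectiveness, higher meaning cheaper per result
    """
    w_cap, w_bench, w_price = weights
    return w_cap * capability + w_bench * benchmark + w_price * price_value

# A cheaper model with equal capability and benchmarks would pull ahead
# through the price_value term, matching the close margins in the table.
print(blended_score(130, 135, 140))  # 146.0 is NOT implied by the table
```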
Related comparisons
- Google: Gemini 3.1 Flash Lite vs OpenAI: GPT-5.4 Mini
- Google: Gemini 3.1 Flash Lite vs Anthropic: Claude Sonnet 4.6
- xAI: Grok 4.3 vs OpenAI: GPT-5.4 Mini
- xAI: Grok 4.3 vs Anthropic: Claude Sonnet 4.6
- Mistral: Mistral Medium 3.5 vs OpenAI: GPT-5.4 Mini
- Mistral: Mistral Medium 3.5 vs Anthropic: Claude Sonnet 4.6
- NVIDIA: Nemotron 3 Nano Omni (free) vs OpenAI: GPT-5.4 Mini
- NVIDIA: Nemotron 3 Nano Omni (free) vs Anthropic: Claude Sonnet 4.6