GPT-5.4: Analysis of 1M Token Context and API Performance

The Pitch
GPT-5.4 introduces a 1M token context window and native computer-use capabilities through its unified ‘Codex’ intelligence. OpenAI is positioning this as the primary engine for complex agentic workflows, though early developer feedback suggests the performance does not always match the price point.
Under the Hood
GPT-5.4 supports 1M tokens of input with a 128k output ceiling (source: llm-stats.com). Standard API costs are set at $2.50/M input and $15.00/M output (source: OpenAI Pricing via HN/OpenRouter). The model’s native computer-use success rate is 75.0% on the OSWorld-Verified benchmark, which sits above the human baseline (source: Techzine Europe).
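At those rates, per-request cost is simple arithmetic. A minimal sketch, assuming the listed standard pricing; the function and constant names are illustrative and not part of any OpenAI SDK:

```python
# Illustrative per-request cost estimate at the reported rates
# ($2.50/M input, $15.00/M output). Names are hypothetical.
INPUT_RATE_PER_M = 2.50
OUTPUT_RATE_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one API call."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# e.g. a 200k-token prompt with a 4k-token completion:
# request_cost(200_000, 4_000) -> 0.50 + 0.06 = 0.56 USD
```

Even a modest agentic loop that re-reads a 200k-token context ten times per task lands around $5 in input alone, which is why the long-context surcharge below matters.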
However, the architecture suffers from significant operational overhead. Developers report 'Fast mode' latency reaching up to 2 minutes for reasoning-heavy tasks (source: HN Thread). Furthermore, a stealth pricing mechanism applies a 2x input multiplier for any prompt exceeding 272k tokens in specific configurations (source: Cursor/Substack analysis).
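The reported 272k threshold makes cost discontinuous. A sketch of how that surcharge would bill, under one assumption the sources do not settle: whether the 2x multiplier applies to the whole prompt or only the excess (modeled here as the whole prompt, the worse case):

```python
# Hypothetical model of the reported long-context surcharge:
# input billed at 2x once the prompt exceeds 272k tokens.
# Whether the multiplier hits the full prompt or only the excess
# is an ASSUMPTION; this sketch takes the full-prompt reading.
LONG_CONTEXT_THRESHOLD = 272_000
INPUT_RATE_PER_M = 2.50

def input_cost(input_tokens: int) -> float:
    """Estimated USD input cost, doubling the rate past the threshold."""
    rate = INPUT_RATE_PER_M
    if input_tokens > LONG_CONTEXT_THRESHOLD:
        rate *= 2  # assumed: 2x applies to the entire prompt
    return (input_tokens / 1_000_000) * rate

# 272k prompt: 272_000 / 1e6 * 2.50 = 0.680 USD
# 273k prompt: 273_000 / 1e6 * 5.00 = 1.365 USD (2x past the threshold)
```

Under this reading, adding 1k tokens at the boundary roughly doubles the input bill, which is why truncating or summarizing context just under 272k can be the cheaper design.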
The model’s reliability in long-context sessions is currently under scrutiny. Users note a tendency for the model to 'forget' recent instructions or hallucinate that a task is finished when it is not (source: HN / Every.to). This represents a regression in instruction following when compared to Gemini 3.
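If the model can hallucinate that a task is finished, the obvious defensive pattern is to never accept its completion claim without an independent check. A minimal sketch of that guard; every name here is a hypothetical stand-in for your own agent and verifier code, not an API:

```python
from dataclasses import dataclass

# Hypothetical guard against the reported "task finished" hallucination:
# the model's completion claim is only accepted once an external
# verifier confirms it. All names are illustrative.

@dataclass
class StepResult:
    claims_done: bool
    output: str

def run_until_verified(step_fn, verify_fn, prompt: str, max_steps: int = 5) -> StepResult:
    """Retry agent steps until an external check confirms completion."""
    for _ in range(max_steps):
        result = step_fn(prompt)
        if result.claims_done and verify_fn(result):
            return result  # claim independently confirmed
        # Re-assert the instruction, since long-context sessions
        # reportedly drop recent directives.
        prompt += "\nPrevious attempt was incomplete; continue the task."
    raise RuntimeError("task not verifiably complete after retries")

# Toy demo: the "model" claims done immediately, but the verifier
# only accepts output containing "DONE" on the second attempt.
calls = {"n": 0}
def fake_step(prompt):
    calls["n"] += 1
    return StepResult(claims_done=True, output="DONE" if calls["n"] > 1 else "partial")

result = run_until_verified(fake_step, lambda r: r.output == "DONE", "do the task")
```

The verifier can be anything ground-truth: a test suite, a file diff, an HTTP check. The point is that the model's own "done" signal is treated as a hint, not a terminal state.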
There are still several gaps in the performance data. We do not yet know how GPT-5.4 compares to Claude 4.6 Opus on SWE-bench Pro, as those benchmarks are expected in late March 2026. Additionally, OpenAI has not clarified why its 'Ask ChatGPT' integration is currently unable to summarize its own launch URL.
- 1M token input/128k output (Source: llm-stats.com)
- 75.0% success rate on OSWorld (Source: Techzine Europe)
- Integrated tool-search reduces token use by 47% (Source: Tom's Guide)
- Knowledge cutoff: August 31, 2025 (Source: OpenAI)
- Standard pricing: $2.50/M input, $15.00/M output (Source: HN)
Marcus's Take
GPT-5.4 is technically proficient but operationally fragile for production use. The 2-minute latency spikes and stealth pricing for long-context windows make it a liability for real-time agentic systems. While the 1M context window looks compelling on paper, the state management failures and instruction 'forgetting' mean it cannot be trusted for mission-critical deployments. Stick to Claude 4 Sonnet for reliability or Gemini 2.5 for speed; skip GPT-5.4 until OpenAI solves the latency and pricing transparency issues.
Ship clean code,
Marcus.

Marcus Webb - Senior Backend Analyst at UsedBy.ai