Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt, part of an effort to attract more developers to the company's popular AI coding models.
For Anthropic's API customers, the company's Claude Sonnet 4 AI model now has a 1-million-token context window. That means the AI can process roughly 750,000 words (more than the entire Lord of the Rings trilogy) or more than 75,000 lines of code in a single request. That's about five times Claude's previous limit (200,000 tokens), and more than double the 400,000-token context window offered by OpenAI's GPT-5.
Claude Sonnet 4's long context will also be available through Anthropic's cloud partners, including Amazon Bedrock and Google Cloud's Vertex AI.
Anthropic has built one of the largest enterprise businesses among AI model developers, largely by selling Claude to AI coding platforms such as Microsoft's GitHub Copilot, Windsurf, and Anysphere's Cursor. While Claude has become a model of choice among developers, GPT-5 could threaten Anthropic's dominance with its competitive pricing and strong coding performance. Anysphere CEO Michael Truell helped OpenAI announce the launch of GPT-5, which is currently the default AI model for new users in Cursor.
Brad Abrams, product lead for Anthropic's Claude platform, told TechCrunch in an interview that he expects AI coding platforms to get "a lot of benefit" from the update. When asked whether GPT-5 had put a dent in Claude's API usage, Abrams downplayed the concern, saying he was "really pleased with the API business and the way it's been growing."
OpenAI generates a large portion of its revenue from consumer subscriptions to ChatGPT, whereas Anthropic's business centers on selling access to AI models to enterprises through an API. That makes AI coding platforms important customers for Anthropic, and it may be why the company is rolling out new perks to retain users in the face of GPT-5.
Last week, Anthropic unveiled Claude Opus 4.1, an updated version of its largest AI model, pushing the company's AI coding capabilities a bit further.
In general, AI models tend to perform better across tasks, particularly software engineering problems, when given more context. For example, an AI model asked to spin up a new feature in your app will likely do a better job if it can see the entire project rather than just a small section of it.
Abrams also told TechCrunch that Claude's large context window can help improve performance on long agentic coding tasks, in which the AI model works autonomously on a problem for minutes or hours. With a large context window, Claude can remember all of its previous steps in these long-horizon tasks.
Some companies have pushed context windows to even greater extremes in claiming their AI models can handle large prompts: Google offers a 2-million-token context window for Gemini 2.5 Pro, while Meta offers a 10-million-token context window for Llama 4 Scout.
Some studies suggest there are limits to how effective large context windows really are, and that AI models struggle to process enormous prompts. Abrams said Anthropic's research team has focused on increasing Claude's "effective context window," not just its raw context window, suggesting the model can genuinely make use of most of the information it is given. However, he declined to reveal Anthropic's exact techniques.
For Claude Sonnet 4 prompts that exceed 200,000 tokens, Anthropic will charge API users more: $6 per million input tokens and $22.50 per million output tokens, up from $3 per million input tokens and $15 per million output tokens.
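The two-tier pricing above can be sketched as a quick back-of-the-envelope calculator. This is a minimal illustration using only the figures cited in the article; the tier threshold and rates are as reported, and actual Anthropic billing may differ.

```python
# Rough cost estimate for a Claude Sonnet 4 API request, based on the
# per-token prices cited in the article. Not an official billing formula.

STANDARD = {"input": 3.00, "output": 15.00}      # $ per million tokens, prompts <= 200K tokens
LONG_CONTEXT = {"input": 6.00, "output": 22.50}  # $ per million tokens, prompts > 200K tokens
THRESHOLD = 200_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    rates = LONG_CONTEXT if input_tokens > THRESHOLD else STANDARD
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# A 500K-token prompt (long-context tier) with a 4K-token reply:
print(round(estimate_cost(500_000, 4_000), 2))  # 3.09
```

At these rates, a prompt that fills most of the new 1-million-token window costs a few dollars per request, which is why long context is pitched at enterprise API customers rather than consumers.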
