Anthropic Strikes Compute Deal with SpaceX, Raises Claude API and Claude Code Limits
Anthropic has entered into a partnership with SpaceX that will significantly expand its computing infrastructure, a move the AI safety company says directly enables it to lift usage restrictions on some of its most in-demand developer tools.
The announcement, made on May 6, marks a notable moment in Anthropic’s ongoing push to scale the infrastructure behind its Claude family of models — and signals growing competition in the race for AI compute capacity.
What Has Changed for Developers
Effective immediately, Anthropic has made three adjustments to how customers access Claude.
Claude Code’s five-hour rate limits have been doubled across Pro, Max, Team, and seat-based Enterprise plans. The company has also removed the peak-hours limit reduction that previously applied to Pro and Max accounts — a restriction that frustrated developers working across time zones, including those building on the platform from Africa and other regions outside North America.
On the API side, Anthropic has substantially raised rate limits for its Claude Opus models, a change that matters most for enterprise customers and developers building at scale.
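Even with the raised ceilings, high-volume integrations will still occasionally hit a limit, and the standard defence is to retry with exponential backoff when the API returns an HTTP 429. Below is a minimal, standard-library sketch of that pattern; the `RateLimitedError` class and `call_with_backoff` helper are illustrative stand-ins, not names from Anthropic's SDK.

```python
import random
import time


class RateLimitedError(Exception):
    """Illustrative stand-in for the error an SDK raises on HTTP 429."""


def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on RateLimitedError with exponential backoff.

    Waits base_delay * 2**attempt seconds, plus up to one second of
    random jitter, between attempts; re-raises after max_retries failures.
    The sleep function is injectable so the helper can be tested without
    actually waiting.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitedError:
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * (2 ** attempt) + random.random())
```

In a real integration you would catch your SDK's own rate-limit exception instead, and honour any retry-after hint the response carries rather than relying on the fixed schedule alone.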
The SpaceX Deal
At the centre of the announcement is a new agreement granting Anthropic access to the full compute capacity at SpaceX’s Colossus 1 data centre. According to Anthropic’s announcement, this adds more than 300 megawatts of capacity — equivalent to over 220,000 NVIDIA GPUs — and is expected to come online within the month.
Separately, the two companies have expressed interest in exploring orbital AI compute infrastructure, with Anthropic noting ambitions for multiple gigawatts of space-based capacity in partnership with SpaceX.
Part of a Broader Infrastructure Bet
The SpaceX deal does not stand alone; Anthropic has been assembling a large compute portfolio in recent months. The company previously announced an agreement with Amazon for up to five gigawatts of capacity, including additional inference infrastructure in Asia and Europe. A five-gigawatt arrangement with Google and Broadcom is set to come online from 2027, a strategic partnership with Microsoft and NVIDIA covers $30 billion in Azure capacity, and a $50 billion commitment to American AI infrastructure runs through Fluidstack.
Claude runs on a range of hardware, including AWS Trainium chips, Google TPUs, and NVIDIA GPUs.
International Expansion and Data Residency
Anthropic flagged that a portion of its capacity growth will be directed internationally, citing demand from enterprise clients in regulated sectors — financial services, healthcare, and government — who require in-region infrastructure to satisfy data residency and compliance obligations.
For African enterprises exploring Claude integrations, this signals that data localisation considerations are on Anthropic’s radar, even if the continent is not yet named in current expansion plans. The company also stated it intends to partner with democratic countries whose legal and regulatory environments can support infrastructure investments of this scale.
What It Means
The practical effect for developers is more headroom. Higher rate limits reduce one of the more common friction points when building with Claude at production scale. The removal of peak-hour restrictions should also make Claude Code more predictable as a daily development tool.
More broadly, the compute agreements Anthropic is assembling suggest the company is preparing for a significant step-up in usage — whether from current customers scaling up workloads, or from new enterprise deployments yet to come.

