April 6, 2026 · 19 min read · 4 providers

Claudpocalypse: Anthropic Cuts Claude OAuth Access

Anthropic blocked the "Login with Claude" OAuth on April 4, 2026, ending third-party agent access (e.g., OpenClaw) and triggering major cost increases and migration disruption for affected users.

Key Finding

An autonomous agent can consume substantially more compute cost than a low fixed monthly subscription fee—for example, a user paying $20/month for Claude Pro could use far more than $20 worth of compute, and a full-day agent run may cost roughly $1,000 to $5,000 in API-equivalent resources depending on workload.

High confidence · supported by openai, perplexity
Justin Furniss

@Parallect.ai and @SecureCoders. Founder. Hacker. Father. Seeker of all things AI

gemini-lite · grok-premium · openai · perplexity

Claudpocalypse: Anthropic's April 2026 Third-Party API Access Cutoff — Cross-Provider Analysis


Executive Summary

  • The policy is real and already in effect. Effective April 4, 2026 at 12:00 PM PT, Anthropic blocked Claude Pro and Max subscription OAuth tokens from powering third-party agent frameworks, most prominently OpenClaw [3]. This is not a rumor or gradual rollout — enforcement began at a specific timestamp.

  • The economic driver is unambiguous. A single autonomous OpenClaw agent running for one day could consume $1,000–$5,000 in equivalent API compute [2], while the user paid only $20–$200/month for their subscription. With 135,000+ OpenClaw instances estimated online [65], Anthropic's potential monthly liability under the old model ran into the hundreds of millions of dollars.

  • This is a targeted OAuth restriction, not a broad API shutdown. Users who already had direct Anthropic API keys or routed through services like OpenRouter were entirely unaffected [2]. Only the "Login with Claude" OAuth flow — which let subscription holders authenticate into third-party apps — was blocked.

  • The transition path exists but is costly. Anthropic offered a one-time credit equal to one month's subscription fee (redeemable until April 17, 2026), discounted "Extra Usage" bundles, and a path to full API key access [3]. For heavy users, the real-world cost increase is reported at 10–50x their prior monthly spend [19].

  • The broader ecosystem impact is significant. OpenClaw's creator Peter Steinberger had already joined OpenAI in February 2026 [2], the policy is accelerating developer migration to competitor platforms [2], and the event signals a structural shift in how frontier AI companies will price and gate agentic compute going forward.


Cross-Provider Consensus

The following findings were independently confirmed by multiple providers and represent the highest-confidence conclusions of this analysis.


1. Enforcement began at exactly 12:00 PM PT on April 4, 2026

  • Confirmed by: Gemini-Lite [2], Grok-Premium [15], OpenAI [2], Perplexity [3]
  • Confidence: HIGH
  • All four providers cite this precise timestamp, with Grok additionally noting the 3:00 PM ET equivalent. The specificity and cross-source agreement are strong.

2. The restriction targets OAuth subscription tokens, not API keys

  • Confirmed by: Grok-Premium [11], OpenAI [2], Perplexity [2]
  • Confidence: HIGH
  • Users with direct API keys or OpenRouter routing were unaffected. Only the "Login with Claude" OAuth method — which allowed subscription holders to authenticate into third-party apps — was blocked. This is a technically precise and consequential distinction.

3. Boris Cherny, Head of Claude Code at Anthropic, was the primary public spokesperson

  • Confirmed by: Gemini-Lite [2], Grok-Premium [15], Perplexity [2]
  • Confidence: HIGH
  • Cherny announced the change via X and email to affected users, citing "outsized strain" on systems and incompatibility with subscription-tier economics.

4. Peter Steinberger created OpenClaw and joined OpenAI in February 2026

  • Confirmed by: Gemini-Lite [2], Grok-Premium [2], Perplexity [2]
  • Confidence: HIGH
  • The timing — OpenClaw's creator departing for a competitor two months before Anthropic banned his tool — is one of the most narratively significant facts in this story.

5. OpenClaw was previously known as Clawdbot, renamed after a trademark dispute with Anthropic

  • Confirmed by: Gemini-Lite [2], Grok-Premium [2], Perplexity [3]
  • Confidence: HIGH
  • The renaming occurred in January 2026 [13]. Grok notes the original name played on Anthropic's Claude/Clawd mascot. This establishes a prior adversarial relationship between Anthropic and the OpenClaw project predating the April ban.

6. Anthropic's stated rationale: third-party tools bypass prompt caching and create unsustainable compute load

  • Confirmed by: Gemini-Lite [2], Grok-Premium [2], OpenAI [2], Perplexity [2]
  • Confidence: HIGH
  • All four providers independently cite this technical explanation. Anthropic's first-party tools (Claude.ai, Claude Code, Claude Cowork) are engineered with prompt caching and telemetry optimizations; external tools hit the model with raw, uncached requests.

7. A single autonomous agent session could consume $1,000–$5,000 in equivalent API costs

  • Confirmed by: Gemini-Lite [3], Grok-Premium [19], OpenAI [2], Perplexity [3]
  • Confidence: HIGH
  • This figure appears consistently across all four providers and multiple underlying sources. It is the core economic justification for the policy change.

8. Anthropic offered a one-time credit equal to one month's subscription fee, redeemable until April 17, 2026

  • Confirmed by: Gemini-Lite [2], Grok-Premium [11], OpenAI [13]
  • Confidence: HIGH
  • Three providers confirm both the credit and the April 17 deadline. Perplexity does not specifically mention this deadline but does not contradict it.

9. Users face 10–50x cost increases to maintain equivalent agentic usage

  • Confirmed by: Gemini-Lite [2], Grok-Premium [19], OpenAI [2]
  • Confidence: HIGH
  • Gemini cites "up to 50x," Grok cites "10–50x depending on usage," and OpenAI corroborates the $1,000–$5,000/day figure against a $20/month baseline. The range is consistent across providers.

10. First-party Anthropic tools (Claude.ai, Claude Code, Claude Cowork) remain fully covered under subscriptions

  • Confirmed by: Grok-Premium [11], OpenAI [2], Perplexity [2]
  • Confidence: HIGH
  • Claude Code is explicitly carved out from restrictions. This is a deliberate competitive moat, not a blanket agentic ban.

11. Significant community backlash occurred on Reddit, Hacker News, X, and YouTube

  • Confirmed by: Grok-Premium [2], OpenAI [2], Perplexity [3]
  • Confidence: HIGH
  • Multiple Reddit threads, Hacker News discussions, and YouTube videos are cited across providers. The community response was immediate and vocal.

Unique Insights by Provider

Perplexity

  • Detailed enforcement timeline predating April 4. Perplexity is the only provider to reconstruct the full enforcement chronology: hints from Anthropic engineers in January 2026 [2], a formal Terms of Service update on February 20, 2026 explicitly prohibiting OAuth use in third-party tools [3], server-side restrictions deployed between February and March [2], and tightened peak-hour usage limits (5 a.m.–11 a.m. PT) in late March affecting approximately 7% of users [2]. This timeline reveals the April 4 cutoff was the culmination of a months-long enforcement escalation, not a sudden decision.

  • Quantified financial exposure. Perplexity alone attempts to calculate Anthropic's aggregate liability: $8,200–$10,600 monthly loss per agentic power user [4], multiplied across 135,000+ instances [65], yielding a potential monthly liability in the hundreds of millions of dollars [2]. This is the most specific financial framing in the dataset.

  • OpenClaw's GitHub scale. Perplexity uniquely reports that OpenClaw had accumulated 335,000 GitHub stars by March 2026 [2], reportedly surpassing React — a metric that contextualizes why this tool specifically triggered Anthropic's response.

  • Negotiation delay. Perplexity reports [3] that Peter Steinberger and Dave Morin attempted to negotiate with Anthropic and succeeded in delaying enforcement by approximately one week from Anthropic's original timeline. No other provider mentions Dave Morin or a specific delay duration.

  • Typical subscriber token consumption baselines. Perplexity provides the only quantified comparison: a typical Pro subscriber uses 20,000–40,000 tokens/day; a Max subscriber uses 80,000–220,000 tokens/day [7]. This makes the agentic overconsumption problem concrete.

  • Security exposure. Perplexity cites a Bitdefender report [65] that 135,000 OpenClaw instances were exposed to the internet, raising security concerns independent of the billing dispute.


Grok-Premium

  • "Claudpocalypse" as a recurring community phenomenon. Grok is the only provider to contextualize this event within a broader pattern [10], noting that "Claudpocalypse" is a community-coined term for recurring periods of user frustration with Anthropic involving rate limits, capacity constraints, or policy shifts. This frames April 2026 as the latest — and most severe — instance of a recurring dynamic rather than an isolated event.

  • Technical mechanics of cache-busting by agents. Grok provides the most granular technical explanation [17] of why agents defeat prompt caching: system prompt changes, new tool injections, timing injections, and spaced-out requests all break cache continuity. This is a more specific technical argument than other providers offer.

  • Boris Cherny submitted PRs to OpenClaw. Grok [15] and Perplexity [14] both note that Cherny submitted pull requests to improve OpenClaw's prompt cache hit rates to ease the API transition — a detail that somewhat softens the adversarial framing and suggests Anthropic attempted technical collaboration alongside the policy enforcement.

  • "SaaSpocalypse" framing. Grok uniquely surfaces [23] the claim that Anthropic's broader agent stack (Claude + Code + Cowork) has been blamed for wiping out $200 billion in SaaS market value by automating tasks across services. Whether accurate or hyperbolic, this framing represents a significant secondary narrative around the same policy period.

  • Claude 4 context. Grok is the only provider to explicitly connect this event to Claude 4's release in May 2025 [18], noting its emphasis on agent capabilities and strong SWE-bench performance — establishing why agentic demand surged to unsustainable levels in the first place.


OpenAI (Provider)

  • January 9, 2026 OAuth block as a distinct prior enforcement action. OpenAI's report [2] identifies a specific January 9, 2026 update that blocked Claude OAuth tokens from Free, Pro, and Max accounts in apps outside official Claude products — a distinct enforcement step that predates the February TOS update and the April cutoff. This adds granularity to the enforcement timeline.

  • Discounted bundles quantified at up to 30% off. OpenAI is the only provider to specify the discount magnitude on transition bundles [13] — up to 30% off — giving affected users a concrete number to evaluate.

  • OpenAI's contrasting policy explicitly noted. OpenAI's report [2] is the only one to directly state that OpenAI confirmed ChatGPT subscriptions can be used in external tools, framing this as a competitive differentiator that accelerated developer migration away from Claude.

  • Anthropic's older Agent SDK included in the ban. OpenAI uniquely notes [2] that even Anthropic's own older Agent SDK was included in the OAuth restriction — suggesting the ban was broader than just consumer-facing third-party tools.


Gemini-Lite

  • "Extra Usage" bundles as a distinct product offering. While other providers mention extra usage, Gemini-Lite [3] most clearly distinguishes between two separate transition paths: (1) the commercial pay-as-you-go API and (2) "Extra Usage" bundles tied to the Claude account. This distinction matters for users who want to stay within the Claude subscription ecosystem rather than moving to raw API access.

  • Anthropic's "big fan of open source" statement. Gemini-Lite [2] is the only provider to surface Anthropic's explicit public statement that it "remains a big fan of open source" — a PR framing that other providers omit, likely because it sits in tension with the policy's practical effect on open-source tools like OpenClaw.


Contradictions and Disagreements

Contradiction 1: The Name Sequence — Was It Clawdbot → Moltbot → OpenClaw, or Clawdbot → OpenClaw?

Grok-Premium [2] states the tool was renamed to Moltbot in January 2026 after Anthropic's trademark request, and then later renamed to OpenClaw for stability and trademark clearance.

Gemini-Lite [3] and Perplexity [4] describe only two names: Clawdbot → OpenClaw, with the trademark dispute as the trigger.

Assessment: Grok's three-step sequence (Clawdbot → Moltbot → OpenClaw) is more detailed and cites [13] and [14], which other providers also cite for the renaming. The Moltbot intermediate step is plausible and consistent with [45] (trendingtopics.eu: "Clawdbot Becomes Moltbot After Anthropic Trademark Issue"). However, Perplexity and Gemini omit Moltbot entirely. Do not treat the name sequence as settled — the Moltbot step requires independent verification.


Contradiction 2: How Much Notice Did Users Receive?

Perplexity [2] states the announcement came with less than 24 hours' notice.

Grok-Premium [15] and Gemini-Lite [1] do not specify the notice period but imply it was short.

Perplexity also states [3] that negotiations by Steinberger and Morin delayed enforcement by approximately one week from Anthropic's original timeline — implying some users or insiders had more than 24 hours of advance knowledge, even if the public announcement was last-minute.

Assessment: These claims are not necessarily contradictory (the public notice could be <24 hours while insiders negotiated for a week), but the framing differs significantly. The "less than 24 hours" characterization may be accurate for the general user population while obscuring that key stakeholders had longer warning. Flag this as a nuance gap rather than a hard contradiction.


Contradiction 3: Scope of the Ban — Does It Cover All Third-Party Tools or Just OpenClaw Initially?

Grok-Premium [2] states the change "was rolled out starting with OpenClaw" and "applies or will apply to other third-party tools" — implying a phased rollout.

OpenAI's report [2] states "On April 4 onward, Claude's subscription could no longer be used in third-party agents at all" — implying immediate universal application.

Perplexity [3] describes the ban as applying to "third-party agent frameworks" broadly, not just OpenClaw.

Assessment: The weight of evidence favors a broad, immediate ban rather than a phased OpenClaw-first rollout. However, Grok's phrasing may reflect that OpenClaw was the most prominent and publicly discussed case, with other tools affected simultaneously but less visibly. This distinction matters for developers using non-OpenClaw third-party frameworks — they should verify their specific tool's status.


Contradiction 4: Financial Loss Estimates

Perplexity [4] provides a specific per-user monthly loss figure of $8,200–$10,600 and extrapolates a hundreds of millions of dollars monthly aggregate liability across 135,000 instances [65].

No other provider attempts this aggregate calculation or validates the per-user loss figure independently.

Assessment: Perplexity's financial extrapolation is the most specific but also the most speculative. The $8,200–$10,600 figure appears to be derived from the $1,000–$5,000/day API cost estimate (which is cross-confirmed) minus subscription revenue, but the methodology is not transparent. The 135,000-instance figure comes from Bitdefender [65], which measured internet-exposed instances — not total users. Treat the aggregate liability figure as illustrative rather than authoritative.


Detailed Synthesis

Background: The Ecosystem That Made This Inevitable

To understand the April 2026 policy change, it is necessary to understand the ecosystem that preceded it. Claude 4, released in May 2025, represented a step-change in agentic capability [18], with strong performance on SWE-bench and long-running task benchmarks. This release, combined with Anthropic's flat-rate subscription pricing ($20/month for Pro, $200/month for Max [3]), created a powerful economic incentive for developers to build autonomous agent frameworks on top of Claude subscriptions rather than paying per-token API rates.

OpenClaw — originally called Clawdbot, renamed to Moltbot in January 2026 after a trademark dispute with Anthropic [13], and subsequently rebranded as OpenClaw [14] — was the most prominent beneficiary of this dynamic. Created by Austrian developer Peter Steinberger [2], the open-source framework enabled autonomous task execution across a user's applications: messaging platforms, inboxes, calendars, travel bookings, and complex multi-step workflows [12]. By March 2026, OpenClaw had accumulated 335,000 GitHub stars — reportedly surpassing React [2] — and was running on an estimated 135,000+ internet-connected instances [2].

The tool's growth was inseparable from its pricing model. OpenClaw could accept Claude models via either direct API keys or subscription OAuth tokens [2]. The OAuth path allowed users to authenticate with their existing Claude Pro or Max subscription, effectively giving autonomous agents access to "unlimited" compute within the subscription's rate limits. A user paying $20/month could, in theory, run agents that consumed the equivalent of $1,000–$5,000 per day in API costs [5].
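The subsidy gap that the OAuth path created can be made concrete with back-of-envelope arithmetic. The sketch below is a minimal illustration: the per-token rates and workload sizes are assumptions for demonstration, not Anthropic's actual 2026 pricing.

```python
# Illustrative assumptions, not Anthropic's actual 2026 pricing.
INPUT_RATE = 3.00 / 1_000_000    # $ per input token (assumed)
OUTPUT_RATE = 15.00 / 1_000_000  # $ per output token (assumed)

def api_equivalent_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a workload if billed at pay-as-you-go API rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A typical Pro chat day (~30k tokens) vs. an agent that loops all day,
# re-sending its accumulated context uncached on every call (assumed volumes).
chat_day = api_equivalent_cost(25_000, 5_000)
agent_day = api_equivalent_cost(300_000_000, 10_000_000)

print(f"chat day:  ${chat_day:,.2f}")   # well under a dollar
print(f"agent day: ${agent_day:,.2f}")  # lands in the $1,000-$5,000 range
```

Even with generous error bars on the assumed rates, the two workloads differ by roughly four orders of magnitude — which is the gap a flat $20/month subscription was absorbing.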

The Enforcement Escalation (November 2025 – April 2026)

The April 4 cutoff was not a sudden decision. [Perplexity] reconstructs a months-long enforcement escalation that other providers largely omit. Anthropic's enforcement history traces back to at least November 2025 [7], with engineers beginning to hint publicly about strengthening restrictions by January 2026 [2]. On January 9, 2026, Anthropic deployed an update that blocked Claude OAuth tokens from Free, Pro, and Max accounts in apps outside official Claude products [2] — a significant but underreported prior enforcement step.

On February 20, 2026, Anthropic formally updated its Consumer Terms of Service with explicit language prohibiting the use of subscription OAuth tokens in third-party tools [4]. Between February and March, Anthropic deployed server-side security measures that substantially restricted third-party access [2]. In late March, Anthropic imposed tightened usage limits during peak business hours (5 a.m.–11 a.m. PT), affecting approximately 7% of users according to Anthropic's Thariq Shihipar [2].

Meanwhile, in February 2026, Peter Steinberger accepted a position at OpenAI to work on agent development [3] — a move that would later add a competitive dimension to the narrative. According to [3], Steinberger and investor Dave Morin attempted to negotiate with Anthropic about the timing and scope of the restriction, reportedly succeeding in delaying enforcement by approximately one week from Anthropic's original timeline. The public announcement ultimately came with less than 24 hours' notice for most users [2].

The April 4 Cutoff: What Actually Changed

At 12:00 PM PT (3:00 PM ET) on April 4, 2026, Anthropic began blocking Claude Pro and Max subscription OAuth tokens from powering third-party agent frameworks [4]. Boris Cherny, Head of Claude Code at Anthropic, announced the change via X and in an email to affected users [3].

The technical mechanism is important to understand precisely. [2] reports that Claude Free, Pro, and Max OAuth tokens are now cryptographically scoped to work only when Anthropic's authentication infrastructure can verify that the requesting client is genuinely Claude Code, Claude.ai, Claude Desktop, or Claude Cowork. Anthropic's official tools transmit comprehensive telemetry data with each request; third-party tools cannot replicate this verification. [2] confirms that users relying on actual Claude API keys or routing via OpenRouter were entirely unaffected — only the OAuth subscription authentication method was blocked.
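As a purely illustrative sketch of the gating described above, the authorization decision can be modeled as a function of token type and client identity. The client names and the `telemetry_ok` flag are hypothetical stand-ins; Anthropic's real verification relies on cryptographic scoping and telemetry that third parties cannot replicate.

```python
# Hypothetical model of the April 4 gating: subscription OAuth tokens only
# work when the server can attribute the request to an official client.
OFFICIAL_CLIENTS = {"claude-code", "claude-ai", "claude-desktop", "claude-cowork"}

def authorize(token_kind: str, client_id: str, telemetry_ok: bool) -> bool:
    if token_kind == "api_key":
        # Direct API keys (and OpenRouter routing) were entirely unaffected.
        return True
    if token_kind == "subscription_oauth":
        # Blocked unless the client can prove it is a first-party app.
        return client_id in OFFICIAL_CLIENTS and telemetry_ok
    return False

assert authorize("api_key", "openclaw", False)                  # still works
assert authorize("subscription_oauth", "claude-code", True)     # carved out
assert not authorize("subscription_oauth", "openclaw", False)   # the April 4 block
```

The sketch captures the consequential distinction the providers agree on: the same third-party client keeps working with an API key while losing subscription OAuth access.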

Critically, [11] and [2] both note that Claude Code is explicitly excluded from restrictions and remains fully functional with subscription tokens. This is not a ban on agentic use of Claude — it is a ban on subsidized agentic use through third-party tools, while Anthropic's own agentic product (Claude Code) retains full subscription access.

The Technical and Economic Rationale

Cherny's public statements cited "outsized strain" on systems and incompatibility with subscription-tier economics [3]. The technical explanation centers on prompt caching: Anthropic's first-party tools are engineered to maximize cache hit rates, meaning repeated or similar requests reuse cached results and dramatically reduce compute costs [4]. Third-party tools hit the model with raw, uncached requests.

[17] provides the most granular technical explanation of why agents specifically defeat caching: system prompt changes, new tool injections, timing injections, and spaced-out requests all break cache continuity. Agents can make frequent calls with large accumulated context, use hundreds of thousands of tokens per session, and run in parallel — consuming far more resources than typical user chats [17]. Cached tokens are cheap or free for Anthropic; uncached agentic tokens are not.
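A toy model of prefix-keyed caching shows why the behaviors listed above are so costly. This is a simplification — real provider caches operate on token prefixes with expiry windows — but the cache-key mechanics are the point: any mutation of the prompt prefix produces a miss.

```python
import hashlib

# Toy prefix cache: the key is a hash of the (system prompt, tools) prefix.
cache: set[str] = set()

def request(system_prompt: str, tools: tuple[str, ...]) -> str:
    key = hashlib.sha256(repr((system_prompt, tools)).encode()).hexdigest()
    if key in cache:
        return "HIT"   # cheap: prefix already processed
    cache.add(key)
    return "MISS"      # expensive: full uncached prefill

# A chat client reuses one stable prefix, so it hits after warm-up.
assert request("You are Claude.", ("search",)) == "MISS"
assert request("You are Claude.", ("search",)) == "HIT"

# An agent that injects the current time into its system prompt
# busts the cache on every single call.
assert request("You are Claude. Time: 09:00", ("search",)) == "MISS"
assert request("You are Claude. Time: 09:01", ("search",)) == "MISS"
```

Tool injections and spaced-out requests fail the same way: each changes or expires the cached prefix, turning what could be cheap cached reads into full-price prefills.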

The economics are stark. A typical Pro subscriber uses 20,000–40,000 tokens across an entire day of conversational work; a Max subscriber uses 80,000–220,000 tokens [7]. An autonomous agent running for a full day can consume the equivalent of $1,000–$5,000 in API costs [4]. [4] estimates Anthropic was absorbing a monthly loss of $8,200–$10,600 per agentic power user under the old system. Multiplied across 135,000 instances [65], this represents a potential monthly liability in the hundreds of millions of dollars — though this extrapolation should be treated as illustrative given its assumptions.

Notably, Cherny also submitted pull requests to improve OpenClaw's prompt cache hit rates to ease the transition to API-based usage [2] — a detail that suggests Anthropic attempted technical collaboration alongside enforcement, and that the relationship was not purely adversarial.

Transition Options and Community Response

Anthropic provided three transition paths [3]: (1) continue with the same Claude login and enable extra usage billed separately per token; (2) purchase discounted "Extra Usage" bundles (up to 30% off per [13]); or (3) switch to a full Claude API key. As a one-time mitigation, Anthropic offered a credit equal to one month's subscription fee, redeemable until April 17, 2026 [3].
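Under stated assumptions, the three paths can be compared on raw monthly cost. The blended token rate and the "heavy user" volume below are illustrative; only the up-to-30%-off bundle discount comes from the reporting above.

```python
# Illustrative cost comparison of the three transition paths.
API_RATE = 3.00 / 1_000_000   # assumed blended $/token, not actual pricing
BUNDLE_DISCOUNT = 0.30        # "up to 30% off" per the reporting

def monthly_cost(tokens: int, path: str, sub_fee: float = 200.0) -> float:
    if path == "extra_usage":  # keep the Max subscription, pay per token on top
        return sub_fee + tokens * API_RATE
    if path == "bundle":       # keep the subscription, buy discounted bundles
        return sub_fee + tokens * API_RATE * (1 - BUNDLE_DISCOUNT)
    if path == "api_key":      # drop the subscription, pay-as-you-go API only
        return tokens * API_RATE
    raise ValueError(path)

# Hypothetical heavy user: part-time agent workloads, ~2B tokens/month.
heavy = 2_000_000_000
for path in ("extra_usage", "bundle", "api_key"):
    print(path, round(monthly_cost(heavy, path)))
# All three land around 20-30x the old $200/month Max fee, consistent
# with the reported 10-50x range for heavy users.
```

The design point is that every path converges on per-token economics: the options differ in discount and bundling, not in whether agentic compute is now metered.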

For heavy users, none of these options replicate the economics of the old model. Reports cite 10–50x cost increases depending on usage patterns [3]. The community response was immediate and intense, with significant backlash on Reddit [4], Hacker News [3], X, and YouTube [2].

[2] notes that OpenAI publicly confirmed ChatGPT subscriptions can be used in external tools — a direct competitive contrast that accelerated developer migration. Some users pivoted OpenClaw setups to OpenAI models; others switched to API keys; others migrated to alternative providers entirely [19].

Broader Context and Strategic Implications

[23] surfaces a secondary narrative: Anthropic's broader agent stack (Claude + Code + Cowork) has been credited — or blamed — for triggering a "SaaSpocalypse," with claims that AI agent capabilities wiped out $200 billion in SaaS market value by automating tasks across services. Whether accurate or hyperbolic, this framing reflects the scale of disruption attributed to the same agentic capabilities that drove OpenClaw's growth.

[2] frames the decision as a fundamental inflection point: Anthropic faces a critical choice between rebuilding developer confidence through improved pricing transparency and ecosystem support, or continuing to prioritize infrastructure efficiency and per-user profitability through stricter controls. The April 4 policy clearly signals the latter direction — at least for now.

For the average Claude user who uses Claude.ai conversationally, nothing changed [2]. The policy's impact is concentrated on the developer and power-user segment that had built workflows around the assumption of subsidized agentic compute. That segment, however, represents the leading edge of AI adoption — and their migration to competitor platforms carries long-term ecosystem implications that extend well beyond the immediate billing dispute.



Go Deeper

Follow-up questions based on where providers disagreed or confidence was low.

What is the complete and accurate renaming sequence of OpenClaw — was there a distinct "Moltbot" phase between Clawdbot and OpenClaw, and what were the specific trademark claims Anthropic made?

Three providers describe only two names (Clawdbot → OpenClaw) while Grok describes three (Clawdbot → Moltbot → OpenClaw). The discrepancy is unresolved and matters for accurately characterizing the prior relationship between Anthropic and the OpenClaw project.

Disagreement · XS tier

What is the actual scope of the April 4 ban — did it apply simultaneously to all third-party OAuth tools, or was OpenClaw specifically targeted first with other tools to follow?

Grok implies a phased rollout starting with OpenClaw, while OpenAI and Perplexity describe an immediate universal ban. Developers using non-OpenClaw agent frameworks (LangGraph, CrewAI, custom harnesses) need clarity on whether they are currently affected or will be affected in a future phase.

Disagreement · S tier

What are the actual migration patterns and market share shifts following the April 4 ban — how many developers moved to OpenAI, switched to API keys, or abandoned agentic Claude use entirely?

Multiple providers [Grok, src_19][OpenAI, src_29] note that developer migration to competitor platforms accelerated, and that OpenAI explicitly positioned its contrasting policy as a differentiator. Quantifying this migration would reveal the real competitive cost of the policy to Anthropic and inform whether the financial calculus was sound.

Implication · M tier

How does Anthropic's new "Extra Usage" bundle pricing and API token rates compare to equivalent usage costs on OpenAI, Google Gemini, and open-source alternatives for agentic workloads?

The 10–50x cost increase figure is cited without a competitive benchmark. If OpenAI or Gemini offer comparable agentic capabilities at lower effective costs — especially given OpenAI's stated policy of allowing subscription use in external tools — this represents a structural competitive disadvantage for Anthropic in the developer segment.

Implication · M tier

What is the status of the February 2026 Claude Code source code leak [src_62][src_66], and did security concerns about third-party tool access (including the 135,000 exposed OpenClaw instances [src_65]) play any role in the timing or scope of the OAuth restriction?

Perplexity cites both a Claude Code source leak and a Bitdefender report on 135,000 exposed OpenClaw instances, while Grok mentions the leak separately. No provider explicitly connects these security events to the OAuth ban, but the timing is suggestive. If security — not just economics — was a driver, the policy's framing as purely an "engineering necessity" would be incomplete.

Low Confidence · S tier

Key Claims

Cross-provider analysis with confidence ratings and agreement tracking.

130 claims · sorted by confidence
1. OpenClaw is an open-source AI agent framework created by Austrian developer Peter Steinberger.
   high · gemini-lite, grok-premium, openai, perplexity · medium.com, businessinsider.com, x.com +10

2. Peter Steinberger joined OpenAI in February 2026, reportedly to work on agent development.
   high · gemini-lite, grok-premium, perplexity · businessinsider.com, x.com, venturebeat.com +2

3. Effective April 4, 2026 at 12:00 PM PT, Anthropic stopped allowing Claude Pro and Max subscription access to be used through third-party tools or agent frameworks, limiting those subscriptions to official Anthropic-controlled clients and authentication flows.
   high · gemini-lite, openai, perplexity · cyberpress.org, mlq.ai, pcmag.com +8

4. Boris Cherny is Head of Claude Code at Anthropic.
   high · gemini-lite, grok-premium, perplexity · medium.com, hyperight.com, venturebeat.com +3

5. Anthropic changed its policy to prevent consumer subscription authentication/tokens from being used by third-party harnesses or external agentic apps, including Claude Pro/Max/Team usage in such tools.
   high · gemini-lite, grok-premium, openai · the-decoder.com, businessinsider.com, natural20.com +5

6. Boris Cherny said Anthropic’s subscriptions were not built for the usage patterns of third-party tools, and users who want to keep using those tools must use Anthropic’s commercial pay-as-you-go API.
   high · gemini-lite, grok-premium, perplexity · platform.claude.com, medium.com, venturebeat.com +3

7. OpenClaw supported Claude subscription OAuth tokens (including Claude Pro/Max subscriptions) and could accept Claude models outside Anthropic’s official apps, sometimes alongside API keys.
   high · grok-premium, openai, perplexity · medium.com, youtube.com, medium.com +4

8. Users wanting to use Claude in third-party/outside tools must enable an Extra Usage billing plan tied to their Claude account, with some variants describing it as pay-per-token or separately billed usage, and one variant adding an enterprise API key option.
   high · gemini-lite, grok-premium, openai · medium.com, techcrunch.com, venturebeat.com +2

9. Third-party agentic tools like OpenClaw generate usage patterns different from interactive chat, often by running continuous, high-volume automated tasks; a single agent running continuously for a day might generate $1,000–$5,000 in API usage.
   high · gemini-lite, grok-premium, openai · x.com, youtube.com, venturebeat.com +2

10. OpenClaw was formerly known as Clawdbot.
    high · gemini-lite, grok-premium, perplexity · businessinsider.com, youtube.com, hyperight.com +3

11. Anthropic’s Max subscription was priced at $200 per month.
    high · gemini-lite, perplexity · medium.com, medium.com, venturebeat.com +3

12. An autonomous agent can consume substantially more compute cost than a low fixed monthly subscription fee—for example, a user paying $20/month for Claude Pro could use far more than $20 worth of compute, and a full-day agent run may cost roughly $1,000 to $5,000 in API-equivalent resources depending on workload.
    high · openai, perplexity · youtube.com, venturebeat.com, youtube.com +2

13. Anthropic offered discounted pre-purchased usage bundles.
    high · gemini-lite, grok-premium · venturebeat.com, theverge.com, thenextweb.com

14. Anthropic provided a one-time credit equal to one month’s subscription fee/cost.
    high · gemini-lite, grok-premium · venturebeat.com, theverge.com, thenextweb.com

15. Anthropic’s commercial API usage was required to be pay-as-you-go and billed per token.
    high · gemini-lite, openai · platform.claude.com, pecollective.com, venturebeat.com +2

Sources

44 unique sources cited across 130 claims.

News & Media · 23 sources
  • venturebeat.com · via gemini-lite, grok-premium, perplexity, openai · 34 claims
  • thenextweb.com · via gemini-lite, grok-premium, perplexity, openai · 32 claims
  • venturebeat.com [7] · via gemini-lite, grok-premium, perplexity, openai · 19 claims
  • theverge.com [2] · via gemini-lite, grok-premium, openai, perplexity · 18 claims
  • medium.com · via gemini-lite, openai, perplexity, grok-premium · 15 claims
  • medium.com [15] · via gemini-lite, grok-premium, perplexity, openai · 14 claims
  • businessinsider.com [5] · via gemini-lite, grok-premium, perplexity, openai · 12 claims
  • medium.com [4] · via gemini-lite, grok-premium, openai, perplexity · 10 claims
  • reddit.com [13] · via gemini-lite, openai, perplexity, grok-premium · 9 claims
  • axios.com [11] · via grok-premium · 9 claims

Topics

claudpocalypse · anthropic claude oauth cutoff · openclaw ban 2026 · third-party agent access · claude subscriptions policy · ai agent compute costs · oauth token restriction

Research synthesized by Parallect AI

Multi-provider deep research — every angle, synthesized.
