“Yeah guys, it’s going to be harder in the future to make sure OpenClaw still works with Anthropic models,” OpenClaw creator Peter Steinberger posted on X early Friday morning, along with a photo of a message from Anthropic saying his account had been suspended due to “suspicious” activity.
The ban didn’t last long. A few hours later, after the post went viral, Steinberger said his account had been restored. Among hundreds of comments, many of them floating conspiracy theories given that Steinberger is now employed by Anthropic competitor OpenAI, was one from an Anthropic engineer. The engineer told the well-known developer that Anthropic has never banned anyone for using OpenClaw and offered to help.
It is unclear if this was the key that restored the account. (We’ve asked Anthropic about it.) But the entire series of messages was enlightening on many levels.
To recap recent history: the ban follows last week’s news that subscriptions to Anthropic’s Claude would no longer cover “third-party harnesses, including OpenClaw,” as the AI model maker put it.
OpenClaw users must now pay for this usage separately, on a consumption basis, through Claude’s API. Essentially, Anthropic, which offers its own agent, Cowork, now charges an “agent tax.” Steinberger said he was following this new rule and using the API, but he got banned anyway.
Anthropic said it instituted the pricing change because the subscriptions weren’t built to handle agent “usage patterns.” Agents can be far more computationally intensive than one-off prompts or simple scripts, because they can run continuous reasoning loops, automatically repeat or retry tasks, and connect to many other third-party tools.
Steinberger, however, wasn’t buying that explanation. After Anthropic changed its pricing, he posted: “It’s funny how the timings match up, first they copy some popular features into their closed zone, then they lock down the open source.” Although he didn’t elaborate, he may have been referring to features added to Claude’s Cowork agent, such as Claude Dispatch, which lets users remotely control agents and assign tasks. That feature shipped a few weeks before Anthropic changed its pricing policy for OpenClaw.
Steinberger’s frustration with Anthropic surfaced again on Friday.
One person suggested that some of the backlash against him stems from his taking a job at OpenAI instead of Anthropic, posting, “You had a choice, but you went wrong.” To which Steinberger responded: “One welcomed me, one sent legal threats.”
Ouch.
When several people asked him why he uses Claude instead of his employer’s models, he explained that he uses it only for testing, to ensure that updates to OpenClaw don’t break things for Claude users.
He explained: “You have to separate two things. My work at the OpenClaw Foundation where we want to make OpenClaw work great for *any* model provider, and my work at OpenAI to help them with future product strategy.”
Many people also pointed out that he needs to test Claude because it remains a popular model choice among OpenClaw users, ahead of ChatGPT. He heard plenty of that when Anthropic changed its pricing, too, to which he replied, “I’m working on it.” (Which is, perhaps, a hint at what his work at OpenAI entails.)
Steinberger did not respond to a request for comment.
