Anthropic’s Super Bowl ad, one of four spots the AI lab dropped on Wednesday, begins with the word “BETRAYAL” boldly splashed across the screen. The camera cuts to a man earnestly asking a chatbot (apparently meant to represent ChatGPT) for advice on how to talk to his mom.
The bot, portrayed by a blonde woman, offers some classic advice. Start by listening. Try a nature walk! And then the conversation turns into an ad for a fictitious (hopefully!) cougar dating site called Golden Encounters. Anthropic closes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.
Another commercial features a slight young man asking for advice on building a six-pack. After he gives the bot his height, age, and weight, it serves him an ad for height-enhancing insoles.
Anthropic’s ads smartly target OpenAI users, following that company’s recent announcement that ads will be coming to ChatGPT’s free tier. And they immediately caused a stir, making headlines that Anthropic “taunts,” “skewers,” and “dunks on” OpenAI.
They are pretty funny, as even Sam Altman admitted on X, where he laughed at them. But apparently he didn’t really find them funny: they inspired him to write a novella-length rant calling his rival “dishonest” and “authoritarian.”
In that post, Altman explains that an ad-supported tier is intended to shoulder the cost of offering ChatGPT for free to its many millions of users. ChatGPT remains the most popular chatbot by a wide margin.
But OpenAI’s CEO insisted the ads were “disingenuous,” implying as they do that ChatGPT would twist a conversation to insert an ad (and possibly an off-color product, to boot). “We’re not stupid and we know our users would reject it,” he wrote.
Indeed, OpenAI has promised that ads will be separate, clearly flagged, and never interfere with a conversation. But the company also said it plans to make them conversation-specific — which is exactly the behavior Anthropic’s ads lampoon. As OpenAI explained in its blog post, “We plan to test ads at the bottom of ChatGPT responses when there is a relevant sponsored product or service based on your current conversation.”
Altman then proceeded to hurl some equally questionable claims at his rival. “Anthropic serves an expensive product to rich people,” he wrote. “We also feel strongly about bringing AI to billions of people who can’t afford subscriptions.”
But Claude also has a free tier, with subscriptions at $0, $17, $100, and $200. ChatGPT’s tiers are $0, $8, $20, and $200. One could argue the pricing levels are fairly equivalent.
Altman also claimed in his post that “Anthropic wants to control what people do with AI.” He claims Anthropic blocks “companies it doesn’t like,” such as OpenAI, from using Claude Code, and said Anthropic tells people what they can and can’t use AI for.
It’s true that Anthropic’s entire marketing pitch from day one has been “responsible AI.” The company was founded by former OpenAI executives, after all, who said they were concerned about AI safety while they worked there.
However, both chatbot companies have usage policies and AI guardrails, and both talk about AI safety. And while OpenAI allows ChatGPT to be used for erotica and Anthropic doesn’t, OpenAI, like Anthropic, has decided that some content should be blocked, especially when it comes to mental health.
Still, Altman took this Anthropic-tells-you-what-to-do argument to an extreme when he accused Anthropic of being “authoritarian.”
“An authoritarian corporation won’t get us there on its own, to say nothing of the other obvious dangers. It’s a dark road,” he wrote.
Using “authoritarian” in a rant about a cheeky Super Bowl ad is misplaced, at best. It is particularly careless given the current geopolitical environment, in which protesters around the world have been killed by agents of their own governments. Business rivals have been duking it out in ads since the beginning of time, but Anthropic has clearly struck a nerve.
