In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology have broken down, the Trump administration has labeled Anthropic a supply chain risk, and the AI company has said it will fight that designation in court.
OpenAI, meanwhile, quickly announced a deal of its own, sparking a backlash that saw users uninstall ChatGPT and push Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive resigned over concerns that the announcement was rushed without proper safeguards.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon, as Kirsten wondered, “Are we going to see a little bit of a change of tune?”
Sean pointed out that this is an unusual situation in several ways, in part because OpenAI and Anthropic make products that “nobody can shut up about.” And very importantly, this is a controversy about “how their technologies are or aren’t used to kill people,” so of course it’s going to attract more scrutiny.
However, Kirsten argued, this is a situation that should give “any startup pause.”
Read a preview of our conversation below, which has been edited for length and clarity.
Kirsten: I wonder if other startups are starting to look at what happened between the federal government, specifically the Pentagon, and Anthropic, that debate and that wrestling match, and [take] pause on whether they want to chase federal dollars. Are we going to see a little bit of a change of tune?
Sean: I wonder that too. I think not, to some extent, in the short term, just because when you really think about all the different companies that work with the government, and particularly the Department of Defense, whether they’re startups or more established Fortune 500 companies, [for] many of them that work flies under the radar.
General Motors makes defense vehicles for the military, has done so for a long time, and has worked on all-electric and autonomous versions of those vehicles. There are things like this happening all the time, and it just never really hits. I think the problem OpenAI and Anthropic ran into last week is that these are companies that make products a ton of people use and, more importantly, that no one can shut up about.
So there’s such a focus on them, which of course highlights their involvement to a level that I think most of the other companies that contract with the federal government, and in particular with the war-making elements of it, don’t necessarily have to deal with.
The only caveat I would add is that so much of the tension in this discussion between Anthropic, OpenAI, and the Pentagon is specifically about how their technologies are or aren’t used to kill people, or used in parts of missions that kill people. It’s not just the attention they get and the familiarity we have with their brands; there’s an additional element there that I think is more abstract when you think of General Motors as a defense contractor or whatever.
I don’t think we’re going to see, for example, Applied Intuition or any of these other companies framed as dual-use, just because I don’t see the same spotlight on them, and there’s just not the same kind of common understanding of what their impact might be.
Anthony: This story is so unique and specific to these companies and personalities in so many ways. I mean, there’s been a lot of really interesting thinking about: What’s the role of technology in government? [Of] AI in government? And I think these are all good and worthwhile questions to ask and explore.
I also think, though, that this is a very strange lens through which to look at some of these things, because Anthropic and OpenAI aren’t really that different in many ways, or in the positions they’re taking. It’s not like one company says, “Hey, I don’t want to work with the government,” and the other says, “Yes, I do.” Or one says, “You can do whatever you want,” and [the other is] saying, “No, I want to have limitations.” Both, at least publicly, say: “We want restrictions on how our AI is used.” It just seems that Anthropic dug in a lot harder on: you can’t change the terms that way.
And then beyond that, there also seems to be a level of personality here: Anthropic CEO Dario Amodei and Emil Michael, who many TechCrunch readers may remember from his Uber days and is now [chief technology officer for the Department of Defense], apparently just don’t like each other, according to The Information.
Sean: Yes, there is a very big catfight element here that should not be overlooked.
Kirsten: Yes, a little. There is, but the implications are a bit more intense than that. To back up a bit, what we’re talking about here is the Pentagon and Anthropic getting into a fight that Anthropic seems to have lost, although I have to say Anthropic’s models are still very much used by the military. They’re considered a critical technology, but OpenAI has somehow stepped in, and this is evolving and will likely change by the time this episode comes out.
The fallout has been interesting for OpenAI: we’ve seen a lot of ChatGPT uninstalls, which I think have gone up 295% since OpenAI closed the deal with the DoD.
To me, this is all noise around the really critical and dangerous thing, which is that the Pentagon was seeking to change the terms of an existing contract. That’s really important, and it should give any startup pause, because the political machinery at work right now, particularly at the Department of Defense, seems different. This is not normal. Contracts take forever to negotiate at the government level, and the fact that they are seeking to change those terms is a problem.
