As Sam Altman found out Saturday night, it’s a complicated time to work with the US government. Around 7pm, the CEO of OpenAI announced that he would take questions publicly on X, as a way to defend his company’s decision to pick up the Pentagon contract that Anthropic had just walked away from.
Most of the questions boiled down to OpenAI’s willingness to engage in mass surveillance and automated killing, the exact activities Anthropic had ruled out in its negotiations with the Pentagon. Altman deferred to the public sector, saying it was not his company’s role to set national policy.
“I believe very deeply in the democratic process,” he wrote in a response, “and that our elected leaders have the power and that we all must uphold the constitution.”
An hour later, he confessed his surprise that so many people seemed to disagree. “There’s more open debate than I thought there would be,” Altman said, “about whether we should prefer a democratically elected government or unelected private corporations to have more power. I guess that’s something people disagree on.”
It’s a telling moment for both OpenAI and the tech industry at large. In his Q&A, Altman adopted a stance typical of the defense industry, where military leaders and industry partners alike are expected to defer to civilian leadership.
But what’s more telling is that, as OpenAI transitions from a wildly successful consumer startup to a piece of the national security infrastructure, the company seems unable to manage its new responsibilities.
Altman’s public town hall came at an intense time for his company. The Pentagon had just blacklisted OpenAI competitor Anthropic for insisting on contractual restrictions on surveillance and automated weapons. Hours later, OpenAI announced that it had won the same contract Anthropic had given up. Altman portrayed the deal as a quick way to de-escalate the conflict, and it was certainly profitable. But he seemed unprepared for how much of a backlash it generated from both the company’s users and its employees.
OpenAI has been working with the US government for years — but not like this. When Altman made his case to congressional committees in 2023, for example, he still mainly followed the social media playbook: glowing about the company’s potential to change the world while acknowledging the risks and enthusiastically engaging with lawmakers, a perfect combination for exciting investors while fending off regulation.
Less than three years later, that approach no longer works. AI is now so obviously powerful, and its capital needs so intense, that serious engagement with government is impossible to avoid. The surprise is how unprepared both sides seem for it.
The most immediate conflict concerns Anthropic itself: on Friday, US Defense Secretary Pete Hegseth announced a plan to designate the lab as a supply chain risk. The threat hangs over the whole conversation like an unfired weapon. As former Trump official Dean Ball wrote over the weekend, the designation would cut Anthropic off from hardware and hosting partners, effectively destroying the company. It would be an unprecedented move against an American company, and while it might ultimately be overturned in court, it would do damage in the interim and send shockwaves through the industry.
As Ball describes it, Anthropic was executing an existing contract with terms set years in advance, only to have the Pentagon insist on changing them. That goes far beyond anything that would fly between private companies, and it sends a chilling message to other suppliers.
“Even if Secretary Hegseth backs down and limits his extremely broad threat against Anthropic, much damage has been done,” Ball wrote. “Most companies, political actors and others will have to operate under the assumption that race logic will now rule.”
It’s an immediate threat to Anthropic, but also a serious problem for OpenAI. The company is already under intense pressure from workers to maintain some red lines. At the same time, right-wing media will be watching for any sign that OpenAI is less than a loyal political ally. In the middle of it all is the Trump administration, which is doing its best to make the situation as difficult as possible.
It can be argued that OpenAI didn’t set out to be a defense contractor, but because of its huge ambitions, it was forced to play the same game as Palantir and Anduril. Making inroads during the Trump administration means choosing sides. There are no apolitical actors here, and winning some friends will mean alienating others. It remains to be seen how high a price OpenAI will pay, either in lost business or lost employees, but it’s unlikely to emerge unscathed.
It may seem strange that this crackdown comes at a time when more prominent tech investors hold positions of influence in Washington than ever before, but most of them seem perfectly comfortable with the race logic. Among Trump-aligned venture capital funds, Anthropic has long been seen as favoring the Biden administration in ways that would hurt the larger industry, a perception underscored by Trump adviser David Sacks’ reaction to the ongoing conflict. Now that the reverse has happened, few seem willing to defend the broader principle of free enterprise.
That’s a difficult position for any company to be in — and while politically aligned players may benefit in the short term, they’ll be just as exposed when the political winds inevitably shift. There’s a reason why, for decades, the defense sector was dominated by slow-moving, tightly regulated conglomerates like Raytheon and Lockheed Martin. Acting as the industrial wing of the Pentagon gave them the cover they needed to stay out of partisan politics, focusing on technology without having to hit reset every time the White House changed hands.
Today’s startup competitors may be moving faster than their predecessors — but they’re far less prepared for the long term.
