By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed” and “the optics don’t look good.”
After negotiations between Anthropic and the Pentagon broke down on Friday, President Donald Trump asked federal agencies to stop using Anthropic’s technology after a six-month transition period, and Defense Secretary Pete Hegseth said he was designating the AI company as a supply chain risk.
OpenAI then quickly announced that it had reached an agreement of its own allowing its models to be deployed in classified environments. With Anthropic saying it draws red lines around using its technology in fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had the same red lines, there were some obvious questions: Was OpenAI honest about its safeguards? Why was it able to reach an agreement while Anthropic was not?
So as OpenAI executives defended the deal on social media, the company also published a blog post describing its approach.
The post laid out three areas where it said OpenAI’s models can’t be used — mass domestic surveillance, autonomous weapons systems, and “high-risk automated decision-making (e.g., systems like ‘social credit’).”
The company said that unlike other AI companies that have “reduced or removed guardrails and relied primarily on usage policies as their primary safeguards in national security development,” OpenAI’s agreement protects its red lines “through a more expansive, multi-layered approach.”
“We retain complete discretion over our security stack, we deploy in the cloud, we have cleared OpenAI staff on-going, and we have strong contractual protections,” the blog said. “All of this is in addition to the strong existing protections in US law.”
The company added, “We don’t know why Anthropic was unable to reach this agreement, and we hope that they and other labs will look into it.”
After the post was published, Techdirt’s Mike Masnick claimed that the agreement “absolutely allows domestic surveillance” because it says the collection of private data will comply with Executive Order 12333 (along with a number of other laws). Masnick described this order as “how the NSA hides its domestic surveillance by recording communications by tapping *non-US* lines even if it contains information from/about US people.”
In a LinkedIn post, OpenAI’s head of national security partnerships, Katrina Mulligan, argued that much of the debate surrounding contract language assumes that “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage-policy provision in a single contract with the Department of War.”
“That’s not how any of this works,” Mulligan said, adding, “Development architecture matters more than contract language […] By limiting our deployment to the cloud API, we can ensure that our models cannot be directly integrated into weapon systems, sensors, or other operational hardware.”
Altman also addressed questions about the deal on X, where he admitted that it was rushed and had led to significant backlash against OpenAI (to the extent that Anthropic’s Claude outranked OpenAI’s ChatGPT on Apple’s App Store on Saturday). So why do it?
“We really wanted to de-escalate things, and we thought the offer was good,” Altman said. “If we are right and this leads to a de-escalation between the DoW and the industry, we will look like geniuses and a company that went to great pains to do things to help the industry. If not, we will continue to be labeled as […] hasty and careless.”
