More than 30 employees of OpenAI and Google DeepMind filed a statement on Monday supporting Anthropic’s lawsuit against the US Department of Defense after the federal agency designated the artificial intelligence company as a supply chain risk, according to court filings.
“The government’s designation of Anthropic as a supply chain risk was an inappropriate and arbitrary use of power that has serious implications for our industry,” says the brief, signed by Google DeepMind Chief Scientist Jeff Dean.
Late last week, the Pentagon labeled Anthropic a supply chain risk — a designation usually reserved for foreign adversaries — after the AI company refused to allow the Department of Defense (DOD) to use its technology for mass surveillance of Americans or for autonomously firing weapons. The DOD had argued that it should be able to use AI for any “lawful” purpose and not be restricted by a private contractor.
The amicus brief in support of Anthropic appeared on the docket just hours after the Claude maker filed two lawsuits against the DOD and other federal agencies. Wired was the first to report the news.
In the court filing, Google and OpenAI officials point out that if the Pentagon “was no longer satisfied with the agreed terms of its contract with Anthropic,” the agency could have “simply canceled the contract and purchased the services of another leading AI company.”
The DOD, in fact, signed a deal with OpenAI within minutes of labeling Anthropic a supply chain risk — a move many of the ChatGPT maker’s employees protested.
“If allowed to proceed, this effort to punish one of America’s leading AI companies will undoubtedly have ramifications for the United States’ industrial and scientific competitiveness in AI and beyond,” the brief states. “And it will loosen the open debate in our field about the risks and benefits of today’s AI systems.”
The filing also argues that Anthropic’s stated red lines are legitimate concerns that warrant strong guardrails. Without public law governing the use of artificial intelligence, the brief contends, the contractual and technical constraints developers impose on their systems are a critical safeguard against catastrophic misuse.
Many of the officials who signed the statement also signed open letters over the past two weeks calling on the DOD to drop the label and urging their own companies’ leaders to support Anthropic and deny the Pentagon unilateral use of their AI systems.
