Dario Amodei said Thursday that Anthropic plans to challenge in court the Defense Department’s decision to label the AI company a supply chain risk, a designation he called “legally untenable.”
The statement came hours after the Defense Department officially designated Anthropic a supply chain risk, following a weeks-long dispute over military control of artificial intelligence systems. A supply chain risk designation can prevent a company from working with the Pentagon and its contractors. Amodei argued that Anthropic’s AI should not be used for mass surveillance of Americans or for fully autonomous weapons, while the Pentagon maintained it should have unrestricted access for “all lawful purposes.”
In his statement, Amodei said the vast majority of Anthropic’s customers are not affected by the supply chain risk characterization.
“In relation to our customers, it clearly only applies to customers’ use of Claude as a direct part of their War Department contracts, not all Claude use by customers who have such contracts,” he said.
In a preview of what Anthropic will likely argue in court, Amodei said the Department’s letter labeling the company a supply chain risk is limited in scope.
“It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means required to achieve the objective of protecting the supply chain,” Amodei said. “Even for War Department contractors, the supply chain risk designation does not (and cannot) limit Claude’s uses or business relationships with Anthropic if these are not related to their specific Department of War contracts.”
Amodei reiterated that Anthropic has had productive talks with the DOD in recent days, talks that some suspect were derailed when an internal memo he sent to staff was leaked. In it, Amodei called rival OpenAI’s dealings with the Department of Defense “security theater.”
OpenAI signed an agreement to work with the DOD in place of Anthropic, a move that has sparked backlash among OpenAI staff.
Amodei apologized for the leak in a statement on Thursday, claiming the company did not intentionally share the memo or instruct anyone else to do so. “It is not in our interest to escalate the situation,” he said.
Amodei said the memo was written within “a few hours” of a series of announcements: a presidential post on Truth Social saying Anthropic would be removed from federal systems, Defense Secretary Pete Hegseth’s supply chain risk designation, and finally the announcement of a Pentagon deal with OpenAI. He apologized for the tone, calling it “a difficult day for the company,” and said the memo did not reflect his “careful or considered views.” Written six days ago, he added, it is now an “outdated review.”
He concluded by saying that Anthropic’s top priority is to ensure that American soldiers and national security experts have access to important tools in the midst of ongoing major combat operations. Anthropic currently supports some of the U.S. activities in Iran, and Amodei said the company will continue to provide its models to the DOD at a “nominal cost” for “as long as it takes to make that transition.”
Anthropic could challenge the designation in federal court, likely in Washington, but the law behind the decision makes a challenge difficult: it limits the usual avenues companies have for contesting government procurement decisions and gives the Pentagon broad discretion on national security matters.
Or as Dean Ball — a former Trump-era White House adviser on artificial intelligence who has spoken out against Hegseth’s handling of Anthropic — put it: “Courts are reluctant to second-guess the government about what is and isn’t a matter of national security … There’s a very high bar to clear to do that. But it’s not impossible.”
