Anthropic filed two affidavits in a California federal court late Friday afternoon, pushing back against the Pentagon’s claim that the AI company poses an “unacceptable risk to national security” and arguing that the government’s case is based on technical misunderstandings and allegations that were not actually made during the months of negotiations that preceded the dispute.
The statements were filed alongside Anthropic’s latest response in its lawsuit against the Department of Defense and come ahead of a hearing next Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute dates back to late February, when President Trump and Defense Secretary Pete Hegseth publicly said they were cutting ties with Anthropic after the company refused to allow unrestricted military use of AI technology.
The two people who submitted the statements are Sarah Heck, Anthropic’s Chief Policy Officer, and Thiyagu Ramasamy, the company’s Head of Public Affairs.
Heck is a former National Security Council official who worked in the White House under the Obama administration before moving to Stripe and then Anthropic, where she runs the company’s government relations and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei met with Defense Secretary Hegseth and Pentagon Undersecretary Emil Michael.
In her statement, Heck calls out what she describes as a central falsehood in the government’s filings: that Anthropic demanded some sort of approval role over military operations. That claim, she says, is simply not true. “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee indicate that the company wanted such a role,” she wrote.
Heck also says the Pentagon’s stated concern that Anthropic could disable or alter its technology mid-operation was never raised during the negotiations. Instead, she says, it appeared for the first time in the government’s court filings, which gave Anthropic no chance to respond.
Another detail in Heck’s statement that is sure to garner attention is that on March 4—the day after the Pentagon officially finalized its supply chain risk designation against Anthropic—Undersecretary Michael emailed Amodei to say the two sides were “very close” on the two issues the administration now cites as evidence that Anthropic is a national security threat: its positions on mass surveillance and autonomous weapons.
The email, which Heck has attached as an exhibit to her statement, is worth reading alongside what Michael said publicly in the days that followed. On March 5, Amodei released a statement saying the company had “productive conversations” with the Pentagon. The next day, Michael posted on X that “there is no active War Department negotiation with Anthropic.” A week after that, he told CNBC that there was “no chance” of renewed talks.
Heck’s point seems to be: If Anthropic’s stance on these two issues is what makes it a national security threat, why did the same Pentagon official say the two sides were nearly aligned on these very issues immediately after the designation was finalized? (She stops short of saying the government used the designation as a bargaining chip, but the timeline she lays out leaves the question hanging.)
Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployment for government clients, including classified environments. At Anthropic, he is credited with building the team that brought Claude models to national security and defense settings, including a $200 million contract with the Pentagon announced last summer.
His statement takes on the government’s claim that Anthropic could theoretically interfere with military operations by disabling its technology or otherwise changing its behavior, which Ramasamy says is not technically possible. According to him, once Claude is installed in a government-secured, “air-gapped” system operated by a third-party contractor, Anthropic has no access to it. There is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of “operational veto” is a fantasy, he suggests, explaining that any change to the model would require the Pentagon’s express approval and action to install.
Anthropic, he says, can’t even see what government users type into the system, let alone extract that data.
Ramasamy also disputes the government’s claim that Anthropic’s employment of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone a US government security clearance check — the same background check process required to access classified information — adding in his statement that “to my knowledge,” Anthropic is the only AI company where cleared personnel built the AI models designed to operate in classified environments.
Anthropic’s lawsuit argues that the supply chain risk designation — the first ever applied to a US company — amounts to government retaliation for the company’s publicly expressed views on AI safety in violation of the First Amendment.
The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic’s refusal to allow all legitimate military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security judgment, not punishment for the company’s views.
