Anthropic, the AI company known for its advanced language models, has taken a firm stance against the Pentagon's claim that it presents an "unacceptable risk to national security." In a recent court filing in California, Anthropic submitted two sworn declarations aimed at correcting what it describes as misunderstandings and misrepresentations by the Department of Defense. The legal move follows a public announcement from President Trump and Defense Secretary Pete Hegseth terminating the government's relationship with Anthropic after the company refused to grant unrestricted military access to its AI technology.
The filings are part of Anthropic's ongoing lawsuit against the Department of Defense and come just before a scheduled court hearing. The declarations were submitted by Sarah Heck, Anthropic's Head of Policy and a former National Security Council official, and Thiyagu Ramasamy, the company's Head of Public Sector. Heck emphasizes that the government's assertion that Anthropic sought approval over military operations has no basis in fact, stating firmly that no such demand was ever made during negotiations and countering the Pentagon's narrative.
Moreover, Heck notes that concerns about Anthropic potentially disabling or modifying its technology during military operations were never raised until they appeared in the court filings. That omission left Anthropic with no opportunity to address the issues during negotiations, raising questions about the legitimacy of the Pentagon's claims. The dispute underscores the complexities of AI governance and the challenges companies face in navigating relationships with government entities.
As the case progresses, the implications of this dispute extend beyond Anthropic. It raises critical questions about the role of AI in national security and the extent to which tech companies can engage with military applications without compromising their ethical standards or operational integrity. With a hearing set for March 24, the tech community is closely watching how this case will influence future collaborations between AI companies and government agencies.