A U.S. federal judge has blocked an order from the Department of War that designated artificial intelligence company Anthropic as a national security supply chain risk. The ruling, issued Thursday, March 26, 2026, by U.S. District Judge Rita Lin in the Northern District of California, prevents the government from enforcing the label and from terminating its contracts with the AI firm.
Judge Lin stated the Pentagon's actions likely constituted unlawful retaliation against the company for its protected speech, according to court documents. The preliminary injunction halts a Trump administration directive that barred federal agencies from using Anthropic's Claude AI models. The judge's order is stayed for one week to allow for a potential government appeal [1].
The conflict began when the Department of War sought authorization for 'all lawful uses' of Anthropic's Claude AI system for military purposes. The San Francisco-based AI developer resisted the request, citing ethical concerns that its technology could enable mass domestic surveillance or fully autonomous weapons [2].
The Pentagon subsequently ended contract negotiations, imposed the national security designation, and ordered its contractors to cease using Anthropic's AI [3]. The designation functions as a blacklist, preventing the company from doing business with the federal government and its contractors [4]. This legal and ideological confrontation centers on a fundamental question: whether a private AI company can refuse to provide its technology to the military on ethical grounds without being labeled a national security threat [5].
In her ruling, Judge Lin called the Pentagon's actions 'a classic First Amendment retaliation' and noted the government was attempting to 'chill public debate' because of the company's stance [6]. She wrote that branding a U.S. company a potential adversary for expressing policy disagreement was unsupported by statute and 'Orwellian' [7].
The judge further noted that the national security designation is typically reserved for 'foreign intelligence agencies, terrorists, or hostile actors,' not American companies engaged in policy debate [7]. The ruling states that 'nothing in the governing statute supports the notion that an American company may be branded a potential adversary for expressing disagreement with the government' [8].
Anthropic filed a federal lawsuit on Monday, March 9, 2026, calling the Pentagon's move 'unprecedented and unlawful' and alleging retaliation for its criticism of government policy [9]. The complaint argues the administration exceeded its legal authority by punishing the company for refusing to remove safeguards restricting military use of its AI [10].
Last month, an executive order from President Donald Trump directed all federal agencies, including the Pentagon, to stop using Anthropic's technology, granting the military a six-month phase-out period for systems already in use [11]. Secretary of War Pete Hegseth accused Anthropic of 'arrogance and betrayal,' stating the Pentagon would shift to a 'more patriotic' alternative [12].
Anthropic warned in its lawsuit that the government's actions have unsettled customers and could cost the company billions in future revenue. Some non-defense agencies, including the Department of Health and Human Services, have reportedly removed its products [1].
The Pentagon has since struck a deal with OpenAI, whose CEO Sam Altman stated the agreement includes technical safeguards against mass domestic surveillance and requires human oversight for any lethal force applications [13]. This shift follows a Department of War statement that Anthropic's ethical 'red lines' made it an 'unacceptable risk to national security' [14]. The dispute has prompted hundreds of tech workers to sign an open letter urging the Pentagon to withdraw the designation [15].