A federal judge in San Francisco issued a preliminary injunction on 26 March 2026, temporarily blocking the United States Department of Defense from enforcing its designation of the artificial intelligence company Anthropic as a supply chain risk to national security. United States District Judge Rita Lin, sitting in the Northern District of California, also blocked a directive by President Donald Trump ordering all federal government agencies to immediately cease using Anthropic’s technology and its Claude artificial intelligence model. Lin delayed implementation of her ruling by seven days to allow the Trump administration the opportunity to seek relief from an appeals court before the injunction takes effect.
The ruling followed a 90-minute hearing on 24 March 2026 at which Lin questioned the legal and factual basis for the Pentagon’s actions with notable directness. In a 43-page opinion, the judge found that the measures taken against Anthropic appeared to violate the company’s First Amendment rights and due process rights under the United States Constitution, and that Defense Secretary Pete Hegseth had failed to follow proper procedures when executing the designation. Lin determined that Anthropic was likely to succeed on the merits of its underlying legal claims, the threshold required under United States federal law for a court to grant a preliminary injunction.
How did Anthropic’s dispute with the Department of Defense over autonomous weapons lead to a supply chain risk designation?
The origins of the legal conflict lie in contract negotiations between Anthropic and the Department of Defense that collapsed in late February 2026. Anthropic had signed a 200 million United States dollar contract with the Pentagon in July 2024 and was among the first artificial intelligence companies to deploy its technology across the agency’s classified networks. As the government sought to expand its use of the Claude model across defence applications, Anthropic maintained two conditions it refused to abandon: that its artificial intelligence systems would not be used in fully autonomous lethal weapons systems without human oversight, and that its technology would not be used for domestic mass surveillance of American citizens. The Department of Defense sought what it characterised as unfettered access to Claude for all lawful purposes, arguing that placing restrictions on how deployed military software can be used is an operational matter that must remain within the chain of command.
When Anthropic publicly disclosed its position through media reporting in late February 2026, Anthropic chief executive officer Dario Amodei stated the company would not yield on either condition. Defense Secretary Hegseth responded by posting on the social media platform X that Anthropic had been designated a supply chain risk, and declared that, effective immediately, no contractor, supplier, or partner conducting commercial business with the United States military could maintain commercial activity with Anthropic. President Trump separately posted on Truth Social ordering all federal agencies to immediately cease use of Anthropic’s technology, stating there would be a six-month phase-out period. The Department of Defense formally notified Anthropic of the designation by letter in early March 2026.
What is the supply chain risk designation authority and why has it never previously been applied to a United States company?
The supply chain risk designation invoked by the Department of Defense is a statutory authority governed by United States federal procurement law. It has historically been applied exclusively to foreign entities suspected of being adversaries or capable of sabotaging United States government information technology infrastructure. The authority allows the government to prohibit contractors and vendors from using the designated entity’s products or services in work performed for the United States military. When applied to Anthropic, the designation would have obliged major defence contractors including Amazon, Microsoft, and Palantir to certify they had removed all Claude products from any work performed under Pentagon contracts.
Lawyers representing Anthropic noted during the hearing that no American company had previously been publicly subjected to this designation. The Department of Defense invoked two distinct statutory authorities to impose the restrictions: 10 U.S.C. Section 3252 and 41 U.S.C. Section 4713, each of which must be challenged in separate federal court proceedings. Anthropic filed its California lawsuit challenging the Section 3252 designation on First Amendment and due process grounds, while a parallel case challenging the Section 4713 authority is pending before a federal court in Washington, D.C.
Why did Judge Lin find that the Pentagon’s designation of Anthropic constituted illegal First Amendment retaliation under federal law?
The government’s legal position at the 24 March hearing rested on three main arguments. The Department of Justice, representing the Department of Defense, contended that Anthropic’s insistence on contractual guardrails constituted an attempt by a private vendor to insert itself into the operational chain of command, threatening the integrity of wartime decision-making. It also argued that Hegseth’s social media post carried no independent legal force and was merely an announcement of pending regulatory action. The government further raised a speculative concern that Anthropic could in the future push a software update introducing a capability equivalent to a kill switch that would render military systems unreliable or ineffective.
Lin rejected each of these arguments in her written ruling. On the chain-of-command rationale, she wrote that if the Department of Defense’s genuine concern was operational integrity, the appropriate remedy was to stop using Claude, not to attempt to sever Anthropic from the entire defence industrial base. On the social media posts, she described the government’s position that the statements carried no legal weight as surprising, pressing government counsel on why a cabinet secretary would publicly announce a directive he did not intend to be binding. On the kill switch theory, she noted that the Department of Defense’s own internal records showed the designation had been issued because of Anthropic’s hostile manner through the press, not because of any documented technical security concern.
The Department of Defense’s chief technology officer, Emil Michael, had argued in public remarks that the government could not permit a company with a different policy preference baked into its model to pollute the supply chain in ways that could compromise weapons, body armour, and protection for deployed personnel. Lin’s ruling addressed this framing directly, concluding that the record supported an inference that Anthropic was being punished for criticising the government’s contracting position publicly, and that this constituted classic illegal First Amendment retaliation. She described the government’s approach as reflecting an Orwellian notion that an American company could be branded a potential adversary and saboteur of the United States for expressing disagreement with the government.
What commercial and reputational impact did Anthropic argue justified emergency judicial relief in the federal proceedings?
Anthropic told the court that the designation had already caused immediate and measurable harm. The company stated that business partners had begun reconsidering contracts in light of the designation’s broad scope, and that federal agencies had taken steps to remove Claude from their systems. Because the designation required all Pentagon contractors and suppliers to certify non-use of Anthropic products, the company argued the measure functioned as an industry-wide commercial blacklisting rather than a bilateral government procurement decision. Anthropic cited the potential loss of hundreds of millions of dollars in existing contracts and described the reputational damage as difficult to reverse even if the designation were ultimately struck down.
Prior to the conflict, Anthropic had been one of the first artificial intelligence laboratories to partner with federal agencies as the government sought to upgrade its capabilities, and had undergone rigorous national security vetting before being granted access to classified networks. Claude was reported to have been embedded across multiple government agencies including defence operations. The supply chain risk designation, Anthropic argued, effectively forced third-party defence contractors to choose between their existing Pentagon business and any commercial relationship with Anthropic, irrespective of whether those contractors used Claude at all.
How does the injunction interact with the broader legal proceedings and the parallel case pending in Washington?
Lin’s injunction does not compel the Department of Defense to continue using Anthropic and does not prevent the Pentagon from transitioning to alternative artificial intelligence providers. The scope of relief is limited to preventing enforcement of the supply chain risk designation and the Trump administration’s directive ordering agencies to cease use of Claude. The seven-day delay before the injunction takes effect gives the Trump administration a window to appeal to the United States Court of Appeals for the Ninth Circuit.
A separate legal challenge by Anthropic to the Section 4713 designation authority invoked by Hegseth remains pending before a federal court in Washington, D.C. The two cases proceed on separate legal tracks because the two statutory authorities used to impose the restrictions require review in different federal jurisdictions. The outcome of both proceedings will have implications not only for Anthropic but for the broader question of what legal remedies are available to United States technology companies subject to executive branch pressure over product safety conditions embedded in commercial contracts.
What this ruling means for Anthropic, the Pentagon, and United States artificial intelligence policy
- United States District Judge Rita Lin issued a preliminary injunction on 26 March 2026 blocking the Department of Defense’s supply chain risk designation of Anthropic and President Trump’s directive ordering all federal agencies to cease using the Claude artificial intelligence model, finding both measures constituted likely First Amendment retaliation.
- Anthropic is the first American company to be publicly subjected to the supply chain risk designation authority, which has historically been reserved for foreign adversaries, after contract negotiations with the Pentagon collapsed over Anthropic’s refusal to permit use of Claude in fully autonomous lethal weapons or domestic mass surveillance.
- Lin determined that the Department of Defense’s own internal records indicated the designation was motivated by Anthropic’s public criticism of the government’s contracting position rather than a genuine national security concern, and found that Defense Secretary Pete Hegseth had not followed proper procedures in executing the designation.
- The injunction takes effect after a seven-day delay to permit the Trump administration to seek emergency relief from an appeals court, and does not require the Department of Defense to continue using Anthropic’s products or prevent the Pentagon from transitioning to alternative artificial intelligence providers.
- A parallel legal challenge by Anthropic to the second statutory authority used to justify the designation remains pending before a federal court in Washington, D.C., meaning the full legal dispute between Anthropic and the Trump administration is not resolved by the California injunction.