By MATT O’BRIEN and KONSTANTIN TOROPIN
Updated at 7:40 PM PST, March 5, 2026

The Trump administration has acted on its threat to classify artificial intelligence firm Anthropic as a supply chain risk, a decision that may compel other government contractors to cease using the AI chatbot, Claude.
In a statement released Thursday, the Pentagon confirmed that it has “officially notified Anthropic leadership that the company and its products are considered a supply chain risk, effective immediately.”
This move appears to eliminate any possibility of further negotiation with Anthropic, following allegations from President Donald Trump and Defense Secretary Pete Hegseth that the company poses a national security threat.
Last Friday, just before the Iran war began, Trump and Hegseth outlined a series of potential consequences after Anthropic CEO Dario Amodei refused to drop restrictions on the use of the company’s technology for mass surveillance or autonomous weaponry.
In response, Amodei stated, “We believe this action is legally unfounded, and we have no option but to contest it in court.”
The Pentagon emphasized that this decision focuses on one primary principle: ensuring the military’s ability to utilize technology for lawful purposes. “The military will not allow a vendor to impose limits on the lawful use of critical capabilities, putting our warfighters at risk,” they declared.
Amodei contended that the specific exceptions Anthropic requested concerning surveillance and autonomous weaponry “pertain to high-level usage areas, not operational decision-making.”
He noted that there had been “constructive discussions” with the Pentagon in recent days regarding the continued use of Claude or a “smooth transition” if agreements could not be reached. Trump allotted the military six months to phase out Claude, which is already integral to multiple military and national security platforms. Amodei stressed the importance of ensuring that warfighters are not deprived of essential tools during major combat operations.
Some military contractors have begun to sever ties with Anthropic, a rising name in tech that supplies Claude to various businesses and government entities. Lockheed Martin stated it would “adhere to the direction of the President and the Department of War” and seek alternative suppliers of large language models.
The company commented, “We anticipate minimal impact as Lockheed Martin does not rely on any single LLM vendor for our work.”
The extent of the Pentagon’s risk designation remains uncertain. Amodei mentioned that a notice Anthropic received indicated the designation only applies to Claude’s usage in connection with specific military contracts.
Microsoft reported that its legal team has reviewed the rule and confirmed that it can continue collaborating with Anthropic on projects not related to defense.
The Pentagon’s decision drew significant criticism, as the rule applied to Anthropic was designed primarily to address supply chain threats from foreign adversaries. The designation is defined as a “risk that an adversary may sabotage, introduce unwanted functionality, or undermine” a system to disrupt, damage, or spy on it.
U.S. Senator Kirsten Gillibrand, a Democrat from New York serving on the Senate Armed Services Committee and Senate Intelligence Committee, criticized the decision as a “dangerous misuse of a tool meant to deal with adversary-controlled technology.”
She described it as “reckless, shortsighted, and detrimental — essentially a gift to our adversaries.”
Neil Chilson, a former chief technologist for the Federal Trade Commission and current lead for AI policy at the Abundance Institute, labeled the action as “massive overreach with the potential to harm both the U.S. AI sector and the military’s access to advanced technology.”
Earlier, a coalition of former defense and national security officials sent a letter to U.S. lawmakers raising “serious concerns” over the designation.
“Using this authority against a domestic company marks a significant departure from its intended purpose and establishes a dangerous precedent,” remarked the letter, signed by former policy makers, including ex-CIA Director Michael Hayden and retired leaders from the Air Force, Army, and Navy.
They asserted that the classification is meant to protect the nation from foreign infiltration, not to penalize American firms operating lawfully and transparently, and that designating a U.S. company for declining to eliminate safeguards against mass domestic surveillance and fully autonomous weaponry constitutes a critical error with far-reaching implications.
Anthropic has seen a surge in consumer adoption, with over a million users signing up for Claude daily this week, as many are rallying behind its ethical stance. The app has surpassed OpenAI’s ChatGPT and Google’s Gemini in popularity across more than 20 countries in Apple’s app store.
The ongoing dispute with the Pentagon has intensified Anthropic’s rivalry with OpenAI, a rivalry that dates to Anthropic’s founding in 2021 by former OpenAI leaders, including Amodei.
Shortly after the Pentagon’s action against Anthropic last Friday, OpenAI announced a deal to effectively replace it with ChatGPT in classified military settings.
OpenAI later acknowledged that it, too, needed protections against domestic surveillance and fully autonomous weapon systems, and it revised its agreements accordingly, with CEO Sam Altman admitting he had rushed a deal that appeared “opportunistic and careless.”
Amodei expressed regret over his own role in what he termed a “difficult day for the company,” and he publicly apologized for an internal message he sent criticizing OpenAI’s conduct, suggesting that Anthropic was facing retribution for not offering “dictatorial praise” to Trump.
MATT O’BRIEN
O’Brien covers the technology and AI business for The Associated Press.