NEED TO KNOW

  • Trump ordered all federal agencies to stop using Anthropic technology and the Pentagon designated the company a “supply chain risk” — a label normally reserved for foreign adversaries like Huawei
  • Anthropic refused to remove two safeguards from its $200 million military contract: no fully autonomous weapons and no mass domestic surveillance of Americans
  • Hours after Anthropic was blacklisted, OpenAI announced a Pentagon deal that included the same two restrictions Anthropic was punished for demanding

WASHINGTON, DC (TDR) — Artificial intelligence company Anthropic announced Friday it will challenge the Pentagon’s decision to designate it a national security supply chain risk in court — a designation typically reserved for companies from adversarial nations — after refusing to allow the U.S. military unrestricted use of its AI models for autonomous weapons and mass domestic surveillance.

The clash, which escalated from a contract dispute into a presidential directive within 48 hours, raises fundamental questions: Who sets the rules for AI in warfare? Can the government compel a private company to remove safety restrictions on technology it built? And why did the Pentagon accept from one company the same safeguards it punished another for demanding?

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.” — Anthropic statement

How A Contract Dispute Became A Presidential Order

The timeline moved fast. Anthropic signed a $200 million contract with the Pentagon last July and became the first AI company to integrate its models into classified military networks. The company’s AI model, Claude, was used in the operation to capture Venezuelan President Nicolás Maduro in January and has been described by defense officials as having “wider and deeper reach across the military” than any competing system.

But Anthropic’s contract included two restrictions: the AI could not be used for fully autonomous weapons — meaning AI making final targeting decisions without human involvement — or for mass domestic surveillance of American citizens. The Pentagon demanded those restrictions be removed, insisting all AI tools must be available for “all lawful purposes” without private company limitations.

Defense Secretary Pete Hegseth gave CEO Dario Amodei a deadline of 5:01 p.m. Friday. When that deadline passed, the dominoes fell quickly.

President Donald Trump posted on Truth Social: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” He called the company “Leftwing nut jobs” making a “DISASTROUS MISTAKE.”

Hegseth designated Anthropic a supply chain risk to national security — a label that, if enforced, would bar any military contractor or supplier from doing commercial business with the company. Federal agencies were given six months to phase out all Anthropic products.

Anthropic responded that it was “deeply saddened” by the decision, called the designation “legally unsound,” and said it would fight the blacklisting in federal court.

The Contradiction: OpenAI Got The Same Terms

Hours after Anthropic was blacklisted, Sam Altman, CEO of rival OpenAI, announced his company had reached a deal with the Pentagon to deploy its AI models on classified networks — with virtually identical restrictions.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.” — Sam Altman, OpenAI CEO

The Pentagon accepted from OpenAI the same two red lines it punished Anthropic for maintaining. Axios reported that the Defense Department agreed to OpenAI’s safety conditions after spending weeks insisting those conditions were unworkable. The key difference, according to sources: OpenAI framed its restrictions as aligned with existing law, while Anthropic argued existing law hadn’t caught up with AI capabilities — particularly around mass collection of publicly available data like social media posts and geolocation information.

Altman publicly sided with Anthropic’s position while securing his own deal, telling CNBC he doesn’t “personally think the Pentagon should be threatening DPA against these companies” and calling the red lines ones “we share with Anthropic and that other companies also independently agree with.” He urged the Pentagon to “offer these same terms to all AI companies.”

Elon Musk’s xAI had already signed a deal earlier in the week allowing its model, Grok, to operate in classified settings under the Pentagon’s “all lawful purposes” standard. Musk wrote on X that “Anthropic hates Western Civilization.”

What The Experts And Employees Are Saying