NEED TO KNOW

  • Trump ordered all federal agencies to stop using Anthropic technology and the Pentagon designated the company a “supply chain risk” — a label normally reserved for foreign adversaries like Huawei
  • Anthropic refused to remove two safeguards from its $200 million military contract: no fully autonomous weapons and no mass domestic surveillance of Americans
  • Hours after Anthropic was blacklisted, OpenAI announced a Pentagon deal that included the same two restrictions Anthropic was punished for demanding

WASHINGTON, DC (TDR) — Artificial intelligence company Anthropic announced Friday it will challenge the Pentagon’s decision to designate it a national security supply chain risk in court — a designation typically reserved for companies from adversarial nations — after refusing to allow the U.S. military unrestricted use of its AI models for autonomous weapons and mass domestic surveillance.

The clash, which escalated from a contract dispute into a presidential directive within 48 hours, raises fundamental questions about who sets the rules for AI in warfare, whether the government can compel a private company to remove safety restrictions on technology it built, and why the Pentagon accepted from one company the same safeguards it punished another for demanding.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.” — Anthropic statement

How A Contract Dispute Became A Presidential Order


The timeline moved fast. Anthropic signed a $200 million contract with the Pentagon last July and became the first AI company to integrate its models into classified military networks. The company’s AI model, Claude, was used in the operation to capture Venezuelan President Nicolás Maduro in January and has been described by defense officials as having “wider and deeper reach across the military” than any competing system.

But Anthropic’s contract included two restrictions: the AI could not be used for fully autonomous weapons — meaning AI making final targeting decisions without human involvement — or for mass domestic surveillance of American citizens. The Pentagon demanded those restrictions be removed, insisting all AI tools must be available for “all lawful purposes” without private company limitations.

Defense Secretary Pete Hegseth gave CEO Dario Amodei a deadline of 5:01 p.m. Friday. When that deadline passed, the dominoes fell quickly.


President Donald Trump posted on Truth Social: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” He called the company “Leftwing nut jobs” making a “DISASTROUS MISTAKE.”

Hegseth designated Anthropic a supply chain risk to national security — a label that, if enforced, would bar any military contractor or supplier from doing commercial business with the company. Federal agencies were given six months to phase out all Anthropic products.

Anthropic responded that it was “deeply saddened” by the decision, called the designation “legally unsound,” and said it would fight the blacklisting in federal court.

The Contradiction: OpenAI Got The Same Terms

Hours after Anthropic was blacklisted, Sam Altman, CEO of rival OpenAI, announced his company had reached a deal with the Pentagon to deploy its AI models on classified networks — with virtually identical restrictions.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.” — Sam Altman, OpenAI CEO

The Pentagon accepted from OpenAI the same two red lines it punished Anthropic for maintaining. Axios reported that the Defense Department agreed to OpenAI’s safety conditions after spending weeks insisting those conditions were unworkable. The key difference, according to sources: OpenAI framed its restrictions as aligned with existing law, while Anthropic argued existing law hadn’t caught up with AI capabilities — particularly around mass collection of publicly available data like social media posts and geolocation information.

Altman publicly sided with Anthropic’s position while securing his own deal, telling CNBC he doesn’t “personally think the Pentagon should be threatening DPA against these companies” and calling the red lines ones “we share with Anthropic and that other companies also independently agree with.” He urged the Pentagon to “offer these same terms to all AI companies.”

Elon Musk's xAI had already signed a deal earlier in the week allowing its model, Grok, to operate in classified settings under the Pentagon's "all lawful purposes" standard. Musk wrote on X that "Anthropic hates Western Civilization."

What The Experts And Employees Are Saying

The dispute triggered an unusual show of cross-company solidarity. More than 430 employees from Google and OpenAI signed an open letter titled “We Will Not Be Divided,” urging their companies to refuse the Pentagon’s demands. “They’re trying to divide each company with fear that the other will give in,” the letter stated.

Google DeepMind Chief Scientist Jeff Dean wrote on X that “mass surveillance violates the Fourth Amendment” and “has a chilling effect on freedom of expression.”

Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, warned that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.” He called AI large language models “not ready for prime time in national security settings” for autonomous weapons and said Anthropic’s red lines were “reasonable.”

Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology, told CNBC: “There are no winners in this.” She warned the government could push away companies with promising products, adding: “I’m really, truly, honestly worried that private companies will say, ‘It’s not worth our time to work with the defense sector moving forward.'”

Sen. Mark Warner (D-VA), vice chair of the Senate Intelligence Committee, condemned the action, saying it “raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”

The Pentagon’s Position

Pentagon spokesperson Sean Parnell maintained that the department has “no interest” in using AI for mass surveillance or fully autonomous weapons, both of which he noted are illegal. The Pentagon’s argument centers on principle: once the military purchases a tool, its own standards and procedures — not a private company’s terms of service — should determine how it is used.

Defense Undersecretary Emil Michael called Amodei a “liar” with a “God complex” who was “putting our nation’s safety at risk.” Michael told Bloomberg Radio that the Pentagon was “at the final stages” of a deal with Anthropic that would have “agreed to what they wanted in substance” — but that Amodei’s public statement Thursday torpedoed the negotiations.

Axios reported that Michael was on the phone offering Anthropic a deal at the same moment Hegseth tweeted the supply chain risk designation — and that the deal would have required allowing the collection or analysis of data on Americans, including geolocation and web browsing data.

“This is a very unusual, very public fight. I think it’s reflective of the nature of AI.” — Jerry McGinn, Center for the Industrial Base, CSIS

What Happens Next

The legal battle could take months. Anthropic contends that a supply chain risk designation can apply only to the Pentagon's own contracts and cannot compel other private companies to sever their commercial ties with Anthropic. Federal agencies have six months to transition off the company's systems, during which time the military will continue relying on the same AI technology it just labeled a national security risk.

Defense officials privately acknowledge that replacing Claude would be a “huge pain” and that the model remains deeply embedded in military operations. The Washington Post reported that the dispute intensified after a discussion about a hypothetical nuclear strike scenario, though both sides gave conflicting accounts of what was said.

The broader question extends beyond one company. If the Pentagon accepted OpenAI’s autonomous weapons and surveillance restrictions as reasonable, why did it blacklist Anthropic for insisting on the same protections? And if AI technology is genuinely “not ready for prime time” in autonomous weapons — as the Pentagon’s own former AI chief stated — who should have the authority to enforce that limit: the companies building the technology, the military using it, or Congress, which hasn’t legislated on the question at all?

When the Pentagon blacklists an American AI company for demanding safeguards against autonomous weapons and mass surveillance — then grants those same safeguards to a competitor hours later — is the dispute really about safety principles, or about which companies get to profit from the most consequential technology of the century?

Sources

This report was compiled using information from CNBC, Axios, NPR, CNN, The Washington Post, TechCrunch, Al Jazeera, Bloomberg, official statements from Anthropic and OpenAI, and analysis from the Center for Strategic and International Studies and Georgetown’s Center for Security and Emerging Technology.
