NEED TO KNOW

  • Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until 5:01 PM ET Friday to remove AI safety restrictions
  • Anthropic refuses to allow its Claude AI to be used for mass domestic surveillance or fully autonomous weapons
  • Pentagon threatens to cancel $200M contract, declare Anthropic a “supply chain risk,” and invoke Defense Production Act

WASHINGTON (TDR) — The Pentagon issued an extraordinary ultimatum to artificial intelligence company Anthropic this week, demanding the removal of AI safety guardrails that prevent military use of the company’s Claude chatbot for mass surveillance and autonomous weapons operations. Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday at 5:01 PM ET to comply or face termination of the company’s $200 million defense contract, potential designation as a “supply chain risk,” and possible invocation of the Defense Production Act to compel cooperation.

Pentagon Demands Unrestricted Military Access

The confrontation reached a boiling point after months of private negotiations between the Defense Department and Anthropic, which has positioned itself as the most safety-conscious major AI developer. The Pentagon insists it must be able to use Claude for “all lawful purposes” without contractor-imposed restrictions on surveillance capabilities or weapons systems.


“We will not let ANY company dictate the terms regarding how we make operational decisions,” Pentagon spokesman Sean Parnell posted on social media Thursday. “Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DoW.”

The Defense Department has also threatened to use the Defense Production Act, a Korean War-era law that allows the military to commandeer private facilities during national emergencies. This would potentially force Anthropic to modify its AI models regardless of the company’s ethical objections or corporate governance structure.

“We will not employ AI models that won’t allow you to fight wars.” — Secretary Pete Hegseth

Emil Michael, the Defense Undersecretary for Research and Engineering, escalated the rhetoric by accusing Amodei of having a “God-complex” and wanting to “personally control the US Military” through contractual restrictions.

Anthropic Draws ‘Bright Red Lines’ on Surveillance and Weapons


Anthropic has refused to budge on two specific prohibitions: using Claude for mass domestic surveillance of American citizens or to power fully autonomous weapons systems that could kill without human oversight. Amodei described these as “entirely illegitimate” uses of AI technology that are “simply outside the bounds of what today’s technology can safely and reliably do.”

In a detailed statement Thursday, Amodei said the Pentagon’s new contract language, “framed as compromise, was paired with legalese that would allow those safeguards to be disregarded at will.” He emphasized that the company “cannot in good conscience accede” to the demands while maintaining its commitment to responsible AI development.

“Those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” — Dario Amodei

The standoff creates a strategic dilemma for Anthropic, which has built its reputation on AI safety protocols. The company’s founders left OpenAI over disagreements about safety measures, and Anthropic recently donated $20 million to political groups campaigning for AI regulation.

Military Veterans Question Pentagon’s Hardline Approach

The Pentagon’s threats have drawn unusual criticism from both Republican and Democratic lawmakers, as well as from military veterans who have previously supported aggressive AI integration. Retired Air Force Gen. Jack Shanahan, who led the Pentagon’s controversial Project Maven AI targeting program during the first Trump administration, expressed sympathy for Anthropic’s position.

“Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here. Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.” — Gen. Jack Shanahan

Shanahan noted that large language models like Claude are “not ready for prime time in national security settings,” particularly for autonomous weapons. He argued that current AI capabilities lack the reliability necessary for lethal decision-making.

Geoffrey Gertz, a senior fellow at the Center for a New American Security, observed that designating Anthropic as a supply chain risk while simultaneously invoking the Defense Production Act to force its compliance creates a logical paradox that undermines the Pentagon’s negotiating position.

“It’s this funny mix where they both are such a risk that they need to be kicked out of all systems, and so essential that they need to be compelled to be part of the system no matter what.” — Geoffrey Gertz

Industry Alignment Against Pentagon Pressure

The dispute has prompted rare solidarity among AI competitors. Tech workers from OpenAI and Google signed an open letter supporting Anthropic’s stance, warning that the Pentagon is attempting to “divide each company with fear that the other will give in.” Both OpenAI and Google maintain their own military contracts but have not publicly endorsed the Pentagon’s demands for unrestricted access.

Anthropic’s Claude model was the first large language model approved for classified government use and remains widely deployed across military and intelligence agencies. Losing access would force the Pentagon to rebuild systems around alternatives like Elon Musk’s xAI, which recently agreed to classified use without the same restrictions.

Legal and Economic Implications

Although the $200 million contract represents only a fraction of Anthropic’s projected $18 billion annual revenue, a supply chain risk designation could prove far more damaging by forcing government contractors to choose between working with the Pentagon and using Anthropic’s technology. That choice could effectively banish the company from the federal contracting ecosystem.

Legal experts note that invoking the Defense Production Act to compel AI model modifications would face significant constitutional challenges under the First Amendment and Fifth Amendment takings clauses. However, the mere threat creates immediate business uncertainty for Anthropic and the broader AI industry.

The confrontation highlights growing tension between national security imperatives and corporate ethical frameworks in the artificial intelligence sector. As AI capabilities advance, the question of who controls the guardrails—and who bears liability when they fail—becomes increasingly urgent.

When those imperatives collide, who bears the responsibility for determining legitimate boundaries, and what precedent does government coercion set for the future development of artificial intelligence?

Sources

This report was compiled using information from Understanding AI, Yahoo News/AP, Houston Public Media/NPR, CNN, ABC News, Al Jazeera, Notus, and NYU Stern Business & Human Rights.
