• The Pentagon is considering terminating its contract with Anthropic after the AI company refused to allow unrestricted military use of its Claude model, insisting that mass surveillance of Americans and fully autonomous weapons remain off-limits
  • Tensions escalated after reports revealed Claude was used during the military operation to capture Venezuelan President Nicolás Maduro, with an Anthropic executive reportedly questioning Palantir about the deployment
  • Three rival AI firms — OpenAI, Google and Elon Musk’s xAI — have shown more willingness to lift safety restrictions for Pentagon use, with Defense Secretary Pete Hegseth declaring “AI will not be woke”

EDITOR’S NOTE: The Dupree Report uses Claude, Anthropic’s AI model, as a research tool in its editorial process. In the interest of full transparency, readers should be aware of this relationship when evaluating our coverage of Anthropic. All reporting in this article is based on independently published sources from Reuters, Axios, The Wall Street Journal, Fox News and other outlets.

WASHINGTON, DC (TDR) — The Pentagon Anthropic AI contract — a deal worth up to $200 million that made Claude the first AI model deployed on classified military networks — is now at risk of termination after months of failed negotiations over how far the Defense Department can push commercial artificial intelligence into the machinery of war.

A senior administration official told Axios on Saturday that the Pentagon is “getting fed up” with Anthropic’s refusal to permit unrestricted military use of Claude. The company has drawn two firm red lines: no mass surveillance of American citizens and no fully autonomous weaponry without human oversight.

“Any company that would jeopardize the operational success of our warfighters in the field is one we need to reevaluate our partnership with going forward,” the administration official told The Washington Times.

Pentagon Anthropic AI Contract: The Core Dispute

The standoff centers on a January 9 Department of War memo establishing that the military should be free to deploy commercial AI for “all lawful purposes” — regardless of any company’s internal usage policies. Pentagon officials argue that operational decisions should rest with elected leaders and military commanders, not private technology firms.

Defense Secretary Pete Hegseth made the administration’s position clear in a January 12 speech at SpaceX headquarters:

“We will not employ AI models that won’t allow you to fight wars. AI will not be woke.”


Sources described the comment as a direct reference to Anthropic.

Anthropic CEO Dario Amodei has staked out a different position. In a 38-page essay titled “The Adolescence of Technology” published in January, Amodei argued that democracies should use AI for national defense but with clear boundaries:

“AI should support national defense in all ways except those which would make us more like our autocratic adversaries.”

Amodei identified two absolute red lines: AI-powered domestic mass surveillance and fully autonomous weapon swarms. He acknowledged that these positions are “politically unpopular” but argued they are essential to preserving the democratic values that distinguish the United States from its adversaries.

The Pentagon views this framing as unworkable. The senior official told Axios there is “considerable gray area” around what would fall into those categories, and that negotiating individual use cases with Anthropic — or having Claude unexpectedly block certain applications — creates operational risk.

Claude Used in Maduro Capture Operation

The dispute escalated dramatically after The Wall Street Journal reported Friday that Claude was used during the U.S. military operation to capture Venezuelan President Nicolás Maduro in January. The AI model was deployed through Anthropic’s partnership with Palantir Technologies, whose platforms are widely used by the Defense Department and federal law enforcement.

The operation involved U.S. special operations forces capturing Maduro and his wife, with airstrikes on several sites in the Venezuelan capital. Venezuela’s defense ministry said the raids killed up to 83 people. Seven U.S. service members were injured.
