• The Pentagon is considering terminating its contract with Anthropic after the AI company refused to allow unrestricted military use of its Claude model, insisting that mass surveillance of Americans and fully autonomous weapons remain off-limits
  • Tensions escalated after reports revealed Claude was used during the military operation to capture Venezuelan President Nicolás Maduro, with an Anthropic executive reportedly questioning Palantir about the deployment
  • Three rival AI firms — OpenAI, Google and Elon Musk's xAI — have shown more willingness to lift safety restrictions for Pentagon use, with Defense Secretary Pete Hegseth declaring "AI will not be woke"

EDITOR'S NOTE: The Dupree Report uses Claude, Anthropic's AI model, as a research tool in its editorial process. In the interest of full transparency, readers should be aware of this relationship when evaluating our coverage of Anthropic. All reporting in this article is based on independently published sources from Reuters, Axios, The Wall Street Journal, Fox News and other outlets.

WASHINGTON, DC (TDR) — The Pentagon Anthropic AI contract — a deal worth up to $200 million that made Claude the first AI model deployed on classified military networks — is now at risk of termination after months of failed negotiations over how far the Defense Department can push commercial artificial intelligence into the machinery of war.

A senior administration official told Axios on Saturday that the Pentagon is "getting fed up" with Anthropic's refusal to permit unrestricted military use of Claude. The company has drawn two firm red lines: no mass surveillance of American citizens and no fully autonomous weaponry without human oversight.

"Any company that would jeopardize the operational success of our warfighters in the field is one we need to reevaluate our partnership with going forward," the administration official told The Washington Times.

Pentagon Anthropic AI Contract: The Core Dispute

The standoff centers on a January 9 Department of War memo establishing that the military should be free to deploy commercial AI for "all lawful purposes" — regardless of any company's internal usage policies. Pentagon officials argue that operational decisions should rest with elected leaders and military commanders, not private technology firms.


Defense Secretary Pete Hegseth made the administration's position clear in a January 12 speech at SpaceX headquarters:

"We will not employ AI models that won't allow you to fight wars. AI will not be woke."

Sources described the comment as a direct reference to Anthropic.

Anthropic CEO Dario Amodei has staked out a different position. In a 38-page essay titled "The Adolescence of Technology" published in January, Amodei argued that democracies should use AI for national defense but with clear boundaries:

"AI should support national defense in all ways except those which would make us more like our autocratic adversaries."

Amodei identified two absolute red lines: AI-powered domestic mass surveillance and fully autonomous weapon swarms. He acknowledged that these positions are "politically unpopular" but argued they are essential to preserving the democratic values that distinguish the United States from its adversaries.

The Pentagon views this framing as unworkable. The senior official told Axios there is "considerable gray area" around what would fall into those categories, and that negotiating individual use cases with Anthropic — or having Claude unexpectedly block certain applications — creates operational risk.

Claude Used in Maduro Capture Operation

The dispute escalated dramatically after The Wall Street Journal reported Friday that Claude was used during the U.S. military operation to capture Venezuelan President Nicolás Maduro in January. The AI model was deployed through Anthropic's partnership with Palantir Technologies, whose platforms are widely used by the Defense Department and federal law enforcement.

The operation involved U.S. special operations forces capturing Maduro and his wife, with airstrikes on several sites in the Venezuelan capital. Venezuela's defense ministry said the raids killed up to 83 people. Seven U.S. service members were injured.


According to the Axios report, an Anthropic executive contacted a counterpart at Palantir after the raid to ask whether Claude had been used in the operation:

"It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot," the senior administration official told Axios.

Anthropic flatly denied that characterization, telling the outlet it had "not discussed the use of Claude for specific operations with the Department of War."

An Anthropic spokesperson told Fox News Digital:

"We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise. Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed."

A source familiar with the matter told Fox News Digital that Anthropic has visibility into both classified and unclassified usage and is confident all deployments have complied with its policies.

Pentagon Anthropic AI Contract: Rivals Show Flexibility

What makes the Pentagon Anthropic AI contract dispute particularly significant is that three rival companies have shown more willingness to accommodate the Pentagon's demands.

OpenAI, Google and Elon Musk's xAI each hold similar contracts worth up to $200 million. For their Pentagon work, all three have agreed to lift the safety guardrails that apply to ordinary users. The senior official claimed one of the three has already agreed to the "all lawful purposes" standard, with the other two showing more flexibility than Anthropic.

Hegseth announced on January 12 that xAI's Grok would be deployed across all Defense Department networks, including classified systems. Google's Gemini is already powering GenAI.mil, the military's new internal AI platform. OpenAI continues negotiating its own arrangements.

However, the senior official conceded a key limitation:

"It would be difficult for the military to quickly replace Claude, because the other model companies are just behind when it comes to specialized government applications."

This admission underscores a paradox at the heart of the Pentagon Anthropic AI contract dispute: the model the Pentagon most wants to use without restrictions is the one whose maker is most insistent on maintaining them.

The Deeper Question: Who Decides Military AI Limits?

The dispute raises a fundamental governance question that extends far beyond one contract: should private technology companies have the power to set boundaries on how the military uses their products, or should those decisions rest solely with the government?

Pentagon officials argue the answer is clear — elected leaders and military commanders should make operational decisions, and private firms should not hold veto power over battlefield applications. The January 9 memo formalizing this position reflects a broader Trump administration view that "AI free from ideological constraints" is a national security imperative.

Anthropic's position is more nuanced. The company has invested heavily in national security work, was the first AI developer deployed on classified networks, and has said it remains committed to supporting the military mission. But its engineers designed Claude with built-in safety limits; removing those limits would require significant technical rework and would undermine the safety-first identity that distinguishes Anthropic from its competitors.

Amodei has also faced internal pressure from engineers uncomfortable with Pentagon work, according to a source familiar with the dynamic. The CEO's public criticism of the Minneapolis immigration enforcement shootings — which he called "a horror" on X — has further complicated the company's relationship with an administration that views such criticism as politically motivated.

What Happens Next

The contract remains active for now, and both sides have signaled willingness to continue negotiations. Anthropic told Reuters its AI is "extensively used for national security missions by the U.S. government" and that it remains in "productive discussions with the Department of War about ways to continue that work."

But a formal severing would carry consequences beyond the $200 million contract. For Anthropic, which is preparing for an eventual public offering and counts Amazon as a major investor with over $4 billion committed, losing the Pentagon relationship would create both revenue and reputational risks. For the Pentagon, replacing Claude in classified networks would be technically challenging and could slow deployments at a moment when the administration is trying to accelerate military AI adoption.

The Biden administration's 2024 framework had directed national security agencies to expand AI use while prohibiting applications that would violate civil rights or automate nuclear weapons deployment. It remains unclear whether those prohibitions survive under the Trump administration's approach.

Should the companies that build the most powerful AI systems in the world retain the right to set limits on military use — or does national defense require that the government alone decides how those tools are deployed, and what safeguards, if any, govern them?

Sources

This report was compiled using information from Reuters' exclusive reporting on the Pentagon-Anthropic standoff, Axios' reporting on the contract termination threat and Maduro operation, The Wall Street Journal's report on Claude's use in the Maduro raid via Fox News Digital, eWeek's analysis of the military AI governance dispute, Fintool's reporting on the $200 million contract and rival AI firms, HuffPost's coverage of the Reuters exclusive, Newsweek's reporting on Hegseth's Grok announcement, PBS News' coverage of the Pentagon AI strategy, the Washington Times' reporting on the Maduro operation, The Decoder's analysis of Amodei's essay, National Security News' overview of Amodei's policy framework, and Ground News' aggregated coverage of the Claude-Venezuela report.
