NEED TO KNOW

  • Palantir’s Maven Smart System, powered by Anthropic’s Claude, generated approximately 1,000 prioritized targets in the first 24 hours of Operation Epic Fury
  • The Pentagon blacklisted Anthropic as a “supply chain risk” for refusing to remove guardrails against autonomous weapons and mass domestic surveillance
  • More than 120 House Democrats are demanding answers about whether AI assisted in a strike that killed at least 175 people, mostly children, at an Iranian girls’ school

WASHINGTON, D.C. (TDR) — A new investigation published Thursday by Wired reporter Caroline Haskins, drawing on software demonstrations and Pentagon records, offers the most granular public account yet of how commercial artificial intelligence is now embedded in U.S. combat operations. The reporting centers on Palantir Technologies and its use of Anthropic’s Claude chatbot inside the Maven Smart System, the Pentagon’s flagship AI targeting platform, during Operation Epic Fury, the joint U.S.-Israeli military campaign against Iran that began Feb. 28.

The disclosures land in the middle of an extraordinary contradiction: the Pentagon is actively relying on an AI tool built by a company it has blacklisted and accused of undermining national security, and that is now suing the administration in federal court, with Claude still processing battlefield intelligence in Iran even as Defense Secretary Pete Hegseth declared the technology a threat.

How the Maven Smart System Uses Claude AI

Palantir’s Maven system sits at the intersection of intelligence analysis and targeting operations. It ingests classified satellite feeds, surveillance drone footage and archived intelligence data, then uses Claude to synthesize that information into prioritized target lists, complete with precise GPS coordinates, weapons recommendations and automated legal justifications for strikes. What previously required an intelligence staff of roughly 2,000 analysts during the 2003 Iraq invasion now reportedly requires approximately 20 people, according to defense experts cited in recent coverage.

According to the Washington Post, which first reported the scale of Claude’s battlefield role, Maven, powered by Claude, helped generate and prioritize approximately 1,000 targets in the first 24 hours of Operation Epic Fury alone, more than double the scale of the air campaign during the entire opening phase of the 2003 Iraq invasion.

“Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance.” — Washington Post, citing sources familiar with the system

Anthropic CEO Dario Amodei has acknowledged the tension between commercial AI development and national security deployment.

“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” — Dario Amodei, Anthropic CEO

The Wired investigation adds new technical specificity, detailing how Claude is deployed within Palantir’s Impact Level 6 environment, a classified system authorized to handle data up to the “secret” level. Demo recordings obtained by the outlet show Claude functioning as a natural-language interface, allowing military planners to query intelligence databases and receive tactical summaries in plain English.

NBC News separately reported, citing two sources with direct knowledge of the system, that the Palantir AI platform identifies potential targets in Iran while Claude helps analysts process and sort that intelligence. Crucially, a source familiar with Anthropic’s government work told NBC that Claude “does not directly offer targeting recommendations,” framing the tool as a decision-support system rather than an autonomous targeting engine.

That distinction matters legally and ethically. Whether it holds in practice when a system produces prioritized strike coordinates at machine speed is a question neither Palantir nor the Pentagon has publicly answered.

CENTCOM Confirms AI’s Role; Hegseth Scrapped Rules of Engagement

U.S. Central Command commander Admiral Brad Cooper confirmed the military’s AI dependency in a video message released Wednesday.

“Our warfighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react.” — Admiral Brad Cooper, U.S. Central Command

Cooper insisted humans retain final authority over strike decisions.

“Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes [that] used to take hours and sometimes even days into seconds.” — Admiral Brad Cooper, U.S. Central Command

The acknowledgment comes against a backdrop of deliberate erosion of the oversight frameworks traditionally designed to protect civilians. On March 2, Hegseth publicly declared there would be “no stupid rules of engagement” for Operation Epic Fury. President Donald Trump has separately stated, “I don’t need international law.” In February 2025, Hegseth fired the Pentagon’s top military lawyers responsible for compliance with the laws of war, including those governing civilian harm protections.

Those decisions form critical context for evaluating what it means when AI-assisted targeting operates at accelerated speed without the institutional guardrails that previously governed U.S. strike authority.

The School Strike That Has Congress Demanding Answers

The abstraction of AI-assisted warfare collided with ground-level consequence on Feb. 28 when a missile struck the Shajareh Tayyebeh girls’ school in Minab, Iran, in the opening hours of Operation Epic Fury. At least 175 people were killed, the majority of them schoolgirls between ages 7 and 12.

The New York Times reported Thursday, citing U.S. officials, that a preliminary Pentagon investigation concluded a U.S. Tomahawk missile was likely responsible, and that outdated intelligence from the Defense Intelligence Agency may have caused the targeting error. The building had reportedly once been associated with an Iranian military facility, but that intelligence was no longer current at the time of the strike.

On Thursday, 121 House Democrats led by Rep. Sara Jacobs (D-CA), Rep. Jason Crow (D-CO) and Rep. Yassamin Ansari (D-AZ) sent a formal letter to Hegseth demanding answers. Their letter went directly to the question of AI accountability.

“What is the role of artificial intelligence, if any, in selecting targets, assessing intelligence, and making legal determinations during Operation Epic Fury? Was artificial intelligence, including the use of Maven Smart System, used to identify the Shajareh Tayyebeh school as a target? If so, did a human verify the accuracy of this target?” — Rep. Sara Jacobs, Rep. Jason Crow, and Rep. Yassamin Ansari, letter to Defense Secretary Hegseth

The lawmakers set a deadline of March 20 for a response.

Republican Rep. Pat Harrigan (R-NC) offered a counterpoint, telling NBC News that AI-assisted targeting has already proven essential for rapidly processing military intelligence in Iran, and that restricting it would put American service members at risk.

The Paradox: Pentagon Blacklists the AI It Can’t Stop Using

The dispute between Anthropic and the Pentagon reached a breaking point on Feb. 27, one day before Operation Epic Fury launched. After months of negotiations over the terms of Claude’s military deployment, Anthropic CEO Dario Amodei declined to remove two firm guardrails: no use of Claude for mass domestic surveillance of Americans, and no powering of fully autonomous lethal weapons that operate without human oversight. Hegseth had demanded unrestricted access for “all lawful use.”

“Allowing Claude to be used to enable the Department to surveil U.S. persons at scale and to field weapons systems that may kill without human oversight would therefore be inconsistent with Anthropic’s founding purpose and public commitments.” — Anthropic, federal lawsuit

Hegseth responded by designating Anthropic a supply chain risk to national security, a designation historically applied to foreign adversaries, and ordered all federal agencies and defense contractors to cease using the company’s products. Trump posted on Truth Social that federal agencies must “IMMEDIATELY CEASE all use of Anthropic’s technology.”

The Pentagon said the phaseout would take six months due to operational dependencies. That means the military has been bombing Iran with Claude’s assistance while publicly labeling Anthropic a threat.

Anthropic filed two federal lawsuits on March 9, one in California and one in the D.C. Circuit Court of Appeals. The company’s 48-page filing called the designation “unprecedented and unlawful” and accused the administration of retaliating against Anthropic for its protected speech about AI safety limitations.

“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.” — Anthropic, federal lawsuit filing

The White House responded through spokeswoman Liz Huston.

“Under the Trump Administration, our military will obey the United States Constitution, not any woke AI company’s terms of service.” — Liz Huston, White House spokeswoman

Legal experts have questioned whether the supply chain risk designation can withstand judicial review. Lawyers Michael Endrias and Alan Rozenshtein, writing in the nonprofit publication Lawfare, argued the designation “exceeds what the statute authorizes” and that Hegseth’s own public statements “may have doomed the government’s litigation posture before it even begins.”

AI Accuracy: A 60 Percent Track Record

Beneath the competing legal arguments lies a technical concern neither side disputes: AI targeting systems are imperfect. According to internal military testing data reported by Tech Brew, Maven correctly identified objects, distinguishing a truck from a tree, for example, at an accuracy rate of roughly 60% in 2024 testing. Human analysts scored roughly 84% in comparable evaluations.

Defense experts note that the 2003 Iraq invasion required approximately 2,000 intelligence analysts to perform targeting work now assigned to roughly 20 people with AI assistance. That efficiency gain is precisely what defenders of the technology cite. Critics counter that compressing the targeting cycle and reducing human review increase the odds of errors like the one that appears to have destroyed the school in Minab.

The precedent being set in Iran also has structural implications. OpenAI signed a new agreement with the Pentagon in the hours after Anthropic was blacklisted, providing its own models for classified military networks. CEO Sam Altman stated the agreement includes safeguards around autonomous weapons and mass surveillance and urged the Pentagon to offer equivalent terms to all AI companies. Former OpenAI employees publicly criticized the deal, arguing the company was prioritizing defense revenue over founding principles of AI safety.

The broader architecture is now clear: commercial AI developed for consumer products is being rapidly retrofitted for classified military applications through contractors like Palantir. Whether the safety frameworks those companies negotiate before deployment can hold under pressure from a government that has declared those frameworks a threat is no longer a hypothetical question. In Iran, it is an operational one.

As AI systems compress targeting cycles from days to seconds, and as the gap between algorithmic recommendations and human judgment narrows in real time, what accountability structures are robust enough to ensure that an error killing 175 people at a girls’ school remains the exception rather than the new baseline?

Sources

This report was compiled using information from Wired and NBC News reporting on Palantir and Claude’s military role, the Washington Post on Claude’s central role in Operation Epic Fury, CBS News and NPR on the Anthropic lawsuit, official statements and press releases from Rep. Sara Jacobs, Rep. Jason Crow and Rep. Yassamin Ansari, AI warfare analysis from The Conversation and Tech Brew, and Al Jazeera on the Minab school strike investigation.
