
Artificial intelligence tools are becoming increasingly critical to modern warfare, and their use is now at the heart of a political and national security controversy in the United States. The US government employed artificial intelligence tools created by Anthropic during a recent airstrike on Iran, only hours after publicly announcing that it would stop using the start-up’s technology, according to a report. The news underscores the difficulty of winding down powerful AI systems that have already seeped into military operations.

Iran Attack Uses Anthropic’s Claude AI

Many of the US military commands with access to Claude used it ahead of the Iran airstrike, according to a report by The Wall Street Journal; US Central Command is among those with access to Anthropic’s AI model. The AI system was allegedly used to assess intelligence, identify targets, and simulate battle scenarios.


The report also said Anthropic’s AI had previously been deployed by the Pentagon in the capture of Nicolás Maduro, highlighting its deep entrenchment in high-profile military operations.

Why the Ban Was Delayed

The continued use of Claude in sensitive operations is reportedly one factor the US government cited for deciding on a six-month phase-out period instead of an immediate ban. And yet Donald Trump, in a Truth Social post, attacked Anthropic, calling the company “leftwing nut jobs” and “woke” and accusing it of endangering American lives and national security.

Trump ordered all federal agencies to “immediately cease” using technology from Anthropic, while acknowledging that agencies already accustomed to the tools, such as the Department of Defense, would need some time to make the shift.


Anthropic and the US government have been at loggerheads for months over how the company’s AI models are deployed in national defence, according to reports. The AI startup says it gave the US Department of Defense access to its technology with two essential caveats: no mass surveillance of Americans on domestic soil and no fully autonomous weapons.

Anthropic has also resisted being classified as a “supply chain risk” by the US government, saying it would fight the designation in court. The company argued that such an action against an American company would be unprecedented and could set a dangerous legal and policy precedent.
