Crypto: US Military Used Anthropic in Iran Strike Despite Ban Order By...

The US military reportedly relied on Anthropic’s Claude AI for intelligence analysis and targeting during an Iran strike hours after Trump ordered a ban on the company’s systems.

The US military reportedly used Anthropic's AI systems during a major air strike on Iran, only hours after President Donald Trump ordered federal agencies to halt use of the company's technology.

Military commands, including US Central Command (CENTCOM), which oversees operations in the Middle East, used Anthropic's Claude AI model for operational support, according to people familiar with the matter cited by The Wall Street Journal. The tool reportedly assisted with intelligence analysis, identifying potential targets and running battlefield simulations.

The incident shows how deeply advanced AI systems have become embedded in defense operations. Even as the administration moved to sever ties with the company, Claude remained integrated into military workflows.

On Friday, the Trump administration instructed agencies to stop working with the company and directed the Defense Department to treat it as a potential security risk. The order came after contract talks broke down, with Anthropic refusing to grant defense officials unrestricted military use of its AI in any lawful scenario they requested.

Anthropic had previously secured a multiyear Pentagon contract worth up to $200 million alongside several major AI labs. Through partnerships involving Palantir and Amazon Web Services, Claude became approved for classified intelligence and operational workflows. The system was reportedly also involved in earlier operations, including a January mission in Venezuela that resulted in the capture of President Nicolás Maduro.

Tensions intensified after Defense Secretary Pete Hegseth demanded the company permit unrestricted military use of its models. Anthropic CEO Dario Amodei rejected the request, describing certain applications as ethical boundaries the company would not cross, even if it meant losing government business.

In response, the Pentagon began lining up replacement providers, reaching an agreement with OpenAI to deploy its AI models on classified military networks.

Source: CoinTelegraph