Anthropic's AI hypothesis behind the Iran raids, how US Central Command may have used Claude for the attack


Image generated with AI.

US Central Command may have resorted to artificial intelligence tools to support the planning and execution of the recent war in Iran, which began with the Israel-US attacks of February 28th. According to reconstructions by the Wall Street Journal and Axios, the US military allegedly used Claude, Anthropic’s language model, during the military operation conducted jointly with Israel. In this case, AI would have been used in intelligence work, in the selection of targets, and in the construction of operational simulations.

All this despite an order from Donald Trump which imposed (at least until a few hours before the attack) the interruption of all collaboration with the company. For the record, the US president had even declared on his social network Truth that Anthropic is «a radical left-wing AI company run by people who have no idea what the real world is». The friction with Anthropic arose following the raid carried out last January against the president of Venezuela, Nicolás Maduro. On that occasion, news emerged that the Army had used Claude to carry out the mission, and Anthropic expressed its disappointment, noting that the service’s terms of use do not allow Claude to be used for violent or surveillance purposes.

How US Central Command would use Anthropic’s AI against Iran

According to the concurring reconstructions by the Wall Street Journal and Axios, during the joint Israeli-American bombing the United States Central Command allegedly continued to use Claude to develop intelligence assessments, that is, structured analyses of information coming from different sources, and to conduct battlefield simulations, digital tools that allow the outcome of different tactical options to be tested virtually. AI was reportedly also used for the identification of targets.

To be clear, this does not mean that an algorithm “decides” what and whom to hit, but that it could have helped human analysts identify correlations, priorities and possible consequences of attacks, speeding up processes and decisions that would otherwise have taken much longer.

On the operational level, the attack saw the use of Tomahawk cruise missiles, stealth fighters and one-way attack drones, i.e. unmanned aircraft designed to strike a single target without returning to base. It is important to clarify that AI does not directly control these weapons: its contribution concerns analysis and planning, therefore the strategic part of the attacks carried out by the US against Iran, and not their execution.

Trump’s break with Anthropic and the arrival of OpenAI

The breaking point between Anthropic and the Pentagon was reached just hours before the start of operations against Iran, when President Donald Trump ordered the immediate termination of all relationships with Anthropic, publicly accusing the company of having an ideological orientation incompatible with the needs of national defense. The presidential decision included a six-month period to phase the company’s products out of government systems. However, as several technical sources have pointed out, when an AI model is already integrated into classified networks (i.e. IT infrastructures that handle secret data), its removal is not a simple administrative act. New security certifications, staff training, parallel testing and expensive alternative integrations are needed. Midhun Krishna M, co-founder and CEO of TknOps.io, a cost-tracking service for LLMs, explained:

When AI tools are already integrated into simulation and real-time intelligence systems, decisions made at the top do not immediately translate into concrete changes. (…) When a model is incorporated into classified simulation and intelligence systems, there are sunk costs of integration, redevelopment, new security certifications and parallel testing, so a six-month phase-out may seem decisive, but the real financial and operational burden is much deeper.

The relationship between Anthropic and the United States, as mentioned at the beginning, has its roots in a previous episode: the use of AI in an operation against the president of Venezuela, Nicolás Maduro, which had prompted Anthropic to point out that the use of Claude in that case represented a clear violation of the service’s terms of use. These explicitly prohibit violent applications, the development of autonomous weapons and mass surveillance, i.e. the systematic, large-scale monitoring of civilian populations. Since then, relations between the Pentagon and the company have deteriorated. Secretary of Defense Pete Hegseth accused Anthropic of representing a «national security risk in the supply chain».

After the breakup with Anthropic, OpenAI announced an agreement with the Pentagon for the use of its tools, including ChatGPT, on classified military networks, as the company itself confirmed in a post on X.