TechTonic: US military deploys Anthropic’s Claude AI in Iran strikes despite Trump’s ban

The development has laid bare the deep integration of advanced AI tools into military planning and raised urgent questions about governance, ethics and the future of AI in war


In one of the most striking intersections of artificial intelligence and modern warfare, the US military reportedly deployed Claude, the large language model developed by the San Francisco-based company Anthropic, in its recent attacks on Iran, even after President Donald Trump ordered all federal agencies to stop using it.

According to reports by The Wall Street Journal and other outlets, the US military incorporated Claude into its operational planning during the massive US-Israel strikes on Iranian targets. The AI system was used for intelligence analysis (assisting commanders in processing and interpreting complex battlefield data), target identification, and running simulations to support strategic decisions about how operations might unfold.

As per reports, Claude’s use went beyond mere technical assistance, reflecting how deeply embedded AI tools have become in US military workflows even as political tensions over those tools escalate.

The controversy stems from Trump’s decision, issued just hours before the attack on Iran began, requiring all federal agencies to immediately cease using Anthropic’s AI technologies. Trump publicly denounced Anthropic as a “radical Left AI company” and criticised its leadership for not aligning with military needs, claiming that its reluctance to grant full rights to its technology posed national security risks.

Despite this directive, the US Central Command and other military bodies continued using Claude in the Iran operation.

The reason cited was that Claude was already deeply integrated into military intelligence platforms and no ready substitute could be deployed at such short notice. As the Pentagon itself acknowledged, detaching from a widely embedded technology could not happen overnight.

Anthropic’s Claude had become one of the few AI systems cleared for use within classified US military networks, allowing it to handle sensitive intelligence data. Its integration involved partnerships with defence-oriented data systems and cloud infrastructure, which made it useful for real-world operational tasks.

This integration had begun well before the Iran conflict, including deployment in operations such as the mission to capture Venezuelan President Nicolás Maduro. Anthropic later objected that such use violated its terms, which explicitly bar the deployment of Claude for lethal autonomous weapons or surveillance without human oversight.
