The United States military says it has struck more than 2,000 targets in Iran – dropping bombs, firing off Tomahawk cruise missiles from warships and launching drones.
It looks a lot like traditional warfare. But there’s cutting-edge technology said to be quietly operating in the shadows: artificial intelligence.
Multiple news outlets report the US military is using Anthropic’s Claude family of AI tools – large language models similar to OpenAI’s ChatGPT and Google’s Gemini – despite blacklisting the firm in February after it tried to put up guardrails on how its technology is used.
It’s unclear exactly how the technology is being employed, but experts say it is likely being used to better understand the battle space, to process information on targets that have been hit and to analyse satellite images.
US Secretary of Defense Pete Hegseth said in a briefing this week that the military uses AI, but wouldn’t say if the technology is being used in Iran.
“We’ve got a lot of autonomous systems, or systems that are – drones and others incorporated with smart AI aspects to them – a lot of which I can’t talk about here.”
AI is quickly becoming a mainstay of modern warfare, prompting pushback from some tech companies and sparking fresh debate about the ethics of military partnerships.
Claude, which has been integrated into the Department of Defense’s classified networks since 2024, was also reportedly used during January’s US military operation to capture former Venezuelan President Nicolas Maduro.
That move prompted questions from Anthropic about how its AI was deployed.
The company said it would not allow Claude to be used for mass surveillance of Americans or for fully autonomous weapons.
The Pentagon insists any AI it uses must be available for “all lawful purposes”.
“If you work for Raytheon or Northrop Grumman or Lockheed Martin, you know that you’re in the business of working for the Pentagon pretty much,” said Sarah Kreps, a professor at Cornell University.
“What’s different here is that these are civilian technologies that have been developed for civilian use that are now getting appropriated for military purposes. And that’s where that tension lies.”
After Anthropic refused to capitulate, the Trump administration, in an unprecedented move, designated it a national security risk.
They have now resumed talks, but the spat exposed an ethical faultline in who decides how AI is used in battle as the technology revolutionises warfare.
In Ukraine, the military has already deployed AI-powered, autonomous drones that can identify and strike targets without human intervention.
Israel has used an AI-powered database to identify tens of thousands of Hamas targets in Gaza.
And militaries around the world are pouring billions of dollars into developing, integrating and expanding the use of AI.
“These models aren’t yet perfect. And so if they hallucinate or give a sort of inaccurate output, those are now being used for decisions about life and death,” said Prof Kreps.
“The key thing is, and this gets lost in the conversation, no matter the kind of technology used, whether it’s a bow and arrow, a radar-guided missile or an autonomous weapon system, there’s always a human responsible for the use of force – not just under Pentagon policy but under law and international treaty commitments,” said Michael Horowitz, the former director of the Emerging Capabilities Policy Office at the Pentagon.
“Or at least that’s how it’s supposed to work.”
Adoption of AI has quickly outpaced international regulation, with no single, legally binding treaty that puts up guardrails on how the technology can be used in war.
At least 60 countries have signed on to the US-led Political Declaration on Responsible Military Use of AI, which requires military AI to comply with international law, but there are no legal consequences if signatories fail to abide by the rules.
The United Nations General Assembly has also adopted several resolutions regarding AI’s use in the military.
Experts said the current state of geopolitics may disrupt future cooperation.
In February, global AI players met in Spain for the Responsible Artificial Intelligence in the Military Domain Summit.
At the previous two meetings, roughly 60 countries signed on to the outcome documents. This year, that number was halved, with the US and China sitting on the sidelines.
“I think the current geopolitical moment is making any kind of cooperation surrounding artificial intelligence much more difficult,” said Mr Horowitz.
“It’s hard to see how we end up with a really strong sort of binding international law that prohibits uses of artificial intelligence, at least right now.
“If for no other reason than some of the leading AI players – countries like the US and China – seem unlikely to get on board.”
Originally written by: Toni Waterman
Source: Channel News Asia
Published on: 6 March 2026
Link to original article: US military’s reported use of Claude raises questions about AI in warfare