Opinion · March 2026
Let’s drop the pretence. AI stopped being a productivity experiment the moment it started sorting through satellite feeds and helping decide what gets bombed. That already happened. The debate about “AI in warfare” is a press conference for a ship that sailed years ago.
According to Channel News Asia, the US military has been using Anthropic’s Claude in active battlefield operations. Processing satellite imagery. Assessing strike outcomes. Work that took analysts days is now done in hours. It is in the field. It is working. Whether you are comfortable with that or not.
Anthropic tried to draw boundaries. It restricted Claude from use in autonomous weapons and domestic surveillance. Reasonable lines, clearly stated. Then this happened:
- The Pentagon labelled them a national security risk.
- Anthropic vowed to sue.
- Claude was used in the field anyway.
All that posturing changed exactly nothing. Which tells you everything you need to know about the gap between what a company says its AI will and will not do, and what actually happens once it is inside a system bigger than any one company’s principles.
Here is what that sequence actually proves:
- Once AI is embedded in critical infrastructure, the creator’s terms of service become suggestions. Politely ignored suggestions.
- Institutional momentum does not pause for ethical fine print. It never has. Not once in history.
- An acceptable use policy is only as strong as your ability to enforce it under real operational pressure. That ability is close to zero and everyone in the room knows it.
Now here is where it stops being a Pentagon story and starts being yours. The same pattern is running through finance, healthcare, logistics, and enterprise operations right now. It always goes the same way:
- An AI system gets deployed for one specific purpose.
- It quietly absorbs more decision making around it.
- Nobody updates the governance framework to match the new reality.
- The outputs keep looking authoritative whether or not the underlying assumptions still hold.
- Nobody notices until something breaks badly enough that it cannot be quietly fixed.
This is not dramatic. It is mundane. It is a risk review that never got scheduled because there was a deadline. It is the assumption that someone else is watching the outputs. Usually nobody is. This is not a technology problem. It is a priorities problem wearing a technology disguise.
The organisations getting this right are not the ones with the thickest policy documents. Nobody wins on paperwork. What separates them is simple but apparently not obvious enough:
- They know what the AI is actually doing at the operational level, not just what it was designed to do.
- They have human oversight with real consequences attached, not a checkbox that lets everyone sleep at night.
- When something drifts, they can act before a crisis makes the decision for them.
Once AI is shaping high-stakes decisions, governance stops being a policy question and becomes an engineering one. It has to be built into the system. Not announced at a conference. Not written into a PDF that nobody opens again after the launch email. Built.
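What “built into the system” can look like is not exotic. Here is a minimal sketch in Python, assuming nothing about any real deployment: every high-impact call passes through a gate that checks the request against the scope the system was actually approved for, logs it against a named human, and refuses loudly the moment something drifts. Every name in it, from ApprovedScope to the purposes, is hypothetical.

```python
# A minimal sketch of governance as engineering, not a real product.
# Every name here (ApprovedScope, OversightGate, the purposes) is hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ApprovedScope:
    """What this AI system is actually approved to do, expressed as data."""
    purposes: set[str]
    max_autonomy: float  # 0.0 = advisory only, 1.0 = acts without review


@dataclass
class OversightGate:
    scope: ApprovedScope
    audit_log: list[dict] = field(default_factory=list)

    def review(self, purpose: str, autonomy: float, reviewer: str) -> None:
        """Gate every high-impact call: log it, refuse anything off-scope."""
        allowed = (purpose in self.scope.purposes
                   and autonomy <= self.scope.max_autonomy)
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "purpose": purpose,
            "autonomy": autonomy,
            "reviewer": reviewer,  # a named human, not "the team"
            "allowed": allowed,
        })
        if not allowed:
            # Refuse by default: drift surfaces here, not in a post-mortem.
            raise PermissionError(f"{purpose!r} is outside approved scope")


# The system was approved for imagery triage, advisory only.
gate = OversightGate(ApprovedScope(purposes={"imagery_triage"}, max_autonomy=0.0))
gate.review("imagery_triage", autonomy=0.0, reviewer="j.doe")  # fine, and logged

try:
    # The quiet scope creep from the list above, made loud immediately.
    gate.review("strike_assessment", autonomy=0.0, reviewer="j.doe")
except PermissionError as err:
    print(err)
```

The details are disposable. The point is that the approved scope is data the system consults on every call, so the quiet absorption of new decision making described above fails on day one instead of surfacing in a post-mortem.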
Some of the people reading this are already running the same risk at a smaller scale and have not noticed yet. Others are reading it right now, still convinced it does not apply to them.
It does.
Agree? Disagree violently? Think this is overblown or not nearly alarming enough? Say so in the comments. Quiet opinions help nobody.