It looks like the US used Claude AI to strike a girls' school in southern Iran, killing 168
It’s been a couple of days since comedian Omid Djalili launched a bizarre attack on Zack Polanski, accusing the Green leader of spreading “misinformation” on Twitter. Polanski had called out the Epstein axis over the bombing of a girls’ school in Minab, southern Iran. His post clearly upset Djalili, who offered an outlandish conspiracy theory that the Iranians deliberately bombed their own school and filled it with bodies they had kept on ice since the recent protest violence.
Djalili’s rant gained a ton of traction, which I hope was down to bot activity rather than real people being fooled. Djalili is a regular on the BBC, where he pushes the narrative that Iranians want this regime-change war, as though he speaks for them all. Well, Djalili now looks foolish, because the side responsible for levelling schools in Gaza has finally admitted to bombing this school too. I can sense your shock.
Reuters is reporting that the preliminary assessment of military investigators is that the US was likely behind the attack. The US had initially blamed a misfired Iranian rocket, but that story has fallen apart following investigations from outlets like BBC Verify.
Analysis of video footage and satellite imagery shows the school was hit by multiple near-simultaneous strikes. Horrifically, this appears to have been a double-tap strike, where you bomb once, wait for rescuers to arrive, then bomb again.
Even BBC Verify conceded the missiles likely came from the US, based on the precision munitions and alignment with US operations near the Strait of Hormuz. Don’t forget the BBC initially started every headline with the words “Iran says” to cast doubt on a story it is now conceding is true. This is a pattern we have seen through over two years of genocide in Gaza.
As the evidence became undeniable, the US stopped denying responsibility, but says it’s still investigating—as if it doesn’t know exactly what happened. Pete Hegseth and Marco Rubio insist there was no deliberate targeting of civilians, but the truth is somehow even more chilling: multiple outlets are reporting that Anthropic’s Claude AI likely selected the school as a target. Yes, we’ve entered the nightmare era when machines decide on kills. My thoughts are with Sarah Connor at this difficult time.
The Guardian and WSJ report that Claude played a direct role in target selection for the Iran strikes, including intelligence assessments and battlefield simulations. Claude is integrated into Palantir’s Maven Smart System which crunches intel, prioritises targets, and runs simulations. The Washington Post says Claude generated over 1,000 targets on day one alone, supplying GPS co-ordinates, weapon suggestions, and even legal justifications. Why not just rename the damn thing Skynet?
It seems likely Claude flagged the Minab site after mistaking the school for part of the nearby IRGC base by relying on old maps and flawed data. Reports suggest Claude made errors that led to 168 innocents dying because human operators either didn’t bother with oversight or didn’t care. Either way, that would be a war crime.
Journalists are now asking whom we could prosecute when AI kills civilians, as though this were a profound question. But the answer is simple: the designers of the system, the people who allowed it to choose targets, and the leaders who launched this illegal war. The International Criminal Court should be issuing arrest warrants for all involved.
The irony is that Anthropic has recently clashed with the Trump administration over military use of its AI. The company rejected demands for unrestricted access and refused to enable fully autonomous weapons or mass domestic surveillance. In retaliation, Trump ordered all federal agencies to immediately stop using Anthropic’s tools.
The president denounced the firm as a “Radical Left AI company” and a “supply-chain risk to national security” because it still maintains a shred of morality. Trump imposed a six-month phase-out for embedded systems and blacklisted Anthropic from military contracts. He also pressured contractors to drop the company, yet hours later, Claude was still powering strikes in Iran.
Media reports suggest the military used Claude because it was too deeply embedded in systems like Maven, meaning the US can’t do away with AI, even when it wants to. Isn’t that a reassuring thought?
Thank you for reading. All of my content will always be freely available, but if you wish to support my work, you can do so at Ko-fi or Patreon. Likes, shares and comments also help massively.



I think the whole "Claude" story is to hide the fact that the US has adopted the Israeli Dahiya doctrine lock, stock and barrel. This is shown by the continuing flattening of hospitals, Red Cross posts, squares full of people, etc. In other words, this was totally deliberate. And true: AI is used to select targets, but when you instruct AI to select targets with maximum civilian casualties, it will do so. https://en.wikipedia.org/wiki/Dahiya_doctrine
They can't blame this on AI. The intentional strike on children was a signal of commitment to the mission, both to the Iranians and to the ruling classes and soldiers in the West.