Children filled the school and park; the algorithm marked them for death
How the US and Israel are using AI to automate mass murder in Iran
TEHRAN — During the first days of the 2026 U.S.-Israeli aggression against Iran, families gathered at Tehran's Police Park.
Children chased each other across the grass. Parents pushed strollers along shaded paths.
Then the missiles came.
Somewhere in a command center, an artificial intelligence system had scanned satellite imagery and street names, detected the word "police," and flagged the location as a government target.
Even if it had been a police station, the strike would still have been criminal aggression. But the error starkly exposes how callously these algorithms, and those who deploy them, turn civilians into targets.
Did a human analyst review the coordinates and pull up photographs showing playground equipment and picnic blankets? Or did the algorithm decide based on the data it was fed?
Either way, the decision was executed. And Iranian families paid with their blood.
This is not a malfunction. This is how war is waged in the twenty-first century—by machines that kill without conscience, enabled by humans who refuse to look.
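To see how such a mistake can happen, consider a deliberately crude sketch of keyword-based place classification. This is a hypothetical illustration only; the actual targeting software is classified, and every function name and keyword below is invented.

```python
# Hypothetical illustration of a naive keyword heuristic of the kind
# the Police Park reporting describes. Not actual targeting code.

GOVERNMENT_KEYWORDS = {"police", "ministry", "barracks", "garrison"}

def classify_location(name: str) -> str:
    """Flag a place as 'government' if its name contains a keyword."""
    tokens = name.lower().split()
    if any(tok in GOVERNMENT_KEYWORDS for tok in tokens):
        return "government"
    return "civilian"

# A public park named after the police force trips the same match
# as a police headquarters would.
print(classify_location("Police Park"))  # -> government
print(classify_location("Laleh Park"))   # -> civilian
```

A park trips the same match as a headquarters; everything depends on the data the system was fed, and on whether anyone looks before firing.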
From Gaza's laboratory
The AI systems now consuming Iranian lives have been ruthlessly tested in Gaza, where Israel spent over two years treating 2.3 million Palestinians as laboratory specimens for what tech journalist Jacob Ward calls "lethal beta."
+972 Magazine's investigation exposed the machinery. There is "The Gospel" (Habsora)—a "mass assassination factory" that processes surveillance data into bombing targets at industrial scale.
Where human analysts once generated fifty targets a year, a pace that was itself deliberately murderous, The Gospel produces one hundred per day.
There is "Lavender"—a machine learning system that ranked Gaza's entire male population by probability of militant affiliation, flagging 37,000 men for assassination in six weeks.
Its "error rate," according to the Israeli magazine: ten percent. Meaning thousands of innocent civilians were algorithmically marked for death.
Intelligence officers told investigators they spent roughly twenty seconds reviewing each Lavender recommendation.
Twenty seconds to end a human life and the lives of everyone near it. "We were not interested in how the machine arrived at its conclusions," one source admitted. "We only wanted to know if the target was male."
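The arithmetic behind that error rate deserves spelling out, because it is the whole story. Here is a minimal sketch using only the figures +972 reported; the score-threshold demonstration below it is a generic classifier illustration, not a claim about Lavender's actual internals, which remain classified.

```python
# Back-of-envelope arithmetic from the figures +972 reported:
# 37,000 people flagged, a ten percent rate of misidentification.

flagged = 37_000
error_rate = 0.10

print(f"people wrongly flagged: {flagged * error_rate:,.0f}")  # 3,700

# In any score-threshold classifier, the cutoff is a single
# human-chosen number. Lowering it sweeps more people over the line,
# guilty or not. The scores below are invented for illustration.
scores = [0.31, 0.48, 0.55, 0.72, 0.90]
for threshold in (0.7, 0.5):
    hits = sum(s >= threshold for s in scores)
    print(f"threshold {threshold}: {hits} of {len(scores)} flagged")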
Then there is "Where's Daddy?" This system tracked flagged individuals via their mobile phones.
When a "target" entered his family home, it alerted operators, enabling strikes specifically timed to maximize civilian casualties.
The Israeli military deliberately waited until men were surrounded by their wives and children before bombing.
This is not collateral damage. This is collateral design.
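The tracking mechanic itself is unremarkable: it is the same geofencing logic found in ordinary consumer phone apps. A minimal sketch under that assumption, with invented coordinates and radius; nothing here is drawn from the actual system.

```python
# Generic geofence alert of the kind consumer apps use. Invented
# coordinates and radius, not the system described above.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

HOME = (35.6892, 51.3890)  # hypothetical home coordinates
RADIUS_M = 50

def phone_at_home(lat, lon) -> bool:
    """Alert condition: the tracked phone is inside the home fence."""
    return distance_m(lat, lon, *HOME) <= RADIUS_M

print(phone_at_home(35.6893, 51.3891))  # True: inside the fence
```

The horror is not in the code; it is in the decision about what to do when the alert fires.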
The Silicon Valley pipeline
The United States has not merely funded and endorsed Israel's algorithmic warfare—it has integrated it into its own operations against Iran with breathtaking speed.
The Washington Post reports that U.S. forces deployed "the most advanced artificial intelligence it's ever used in warfare" in the current campaign of aggression.
At the center stands Palantir Technologies, the data analytics firm founded by Peter Thiel and led by CEO Alex Karp.
Palantir's "Maven Smart System" reportedly U.S. commanders identify over one thousand Iranian "targets" in the war's first twenty-four hours alone—a number that would have taken human analysts months to compile.
Maven digests satellite imagery, communication intercepts, signals intelligence, and even power consumption patterns to locate Iranian leadership and military assets.
Its "Gotham" platform fuses these streams into targeting recommendations delivered directly to strike operators.
Until recently, Maven incorporated Claude—an AI language model created by Anthropic.
The pairing generated hundreds of potential targets, provided precise geographic coordinates, and ranked them by operational priority.
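Maven's and Gotham's internals are proprietary, but score fusion and priority ranking of this general kind are textbook techniques. A minimal sketch follows, with every field name, weight, and score invented for illustration; nothing here describes Palantir's actual software.

```python
# Generic multi-source score fusion and ranking, invented for
# illustration. All fields, weights, and scores are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    imagery: float      # per-source confidence scores in [0, 1]
    signals: float
    power_usage: float

WEIGHTS = {"imagery": 0.5, "signals": 0.3, "power_usage": 0.2}

def priority(c: Candidate) -> float:
    """Fuse per-source scores into a single weighted priority."""
    return (WEIGHTS["imagery"] * c.imagery
            + WEIGHTS["signals"] * c.signals
            + WEIGHTS["power_usage"] * c.power_usage)

candidates = [
    Candidate("site A", 0.9, 0.4, 0.7),
    Candidate("site B", 0.6, 0.8, 0.9),
]

# One sort call turns fused sensor feeds into a ranked strike list.
for c in sorted(candidates, key=priority, reverse=True):
    print(f"{c.name}: {priority(c):.2f}")
```

The speed the Post describes falls out of exactly this kind of reduction: once everything is a number, a thousand "targets" is one sort away.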
The Minab massacre
On February 28, a missile struck the Shajar-e Tayyeb elementary school for girls in Minab, Hormozgan province.
School was in session. The first strike killed children at their desks. The second, minutes later—a classic "double tap"—killed first responders and parents racing to pull bodies from the rubble.
Approximately 150 girls, most between seven and twelve years old, were murdered that day.
Wes Bryant, a former Pentagon targeting official, told The New York Times the evidence pointed to "perfectly precise" strikes, meaning the school itself was the intended coordinate and was likely hit because of a "misidentification" by the targeting system.
The Pentagon's response has been a masterclass in evasion. War Secretary Pete Hegseth stated, "We never target civilian targets," while confirming "an investigation."
White House press secretary Karoline Leavitt went further, accusing Iran of using "propaganda quite effectively" and suggesting reporters had "fallen for that propaganda."
Trump himself resorted to a shameless lie, claiming the strike "was done by Iran."
Iran’s Foreign Minister Abbas Araghchi described the U.S. president's accusation as "funny," adding that "there is evidence that this school was attacked by an American jet fighter."
The message is unmistakable: when American and Israeli bombs kill Iranian children, the problem is not the bombs. The problem is that anyone noticed.
The AI alibi
This brings us to the most insidious function of algorithmic warfare: accountability evaporation.
When a school is bombed, when a park is destroyed, when a family is incinerated in their home—who is responsible?
The officer who approved the strike in twenty seconds? The programmer who wrote the code? The commander who set the threshold values? The company that sold the system? The political leaders who authorized the war?
The answer being pushed in the age of AI warfare is that it may be no one at all.
The New Republic observed that "autonomous weapons are, by design, accountability-dissolving machines."
When an algorithm makes a targeting recommendation and a human approves it without adequate information, or with no review at all, as many systems may soon be fully human-free, the chain of responsibility dissolves into the machine.
Peter Asaro, chair of the International Committee for Robot Arms Control, posed the essential question: "If something does go wrong, then who's responsible?"
The question hangs unanswered over the rubble of Minab, over the blood-stained grass of Police Park, over the more than 1,230 Iranian civilians killed in this war so far—including at least 175 children.
We can already anticipate the defense: the AI made a mistake. The threshold values were set too low. The training data was insufficient.
But this is a convenient fiction. The systems were designed by humans.
The decision to bomb family homes is a human decision. The decision to spend twenty seconds reviewing a target before authorizing death is a human decision.
Blaming AI is the ultimate evasion. It allows the architects of mass killing to wash their hands of blood while continuing to profit from the technologies that spilled it.