
Comments (187)

  • beloch
    "Three clicks convert a data point on the map into a formal detection and move it into a targeting pipeline. These targets then move through columns representing different decision-making processes and rules of engagement. The system recommends how to strike each target – which aircraft, drone or missile to use, which weapon to pair with it – what the military calls a “course of action”. The officer selects from the ranked options, and the system, depending on who is using it, either sends the target package to an officer for approval or moves it to execution."

    Maven is a tool for use in the middle of a war. When both sides are firing, minutes saved can mean lives saved for your side. Those lives, at least partly, balance the risks of hitting a bad target.

    This was not a strike made in the middle of a war. If Maven was used in the strike that took out a school, it was being used as part of a sneak attack. Nobody was shooting back while this was being planned. Minutes saved were not lives saved. There should have been a priority placed on getting the targets right. Humans should have been double- and triple-checking every target by other means. That clearly didn't happen. The school was obviously a school; it even had its own website. Humans would have spotted this if they had done more than make their three clicks and move on to the next target.

    Whoever made the choice to use Maven to plan a sneak attack without careful checking made an unforced error when they had all the time in the world to prevent it. Whether it was overconfidence in their tools or a complete disregard for the lives of civilians that caused this lapse, they are directly responsible for the deaths of those little girls. I sincerely hope there are (although I doubt there will be) consequences for this person beyond taking that guilt to their grave.
  • Lerc
    "the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target."

    This article is the first I have seen mention Claude in relation to this specific incident. There's been plenty of talk about AI use in warfare in general, but in the case of this school, most of the coverage I have seen suggested outdated information and procedures not properly followed.
  • phillipcarter
    Worth mentioning that the author wrote about this first on his substack: https://artificialbureaucracy.substack.com/p/kill-chain
  • tristanj
    A similar situation happened a few weeks ago when the US/Israel started targeting police facilities. They bombed a public park in Iran called "Police Park" because it had the name "Police" in it. It's a normal park.

    https://x.com/clashreport/status/2029574288253026510
    https://x.com/tparsi/status/2029555364262228454
  • tunesmith
    Really fascinating article. Bits of bias here and there, like "The US military has been trying to close the gap between seeing something and destroying it for as long as that gap has existed" -- you can respond to seeing and understanding something without destroying it -- but it underscores, to me at least, how much denser the "fog of war" has become. The fog of media reporting in general. Those first few paragraphs felt like a breath of fresh air.
  • jmward01
    I blame everything on the air force AOC concept and the joint targeting cycle. They are, at their core, an attempt to manage every aspect of a war from one room. It 'works' in peacetime, when you have exactly 3 real decisions to make a day and a staff of hundreds to orchestrate it, but in war it is completely unresponsive, blind because all information comes through the telephone game, and bought 100% into the 'if all you have is a hammer, everything is a nail' problem. We have bombs. Let's bomb them. This is why we lose wars.

    Our operational level of war is junk. We have forgotten how to create a task force that has a clear mission with a clear duration, resources, battlespace, ROE and, most importantly, authority to act. McChrystal 'rediscovered' empowering small teams, as every flag officer eventually does in war. If supporting the commander's cycle means enabling them to make all the decisions, then you have just decided to lose the war. They can't make all the decisions. They need to expand that decision-making power. That is their job. Build teams that have the authority and resources. Let those teams, if needed, also build teams if the problem is too big. Most importantly, though, let those teams act. If you can't trust those commanders to make decisions and act on them, then you shouldn't have put them in the job. Divide and conquer is the only solution here, and the JTS/AOC model of warfare is the antithesis of it.
  • machinecontrol
    Interesting article. Seems like AI-washing isn't just for layoffs anymore.
  • tacheiordache
    I wonder if the coordinates for the school were passed to US intel on purpose, to entrench the US deeper in a war with Iran. Who would benefit from this?
  • keiferski
    Before it was the gods, then God, then Nature, and now AI. Human beings really have a fundamental issue with accepting responsibility for their actions.

    From a certain angle, the entire industrial and computer age looks like a massive effort to remove all responsibility for our actions, permanently.
  • coffinbirth
    The US is a morally and ethically bankrupt country; that's why something like this happens. Not the first time either[1].

    [1]: https://en.wikipedia.org/wiki/Amiriyah_shelter_bombing
  • burnte
    When AI gets something wrong, it's the operator's fault, IMO.
  • DrProtic
    It’s well known that US doesn’t commit war crimes, they just make mistakes.
  • Betelbuddy
    Its not a war crime if the AI does it?
  • ZeroGravitas
    The House of Saud put out an interesting think piece suggesting the whole war might be a result of AI psychosis.

    https://news.ycombinator.com/item?id=47540422

    The submission here is flagged dead though.
  • shykes
    You can't have a serious discussion of this bombing without addressing the information warfare component. To this day we don't know what actually happened. Between the general public and the facts, there are many middlemen, all with their own distorting factor: the IRGC; the US government; western press outlets such as the Guardian; and the people quoted by the press.

    The IRGC is making claims that no other party can verify first-hand. Everything from the number of explosions, the extent of the physical damage, the number of wounded and dead, the number of civilians wounded and dead – these are all unverified claims and should be treated as such. Not only is the IRGC obviously biased and incentivized to maximize media pressure on the US and Israel: they are known for information warfare of exactly this nature. To take their statements at face value, and present them as established facts in the opening paragraph, as this article does, is journalistic malpractice.

    Again, the basic facts on the ground are not known, yet all parties are projecting narratives with a certainty that we should all be suspicious of. Without this stable foundation of knowing what actually happened, and why, the very premise of this article collapses on itself.

    EDIT: the flurry of responses to this post illustrates the problem. It's difficult to even have a respectful, fact-driven discussion on this topic, because everyone is tempted (and encouraged) to rush to their political battle stations. Nobody wants to discuss information warfare, because they're too busy engaging in it. I think that's worrying and problematic. No matter which "side" you're on, it should be possible to distinguish what is known from what is not, and to implement basic information hygiene. Or do you think you are uniquely immune to disinformation?
  • worik
  • albatross79
    Has Iran done anything to the US in the last 50 years as heinous as this one "mistake"? Imagine if some country did this to us and just brushed it off as a mistake.
  • sessionfs
    AI makes mistakes, we all know that.
  • nailer
    The school was located adjacent to (or on the border of) an Islamic Revolutionary Guard Corps (IRGC) naval compound/base. Evidence (satellite imagery, verified videos, missile fragments consistent with a US Tomahawk cruise missile, and geolocation) shows the area was targeted as part of strikes on that military site.

    > Within days, the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target.

    Really? Everyone thought the US had *missed*.
  • jameskilton
    Something that a lot of tech people, especially in Silicon Valley, seem to want to forget is that at every level you still have people making decisions. AI is suggesting, but someone, somewhere, still has to make the decision to act on that suggestion.

    It's still people doing people things.
  • ck2
    You know how that was done with a Tomahawk.

    They've now burnt through almost ONE THOUSAND of those.

    They cost $4 million each, so that's another $4 BILLION that has to be replaced too.

    Imagine several more months of that, or even through 2029.
  • EtienneDeLyon
    Isn't it a more reasonable explanation that the IDF deliberately had this school bombed because those schoolgirls were the children of Islamic Revolutionary Guard Corps officers?

    The intentional murder of enemy children is a tactic of the IDF. They've done it for decades.
  • expedition32
    Are Americans not even aware of their own history? The US carpet-bombed the SHIT out of Korea and Vietnam. All it did was convince their enemies to fight.

    And then in Afghanistan and Iraq, the US, terrified of every shadow, blew up anything that looked suspicious, again only serving their enemies.

    It is all just so damn tiresome, and America never learns, because it literally cannot go 5 years without starting some unnecessary and ultimately futile conflict. Imagine how much money China is saving.
  • pugchat
    [dead]
  • throwaway613746
    AI isn't an excuse for war crimes. Remember this at, and after, election time.
  • gowld
    [flagged]
  • csmpltn
    [flagged]
  • nahuel0x
    Israel and the US are bombing lots of schools, hospitals and civilian infrastructure; this is not the only case. This is intentional genocide, not a software/organizational/human error.
  • amarant
    > The targeting for Operation Epic Fury ran on a system called Maven. Nobody was arguing about Maven.

    Would it be poor taste to make a joke about Gradle being superior here? The dad in me really wants to make that joke...
  • sva_
    Turning a military building into a girls' school, and then having this school right next to other military buildings - is this something that happens often? Or were there ulterior motives behind it?
  • ognav
    The Guardian carrying water for the AI industry. The distinction between Maven and Claude is futile. We get that Maven is Palantir, but it integrates Claude:

    https://www.reuters.com/technology/palantir-faces-challenge-...

    Going into a generic rant about anti-AI people after missing sources, and believing the Department of War, is just extremely poor journalism from the newspaper that destroyed evidence after a command from GCHQ.

    I hope this is a single "journalist" and that the Guardian has not been bought.
  • rnab147
    WaPo writes that Claude selected targets:

    https://www.washingtonpost.com/technology/2026/03/04/anthrop...

    This unknown Guardian contributor writes a missive against "Luddites" while using the typical AI-booster arguments that always turn anti-AI arguments back around. Just like two five-year-olds: "You have a big nose." "No, you have a big nose."

    We learn from this clown that anti-AI people suffer from AI psychosis because they are reading WaPo and Reuters.