
Comments (33)

  • hibikir
    LLMs don't lack the virtue of laziness: they have it if you want them to, by way of a base prompt that matches your intent. I've had good success convincing Claude-backed agents to aim for minimal code changes, make deduplication passes, and follow basically every other reasonable "instinct" of a very senior dev. It's not knowledge the models haven't integrated, just knowledge that many don't keep at the forefront with default settings. I bet we've all seen the models that over-edit everything and act like the crazy mid-level dev who fiddles with the entire codebase without caring one bit about anyone else's changes, or about the risk of knowledge loss from over-fiddling.

    And on Jess's comments on validating docs vs. generating them... it's a traditional locking problem, with traditional solutions. And it's not as if the agent cannot read git and realize when one thing is done first, in anticipation of the other, by convention.

    I'm quite senior: in fact, I have been a teammate of a couple of the people mentioned in this article, and I suspect they'd not question my engineering standards. Yet I've not seen any of that kind of debt in my LLM workflows: if anything, by most traditional measures of software quality, the projects I work on are better than they were 5 or 10 years ago, using the same metrics as back then. It's not magic or anything, but making sure the agents running share those quality priorities. But I am getting work done, instead of spending time looking for attention at conferences.
  • kvisner
    I see what Martin is saying here, but you could make that argument for moving up the abstraction layers at any point. Assembly to Python creates a lot of intent and cognitive debt by his definition, because you didn't think through how to manipulate the bits on the hardware; you just allowed the interpreter to do it.

    My counter is that technical intent, in the way he is describing it, only exists because we needed to translate human intent into machine language. You can still think deeply about problems without needing to formulate them as domain-driven abstractions in code. You could mind-map it, or journal about it, or put post-it notes all over the wall. Creating object-oriented abstractions isn't magic.
  • backprop1989
    > The problem is that LLMs inherently lack the virtue of laziness.

    I assure you, they do not.
  • ryanisnan
    I think Martin isn't wrong here, but I've firsthand seen AI produce "lazy" code, where the answer was actually more code.

    A concrete example: I had a set of Python models that defined a database schema for a given set of logical concepts. I added a new logical concept to the system, very analogous to the existing logical set. Claude decided that it should just reuse the existing model set, which worked in theory, but caused the consumers to have to do all sorts of gymnastics to do type inference at runtime. It "worked", but it was definitely the wrong layer of abstraction.
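    A minimal Python sketch of the shape of the problem described above (all names are hypothetical, not from the commenter's codebase): reusing one model for two distinct concepts pushes type discrimination onto every consumer at runtime, while a model per concept costs more code but lets the type system carry the intent.

```python
from dataclasses import dataclass
from typing import List, Optional

# The "lazy reuse" version: one model pressed into service for two
# concepts, distinguished only by a runtime discriminator field.
@dataclass
class LedgerEntry:
    kind: str                        # "invoice" or "credit_note"
    amount: float
    reference: Optional[str] = None  # only meaningful for credit notes

def total_invoiced(entries: List[LedgerEntry]) -> float:
    # Every consumer must re-derive the concept from the discriminator.
    return sum(e.amount for e in entries if e.kind == "invoice")

# The "more code" version: one model per concept, so no runtime
# type gymnastics are needed anywhere downstream.
@dataclass
class Invoice:
    amount: float

@dataclass
class CreditNote:
    amount: float
    reference: str

def total(invoices: List[Invoice]) -> float:
    return sum(i.amount for i in invoices)
```

    The second version is more code up front, which is presumably why the model avoided it, but it is the right layer of abstraction: consumers of `Invoice` never have to ask what kind of thing they are holding.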
  • brodouevencode
    > ...to develop the powerful abstractions that then allow us to do much more, much more easily. Of course, the implicit wink here is that it takes a lot of work to be lazy

    This lines up with YAGNI, but most people believe the opposite, often using YAGNI to justify NOT building the necessary abstractions.
  • PaulHoule
    Hits the spot for me. I am always pushing back on AI to simplify and improve concision.
  • mfiguiere
    Wrong link. Technical, Cognitive and Intent Debt was discussed here: https://martinfowler.com/fragments/2026-04-02.html