
Comments (193)

  • evanmoran
    I’m writing a new type of CRDT that supports move/reorder/remove ops within a tree structure without tombstones. Claude Code is great at writing some of the code, but it keeps adding tombstones back to my remove ops because “research requires tombstones for correctness”. That is true for the usual approach, but the whole reason I’m writing the CRDT is to avoid those tombstones! Long story short, I did eventually convince Claude I was right, but to do it I basically had to write a structural proof showing clear ordering and forward progression in all cases. And even then, compaction tends to reset it. There are a lot of subtleties these systems don’t quite grasp.
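For context on the conventional wisdom Claude kept defaulting to: here is a minimal, illustrative OR-Set-style sketch of why removes are usually implemented with tombstones. This is not the commenter's tree CRDT, and all names are made up; it only shows the standard correctness argument, that a merge must not resurrect an element from a stale replica.

```python
class ORSet:
    """Illustrative OR-Set: each add gets a unique tag; remove tombstones
    the tags it has observed, so merging with a stale replica cannot
    bring a removed element back to life."""

    def __init__(self):
        self.adds = {}        # element -> set of unique add tags
        self.tombstones = {}  # element -> set of removed tags (never GC'd here)
        self.counter = 0

    def add(self, elem, replica_id):
        # A fresh (replica, counter) tag makes this add distinguishable
        # from any earlier add of the same element.
        self.counter += 1
        tag = (replica_id, self.counter)
        self.adds.setdefault(elem, set()).add(tag)
        return tag

    def remove(self, elem):
        # Tombstone only the tags observed locally; a concurrent add with
        # a tag we have not seen survives the merge, as intended.
        observed = self.adds.get(elem, set())
        self.tombstones.setdefault(elem, set()).update(observed)

    def contains(self, elem):
        live = self.adds.get(elem, set()) - self.tombstones.get(elem, set())
        return bool(live)

    def merge(self, other):
        # Merging is a pointwise union; tombstones must travel with the state,
        # which is exactly the overhead tombstone-free designs try to eliminate.
        for e, tags in other.adds.items():
            self.adds.setdefault(e, set()).update(tags)
        for e, tags in other.tombstones.items():
            self.tombstones.setdefault(e, set()).update(tags)
```

Dropping the tombstone set without a replacement mechanism (which is what the commenter's structural proof has to supply) is what breaks the resurrection argument.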
  • lateforwork
    Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude AI. Lattner found nothing innovative in the AI-generated code [1], and this is why humans will be needed to advance the state of the art.

    AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art. AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward. Humans can innovate with help from AI, but AI still requires human direction.

    You can prod AI systems to think critically, but they tend to revert to the mean. When a conversation moves away from consensus thinking, you can feel the system pulling back toward the safe middle.

    As Apple’s “Think Different” campaign in the late 90s put it: the people crazy enough to think they can change the world are the ones who do—the misfits, the rebels, the troublemakers, the round pegs in square holes, the ones who see things differently. AI is none of that. AI is a conformist. That is its strength, and that is its weakness.

    [1] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...
  • 01100011
    I don't expect AI to replace me anytime soon, but... AI is already letting me care less about the languages I use and focus more on the algorithms. AI helps me write tests. AI suggests improvements and catches bugs before compiling. AI writes helper scripts/tools for me. All of these things are good enough for me to accept paying a few hundred dollars every month, although I don't have to, because my employer already does that for me.

    Six months ago I was arguing that AI wasn't very good and that code was more precise than English for specifying solutions. The first part is no longer true for many things I care about. The second is still true, but for many things I care about it doesn't matter.

    I'm getting tired of articles that try to tell me what to think about AI. "AI is great and will replace all programmers!"... "AI sucks and will ruin your brain and codebase!"... both are tired and meaningless arguments.
  • pacman128
    In a chatbot-coding world, how do we ever progress to new technologies? The AI has been trained on numerous people's previous work. If there is no prior art for, say, a new language or framework, the AI models will struggle. How will the vast amounts of new training data they require ever be generated if there is no critical mass of developers?
  • picafrost
    So much of society's intellectual talent has been allocated toward software. Many of our smartest are working on ad-tech, surveillance, or squeezing as much attention out of our neighbors as possible.

    Maybe the current allocation of technical talent is a market failure, and disruption to coding could be a forcing function for reallocation.
  • randcraw
    Krouse points to a great article by Simon Willison, who proposes that the killer role for vibe coding will (hopefully) be to make code better, not just faster. By generating prototypes based on different design models, you can assess each end product against specific criteria like code readability, reliability, or fault tolerance, and then quickly and repeatedly revise it to serve those ends better. No longer would the victory dance of vibe coding be simply "It ran!" or "Look how quickly I built it!".
  • ihodes
    I agree that a programming language can be a better (denser, more precise) encapsulator of intent than natural language. But the converse is more often true: natural language is a denser and more precise encapsulator of intent than a programming language.

    I think there's some irony in Russell's quote being used this way. My intent will often be less clear to a reader once encoded in a language bound inextricably to a machine's execution context. Good abstraction meaningfully whittles away at this mismatch, and DSLs in powerful languages (like ML-family and Lisp-family languages) have often mirrored natural(ish) language. Observe that programming languages themselves have natural language specifications that are meaningfully denser than their implementations, and that often govern multiple implementations.

    Code isn't just code. Some code encapsulates intent in a meaningfully information- and meaning-dense way: that code is indeed poetry, and perhaps the best representation of intent available. Some code, like nearly every line of the code that backs your server-vs-client time example, is an implementation detail. The Electric Clojure version is a far better encapsulation of intent (https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...). A natural language version, executed in the context of a program with an existing client-server architecture, is likely best: "show a live updated version of the server's unix epoch timestamp and the client's, and below that show the skew between them."

    Given that we started with Russell, we could end with Wittgenstein's "Is it even always an advantage to replace an indistinct picture by a sharp one? Isn't the indistinct one often exactly what we need?"
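To make the comment's point concrete, here is a hypothetical minimal sketch of what the one-sentence intent ("show both clocks and the skew") expands into once transport and serialization details enter the picture. The endpoint shape and the JSON field name are assumptions for illustration, not taken from any real tutorial.

```python
import json
import time
from urllib.request import urlopen  # transport detail the intent never mentions

def compute_skew(server_epoch: float, client_epoch: float) -> float:
    """Clock skew in seconds; positive means the server clock is ahead."""
    return server_epoch - client_epoch

def poll_once(url: str) -> tuple[float, float, float]:
    """One iteration of the 'live updating' loop: fetch the server's epoch
    (assumed to be served as JSON like {"epoch": 1700000000.0}), read the
    local clock, and compute the skew."""
    with urlopen(url) as resp:                           # network round trip
        server_epoch = json.loads(resp.read())["epoch"]  # serialization detail
    client_epoch = time.time()
    return server_epoch, client_epoch, compute_skew(server_epoch, client_epoch)
```

Everything except `compute_skew` is plumbing; that ratio is the comment's "implementation detail" argument in miniature.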
  • bluGill
    A week ago there was an article about Donald Knuth asking an AI to prove something then unproven, and it found the proof. I suppose it is possible that the great Knuth didn't know how to find this existing truth, but there is a reason we all doubted it (including me, when I mentioned it there).

    I have never written a C compiler, yet I would bet money that if you paid me to write one (it would take a few years at least), it wouldn't have any innovations, as the space is already well covered. Where mine differed from other compilers would more likely be a case of my doing something stupid that someone who knows how to write a compiler wouldn't.
  • vicchenai
    The integration glue comment really resonates. I've been using agents mostly for wiring up OAuth flows and API integrations between services: stuff where there's no creativity involved, just reading three different docs and getting the tokens right. Saved me hours on stuff I used to dread. But the moment I need to think about actual architecture decisions or tradeoffs, I'm back to my own brain. Feels like that's where things will settle for a while.
  • abcde666777
    It seems inevitable that with any new technology we go through a phase of super duper excitement about the possibilities, where we try to use it to the extreme, and through that process start to absorb what it actually is and isn't capable of.

    The hype cycle's distasteful of course, but I've accepted that this is how humans figure out what things are. Like a child, we have to abuse it before we learn how to properly use it.

    I think many of us sense, and have sensed, that the promises made of agentic programming smell too good to be true, owing to our own experiences as programmers and engineers. But experts in a domain are always the minority, so we have to understand that everyone else is going to have to reach the same intuition the hard way.
  • flitzofolov
    r0ml's third law states that: “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”

    I believe the same pattern is inevitable for these higher-level abstractions and interfaces for generating computer instructions. The language used must ultimately conform to a rigid syntax and produce a deterministic result, a.k.a. "code".

    Source: https://www.youtube.com/watch?v=h5fmhYc4U-Y
  • idopmstuff
    I don't know that people are saying code is dead (or at least the ones who have even a vague understanding of AI's role) - more that humans are moving up a level of abstraction in their inputs. Rather than writing code, they can write specs in English and have AI write the code, much in the same way that humans moved from writing assembly to writing higher-level code.But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.But much like with photo editing, there'll be a lot of cases where you just don't need a high enough level of specificity to use a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. Code's probably not the best, but that just doesn't matter for my case.(There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)
  • deadbabe
    My problem is that while I know “code” isn’t going away, everyone seems to believe it is, and that’s influencing how we work.I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.
  • ljlolel
    Code will be replaced by EnglishScript running on ClaudeVM https://jperla.com/blog/the-future-is-claudevm
  • erichocean
    > If you know of any other snippet of code that can master all that complexity as beautifully, I'd love to see it.Electric Clojure: https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...
  • woeirua
    The argument here seems to be “you need AGI to write good code; good code is required for... reasons; AGI is far away; therefore code is not dead.”

    First, I disagree that good code is required in any sense. We have decades of experience proving that bad code can be wildly successful.

    Second, has the author not seen the METR plot? We went from “LLMs can write a function” to “agents can write working compilers” in less than a year. Anyone who thinks AGI is far away deserves to be blindsided.
  • rvz
    From "code" to "no-code" to "vibe coding" and back to "code".

    What you are seeing here is that many are attempting to take shortcuts to building production-grade, maintainable software with AI, and are now realizing that they have built their software on terrible architecture, only to throw it away and rewrite it, with no one truly understanding the code or able to explain it. We already have a term for that: "comprehension debt". [0]

    With the rise of over-reliance on agents, you will see "engineers" unable to explain technical decisions who will admit to having zero knowledge of what the agent has done. This is exactly what is happening to engineers at AWS, with Kiro causing outages [1] and engineers now required to manually review AI changes [2] (which slows them down even with AI).

    [0] https://addyosmani.com/blog/comprehension-debt/
    [1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...
    [2] https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77...
  • gedy
    When I started my professional life in the 90s, we used Visual J++ (Java), and I remember all the damn code it generated to do UIs. I remember being aghast at all the incomprehensible code and "do not modify" comments, and also at some of the devs who were like "isn't this great?". I remember bailing out ASAP to another company where we wrote Java Swing, and I was so happy we could write UIs directly, with a lot less code to understand. I'm feeling the same vibe these days with the "isn't it great?". Not really!
  • rglover
    It's only dead to those who are ignorant of what it takes to build and run real systems that don't tip over all the time (or leak data, embroil you in extortion, etc.). That will piss some people off, but it's worth considering if you don't want to railroad yourself permanently. Many seem to be so blinded by the glitz, glamour, and dollar signs that they don't realize they're actively destroying their future prospects and reputation by getting all emo about a non-deterministic printer.

    Valuable? Yep. World-changing? Absolutely. The domain of people who haven't the slightest clue what they're doing? Not unless you enjoy lighting money on fire.
  • _pdp_
    Remember Deep Thought, the greatest computer ever built that spent 7.5 million years computing the Answer to the Ultimate Question of Life, the Universe, and Everything? The answer was 42, perfectly correct, utterly useless because nobody understood the question they were asking.That's what happens when you hand everything to a machine without understanding the problem yourself.AI can give you correct answers all day long, but if you don't understand what you're building, you'll end up just like the people of Magrathea, staring at 42 and wondering what to do with it.True understanding is indistinguishable from doing.
  • cratermoon
    I can't tell if the author's "when we get AGI" is sarcasm or genuine.
  • soumyaskartha
    Every few years something is going to kill code, and here we are. The job changes; it does not disappear.
  • cratermoon
    Yet again we can pull out Edsger W. Dijkstra's 1978 article, "On the foolishness of 'natural language programming'":

    "In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. this would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt."
  • developic
    What is this
  • pjmlp
    This is coping. With tools like Boomi, n8n, Langflow, and similar, plenty of tasks can already be automated just by configuring them, and that's it.
  • neversupervised
    The author’s intuition is still calibrated backward, even though he talks about the future. He doesn’t have an intuition for the future. All code will be AI-generated. There’s no way to compete with the AI, and whatever new downsides this brings will be solved in ways we aren’t fully anticipating. But the solution is not to walk back vibecoding. You have to be blind to believe that most code won’t be vibecoded very soon.
  • lionkor
    To all the vibe coders: when you let an LLM author code, it takes ownership of that code (in the engineering sense).

    When you're done spending millions on tokens, years of development, prompt fine-tuning, and model fine-tuning, and have made the AI vendor the fattest wad of cash ever seen, you know what the vendor will do? You have no migration path. Your Codex prompts don't work the same in Claude. All the prompts you developed and saved in commits, all the (probably proprietary) memory the AI vendor saved on their servers to lock you in even more, all of it is worthless without the vendor.

    You are reinventing "ah heck, we need to pay the consultant another 300 bucks an hour to take a look at this, because nobody else owns this code", but supercharged. You're locking yourself in to a single vendor to such a degree that they can just hold your code hostage.

    Now sure, OpenAI would NEVER do this, because they're all just doing good for humanity. Sure. What if they go out of business? Or discontinue the model that works for you, and the new ones just don't quite respond the same to your company's well-established workflows?