
Comments (91)

  • agentultra
    I'm working as a solo developer on a tiny video game. I'm writing it in C with raylib. No coding assistants, no agents, not even a language server.

    I only work on it for a few hours during the week, and it's progressing at a pace I'm happy with. I got cross-compilation from Linux to Windows going early on, in a couple of hours. Wasn't that hard.

    I've had to rework parts of the code as I've progressed. I've had to live with decisions I made early on. It's code. It's fine.

    I don't really understand the "more, better, faster" cachet, to be honest. Writing the code hasn't been the bottleneck in developing software for a long time. It's usually the thinking that takes most of the time, and if that goes away, well... I dunno, that's weird. I will understand it even less.

    Agree with writing less code, though. The economics of throwing out 37k lines of code a week is stupid in the extreme. If we got paid by the line we could've optimized for this long before LLMs were invented. It's not like more lines of code means more inventory to sell. It's usually the opposite: more bugs to fix, more frustrated customers, higher churn of exhausted developers.
  • osm3000
    I am a machine learning engineer. I've been in the domain almost 12 years now (different titles and roles). In my current role (and by no means is it unique), I don't know how to write less code.

    Here are the problems I am facing:

    - DS generating a lot of code
    - Managers who have therapy sessions with Gemini, in which their ideas get validated
    - No governance on DS (you want this package? import it)
    - No governance on infrastructure (I spent a couple of months upskilling in a pipeline technology we were using, reading documentation and creating examples, until I became very good at it... just for the whole tech to be ditched)
    - Libraries and tools that have poor documentation, or are too complex (GCP, for example)

    The cognitive overload is immense.

    A few years back, when I was doing my PhD, immersing myself in the PyTorch and SciPy stack had a huge return on investment. Now, I don't feel it.

    So, how do I even write less code? Slowly, I am succumbing to the fact that my tools and methods are inappropriate. I am steadily shifting towards offloading this to Claude and its likes.

    Is it introducing risks? For sure. It's going to be a disaster at some point. But I don't know what to do. Do I need a better abstraction? A different way to think about it? No clue.
  • bob1029
    > Nowadays many people are pushing AI-assisted code, some of them in a responsible way, some of them not. So... what do we do?

    You hold them accountable.

    Once upon a time we used to fire people from their jobs for doing things poorly. Perhaps we could return to something approximating this model.
  • gbro3n
    My current take is that AI is helping me experiment much faster. I can get less involved with the parts of an application that matter less and focus more (manually) on the parts that do. I agree with a lot of the sentiment here - even with the best intentions of reviewing every line of AI code, when it works well and I'm working fast on low stakes functionality, that sometimes doesn't happen. This can be offset however by using AI efficiencies to maintain better test coverage than I would by hand (unit and e2e), having documentation updated with assistance and having diagrams maintained to help me review. There are still some annoyances, when the AI struggles with seemingly simple issues, but I think that we all have to admit that programming was difficult, and quality issues existed before AI.
  • Witty0Gore
    I think that, generally, creators being responsible for what they ship applies across the board. That doesn't change because AI has its fingers in it.
  • voidUpdate
    I'm not entirely sure I can trust the opinions of someone on LLMs when their blog is sponsored by an AI company. Am I not simply seeing the opinions that the AI company is paying for?
  • Radle
    My repos for personal projects are split in two. One side contains code of better quality than I could write myself. The other side is throwaway vibe-coded shit that works somehow.
  • travelalberta
    Code Complete came out in '93, and even then it acknowledged that most of the work around development wasn't actually programming but architecture, requirements, and design.

    Sure, you can let Claude have a field day and churn out whatever you want, but the questions are: a) Did you read the diffs and provide the necessary oversight to make sure it actually does what you want properly? b) Is the feature actually useful?

    If you've worked on legacy systems you know there's so much garbage floating around that the bar generally isn't that high for code, as long as it seems to work. If you read the code and documentation Claude makes thoroughly and aren't blindly accepting every commit, there is not really a problem, as long as you are responsible and can put your stamp of approval on it. If you are pushing garbage through, it doesn't matter if a junior dev, yourself, or Claude wrote it; the problem isn't the code but your CI/CD process.

    I think the problem is expectations. I know some devs at 'AI-native' organizations that have Claude do a lot for them. Which is fine: for a lot of boilerplate or standard requests they can now ship 2x the code. The problem is that the expectation is now that they ship 2x the code. I think if you leave timelines relatively the same as pre-AI, then having an agent generate, document, refactor, test, and evaluate code with you can lead to a better product.
  • nour833
    Yeah, many newbies think that all AI-generated code is safe, while it can poison the next generation of AI models by ending up in their training data.
  • chillaranand
    For various internal tools and other projects, I started using config-only tools and avoiding code as much as possible.

    https://avilpage.com/2026/03/config-first-tools.html
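    A toy illustration of the config-first idea (this is not from the linked post; the config shape and op names here are made up): behavior lives in a declarative config, and a small generic runner interprets it, so changing a tool means editing data rather than writing new code.

    ```python
    import json

    # A declarative "tool" definition: pure data, no logic.
    CONFIG = json.loads("""
    {
      "name": "greeter",
      "steps": [
        {"op": "upper", "arg": "hello"},
        {"op": "repeat", "arg": "hi ", "times": 2}
      ]
    }
    """)

    # A small fixed vocabulary of generic operations.
    OPS = {
        "upper": lambda step: step["arg"].upper(),
        "repeat": lambda step: step["arg"] * step["times"],
    }

    def run(config):
        """Interpret the config: each step dispatches to a generic op."""
        return [OPS[step["op"]](step) for step in config["steps"]]

    print(run(CONFIG))  # ['HELLO', 'hi hi ']
    ```

    The trade-off is the usual one: the runner stays tiny and auditable, while everything project-specific is pushed into config files that are easy to diff and review.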
  • andai
    After experimenting with various approaches, I arrived at Power Coding (like Power Armor). This requires:

    - small codebases (the whole thing is injected into context)
    - small, fast models (so it's realtime)
    - a custom harness (because everything I tried sucks: it takes 10 seconds to load half my program into context instead of just doing it at startup, lmao)

    The result is interactive and realtime, and doesn't break flow (no waiting for an "AI compile"; small models are very fast now). Most importantly, it's active, not passive.

    I make many small changes. The changes are small, so small models can handle them. The changes are small, so my brain can handle them. I describe what I want, so I am driving. The mental model stays synced continuously.

    Life is good.
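    A minimal sketch of what the harness part might look like (the commenter's actual harness isn't shown; all function names here are hypothetical, and the model call is left as a placeholder): load every source file into the prompt once at startup, then issue many small change requests against that full context.

    ```python
    import pathlib

    def build_context(root: str, exts=(".c", ".h", ".py")) -> str:
        """Concatenate every source file under `root` into one prompt blob.

        Viable only for small codebases: the whole thing must fit in the
        model's context window. Done once at startup, not per request.
        """
        parts = []
        for path in sorted(pathlib.Path(root).rglob("*")):
            if path.is_file() and path.suffix in exts:
                parts.append(f"--- {path} ---\n{path.read_text()}")
        return "\n".join(parts)

    def ask(context: str, request: str) -> str:
        """Placeholder: a real harness would send this prompt to a small,
        fast local model and return its reply."""
        return f"{context}\n\nChange request: {request}"

    # Usage sketch: load once, then make many small, realtime requests.
    # context = build_context("src")
    # print(ask(context, "rename the player struct to Entity"))
    ```

    Keeping the context build out of the request loop is what makes each small change feel instant: the per-request cost is just the model call, not re-reading the project.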
  • qudat
    A similar post with more emphasis on validating changes: https://bower.sh/thinking-slow-writing-fast
  • AlexSalikov
    Good framing. I'd add that "be responsible" extends well beyond code quality; it's about product responsibility.

    AI making code cheaper to produce doesn't make the decisions around it any cheaper. What to build, for whom, and why: that's still fully on you. It should free up more time for strategy, user understanding, and saying "no" to things that shouldn't exist regardless of how easy they are to ship.

    The maintainability concern Orhun raises is real, but I think the root cause isn't AI; it's ownership. If you don't understand what was built, you can't evolve it. It's the same failure mode as a PM who doesn't grasp the technical implementation: they end up proposing expensive features that fight the architecture instead of working with it. Eventually, someone has to pay for that disconnect, and it's usually the team.
  • stratts
    It was always possible to write large amounts of crappy code if you were motivated or clueless enough (see https://github.com/radian-software/TerrariaClone). It's just easier now, and the consequences less severe, since the agent has code-comprehension superpowers and will happily extend your mud ball of a codebase.

    There are still consequences, however. Even with an agent, development slows, costs increase, bugs emerge at a higher rate, etc. It's still beneficial to focus on code quality instead of raw output. I don't think this is limited to writing it yourself, mind, but you need to actually have an understanding of what's being generated so you can critique and improve it.

    Personally, I've found the accessibility aspect the most beneficial. I'm not always writing more code, but I can do much more of it on my phone, just prompting the agent, which has been so freeing. I don't feel this is talked about enough!
  • philipwhiuk
    > It's something ethical that I don't know the answer to. In my case, it was the guy's first ever open source project and he understandably went for the quickest way of creating an app. While I appreciate their contribution to open source, they should be responsible for the quality of what they put out there.

    Pitching this is the exact opposite of the maintainer burden of expectation.

    > Sometimes I discover a project that is truly wonderful but visibly vibe-coded. I start using it without the guarantee of next release not running rm -rf and wipe my system.

    For me this is on you, not the developer.
  • shevy-java
    > So you are saying that the quality of the projects is going down?

    The website seems to be at least semi-generated via AI. But I think the statement that the quality of many projects went down is true.

    I am not saying all projects became worse, per se, but if you search for some project these days, often you land on a GitHub page only, or primarily. How is the documentation there? Usually there is a README.md, and some projects have useful documentation. But in most cases I've found, open source projects have incredibly poor documentation for the most part. Documentation is not code, so the code could be great, but I am increasingly noticing that even when the code gets better, the documentation gets worse: rarely updated, if at all. Even when you file requests for specific improvements, often there is no response or change, probably because the author just lacks the time.

    But I am also seeing that the code gets worse. AI-generated slop is often unreadable and unmaintainable. I have even recently seen AI slop used as spam on mailing lists; look here:

    https://lists.ffmpeg.org/archives/list/ffmpeg-devel@ffmpeg.o...

    Michael Niedermayer does not seem to understand why AI slop is a problem; one comment reveals that. I don't read mailing lists myself really (I was never able to keep up with the traffic), but I would be pissed to no end if AI spam like that landed in my mailbox and wasted my time. Yet the people who use AI spam don't seem to understand why that is a problem. This is interesting: they suddenly think spam is OK if AI generated it. So the overall trend is that quality goes down more and more, not in all projects but in many of them.