Comments (188)

  • simonw
    This is pretty recent - the survey they ran (99 respondents) was August 18 to September 23 2025, and the field observations (watching developers for 45 minutes followed by a 30-minute interview, 13 participants) were August 1 to October 3.
    The models were mostly GPT-5 and Claude Sonnet 4. The study was too early to catch the 5.x Codex or Claude 4.5 models (bar one mention of Sonnet 4.5).
    This is notable because a lot of academic papers take 6-12 months to come out, by which time the LLM space has often moved on by an entire model generation.
  • runtimepanic
    The title is doing a lot of work here. What resonated with me is the shift from “writing code” to “steering systems” rather than the hype framing. Senior devs already spend more time constraining, reviewing, and shaping outcomes than typing syntax. AI just makes that explicit. The real skill gap isn’t prompt cleverness, it’s knowing when the agent is confidently wrong and how to fence it in with tests, architecture, and invariants. That part doesn’t scale magically.
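    A minimal sketch of what that fencing can look like - a property-style invariant over a hypothetical normalize_path() an agent was asked to write (the names and the library choice are illustrative, not from the paper):

    ```python
    # Sketch only: a hypothesis-based invariant test fencing in agent-written code.
    # normalize_path() is a hypothetical stand-in, not from any real codebase.
    from hypothesis import given, strategies as st


    def normalize_path(path: str) -> str:
        # Hypothetical agent-written implementation under review.
        parts = [p for p in path.split("/") if p not in ("", ".")]
        return "/" + "/".join(parts)


    @given(st.text(alphabet="abc./", max_size=40))
    def test_normalize_path_is_idempotent(path: str) -> None:
        # Invariant: normalizing an already-normalized path changes nothing.
        # The agent can rewrite the body however it likes; this must still hold.
        once = normalize_path(path)
        assert normalize_path(once) == once
    ```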
  • lesuorac
    > Most Recent Task for Survey / Number of Survey Respondents:
    > Building apps: 53
    > Testing: 1
    I think this sums up everybody's complaints about AI-generated code. Don't ask me to be the one to review work you didn't even check.
  • AYBABTME
    It feels like we're doing another lift to a higher level of abstraction. Just as "automatic programming" and high-level programming languages freed us from assembly - letting authors express higher-level abstractions without having to know or care about the assembly underneath (a switch that took decades) - we are once again being pulled up another layer.
    We're in the midst of another abstraction level becoming the working layer - and that's not a small layer jump but a jump to a completely different plane. And I think once again we'll benefit from getting tools that help us specify the high-level concepts we intend, and ways to enforce that the generated code is correct - not necessarily fast or efficient, but at least correct - the same way compilers do. And this lift is happening on a much more accelerated timeline.
    The problem of ensuring correctness of the generated code across all the layers we're now skipping is going to be the crux of how we manage to leverage LLM/agentic coding.
    Maybe Cursor is TurboPascal.
  • danavar
    So much of my professional SWE job isn't even programming - I feel like this is a detail missed by so many. People generally stereotype the SWE as a programmer, but being an engineer (in any discipline) is so much more than that. You solve problems. AI will speed up the programming work-streams, but there is so much more to our jobs than that.
  • websiteapi
    we've never seen a profession drive itself so aggressively toward irrelevance. software engineering will always exist, but it's amazing the pace at which pressure on the profession is rising. 2026 will be a very happy new year indeed for those paying the salaries. :)
  • geldedus
    The "Ai-assisted programming" mistaken for "vibe coding" is getting old and annoying
  • throw-12-16
    Getting big "I'll keep making saddles in the era of automobiles" vibes from these comments.
  • amkharg26
    The title is provocative but there's truth to it. The distinction between "vibing" with AI tools and actually controlling the output is crucial for production code.
    I've seen this with code generation tools - developers who treat AI suggestions as magic often struggle when the output doesn't work or introduces subtle bugs. The professionals who succeed are those who understand what the AI is doing, validate the output rigorously, and maintain clear mental models of their system.
    This becomes especially important for code quality and technical debt. If you're just accepting AI-generated code without understanding architectural implications, you're building a maintenance nightmare. Control means being able to reason about tradeoffs, not just getting something that "works" in the moment.
  • ramoz
    > Takeaway 3c: Experienced developers disagree about using agents for software planning and design. Some avoided agents out of concern over the importance of design, while others embraced back-and-forth design with an AI.
    I'm in the back-and-forth camp. I expect a lot of interesting UX to develop here. I built https://github.com/backnotprop/plannotator over the weekend to give me a better way to review & collaborate around plans - all while natively integrated into the coding agent harness.
  • senshan
    Excellent survey, but one has to be careful when participating in such surveys:
    "I’m on disability, but agents let me code again and be more productive than ever (in a 25+ year career). - S22"
    Once the Social Security Administration learns this, there goes the disability benefit...
  • andy99
    Is the title an ironic play on AI’s trademark writing style, is it AI generated, or is the style just rubbing off on people?
  • banbangtuth
    You know what? After seeing all these articles about AI/LLMs these past 4 years, about how they are going to replace me as a software developer and about how I am not productive enough unless I'm using 5 agents and being a project manager...
    I. Don't. Care.
    I don't even care about those debates. Do LLMs work, and will they replace programmers? Say they do - OK, so what?
    I simply have too much fun programming. I am just a mere fullstack business-line programmer, a generic, random, replaceable dude; you can find me a dime a dozen.
    I do use LLMs as a Stack Overflow/docs replacement, but I always write all my code by hand.
    If you want to replace me, replace me. I'll go to companies that need me. If there are no companies that need my skill, fine, then I'll just do this as a hobby and probably flip burgers to make a living.
    I don't care about your LLM, I don't care about your agent, and I probably don't even care about the job prospects if I'm forced to use tools I don't like and workflows I don't like. You can go find others who are willing to do it for you.
    As for me, I simply have too much fun programming. Now if you'll excuse me, I need to go have fun.
  • senshan
    I often tell people that agentic programming tools are the best thing since cscope. The last 6 months I have not used cscope even once after decades of using it nearly daily.[0] https://en.wikipedia.org/wiki/Cscope
  • game_the0ry
    > Through field observations (N=13) and qualitative surveys (N=99)...Not a statistically significant sample size.
  • 000ooo000
    Have to wonder about the motivations of research when the intro leads with such a quote.
  • zkmon
    I haven't seen a definition of an agent in the paper. Do they differentiate agents from generic online chat interfaces?
  • SunlitCat
    Funny how the title alone evokes the old “real programmers” trope https://xkcd.com/378/
  • zwnow
    Idk, I still mostly avoid using it, and if I do, I just copy and paste shit into the Claude web version. I won't ever manage agents, as that sounds just as complicated as coding shit myself.
  • softwaredoug
    The new layer of abstraction is tests - mostly end-to-end and integration tests. They describe the important constraints to the agents; essentially long-lived context.
    So essentially what this amounts to is declarative programming of overall system behavior.
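    A toy sketch of the idea (the Cart module and test are hypothetical stand-ins, not from the paper): the test pins observable behavior, and an agent is free to rewrite the implementation as long as the behavior keeps holding.

    ```python
    # Sketch: a behavior-pinning test acting as the durable spec / long-lived context.
    # Cart is a hypothetical example, not from the paper or any real codebase.
    from dataclasses import dataclass, field

    import pytest


    @dataclass
    class Cart:
        items: dict[str, int] = field(default_factory=dict)

        def add(self, sku: str, qty: int) -> None:
            if qty <= 0:
                raise ValueError("quantity must be positive")
            self.items[sku] = self.items.get(sku, 0) + qty

        def total_units(self) -> int:
            return sum(self.items.values())


    def test_cart_behavior_stays_fixed() -> None:
        # Declarative constraints on *what* the system does; an agent may change
        # *how* it does it, but these assertions have to keep passing.
        cart = Cart()
        cart.add("sku-1", 2)
        cart.add("sku-1", 3)
        assert cart.total_units() == 5  # quantities accumulate per SKU

        with pytest.raises(ValueError):
            cart.add("sku-2", 0)  # zero or negative quantities are rejected
    ```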
  • 4b11b4
    I like to think of it as "maintaining fertile soil"
  • andrewstuart
    Don’t let anyone tell you the right way to program a computer.
    Do it in the way that makes you feel happy, or conforms to organizational standards.