
Comments (203)

  • snickerer
After working with agent-LLMs for some years now, I can confirm that they are completely useless for real programming. They never helped me solve complex problems with low-level libraries. They cannot find nontrivial bugs. They don't get the logic of interwoven layers of abstraction. LLMs pretend to do all of this with great confidence and fail miserably. For every problem I need to turn my brain to ON MODE and wake up; the LLM never wakes up.

    It surprised me how well it solved another kind of task: I told it to set up a website with an SQL database and scripts behind it. When you click here, show some filtered list there. Worked like a charm. A very solved problem with very simple logic, done a zillion times before (something like the sketch below is the level of logic involved). But it saved me a day of writing boilerplate.

    I agree that there is no indication that LLMs will ever cross the border from simple-boilerplate-land to understanding-complex-problems-land.
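    To make "simple-boilerplate-land" concrete, here is a minimal sketch of that click-here-show-a-filtered-list task. Flask and SQLite are assumptions for illustration; the comment never names a stack, and the table and file names are hypothetical.

        # Minimal "filtered list" endpoint: /items?category=books
        # returns only the rows in that category.
        import sqlite3

        from flask import Flask, g, jsonify, request

        app = Flask(__name__)
        DB_PATH = "items.db"  # hypothetical database file

        def get_db():
            # One SQLite connection per request, reused within it.
            if "db" not in g:
                g.db = sqlite3.connect(DB_PATH)
                g.db.row_factory = sqlite3.Row
            return g.db

        @app.teardown_appcontext
        def close_db(exc):
            db = g.pop("db", None)
            if db is not None:
                db.close()

        @app.route("/items")
        def items():
            # "When you click here, show some filtered list there."
            category = request.args.get("category")
            query = "SELECT id, name, category FROM items"
            args = ()
            if category:
                query += " WHERE category = ?"
                args = (category,)
            rows = get_db().execute(query, args).fetchall()
            return jsonify([dict(r) for r in rows])

        if __name__ == "__main__":
            app.run(debug=True)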
  • trashb
    "Edgar Dijkstra called it nearly 50 years ago: we will never be programming in English, or French, or Spanish. Natural languages have not evolved to be precise enough and unambiguous enough. Semantic ambiguity and language entropy will always defeat this ambition." this is the most important quote for any AI coding discussion.Anyone that doesn't understand how the tools they use came to be is doomed to reinvent them.
  • llmslave2
In my career I've seen many programmers who cut corners to finish their work faster. It's rarely laziness, but something else that I'm not sure of. They race to churn out something that appears to work before jumping to the next thing, usually messy, unmaintainable code with poor performance characteristics, no handling of edge cases, and subtle (and not so subtle) bugs.

    I think AI is like crack for these programmers, since the output of LLMs is so similar to what they would normally produce. In some cases it really is a 10x speedup for them. But it's also a 10x slowdown for those who have to fix and maintain that software, which I believe explains both the lack of increased production output and the variance of opinion on this.
  • mohsen1
I really, really want this to be true. I want to be relevant. I don't know what to do if all those predictions are true and there is no need (or very little need) for programmers anymore.

    But something tells me "this time is different" is different this time for real. Coding AIs design software better than me, review code better than me, find hard-to-find bugs better than me, plan long-running projects better than me, and make decisions based on research, literature, and the state of our projects better than me. I'm basically just the conductor of all those processes. And don't ask about coding: if you use AI for the tasks above, what you get are very well-defined coding tasks, which an AI would ace.

    I'm still hired, but I feel like I'm doing the work of an entire org that used to need twenty engineers. From where I'm standing, it's scary.
  • holri
It is just the Eliza effect on a massive scale: https://en.wikipedia.org/wiki/ELIZA_effect

    Reading Weizenbaum today is eye-opening: https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reaso...
  • frankie_t
Just like the pro-AI articles, this reads to me like a sales pitch, and the ending only adds to it: the author invites companies to contract him for training.

    I would only be happy if the author turns out to be right in the end. But as things stand right now, I can see a significant boost to my own productivity, which leads me to believe that fewer people are going to be needed.
  • EagnaIonat
I read a book called "Blood in the Machine", a history of the Luddites. It really put where we are now into perspective.

    Before the Industrial Revolution, whole towns and families made clothing and had techniques for producing quality clothes. When the machines came, it wasn't overnight, but they wiped out nearly all the cottage industries. The clothing they made wasn't of the same quality, but you could churn it out faster and cheaper. There was also the novelty of having machine-made clothes, which later normalised it.

    We are at the beginning of the end of the cottage industry for developers.
  • ChicagoDave
Here's how I see it. Writing code or building software well requires knowledge of logic, data structures, reliable messaging, and separation of concerns.

    You can learn a foreign language just fine, but if you mangle the pronunciation, no one will talk to you. Same thing with hacking at software without understanding the above elements: your software will be mangled and no one will use it.
  • mikewarot
> WYSIWYG, drag-and-drop editors like Visual Basic and Delphi were going to end the need for programmers.

    VB6 and Delphi were the best cognitive impedance match we've ever had for letting domain experts whip up something that could get a job done. Nothing since has come close to being as productive at just letting a normie get something done with a computer. You'd then hire an actual programmer to come in, take care of the corner cases, and make things reliable and usable by others.

    We're facing a very similar situation now: the AI might be able to generate a brittle and barely functional program, but you're still going to need real programmers to make it stable and usable.
  • laszlojamf
    The way I see it, the problem with LLMs is the same as with self-driving cars: trust. You can ask an LLM to implement a feature, but unless you're pretty technical yourself, how will you know that it actually did what you wanted? How will you know that it didn't catastrophically misunderstand what you wanted, making something that works for your manual test cases, but then doesn't generalize to what you _actually_ want to do? People have been saying we'll have self-driving cars in five years for fifteen years now. And even if it looks like it might be finally happening now, it's going glacially slow, and it's one run-over baby away from being pushed back another ten years.
  • aizk
    This time it actually is different. HN might not think so, but HN is really skewed towards more senior devs, so I think they're out of touch with what new grads are going through. It's awful.
  • pjmlp
As someone who has watched AI systems become good enough to replace jobs like content creation on CMSes, I'd say this is denial. Yes, software developers are still going to be needed, just far fewer of us, exactly like fully automated factories still need a few humans around to control the factory, and to build it in the first place.
  • d_silin
In aviation safety there is the "Swiss cheese" model: each successive layer of safety may not be 100% perfect, but it has a different set of holes, so overlapping layers create a net gain in safety metrics. One can treat current LLMs as a layer of "cheese" in any software development or deployment pipeline, so the goal of adding them should be an improvement in a measurable metric (code quality, uptime, development cost, successful transactions, etc.).

    Of course, one has to understand the chosen LLM's behaviour in each specific scenario: is it like Swiss cheese (a small number of large holes) or more like Havarti (a large number of small holes)? Then treat it accordingly. A toy calculation of how the layers stack is sketched below.
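    A toy illustration of why imperfect layers still stack up. The numbers are made up for the example, and it assumes the layers fail independently, which real pipelines only approximate:

        # Swiss cheese model: a defect escapes only if it slips through
        # the holes in *every* layer, so per-layer miss rates multiply.
        def escape_probability(miss_rates):
            # miss_rates[i]: chance a defect passes layer i undetected.
            p = 1.0
            for rate in miss_rates:
                p *= rate
            return p

        # Hypothetical pipeline: LLM review, tests, human review.
        review_pipeline = [0.30, 0.30, 0.30]
        print(escape_probability(review_pipeline))  # 0.027: 2.7% of defects escape

    Three layers that each miss 30% of defects let through under 3% combined; the LLM layer doesn't have to be reliable on its own to improve the measurable metric.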
  • zkmon
I see it as pure deterministic logic being contaminated by probabilistic logic at the higher layers where human interaction happens: seeking human comfort by forcing computers to adapt to human languages, building adapters that let humans stay in their comfort zone instead of dealing with sharp-edged computer interfaces.

    In the end, I don't see it going beyond a glorified form-assistant that can search the internet for answers and summarize. That boils down to chatbots, which will remain and become part of every software component that ever needs to interface with humans. The agent stuff is just fluff providing a hype cushion around chatbots, and it will go away with the hype cycle.
  • TheOtherHobbes
    The future of weaving is automated looms, but sheep will still be needed to provide the raw materials.
  • anoplus
    The future of problem solving is problem solvers
  • postexitus
    "We really could produce working software faster with VB or with Microsoft Access"Press X to doubt.
  • simonw
I nodded furiously at this bit:

    > The hard part of computer programming isn't expressing what we want the machine to do in code. The hard part is turning human thinking -- with all its wooliness and ambiguity and contradictions -- into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.

    > That was the hard part when programmers were punching holes in cards. It was the hard part when they were typing COBOL code. It was the hard part when they were bringing Visual Basic GUIs to life (presumably to track the killer's IP address). And it's the hard part when they're prompting language models to predict plausible-looking Python.

    > The hard part has always been – and likely will continue to be for many years to come – knowing exactly what to ask for.

    I don't agree with this:

    > To folks who say this technology isn't going anywhere, I would remind them of just how expensive these models are to build and what massive losses they're incurring. Yes, you could carry on using your local instance of some small model distilled from a hyper-scale model trained today. But as the years roll by, you may find not being able to move on from the programming language and library versions it was trained on a tad constraining.

    Some of the best Chinese models (which are genuinely competitive with the frontier models from OpenAI / Anthropic / Gemini) claim to have been trained for single-digit millions of dollars. I'm not at all worried that the bubble will burst and new models will stop being trained and the existing ones will lose their utility - I think what we have now is a permanent baseline for what will be available in the future.
  • berdon
There is a guaranteed cap on how far LLM-based AI models can go. Models improve by being trained on better data, and LLMs being used to generate millions of lines of sloppy code will substantially dilute the pool of good training data. Meanwhile, developers moving over to AI-based development will cease to grow and learn, producing less novel code.

    The massive increase in slop code and the loss of innovation in code will establish an unavoidable limit on LLMs.
  • aidenn0
    In past cases of automation, quantity was the foot-in-the-door and quality followed. Early manufactured items were in many cases inferior to hand-built items, but one was affordable and the other not.Software is incredibly expensive and has made up for it with low marginal costs. Many small markets could potentially be served by slop software, and it's better than what they would have otherwise gotten (which is nothing).
  • brap
> AGI seems as far away as it's always been

    This blurb is the whole axiom on which the author builds their theory. In my opinion it is not accurate, to say the least. And I say this as someone who is still underwhelmed by current AI for coding.