Comments (222)
- nayroclade: It's hilarious to me to see the same kind of engineers who, throughout my career, have constantly bitched and moaned about team meetings, agile ceremonies, issue trackers, backlogs, Slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state" they claimed as their most essential and sacred activity, to be protected at all costs, suddenly, and with no hint of shame, start preaching about the vital importance of collaborative activities and the apparent inconsequence of code and coding, the moment a machine was able to do the latter faster than them. I mean, they're not even wrong, but the nakedly hypocritical attitude of people who, until a year ago, were the most antisocial and least collaborative members of any team they were on is still extraordinary.
- jugg1es: I think veteran engineers have always known that the real problems with velocity are more organizational than technical. The inability of the business to define a focused, productive roadmap has always been the problem in software engineering. Constantly jumping to the next shiny thing that yields almost no ROI, while never allowing systemic tech debt to be addressed, has crippled many companies I have worked at in the long term.
- jmilloy: Code is a liability.

I think it can be easy to look at code as an asset, but fundamentally it is a liability. Some of the "bottlenecks" to new code are in place to make sure that the yield outweighs the increased liability. Agents that produce more code faster are producing more liability faster. Much of the excitement, and much of the skepticism, about coding agents is about whether the immediate increased productivity (new features) and even immediate yield (new products or new revenue) outweighs the increased long-term liabilities. I'd say we won't find out for another 1-3 years, and of course the answer will differ in different domains.

From this perspective, attempting to build these bottlenecks into the agentic workflow directly makes some sense. Supplying coding agents with additional context that values a coherent project vision, and that pushes back against new features or unconstrained processes, would be valuable.

Is this what the article is trying to get at? Is this attempting to make some agents essentially take on product management responsibilities, synthesizing as much as possible into a cohesive product vision and reminding the coding agents of that vision as strictly as possible? Should these agents review new proposals and new pull requests for "adherence to the full picture", whether you want to call this "context" or "vision" or something else?

I think these agents might do an exceptionally good job at synthesizing context and presenting a cohesive roadmap that appears, linguistically, to adhere to the team's values and vision. But I'm doubtful that they can have the discernment that a quality manager or team can have. Rapidly and convincingly greenlighting a particular roadmap could do more harm than good.
- ChrisMarshallNY:

> Software is what’s left over after a group of humans finishes negotiating with each other about what the system should do.

Love that.

I agree, in particular, about the context. That’s where long-retention, experienced teams pay off. I managed one of those for decades. When they finally rolled up our department, the engineer with the least seniority had ten years.

When a team is together for that long, the communication overhead drops to an almost negligible level. That’s what I find most upsetting about the current culture of mayfly-lifespan employment tenures.

Nowadays, I work mostly alone. I’m highly productive, but my scope is really limited. I miss being on a good team.
- nilirl: Bottleneck for what? More features?

I don't think the amount of software is what determines whether a company does well. I don't think capturing quantity of context is that important either.

Now, quality of context: how well do the humans reason? Then attitude: how well do the humans respond to bad situations? Then resource management: how well does the company treat people and money? Finally, luck: how much of the uncontrollables are in our favor?

Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those, at least any time soon.
- paldepind2: From the article:

> Jevons Paradox: when something gets cheaper, you tend to use more of it, not less.

That's a butchering of Jevons paradox. What's stated is not a paradox but a very natural effect: obviously usage of something goes up when it gets cheaper. What Jevons paradox actually describes is the situation where usage of a resource becomes more efficient (meaning less of it is needed for a given task), and yet total usage of that resource still increases.
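The distinction can be made concrete with a toy calculation (all figures invented for illustration):

```python
# Toy illustration of Jevons paradox: each task needs LESS of the
# resource after an efficiency gain, yet TOTAL consumption rises
# because the lower cost induces much more demand.
# All numbers are invented for illustration.

fuel_per_trip_before = 10.0   # resource units per trip, old engines
trips_before = 100            # demand at the old cost
total_before = fuel_per_trip_before * trips_before  # 1000 units

# Engines become twice as efficient...
fuel_per_trip_after = 5.0
# ...travel gets cheaper, and demand more than doubles in response.
trips_after = 300
total_after = fuel_per_trip_after * trips_after     # 1500 units

print(f"before: {total_before} units, after: {total_after} units")
# Per-trip usage halved, but total consumption still went up.
```

The "cheaper, so used more" framing in the article only captures the demand side; the paradox is that the efficiency gain itself can end up raising total consumption.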
- Antibabelic: What kind of projects are people working on, where understanding what features the management wants is the only difficult part and the rest can just be "typed out" (or, today, offloaded to an LLM)? If that's what you do, then I'm not surprised so many people on HN think LLMs can replace them.
- syntax-sailor: An awful lot of problems can in fact be solved by "more code". People seem to straw-man this in terms of product feature surface.

A lot of places skip creation and maintenance of decent observability - that's code.
We can now easily use advanced, code-heavy testing techniques like property testing - code.
We can create environmental simulations to speed up and improve integration testing - code.
We can lift up internal abstraction levels and replace boilerplate with frameworks and DSLs - code.
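The property-testing point can be sketched with nothing but the standard library (a minimal hand-rolled stand-in for libraries like Hypothesis; `dedupe_preserving_order` is a made-up function under test):

```python
import random

def dedupe_preserving_order(xs):
    # Hypothetical function under test: remove duplicates,
    # keeping the first occurrence of each element.
    seen = set()
    out = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_properties(trials=1000):
    # Instead of hand-picked cases, assert properties that must hold
    # for ANY input, checked against many random inputs.
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        result = dedupe_preserving_order(xs)
        # Property 1: output contains no duplicates.
        assert len(result) == len(set(result))
        # Property 2: output has exactly the same elements as the input.
        assert set(result) == set(xs)
        # Property 3: the function is idempotent.
        assert dedupe_preserving_order(result) == result

check_properties()
print("all properties hold")
```

A real property-testing library adds input shrinking and smarter case generation on top of this idea, but the core technique is exactly this: stating invariants and letting generated inputs hunt for counterexamples.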
- web-cowboy: I'm finding counterexamples of this constantly, now that I can have an agent rewrite large sections of my codebase that have sorely needed it.

- Moving to a newer, more modern test library
- Refactoring my data layer so it's easier to read, baking in and simplifying years of organic changes
- Porting some functionality to another language to vastly improve performance

I agree with the overall sentiment, but having an agent at my fingertips that can really crank out large-scale, involved code changes is unclogging quite a few backburnered todos for me lately.
- rudyp_dev: I think the argument here misses a critical nuance: there is a difference between code used to implement a product and code that _is_ the product.

It goes without saying that agents have little to no product sense in any discipline. If you're building a game or an app or a business, your creative input still matters heavily! And the same is true for code: if the software is your product, then absolutely the context missed by skipping the writing process will degrade your output.

That doesn't mean that writing code wasn't a bottleneck, even for creating well-structured software projects. Being able to try multiple approaches (which would previously have been prohibitively expensive) can in many instances produce something a room of bickering humans never would have reached.
- j16sdiz: (Not related to the article.) The flashing red dot on the web page is very annoying. Is there some design reason for that?

Edit: I meant the <svg> inside `trail-map-container`.
- tlkan: One of the bottlenecks has always been the code. That code has been stolen and is being laundered while companies rely on mediocre engineers, who have never written anything of value, to promote the burglary tools and call the process "writing software". It is the same as putting an Einstein paper on a photocopier and calling the process "writing a paper".

I agree with the point of the article, though: code generation does not really work, the results are bloated and often wrong, and people already had more features than they could absorb in 2020.

The solution to this mess is to have 18-year-olds boycott studying computer science altogether, since the industry (and mediocre fellow "engineers") will treat them like human garbage.
- ZeWaka:

> Producing easily consumable context is precisely the thing humans don’t like to do.

I don't think this sentence speaks for me. This is the sort of thing I love to do.
- frollogaston: Doesn't add up. I used to spend more than half my time coding, as did others. Besides the obvious cost, that coding took wall-clock time, which meant talks had to wait. Sure, a poor collaborator will jam things up a ton, but a team of at least OK collaborators used to be bottlenecked on code.
- lysium: Can someone explain the title? I think the author illustrates that code was the bottleneck and that it has now shifted to context. What am I missing?
- theptip:

> They are waiting on the next well-formed spec

Is this actually true? Maybe in a widget factory. I think it's an anti-pattern for the new world.

When you look at places that are shipping at an insane pace (like Anthropic), the secret is not accelerating the writing down of a roadmap and a well-groomed backlog; it's empowering smart individuals to run their own end-to-end product improvement loops.

You could slightly reframe the OP by saying "the bottleneck is product ideas", but "well-formed backlog items" IMO frames it as more structured and hierarchical than it should be.
- pu_pe: Sometimes code is definitely the bottleneck. For example, some organizations have a very bureaucratic process guarding which projects get access to a development team and when. That's not needed if implementation is now faster and cheaper.

I'm also skeptical that development velocity is so separate from all those other things (context, stakeholder alignment, etc.). It's much easier to get actionable feedback when you have a prototype.
- kadhirvelm: Totally agree; we wrote our own piece similar to this: https://productnow.ai/blogs/teams-that-coordinate

I really think that as code becomes cheap, misalignment between people, teams, and organizations is going to hurt a lot more, especially when everyone is trying to move at breakneck speed.

I also think a big piece of this is human attention and inertia - aka, why bother doing the hard work to coordinate with others when you can just ship whatever you're thinking? I think whichever organizations can figure out the human and cultural aspects of this will do phenomenally.
- kylestlb: Absolutely matches the gut feeling I've had lately. We've always been pretty good at producing bad code very fast. All of the other stuff - dependency management, learning what's valuable, ownership and boundaries, context-switching costs, etc. - has always been the bottleneck, and it's just more obvious now.
- nibab: The tediousness of keeping documentation up to date, and the natural tendency toward small attention spans, have always imposed a tax on organizational efficiency: complicated org structures, legibility exercises, communication tollgates, etc. There is real value in reducing the friction in the former so that the latter becomes less of a burden.

At the same time, context poisoning is a real cognitive problem for humans too, and I can't tell you the number of times I've seen irrelevant details become a drag on execution. My fear is that having too much context will only cause bikeshedding and a revisiting of prior decisions.

Frankly, our organizational structures were already pretty good at creating mechanisms for eliciting the right implicit context at large scales. It is possible that we're just going to come up with the same mechanisms from first principles...
- jorisw:

> Agents that consume context need agents that produce it. Once that loop is running, the organization has a written substrate it would never have produced on its own.

I'm not sure a business is helped by documentation distilled by agents from (hopefully present) PR descriptions and comments in JIRA, or wherever this context is supposed to be reverse-engineered from.
- jaccola: The company website linked in the article, https://www.dottxt.ai/, is broken on Safari (mobile and desktop). Looks like your cert doesn’t cover the www subdomain.
- randallsquared: So managers are overwhelmed because the code is now happening a lot faster? It sounds like the immediate bottleneck really was the code, at least frequently. Now it seems the bottleneck is managerial.
- Brysonbw: Everything in life revolves around people, and even more so today.
- stego-tech: The bottleneck has always been the human element. I too used to be one of those up-my-own-ass engineers who thought the most important part of my work was the machine, and it wasn’t until I began actually listening to others and their problems that I realized my function was far more than mere technology scaffolding.

That said, I’m also increasingly aware that this puts me in a minority group. I got to see it first-hand in a recent org whose codebase and product design hadn’t meaningfully evolved in nearly thirty years. NAT was a “game changer” to them - and one they refused to implement without tons of extraneous testing they would deliberately undermine, stall, and sabotage so they didn’t have to modernize their code accordingly. It was easier for the developers and stakeholders to preserve their own status quo than to entertain alternatives, to the point of open hostility (name calling, insults, screaming, and a few threats) toward anyone suggesting otherwise.

The human element has always been, and always will be, the bottleneck: stakeholders who don’t contribute updated or accurate datasets to automation systems, or who hold back development to preserve personal status and power, or who otherwise gum up the works on purpose to game their own careers.

That’s not to make the argument of “replace all humans with machines”, mind you. I'm just stating that an organization that incentivizes bad behavior will be slowed down versus one that incentivizes collaborative outcomes, and AI is going to turbocharge that by removing the friction associated with code creation and shifting it elsewhere.
- sharts: Velocity, velocity, velocity! Ah yes, velocity always seems to matter, except to those who don’t need to worry about it.
- zabzonk:

> Real programmers don’t document their programs.

Probably true, but I, for one, have always liked documenting how the code I've written should be used, whether for programmers calling APIs I've created or for end users actually making use of a program's executable. I find writing the docs just as interesting and creative as writing code.
- blueTiger33: the bottleneck was never the software; that is the ship we ride.
people are part of a team focused on a goal; they work together because they believe the ship is worth riding on and will reach its destination.
the ship should carry food people want.
the team decides what food will be consumed.
the captain tries the food first.
if the food is good and people want it, people buy more.
- BrokenBuild: I can see the division here already, and the cogs are afraid. As a dev of 25+ years, currently working for a small company after coming from a global company, I see both sides. I'm very excited about AI and love seeing my projects come to life so much faster. I still love the craft of code, but it's always been about the product for me.
- luodaint: The paper hits the nail right on the head, but it misses the next constraint: how to decide what to build.

In the old days, when writing code took up a lot of resources, the constraint was self-correcting: being off in your implementation was obvious enough that the error could easily be seen after three months of work on the wrong feature. Today, you could spend five wrong efforts in the same amount of time it used to take to implement one wrong effort.
- spiderfarmer: For me it was. Solo entrepreneurs are the ones who profit the most from AI-assisted development.
- lynx97: If that's true, I am sure some C-suite manager knows this already. Assuming management knows what they're doing; after all, they're getting paid for this. The time when engineers try to educate the people above them should be over. Management gets paid for the big decisions. If they tank the company, so be it. I no longer care.
- HarHarVeryFunny:

> What may save us is that agents are unreasonably good at reading exhaustively. An agent will read every PR comment, every closed issue, every commit message, every stale design doc ...

> Not just “this module exists,” but “this module is weird because the migration had to preserve old behavior,” or “this benchmark matters because a previous optimization silently changed the distribution.”

The thesis here is that an LLM will document code better than a human (although based on human artifacts), since churning through huge quantities of text is what they are good at. A few thoughts:

1) Yes, an LLM may be able to pull comments out of commits and PR comments and put them back in the code where they belong, but I question how often a developer too lazy to put a vital comment in the code would put it in a commit message instead!

2) "The truth is in the code" has always been true, and will always remain true. If the comments differ from the code, the code defines the truth. Pulling comments from stale external documentation and putting them in the code does more harm than good.

3) Comments that can be auto-generated from the code don't add much value (lda #1 ; add one to the accumulator).

4) Comments about the purpose or motivation of the code, distinct from 3), such as the "we had to preserve backwards compatibility" example, or "this code does this non-obvious tricky thing because ...", are where the value is, but the LLM is highly unlikely to be able to discern any unwritten motivation by itself. If the human developer left a comment somewhere, then great (assuming it is still relevant).

Most of the discussion we see about LLM coding is how fast it can churn out thousands of LOC on a greenfield project, or how good LLMs can be at finding bugs, but neither of these is very relevant to the main job of developers, which is maintaining and extending existing codebases. It would be lovely if most projects were greenfield, but they are not.

In any large project that has been maintained for a few years or more, there will inevitably be an ever-growing accumulation of bug fixes and patches for specific issues discovered in production, likely poorly documented and out of sync with any original documentation that may have existed (which anyway tends to be more idealistic and architectural in nature, not capturing these types of post-deployment details and special cases).

The natural tendency of an LLM is to want to rewrite code to match the statistics of what it was trained on, and they need to be reined in via prompting to resist this and not touch more code than is minimally needed for what is being asked. Of course, asking an LLM to do something is a bit like asking a dog to do something: sometimes it will, and sometimes it won't. I expect over the next few years we'll be experiencing, and reading about, more and more cases where LLMs have introduced bugs and regressions into mature codebases because of this: rewriting code that should have been left alone. The general rule is that if you are tempted to rewrite something, you had better first understand why it was there, coded the way it is, in the first place.

I can't help but compare the current state of "AI" (LLMs) to the early days of things like computer speech recognition or language translation, when they were considered amazing and everyone was gushing about them, but at the end of the day the accuracy still wasn't good enough to make them very useful; that would take another 10-20 years.

Another historical lesson/perspective would be expert systems, which at the time were considered AI and the future of machine intelligence (the Japanese "5th generation systems" were going to take over the world; CYC promised to offer human-level intelligence), but in retrospect were far less important. It won't be until we move on from LLMs to something more brain-like, deserving to be called AGI, that LLMs will be put in their historical perspective.

At the moment DeepMind seems to be the only one of the big labs admitting/recognizing that scaling LLMs isn't going to achieve AGI and that "a few more transformer-level breakthroughs" are needed. Hassabis has, however, talked about LLMs (GPTs) still being a part of what they are envisaging, which one could regard either as a pragmatic stepping stone to real AGI, or perhaps as not being ambitious enough: building something that still needs to be spoon-fed language rather than being capable of learning it from scratch.
- freejazz: It seems like so many developers know this, yet here we are: SV pushing this AI slop economy. More code! Faster! Less testing! Less understanding! It's what we NEED!
- auggierose: I cringe every time I read the word "load-bearing" in an article.