Comments (81)
- Negitivefrags: At my company I just tell people "You have to stand behind your work." And in practice that means that I won't take "The AI did it" as an excuse. You have to stand behind the work you did even if you used AI to help. I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much, for whatever that is worth.
- jonas21: The title should be changed to "LLVM AI tool policy: human in the loop". At the moment it's "We don't need more contributors who aren't programmers to contribute code," which is from a reply and isn't representative of the original post. The HN guidelines say: please use the original title, unless it is misleading or linkbait; don't editorialize.
- scuff3d: It's depressing that this has to be spelled out. You'd think people would be smart enough not to harass maintainers with shit they don't understand.
- jdlyga: As a developer, you're not only responsible for contributing code, but for verifying that it works. I've seen this practice put in place on other teams, not just with LLMs, but with devs who contribute bugfixes without understanding the problem.
- whatever1: The number of code writers increased exponentially overnight. The number of reviewers is constant (slightly reduced, even, due to layoffs).
- willtemperley: Their copyright clause reflects my own quandary about LLM usage: I am responsible for ensuring copyright has not been violated by LLM-generated code I publish. However, proving the negative, i.e. that the code is not copyrighted, is almost impossible. I have experienced this: Claude came up with an almost perfect solution to a tricky problem, ten lines to do what I've seen done in multiple KLOC, and I later found an almost identical solution in copyrighted material.
- bryanhogan: I feel like the title should definitely be changed. Requiring contributors to be "able to answer questions about their work during review" is definitely reasonable. The current title, "We don't need more contributors who aren't programmers to contribute code," is an entirely different discussion.
- porksoda: It's everywhere. I worked with a micro-manager CTO who farmed code review out to Claude, which of course, when instructed to find issues with my code, did so. With little icons of rocket ships and such.
- yxhuvud: The new policy looks very reasonable and fair. Unfortunately, I'd be surprised if the bad apples read the policy before spamming their "help".
- looneysquash: Looks like a good policy to me. One thing I didn't like was the copy/paste response for violations. It makes sense to have one, but the text they propose uses what I'd call insider terms, and also terms that sort of put down the contributor. While that might be appropriate at the next level of escalation, the first-level stock text should be easier for an outside contributor to understand, and should better explain the next steps for the contributor to take.
- itissid: One (narrow) way to make reviewing a large contribution written with significant aid from an LLM easier is to jump on a call with the reviewer, explain what the change is, and answer their questions on why it is necessary and what it brings to the table. This first pass is useful for a few reasons: 1. It shifts the cognitive load from the reviewer to the author, because now the author has to do an elevator pitch, and this can work sort of like a "rubber duck" where one would likely have to think about these questions up front. 2. In my experience this is much faster than a lonesome review with no live input from the author on the many choices they made. Do this first pass, then have the reviewer give a go/no-go with optional comments on design, code quality, etc.
- SunlitCat: Oh wow. That something like this is necessary is kind of sad. At first (while reading the title), I thought they just didn't want AI-generated contributions at all (which would be understandable as well). But all they are actually asking is that one understands (and labels) the contributions one submits, regardless of whether those are AI-generated, one's own work, or maybe even written by a cat (okay, that last one was added by me ;). Reading through the (first few) comments and seeing people defending the use of pure AI tools is really disheartening. I mean, they're not asking for much, just that one reviews and understands what the AI produced for them.
- hsuduebc2: "Contributors should never find themselves in the position of saying 'I don't know, an LLM did it.'" I would never have thought that someone could actually write this.
- EdwardDiego: Good policy.
- mmsc: This AI usage is like a turbo-charger for the Dunning–Kruger effect, and we will see these policies crop up more and more as technical people become more and more harassed and burnt out by AI slop. I also recently wrote a similar policy[0] for my fork of a codebase. I had to write it because the original developer took the AI pill, started committing totally broken code that was full of bugs, and doubled down when asked about it[1]. On an analysis level, I recently commented[2] that "Non-coders using AI to program are effectively non-technical people, equipped with the over-confidence of technical people. Proper training would turn those people into coders that are technical people. Traditional training techniques and material cannot work, as they are targeted and created with technical people in mind." But what's more, we're also seeing programmers use AI to create slop. They're effectively technical people equipped with their initial over-confidence, highly inflated by a sense of effortless capability. Before AI, developers were (sometimes) forced to pause, investigate, and understand; now it's just easier and more natural to simply assume they grasp far more than they actually do, because @grok told them it is true.
  [0]: https://gixy.io/contributing/#ai-llm-tooling-usage-policy
  [1]: https://joshua.hu/gixy-ng-new-version-gixy-updated-checks#qu...
  [2]: https://joshua.hu/ai-slop-story-nginx-leaking-dns-chatgpt#fi...
- vjay15: It is insane that this is happening in one of the most essential pieces of software. This is a much-needed step to curb the rise of slop contributions. It's more work for the maintainer to review all this mess.
- 29athrowaway: Then the vibe coder will ask an LLM to answer questions about the contribution.
- zeroonetwothree: I only wish my workplace had the same policy. I'm so tired of reviewing slop where the submitter has no idea what it's even for.
- rvz: Open source projects like LLVM need to do this, as LLVM is so widely used in the software supply chain that it needs protection from contributors who do not understand the code they are writing or cannot defend their changes. There needs to be a label designating open source projects that are so important and so widely adopted throughout the industry that not just anyone can throw patches at them without understanding what the patch does and why it is needed.
- mberning: I am so exhausted by reviewing the AI slop from other "developers". For a while I was trying to be a good sport and point out where it was just wrong or doing things that were unnecessary or inefficient. I'm at the point of telling people not to bother using AI. I don't have the time or energy to deal with it. It's like a missile defense system where each intercept costs a million dollars while the incoming projectile costs your adversary $10. It's not sustainable.
- colesantiago: "Vibe coding" (i.e., producing statistically 'plausible' code that sometimes works, where the user doesn't look at the code but just tries it to see if it works to their liking, with no tests) was the worst thing to happen to programming and computer science that I have seen. It's good for prototypes but not for production software, and especially not for important projects like LLVM. It is good to gatekeep this slop out of LLVM before it gets out of control.
- flerovium: na [dead]
- jfreds: "automated review tools that publish comments without human review are not allowed" This seems like a curious choice. At my company we have both Gemini and Cursor (I'm not sure which model is under the hood on that) review agents available. Both frequently raise legitimate points. I'm sure they're abusable; I just haven't seen it.