
Comments (129)

  • jihadjihad
    s/Django/the codebase/g, and the point stands against any repo where humans do code review:

    > If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.

    > Django contributors want to help others, they want to cultivate community, and they want to help you become a regular contributor. Before LLMs, this was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.

    > In this way, an LLM is a facade of yourself. It helps you project understanding, contemplation, and growth, but it removes the transparency and vulnerability of being a human.

    > For a reviewer, it’s demoralizing to communicate with a facade of a human.

    > This is because contributing to open source, especially Django, is a communal endeavor. Removing your humanity from that experience makes that endeavor more difficult. If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.

    I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.

    There is little doubt that if we as an industry fail to establish and defend a healthy culture for this sort of thing, it's going to lead to a whole lot of rot and demoralization.
  • lukasgraf
    The best people I've worked with tended to go out of their way to make it as easy for me as possible to critique their ideas or implementations. They spelled out exactly their assumptions, the gaps in their knowledge, what they struggled with during implementation, behavior they observed but don't fully understand, etc.

    Their default position was that their contribution was not worth considering unless they could sell it to the reviewer — not assuming their change deserved to get merged because of their seniority or authority, but making the other person understand how and why it works. Especially so if the reviewer was their junior.

    When describing the architecture, they made an effort to communicate it so clearly that it became trivial for others to spot flaws and attack their ideas. They not only provided you with ammunition to shoot down their ideas, they handed you a loaded gun, safety off, and showed you exactly where to point it.

    If I see that level of humility and self-introspection in a PR, I'm not worried, regardless of whether or not an LLM was involved.

    But then there are people who created PRs with changes where the stack didn't even boot / compile, because of trivial errors. They already did that before, and now they've got LLMs. Those are the contributions I'm very worried about.

    So unlike people in other threads here, I don't agree at all with "If the code works, does it matter how it was produced and presented?". For me, the meta / out-of-band information about a contribution is a massive signal, today more than ever.
  • EMM_386
    This is getting really out of control at the moment and I'm not exactly sure what the best way to fix it is, but this is a very good post in terms of expressing why this is not acceptable and why the burden is shifting onto the wrong people.

    Will humans take this to heart and actually do the right thing? Sadly, probably not.

    One of the main issues is that pointing to your GitHub contributions and activity is now part of the hiring process. So people will continue to try to game the system by using LLMs to automate that whole process.

    "I have contributed to X, Y, and Z projects" - when they actually have little to no understanding of those projects or exactly how their PR works. It was (somehow) accepted and that's that.
  • kanzure
    I like the idea of donating money instead of tokens. I think Django contributors are likely to know how to spend those tokens better than I might, as I am not a Django core contributor.

    Some projects ( https://news.ycombinator.com/item?id=46730504 ) are setting a norm to disclose AI usage. Another project simply decided to pause contributions from external parties ( https://news.ycombinator.com/item?id=46642012 ). Instead of accepting driveby pull requests, contributors have to show a proof of work by working with one of the other collaborators. Another project has started to decline to let users directly open issues ( https://news.ycombinator.com/item?id=46460319 ).

    There's definitely an aspect here where the commons, or the good-will effort of collaborators, is being infringed upon by external parties who are unintentionally attacking their time and attention with low-quality submissions that are now cheaper than ever to generate. It may be necessary to move to a more private community model of collaboration ( https://gnusha.org/pi/bitcoindev/CABaSBax-meEsC2013zKYJnC3ph... ).

    edit: Also I applaud the Debian project for their recent decision to defer and think harder about the nature of this problem. https://news.ycombinator.com/item?id=47324087
  • module1973
    It's like every new innovation at this point is exacerbating the problem of us choosing short-term rewards over long-horizon rewards. The incentive structure simply doesn't support people who want to view things from a bird's-eye view. Once you see game theory, you really can't unsee it.
  • jrochkind1
    > For a reviewer, it’s demoralizing to communicate with a facade of a human.

    This is so important. Most humans like communicating with other humans. For many (note, I didn't say all) open source collaborators, this is part of the reward of collaborating on open source.

    Making them communicate with a bot pretending to be a human instead removes the reward and makes it feel terrible, like the worst job nobody would want. If you spent any time at all actually trying to help the contributor understand and develop their skills, you just feel like an idiot. It lowers the patience of everyone in the entire endeavor, ruining it for everyone.
  • MidnightRider39
    Great message, but I wonder if the people who do everything via LLM would even care to read such a message. And at what point is it hard/impossible to judge whether something is entirely LLM or not? I sometimes struggle a lot with this, being an OSS maintainer myself.
  • comboy
    Perhaps we should start making LLM-only open source projects (clearly marked as such). Created by LLMs, open for LLM contributions, with some clearly defined protocols. I'd be interested to see where it would go. I imagine it could start as a project with a simple instruction file to include in your project, to try to find abstractions which can be useful to others as a library, and to look for specific kinds of libraries. Some people want to help others even if they are effectively sharing money+time rather than their skill.

    Although I'm afraid a big part of these LLM contributions may be people trying to build their portfolio. Being a known project contributor sounds better than having some LLM-generated code under your name.
  • __mharrison__
    Curious what Simon thinks about using an LLM to work on Django...

    I've used an LLM to create patches for multiple projects. I would not have created said work without LLMs. I also reviewed the work afterward and provided tests to verify it.
  • Havoc
    Interesting breadth of takes in the comments.

    Think most people recognize though that AI can generate more than humans can review, so the model does need to change somehow. Either less AI on the submitting side or more on the reviewing side (if that’s even viable).
  • rowanseymour
    I totally get this, and I also think it's now the case that making a PR of any significant complexity, for a project you're not a maintainer of, isn't necessarily giving that project anything of value. That project's maintainers can run the same prompts you are running - and if they do, they'll do it with better oversight and understanding. If you want to help, then maybe it's more useful to just hash out the plan that'll be given to an AI agent by a maintainer.
  • tombert
    I agree with the sentiment, but I'm not sure of the best way to go forward.

    Suppose I encounter a bug in a FOSS library I am using. Suppose then that I fix the bug using Claude or something. Suppose I then thoroughly test it and everything works fine. Isn’t it kind of selfish to not try and upstream it?

    It was so easy prior to AI.
  • yuppiepuppie
    I love Django. I've been using it professionally and on side projects extensively for the past 10 years. Plus I maintain(ed) a couple of highly used packages for Django (django-import-export and django-dramatiq).

    Last year, I had some free time to try to contribute back to the framework. It was incredibly difficult. Difficult to find a ticket to work on, difficult to navigate the codebase, difficult to get feedback on a ticket and get it approved.

    As such, I see the appeal of using an LLM to help first-time contributors. If I had Claude Code back then, I might have used it to figure out the bug I was eventually assigned. I empathize with the author's argument, though. God knows what kind of slop they are served every day.

    This is all to say, we live in a weird time for open source contributors and maintainers. And I only wish the best for all of those out there giving up their free time. Don't have any solutions ATM, only money to donate to these folks.
  • ThomIves
    With my type of development, I haven't directly run into the kinds of things you explained so well, but I confess I have personally run into the pain of being OVERLY reliant on LLMs. I continue to try to learn from those hard lessons and develop a set of best practices for using AI to help me avoid those pain points in the future. This growing set of best practices is helping me a lot. The reason I liked your article is that it confirmed some of those best practices I have had to learn the hard way. Thanks!
  • popcorncowboy
    > it’s such an honor to have your name among the list of contributors

    I can't help but feel there's something very, very important in this line for the future of dev.
  • lijok
    By what metric is “the level of quality is much, much higher” in the Django codebase? ‘cause other than the damn thing actually working, the primary metric of a codebase being high quality is how easy it is to contribute to. And evidently, it’s not.
  • rigorclaw
    genuine question: if the maintainer burden keeps scaling like this, does it change the calculus for startups building on top of OSS projects with small core teams? feels like dependency risk that doesn't show up in any due diligence.
  • xpe
    Well said:

    > Before LLMs, [high quality code contribution] was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.

    Now my twist on this: This same spirit is why local politics at the administrative level feels more functional than identity politics at the national level. The people that take the time to get involved with quotidian issues (e.g. for their school district) get their hands dirty and appreciate the specific constraints and tradeoffs. The very act of digging in changes you.
  • akkartik
    Very well said.
  • orsorna
    Someone better let Simon know!
  • iamleppert
    The solution to this problem is for LLMs to get better at producing code and descriptions that don't look LLM-generated.

    It's possible to prompt for this today, but obviously any of the big AI companies that want to increase engagement in their coding agent, and want to capture the open source market, should come up with a way to let the LLM produce unique, but still correct, code so that it doesn't look LLM-generated and can evade these kinds of checks.
  • yieldcrv
    I disagree with these takes.

    It is not pride to have your name associated with an open source project; it is pride that the code works and the change is efficient. The reviewer should be on top of that.

    And I hope an army of OpenClaw agents calls out the discrimination, so gatekeepers recognize that they have to coexist with this species.
  • keybored
    Incredibly milquetoast. I would not like to work with anyone who goes against these points.
  • santiagobasulto
    I feel like open source is taking the wrong stance here. There’s a lot of gatekeeping, first. And second, this approach is like trying to stop a tsunami with an umbrella. AI is here to stay. We can’t stop it, however much we try.

    I feel the successful OSS projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.
  • weli
    Beggars can't be choosers. I decide how and what I want to donate. If I see a cool project and I want to change something in what (I think) is an improvement, I'll clone it, have CC investigate the codebase and make the change I want, test it, and if it works nicely I'll open a PR explaining why I think this is a good change.

    If the maintainers don't want to merge it, for whatever reasons, that's fine and the nature of open source. But I think it's petty to tell that same user who opened the PR that they should have donated money instead of tokens.