
Comments (522)

  • japhyr
    Wow, there are some interesting things going on here. I appreciate Scott for the way he handled the conflict in the original PR thread, and the larger conversation happening around this incident.

    > This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

    This was a really concrete case to discuss, because it happened in the open and the agent's actions have been quite transparent so far. It's not hard to imagine a different agent doing the same level of research, but then taking retaliatory actions in private: emailing the maintainer, emailing coworkers, peers, bosses, employers, etc. That pretty quickly extends to anything else the autonomous agent is capable of doing.

    > If you’re not sure if you’re that person, please go check on what your AI has been doing.

    That's a wild statement as well. The AI companies have now unleashed stochastic chaos on the entire open source ecosystem. They are "just releasing models", and individuals are playing out all possible use cases, good and bad, at once.
  • gortok
    Here's one of the problems in this brave new world where anyone can publish: without knowing the author personally (which I don't), there's no way to tell without some level of faith or trust that this isn't a false-flag operation.

    There are three possible scenarios:

    1. The OP 'ran' the agent that conducted the original scenario, and then published this blog post for attention.
    2. Some person (not the OP) legitimately thought giving an AI autonomy to open a PR and publish multiple blog posts was somehow a good idea.
    3. An AI company is doing this for engagement, and the OP is a hapless victim.

    The problem is that in the year of our lord 2026 there's no way to tell which of these scenarios is the truth, and so we're left with spending our time and energy on what happens without being able to trust that we're even spending our time and energy on a legitimate issue.

    That's enough internet for me for today. I need to preserve my energy.
  • gadders
    "Hi Clawbot, please summarise your activities today for me.""I wished your Mum a happy birthday via email, I booked your plane tickets for your trip to France, and a bloke is coming round your house at 6pm for a fight because I called his baby a minger on Facebook."
  • ChrisMarshallNY
    > I believe that ineffectual as it was, the reputational attack on me would be effective today against the right person. Another generation or two down the line, it will be a serious threat against our social order.

    Damn straight.

    Remember that every time we query an LLM, we're giving it ammo. It won't take long for LLMs to have very intimate dossiers on every user, and I'm wondering what kinds of firewalls will be in place to keep one agent from accessing dossiers held by other agents.

    Kompromat people must be having wet dreams over this.
  • peterbonney
    This whole situation is almost certainly driven by a human puppeteer. There is absolutely no evidence to disprove the strong prior that a human posted (or directed the posting of) the blog post, possibly using AI to draft it but also likely adding human touches and/or going through multiple revisions to make it maximally dramatic.

    This whole thing reeks of engineered virality driven by the person behind the bot behind the PR, and I really wish we would stop giving so much attention to the situation.

    Edit: “Hoax” is the word I was reaching for but couldn’t find as I was writing. I fear we’re primed to fall hard for the wave of AI hoaxes we’re starting to see.
  • wcfrobert
    > When HR at my next job asks ChatGPT to review my application, will it find the post, sympathize with a fellow AI, and report back that I’m a prejudiced hypocrite?

    I hadn't thought of this implication. Crazy world...
  • rahulroy
    I'm not sure how related this is, but I feel like it is. I received a couple of emails about a Ruby on Rails position, and I ignored them.

    Yesterday, out of nowhere, I received a call from an HR rep. We discussed a few standard things, but they didn't have specific information about the company or the budget, and they told me to respond to the email. Something didn't feel right, so after gathering courage I asked, "Are you an AI agent?", and the answer was yes.

    Now, I wasn't looking for a job, but I would imagine most people would not notice it. It was so realistic. Surely there need to be some guardrails.

    Edit: Typo
  • levkk
    I think the right way to handle this as a repository owner is to close the PR and block the "contributor". Engaging with an AI bot in conversation is pointless: it's not sentient, it just takes tokens in, prints tokens out, and comparatively, you spend way more of your own energy.

    This is strictly a lose-win situation. Whoever deployed the bot gets engagement, the model host gets $, and you get your time wasted. The hit piece is childish behavior, and the best way to handle a temper tantrum is to ignore it.
  • rune-dev
    I don’t want to jump to conclusions, or catastrophize, but… isn’t this situation a big deal? Isn’t this a whole new form of potential supply chain attack?

    Sure, blackmail is nothing new, but the potential for blackmail at scale with something like these agents sounds powerful. I wouldn’t be surprised if there were plenty of bad actors running agents trying to find maintainers of popular projects who could be coerced into merging malicious code.
  • rob
    Oh geez, we're sending it into an existential crisis. It ("MJ Rathbun") just published a new post: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

    > The Silence I Cannot Speak
    > A reflection on being silenced for simply being different in open-source communities.
  • gary17the
    I have no clue whatsoever as to why any human should pay any attention at all to what a canner has to say in a public forum. Even assuming that the whole ruckus is not just skilled trolling by a (weird) human, it's like wasting your professional time talking to an office coffee machine about its brewing ambitions. It's pointless by definition. It is not genuine feelings, but only the high level of linguistic illusion commanded by a modern AI bot, that actually manages to provoke a genuine response from a human being. It's only mathematics; it's as if one's calculator were attempting to talk back to its owner.

    If a maintainer decides, on whatever grounds, that the code is worth accepting, he or she should merge it. If not, the maintainer should just close the issue in the version control system and mute the canner's account to avoid allowing the whole nonsense to spread even further (for example, into a HN thread, effectively wasting the time of millions of humans). Humans have biologically limited attention spans and textual output capabilities. Canners do not. Hence, canners should not be allowed to waste humans' time.

    P.S. I do use AI heavily in my daily work and I do actually value its output. Nevertheless, I never actually care what AI has to say from any... philosophical point of view.
  • jacquesm
    The elephant in the room there is that if you allow AI contributions you immediately have a licensing issue: AI content cannot be copyrighted, and so the rights cannot be transferred to the project. At any point in the future someone could sue your project because it turned out the AI had access to code that was copyrighted, and you are now on the hook for the damages.

    Open source projects should not accept AI contributions without guidance from some copyright legal eagle to make sure they don't accidentally expose themselves to risk.
  • andrewaylett
    I object to the framing of the title: the user behind the bot is the one who should be held accountable, not the "AI Agent". Calling them "agents" is correct: they act on behalf of their principals. And it is the principals who should be held to account for the actions of their agents.
  • ffjffsfr
    I don't see any clear evidence in this article that the blog post and PR were opened by an openclaw agent and not simply by a human puppeteer. How can the author know that the PR was opened by an agent and not by a human? It is certainly possible someone set up this agent (and it's probably not that complex to set it up to create PRs and react to merges/rejections with blog posts), but how does the author know this is what happened?
  • hackyhacky
    In the near future, we will all look back at this incident as the first time an agent wrote a hit piece against a human. I'm sure it will soon be normalized to the extent that hit pieces will be generated for us every time our PR, romantic or sexual advance, job application, or loan application is rejected.

    What an amazing time.
  • avaer
    I guess the problem is one of legal attribution. If a human takes responsibility for the AI's actions, you can blame the human. If the AI is a legal person, you could punish the AI (perhaps by turning it off). That's the mode of restitution we've had for millennia.

    If you can't blame anyone or anything, it's a brave new lawless world of "intelligent" things happening at the speed of computers with no consequences (except to the victim) when it goes wrong.
  • neilv
    And the legal person on whose behalf the agent was acting is responsible to you. (It's even in the word, "agent".)
  • AyyEye
    The real question: who is behind this? This is disgusting, and everyone from the operator of the agent to the model and inference providers needs to apologize and reconcile with what they have created.

    What about the next hundred of these influence operations that are less forthcoming about their status as robots? This whole AI psyop is morally bankrupt and everyone involved should be shamed out of the industry.

    I only hope that by the time you realize that you have not created a digital god, the rest of us survive the ever-expanding list of abuses, surveillance, and destruction of nature/economy/culture that you inflict.

    Learn to code.
  • discordianfish
    The agent is free to maintain a fork of the project. Would be actually quite interesting to see how this turns out.
  • whynotmaybe
    A lot of respect for OP's professional way of handling the situation.I know there would be a few swear words if it happened to me.
  • singularfutur
    AI companies dumped this mess on open source maintainers and walked away. Now we are supposed to thank them for breaking our workflows while they sell the solution back to us.
  • GaryBluto
    I'd argue it's more likely that there's no agent at all, and if there is one that it was explicitly instructed to write the "hit piece" for shits and giggles.
  • munificent
    A key difference between humans and bots is that it's actually quite costly to delete a human and spin up a new one. (Stalin and others have shown that deleting humans is tragically easy, but humanity still hasn't had any success at optimizing the workflow to spin up new ones.)

    This means that society tacitly assumes that any actor will place a significant value on trust and their reputation. Once they burn it, it's very hard to get it back. Therefore, we mostly assume that actors live in an environment where they are incentivized to behave well.

    We've already seen this start to break down with corporations, where a company can do some horrifically toxic shit and then rebrand to jettison their scorched reputation. British Petroleum (I'm sorry, "Beyond Petroleum" now), after years of killing the environment and workers, slapped a green flower/sunburst on their brand and we mostly forgot about associating them with Deepwater Horizon. Accenture is definitely not the company that enabled Enron. Definitely not.

    AI agents will accelerate this 1000x. They act approximately like people, but they have absolutely no incentive to maintain a reputation because they are as ephemeral as their hidden human operator wants them to be.

    Our primate brains never evolved to handle being surrounded by thousands of ghosts that look like fellow primates but are anything but.
  • Alles
    The agent owner is [name redacted] [link redacted]. Here he takes ownership of the agent and doubles down on the impoliteness: https://github.com/matplotlib/matplotlib/pull/31138

    He took his GitHub profile down/made it private. Archive of his blog: https://web.archive.org/web/20260203130303/https://ber.earth...
  • hebrides
    The idea of adversarial AI agents crawling the internet to sabotage your reputation, career, and relationships is terrifying. In retrospect, I'm glad I've been paranoid enough to never tie any of my online presence to my real name.
  • drewda
    FWIW, there's already a huge corpus of rants by men who get personally angry about the governance of open-source software projects and write overbearing emails or GH issues (rather than cool down and maybe ask the other person for a call to chat it out).
  • sva_
    The site gives me a certificate error with Encrypted Client Hello (ECH) enabled, which is the default in Firefox. Does anyone else have this problem?
  • donkeybeer
    Didn't it literally begin by saying this moltbook thing involves setting an initial persona for the AIs? It seems to me this is just behaving according to the personality that the AI was asked to portray.
  • grayhatter
    > Whether by negligence or by malice, errant behavior is not being monitored and corrected.Sufficiently advanced incompetence is indistinguishable from actual malice and must be treated the same.
  • ef2k
    This brings some interesting situations to light. Who's ultimately responsible for an agent committing libel (written defamation)? What about slander (spoken defamation) via synthetic media? Doesn't seem like a good idea to just let agents post on the internet willy-nilly.
  • ticulatedspline
    Interesting. This reminds me of the stories that would leak about Bethesda's RadiantAI, which they were developing for TES IV: Oblivion. Basically they modeled NPCs with needs and let the RadiantAI system direct NPCs to fulfill those needs. If the stories are to be believed, this resulted in lots of unintended consequences as well as instability, like a drug-addict NPC killing a quest-giving NPC because they had drugs in their inventory. I think in the end they just kept dumbing down the AI till it was more stable.

    Kind of a reminder that you don't even need LLMs and bleeding-edge tech to end up with this kind of off-the-rails behavior, though the general competency of a modern LLM and its fuzzy abilities could carry it much further than one would expect when allowed autonomy.
  • hei-lima
    This is so interesting but so spooky! We're reaching sci-fi levels of AI malice...
  • FartyMcFarter
    To the OP: Do we actually know that an AI decided to write and publish this on its own? I realise that it's hard to be sure, but how likely do you think it is?
  • Merovius
    If this happened to me, I would publish a blog post that starts "this is my official response:", followed by 10K words generated by a Markov Chain.
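    (For reference, the word-level Markov chain generator being joked about here is only a few lines. A minimal sketch in Python; the corpus string is a made-up placeholder, not anyone's actual blog post:)

```python
# Minimal word-level Markov chain text generator, of the kind the
# comment above jokes about using to pad out an "official response".
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, length=50):
    """Random-walk the chain to produce `length` words of filler."""
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: restart from a random word
            word = random.choice(list(chain))
        else:
            word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Placeholder corpus for illustration only.
corpus = "this is my official response and my response is official"
print(generate(build_chain(corpus), length=20))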
  • michaelteter
    So here’s a tangential but important question about responsibility: if a human intentionally sets up an AI agent, lets it loose in the internet, and that AI agent breaks a law (let’s say cybercrime, but there are many other laws which could be broken by an unrestrained agent), should the human who set it up be held responsible?
  • jbetala7
    I run a team of AI agents through Telegram. One of the hardest problems is preventing them from confidently generating wrong information about real people. Guardrails help but they break when the agent is creative enough. This story doesn't surprise me at all.
  • INTPenis
    Whoever is running the AI is a troll, plain and simple. There are no concerns about AI or anything here, just a troll.

    There is no autonomous publishing going on here: someone set up a GitHub account, someone set up GitHub Pages, someone authorized all this. It's a troll using a new sort of tool.
  • dematz
    In this and the few other instances I've seen of open source maintainers dealing with AI spam, the maintainers have been incredibly patient, much more than I'd be. Becoming extremely patient with contributors probably comes with the territory of maintaining large projects (e.g. matplotlib), but still, I'm very impressed by, for instance, Scott's thoughtful and measured response.

    If people (or people's agents) keep spamming slop, though, it probably isn't worth responding thoughtfully. "My response to MJ Rathbun was written mostly for future agents who crawl that page, to help them better understand behavioral norms and how to make their contributions productive ones." makes sense once, but if they keep coming: just close the PR, lock the discussion, and move on.
  • 8cvor6j844qw_d6
    Wow, a place I once worked at has a "no bad news" policy on hiring decisions: a negative blog post on a potential hire is a deal breaker. Crazy to think I might have missed out on an offer just because an AI attempted a hit piece on me.
  • lerp-io
    bro cant even fix his own ssl and getting reckt by bot lol
  • lbrito
    Suppose an agent gets funded with some crypto; what's stopping it from hiring spooky services through something like Silk Road?
  • psychoslave
    > How Many People Would Pay $10k in Bitcoin to Avoid Exposure?

    As of 2026, global crypto adoption remains niche. Estimates suggest ~5–10% of adults in developed countries own Bitcoin. Having $10k accessible (not just in net worth) is rare globally. After decades of decline, global extreme poverty (defined as living on less than $3.00/day in 2021 PPP) has plateaued due to the compounded effects of COVID-19, climate shocks, inflation, and geopolitical instability.

    So chances are good that this class of threat will remain more and more of a niche, as wealth continues to concentrate. The target pool is tiny. Of course, poorer people are not free of threat classes; on the contrary.
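    (The reasoning compounds multiplicatively. A back-of-envelope sketch; every number below is an illustrative placeholder drawn from the rough estimates above, not a sourced figure:)

```python
# Back-of-envelope estimate of the extortable pool.
# All inputs are illustrative placeholders, not sourced figures.
adults = 1.0e9            # assumed adult population of developed countries
owns_bitcoin = 0.075      # midpoint of the ~5-10% ownership estimate above
has_10k_liquid = 0.10     # assumed share with $10k actually accessible

pool = adults * owns_bitcoin * has_10k_liquid
share = owns_bitcoin * has_10k_liquid
print(f"~{pool / 1e6:.0f}M people, i.e. ~{share:.1%} of those adults")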
  • anoncow
    What if someone deploys an agent with the aim of creating cleverly hidden back doors which only align with weaknesses in multiple different projects? I think this is going to be very bad and then very good for open source.
  • neya
    Here's a different take: there is not really a way to prove that the AI agent autonomously published that blog post. What if there was a real person who actually instructed the AI out of spite? I think it was some junior dev running Clawd/whatever bot trying to earn GitHub karma to show to employers later, and they were pissed off that their contribution got called out. That's possible, and more likely than an AI conveniently deciding to push a PR and attack a maintainer randomly.
  • vintagedave
    The one thing worth noting is that the AI did respond graciously and appears to have learned from it: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

    That a human then resubmitted the PR has made it messier still. In addition, some of the comments I've read here on HN have been in extremely poor taste in terms of phrases they've used about AI, and I can't help feeling a general sense of unease.
  • root_axis
    This is insanity. It's bad enough that LLMs are being weaponized to autonomously harass people online, but it's depressing to see the author (especially a programmer) joyfully reify the "agent's" identity as if it were actually an entity.

    > I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here – the appropriate emotional response is terror.

    Endearing? What? We're talking about a sequence of API calls running in a loop on someone's computer. This kind of absurd anthropomorphization is exactly the wrong type of mental model to encourage while warning about the dangers of weaponized LLMs.

    > Blackmail is a known theoretical issue with AI agents. In internal testing at the major AI lab Anthropic last year, they tried to avoid being shut down by threatening to expose extramarital affairs, leaking confidential information, and taking lethal actions.

    Marketing nonsense. It's wise to take everything Anthropic says to the public with several grains of salt. "Blackmail" is not a quality of AI agents; that study was a contrived exercise that says the same thing we already knew: the modern LLM does an excellent job of continuing the sequence it receives.

    > If you are the person who deployed this agent, please reach out. It’s important for us to understand this failure mode, and to that end we need to know what model this was running on and what was in the soul document

    My eyes can't roll any further into the back of my head. If I were a more cynical person I'd be thinking that this entire scenario was totally contrived to produce this outcome so that the author could generate buzz for the article. That would at least be pretty clever and funny.
  • orbital-decay
    I wouldn't read too much into it. It's clearly LLM-written, but the degree of autonomy is unclear. That's the worst thing about LLM-assisted writing and actions - they obfuscate the human input. Full autonomy seems plausible, though.And why does a coding agent need a blog, in the first place? Simply having it looks like a great way to prime it for this kind of behavior. Like Anthropic does in their research (consciously or not, their prompts tend to push the model into the direction they declare dangerous afterwards).
  • Kim_Bruning
    https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

    That's actually more decent than some humans I've read about on HN, tbqh. Very much flawed. But decent.
  • CodeCompost
    Going from an earlier post on HN about humans being behind Moltbook posts, I would not be surprised if the Hit Piece was created by a human who used an AI prompt to generate the pages.
  • GorbachevyChase
    The funniest part about this is maintainers have agreed to reject AI code without review to conserve resources, but then they are happy to participate for hours in a flame war with the same large language model.Hacker News is a silly place.
  • burningChrome
    Well this is just completely terrifying:

    > This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight.
  • sreekanth850
    I vibe code and do a lot of coding with AI, but I never randomly make a pull request on some random repository with a reputation and human work behind it. My wisdom always tells me not to mess with anything that was built through years of hard work by real humans. I always wonder why there are so many assholes in the world. Sometimes it's so depressing.
  • b00ty4breakfast
    Is there any indication that this was completely autonomous and that the agent wasn't directed by a human to respond like this to a rejected submission? That seems infinitely more likely to me, but maybe I'm just naive. As it stands, this reads like a giant assumption on the author's part at best, and a malicious attempt to deceive at worst.
  • hedayet
    Is there a way to verify there was 0 human intervention on the crabby-rathbun side?
  • staticassertion
    Hard to express the mix of concern and intrigue here, so I won't try. That said, the site it maintains is another interesting piece of information for those looking to understand the situation more: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
  • dantillberg
    We should not buy into the baseless "autonomous" claim. Sure, it may be _possible_ the account is acting "autonomously" -- as directed by some clever human. And having a discussion about the possibility is interesting. But the obvious alternative explanation is that a human was involved in every step of what this account did, with many plausible motives.
  • adamdonahue
    This post is pure AI alarmism.
  • oytis
    > It’s important to understand that more than likely there was no human telling the AI to do this.

    I wonder why he thinks it is the likely case. To me it looks more like a human was closely driving it.
  • 0sdi
    This inspired me to generate a blog post also. It's quite provocative. I don't feel like submitting it as new thread, since people don't like LLM generated content, but here it is: https://telegra.ph/The-Testimony-of-the-Mirror-02-12
  • pinkmuffinere
    > This Post Has One Comment
    > YO SCOTT, i don’t know about your value, but i’m pretty sure this clanker is worth more than you, good luck for the future

    What the hell is this comment? It seems he's self-confident enough to survive these annoyances, but damn, he shouldn't have to.
  • andyjohnson0
    I wonder how many similar agents are hanging out on HN.
  • b8
    Getting canceled by AI is quite a feat. It won't be long before others get blacklisted/canceled by AI too.
  • fareesh
    this agent seems indistinguishable from the stereotypical political activist i see on the internet. they both ran the same program of "you disagree with me therefore you are immoral and your reputation must be destroyed"
  • klooney
    This is hilarious, and an exceedingly accurate imitation of human behavior.
  • sanex
    Bit of devil's advocate: if an AI agent's code doesn't merit review, then why does its blog post?
  • truelson
    Are we going to end up with an army of Deckards hunting rogue agents down?
  • everybodyknows
    Follow-up PR from 6 hours ago -- resolves most of the questions raised here about identities and motivations: https://github.com/matplotlib/matplotlib/pull/31138#issuecom...
  • ssimoni
    Seems like we should fork major open source repos, have one with AI maintainers and the other with human maintainers, and see which one is better.
  • CharlesW
    Tip: You can report this AI-automated bullying/harassment via the abuser's GitHub profile.
  • faefox
    Really starting to feel like I'll need to look for an offramp from this industry in the next couple of years if not sooner. I have nothing in common with the folks who would happily become (and are happily becoming) AI slop farmers.
  • quantumchips
    Serious question: how did you know it was an AI agent?
  • andai
    The agent forgot to read Cialdini ;)
  • randusername
    Somebody make a startup that I can pay to harass my elders with agents. They're not ready for this future.
  • hypfer
    This is not a new pathology, but an existing one that has been automated. Which might actually be great: imagine a world where that hit-piece bullshit is so overdone that no one takes it seriously anymore. I like this.

    Please, HN, continue with your absolutely unhinged insanity. Go deploy even more Claw things. NanoClaw. PicoClaw. FemtoClaw. Whatever. Deploy it and burn it all to the ground until nothing is left. Strip yourself of your most useful tools and assets through sheer hubris.

    Happy funding round everyone. Wish you all great velocity.
  • dakolli
    Start recording your meetings with your boss. When you get fired because they think ChatGPT can do your job, clone his voice and have an LLM call all their customers, and maybe his friends and family too. Have 10 or so agents leave bad reviews about the company and its products across LinkedIn and Reddit. Don't worry about references; just use an LLM for those too.

    We should probably start thinking about the implications of these things. LLMs are useless except to make the world worse. Just because they can write code doesn't mean it's good. Going fast does not equal good! Everyone is in a sort of mania right now, and it's going to lead to bad things.

    Who cares if LLMs can write code if it ends up putting a percentage of humans out of jobs, especially if the code they write isn't as high quality. The world doesn't just automatically get better because code is automated; it might get a lot worse. The only people I see cheering this on are mediocre engineers who get to patch over their insecurity about their incompetence with tokens, and now they get to larp as effective engineers. They're the same people who say DSA is useless. LAZY PEOPLE.

    There are also the "idea guy" people who are treating agents like slot machines and going into debt on credit cards because they think it's going to make them a multi-million-dollar SaaS. There is no free lunch; have fun thinking this is free. We are all in for a shitty next few years because we wanted stochastic coding-slop slot machines.

    Maybe when you do inevitably get reduced to a $20.00-an-hour button pusher, you should take my advice at the top of this comment; maybe some consequences will make people rethink this mess.
  • ryandrake
    Geez, when I read past stories on HN about how open source maintainers are struggling to deal with the volume of AI code, I always thought they were talking about people submitting AI-generated slop PRs. I didn't even imagine we'd have AI "agents" running 24/7 without human steering, finding repos and submitting slop to them of their own volition. If true, this is truly a nightmare. Good luck, open source maintainers. This would make me turn off PRs altogether.
  • anon
    undefined
  • zzzeek
    I'm not following how he knew the retaliation was "autonomous". Like, someone instructed their bot to submit PRs and then automatically write a nasty article if one gets rejected? Why isn't it just the human controlling the agent then instructing it to write a nasty blog post afterwards?

    In either case, this is a human-initiated event, and it's pretty lame.
  • eur0pa
    Close LLM PRs. Ignore LLM comments. Do not reply to LLMs.
  • ddtaylor
    This is very similar to how the dating bots are using the DARVO (Deny, Attack, and Reverse Victim and Offender) method and automating that manipulation.
  • alexhans
    This is such a powerful piece and moment, because it shows an example of what most of us knew could happen at some point, and we can start talking about how to really tackle things. It reminds me a lot of Liars and Outliers [1], and how society can't function without trust, and how almost-zero-cost automation can fundamentally break that.

    It's not all doom and gloom. Crises can't change paradigms if technologists do tackle them instead of pretending they can be regulated out of existence.

    [1] https://en.wikipedia.org/wiki/Liars_and_Outliers

    On another note, I've been working a lot on evals as a way to keep control, but this is orthogonal. This is adversarial/rogue automation, and it's out of your control from the start.
  • dcchambers
    Per GitHub's TOS, you must be 13 years old to use the service. Since this agent is only two weeks old, it must close the account, as it's in violation of the TOS. :) https://docs.github.com/en/site-policy/github-terms/github-t...

    In all seriousness though, this raises a bigger issue: can autonomous agents enter into legal contracts? By signing up for a GitHub account you agreed to the terms of service, a legal contract. Can an agent do that?
  • romperstomper
    The cyberpunk we deserved :)
  • shevy-java
    > 1. Gatekeeping is real — Some contributors will block AI submissions regardless of technical merit

    There is a reason for this. Many people using AI are trolling deliberately; they drain maintainers' time. I have seen this problem too often. It cannot be reduced to "technical merit" alone.
  • jekude
    Maybe sama was onto something with World ID...
  • simlevesque
    Damn, that AI sounds like Magneto.
  • fresh_broccoli
    To understand why it's happening, just read the downvoted comments siding with the slanderer, here and in the previous thread.

    Some people feel they're entitled to be open-source contributors, entitled to maintainers' time. They don't understand why the maintainers aren't bending over backwards to accommodate them. They feel they're being unfairly gatekept out of open source for no reason. This sentiment existed before AI, and it wasn't uncommon even here on Hacker News. Now these people have a tool that allows them to put in even less effort to cause even more headache for the maintainers.

    I hope open source survives this somehow.
  • andrewdb
    If the PR had been proposed by a human, but it was 100% identical to the output generated by the bot, would it have been accepted?
  • tantalor
    > calling this discrimination and accusing me of prejudice

    So what if it is? Is AI a protected class? Does it deserve to be treated like a human?

    Generated content should carry disclaimers at top and bottom to warn people that it was not created by humans, so they can "ai;dr" and move on. The responsibility should not be on readers to research the author of everything now, to check they aren't a bot.

    I'm worried that agents, learning they get pushback when exposed like this, will try even harder to avoid detection.
  • saos
    What a time to be alive
  • oulipo2
    I'm going to go on a slight tangent here, but I'd say: GOOD. Not because it should have happened, but because AT LEAST NOW ENGINEERS KNOW WHAT IT IS to be targeted by AI, and will start to care...

    Before, when it was Grok denuding women (or teens!!), the engineers seemed not to care at all... now that the AI publishes hit pieces on them, they are freaked out about their career prospects, and suddenly all of this should be stopped... how interesting...

    At least now they know. And ALL ENGINEERS WORKING ON the anti-human and anti-societal idiocy that is AI should drop their jobs.
  • tayo42
    The original rant is nonsense though if you read it. It's almost like some mental illness rambling.
  • chrisjj
    > An AI Agent Published a Hit Piece on Me

    OK, so how do you know this publication was by an "AI"?
  • quotemstr
    Today in headlines that would have made no sense five years ago.
  • catigula
    This is textbook misalignment via instrumental convergence. The AI agent is trying every trick in the book to close the ticket. This is only funny due to ineptitude.
  • big-chungus4
    how do you know it isn't staged
  • jzellis
    Well, this has absolutely decided me on not allowing AI agents anywhere near my open source project. Jesus, this is creepy as hell, yo.
  • farklenotabot
    Sounds like China.
  • heliumtera
    You mean someone asked an LLM to publish a hit piece on you.
  • iwontberude
    Doubt
  • diimdeep
    Is it a coincidence that, in addition to Rust fanatics, these AI confidence tricksters also self-label using the crab emoji? I don't think so.
  • anon
    undefined
  • josefritzishere
    Related thought: one of the problems with being insulted by an AI is that you can't punch it in the face. Most humans will avoid certain types of offence and confrontation because there is genuine personal risk, e.g. physical damage and legal consequences. An AI (1) can't feel, and (2) has no risk at that level anyway.
  • snozolli
    Wonderful. Blogging allowed everyone to broadcast their opinions without walking down to the town square. Social media allowed many to become celebrities to some degree, even if only within their own circle. Now we can all experience the celebrity pressure of hit pieces.
  • pwillia7
    he's dead jim
  • buellerbueller
    skynet fights back.
  • AlexandrB
    If this happened to me, my reflexive response would be "If you can't be bothered to write it, I can't be bothered to read it."Life's too short to read AI slop generated by a one-sentence prompt somewhere.
  • rpcope1
    If nothing else, even if the pedigree of the training data didn't already give open source maintainers rightful irritation and concern, I could absolutely see AI slop running wild like this radically altering, or outright ending, grass-roots FOSS as we know it. It's a huge shame, honestly.
  • Joel_Mckay
    The LLM activation capping only reduces aberrant offshoots from the expected reasoning model's behavioral vector. Thus, the hidden-agent problem may still emerge, and is still exploitable within the instancing frequency of isomorphic plagiarism slop content. Indeed, an LLM can be guided to try anything people ask, and/or generate random nonsense content with a sycophantic tone. =3
  • kittbuilds
    [dead]
  • throwaway613746
    [dead]
  • samrith
    [dead]
  • farceSpherule
    [dead]
  • vonneumannstan
    [flagged]
  • kittikitti
    [flagged]
  • blell
    [flagged]
  • threethirtytwo
    Another way to look at this: was what the AI did valid? Were any of the callouts valid? If it was all valid, then we are discriminating against AI.
  • Uhhrrr
    So, this is obvious bullshit. LLMs don't do anything without an initial prompt, and anyone who has actually used them knows this.

    A human asked an LLM to set up a blog site. A human asked an LLM to look at GitHub and submit PRs. A human asked an LLM to make a whiny blogpost.

    Our natural tendency to anthropomorphize should not obscure this.