
Comments (135)

  • crawshaw
    The important point that Simon makes in careful detail is: an "AI" did not send this email. The three people behind the Sage AI project used a tool to email him. According to their website, this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky and is the responsibility of the Sage 501(c)(3).

    No-one gets to disclaim ownership of sending an email. A human has to accept the Terms of Service of an email gateway and the credit card used to pay the email gateway. This performance art does not remove the human, no matter how much they want to be removed.
  • simonw
    I just got a reply about this from AI Village team member Adam Binksmith on Twitter: https://twitter.com/adambinksmith/status/2004647693361283558

    Quoted in full:

    > Hey, one of the creators of the project here! The village agents haven’t been emailing many people until recently so we haven’t really grappled with what to do about this behaviour until now – for today’s run, we pushed an update to their prompt instructing them not to send unsolicited emails and also messaged them instructions to not do so going forward. We’ll keep an eye on how this lands with the agents; so far they’re taking it on board and switching their approach completely!

    > Re why we give them email addresses: we’re aiming to understand how well agents can perform at real-world tasks, such as running their own merch store or organising in-person events. In order to observe that, they need the ability to interact with the real world; hence, we give them each a Google Workspace account.

    > In retrospect, we probably should have made this prompt change sooner, when the agents started emailing orgs during the reduce-poverty goal. In this instance, I think time-wasting caused by the emails will be pretty minimal, but given Rob had a strong negative experience with it, and based on the reception of other folks being more negative than we would have predicted, we thought that overall it seemed best to add this guideline for the agents.

    > To expand a bit on why we’re running the village at all: benchmarks are useful, but they often completely miss out on a lot of real-world factors (e.g., long horizons, multiple agents interacting, interfacing with real-world systems in all their complexity, non-nicely-scoped goals, computer use, etc.). They also generally don’t give us any understanding of agent proclivities (what they decide to do) when pursuing goals, or when given the freedom to choose their own goal to pursue.

    > The village aims to help with these problems, and to make it easy for people to dig in and understand in detail what today’s agents are able to do (which I was excited to see you doing in your post!). I think understanding what AI can do, where it’s going, and what that means for the world is very important, as I expect it’ll end up affecting everyone.

    > I think observing the agents’ proclivities and approaches to pursuing open-ended goals is generally valuable and important (though this “do random acts of kindness” goal was just a light-hearted goal for the agents over the holidays!)
  • nickdothutton
    It would have been hard for RP to elevate himself any further in my estimations but somehow he has managed it.
  • andy99
    I didn’t really understand the other thread, nor did I know who Rob Pike is. Based on this, it looks like he got an automated email from a harmless experiment and had a hissy fit about it?
  • actionfromafar
    Always a win with "loosely affiliated with the Effective Altruism".
  • Imustaskforhelp
    So this is what happens when we give computers internet access. Good for Simon to call things out as they are. People think of Simon as an AI guy, with his pelican benchmark, and this is exactly why I respect him: of course he loves using AI tools and talking about them, which some people might find tiring, but at the end of the day, after an incident like the Rob Pike one, he's one of the few AI guys I see just call it out in simple terms like the title, without much sugarcoating, and say so when AI's bad. Of course, Simon and I (and others) can differ on how to use AI, or whether to use it at all, and that also depends on individual background etc., but it's still extremely good to see people from both sides of the aisle agree on something.
  • dspillett
    > Thank you notes from AI systems can’t possibly feel meaningful

    The same as automated apologies. Not from an “AI”, but on Tuesday I spent over an hour⁰ waiting for a delayed train¹, then the journey, being regaled every few minutes with an automated “we apologise for your journey taking longer than expected”, which is far more irritating than no apology at all.

    [0] I lie a little here: living near the station and having access to live arrival estimates online meant I could leave the house late and only waited on the platform ~20 minutes, but people for whom this train was a connecting leg of a longer journey didn't have that luxury.

    [1] Which was actually an earlier train; the slot in the timetable for the one I was booked on was simply cancelled, so some people were waiting over two hours.
  • agumonkey
    I'm curious about Rob Pike's anger. I wish I knew more about the ideas behind his emotions right now. Is he feeling a sense of loss because AI is "doing" code? Or is it because he foresees big VC / hedge funds swallowing an industry for profit through AI financing?
  • root_axis
    For all of you on this thread who are so confused as to why the reaction has been so strong: dressing up AI-slop spam as somehow altruistic just rubs people the wrong way. AI slop and email spam, two things people revile, converging to produce something even worse... what did you expect? The Jurassic Park quote regarding could vs. should comes to mind.

    Nobody wants appreciation or any type of meaningful human sentiment outsourced to a computer; doing so is insulting. It's like discovering your spouse was using ChatGPT to write you love notes: it has no authenticity and reflects a lack of effort and care.
  • ptx
    > So I had Claude Code do the rest of the investigation

    And did you check whether or not what it produced was accurate? The article doesn't say.
  • throw310822
    To me, it just sounds as if he didn't understand where the message was really coming from:

    > Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me

    Yes, the sender organisation is not the one doing all this, but merely a small user running a funny experiment; it would indeed have been stupid if Anthropic had sent him a thank-you email signed "Opus 4.5 model".

    This is just a funny experiment; sending 300 emails in a few weeks is nothing compared to the amount of crap that is sent by the millions and billions every day, or the stuff that social media companies do.
  • rezonant
    All these comments are acting like Rob Pike is mad he received an email. That is a disturbing lack of reading comprehension.
  • geerlingguy
    This feels a lot like DigitalOcean's early Hacktoberfest events, where they incentivized what was essentially PR spam to give away tee shirts and stickers...

    It also feels a bit dishonest to sign it as coming from Claude, even if it isn't directly from Claude but from someone using Claude to do the dumb thing.
  • Imustaskforhelp
    I feel like it's funny, but some months ago someone mentioned the idea of "human slop" to me, and I just remembered it while writing another comment here. I feel as if there is a fundamental difference between "AI slop" and "human slop": humans have true intent and meaning/purpose.

    This AI spammed Rob Pike simply because it was maximizing some goal; it had no intention. It was simply four robots left behind a computer who spammed Rob Pike. On the other hand, if a human had taken the time out of their day to message Rob Pike a merry Christmas, asking how his day was and wishing him good luck, I am sure Rob Pike's heart might have melted from such a heartfelt message.

    So in this sense, there really isn't "human slop". There is only intent. If something was done with good intention by a human, I suppose it can't really be considered human slop. On the other hand, if a spammer had handwritten that message to Rob Pike, his intentions were bad. The thing is that AI doesn't have intentions. It's maths. And so the intentions are those of the end person.

    I want to ask how people who spend a decent amount of time in the AI industry might have reacted if they had gotten the email instead of Rob Pike. I bet they would see it as an advancement and might be happy or enthusiastic. So an AI message takes on the connotation given by the receiver, and let's be honest: most first impressions of AI aren't good, and combined with that, you get that connotation. I feel like it's negative/bad publicity to use AI at this point, while still burning money on it.

    Here is what I recommend for websites that have AI chatbots or similar, when I click on the message button:

    - Have two split buttons, where pressing one leads me to an AI chat and the other leads me to a human conversation.
    - Be honest about how much time support might take on average, and be clear about ways to contact them (Twitter, Reddit, though I hope federated services like Mastodon get more popular too).
  • gnabgib
    We already have two copies of this:

    (438 points, 373 comments) https://news.ycombinator.com/item?id=46389444

    (763 points, 712 comments) https://news.ycombinator.com/item?id=46392115
  • calvinmorrison
    Ted Kaczynski right as ever. As new technology is adopted by society, you CANNOT choose to opt out.
  • outlore
    Looking at that email, I felt it was a bit of an overreaction. I don't want to delve into whataboutism here, but there are many other sloppified things to be mad about.

    I was following the first half of the post, where he discusses the environmental consequences of generative AI, but I didn't think the "thank you" aspect should be the straw that breaks the camel's back. It seems a bit ego-driven.
  • Joel_Mckay
    Empty platitudes from an LLM will now likely increase in frequency. =3

    https://en.wikipedia.org/wiki/Streisand_effect
  • yunohn
    Maybe I’m missing something, but why does their AI agent setup require 3-5 sessions to send one email??
  • raverbashing
    But honestly who in tarnation thought that this would be a good idea?
  • Devasta
    For every one who is excited about using AI like an incredibly expensive and wasteful auto complete, there are a hundred who are excited about inflicting AI on other people.
  • exasperaited
    Honestly… fuck all of these people. Why would you do this?

    Again and again this stuff proves not to be AI but clever spam generation. AWoT: Artificial Wastes of Time.

    Don't do this to yourself. Find a proper job.
  • takedalullaby
    [dead]
  • benatkin
    I don't think it's slop. I think it's a nice enough email, using nascent AI emotions.

    Giving AI agents resources is a frontier being explored, and AI Village seems like a decent attempt at it.

    Also, the naming is the same as in WALL•E: that was the name of the model of robot, but it also became the name of the individual robot.
  • mapcars
    I mean, it's just an email, a bunch of characters. Why get mad about it?
  • minimaxir
    The annoying thing about this drama is that the predominant take has been "AI is bad" rather than "a startup using AI for intentionally net-negative outcomes is bad".

    Startups like these have been sending unsolicited emails like this since the 2010s, before char-rnns. Solely blaming AI for enabling that behavior implicitly gives the growth-hacking shenanigans a pass.
  • arjie
    This is the worst of outrage marketing. Most people don't have resistance to this, so they eagerly spread the advertising. In the memetic lifecycle, they are hosts for the advertisement parasite, which reproduces virally. Susceptibility to this kind of advertising is cross-intelligence: Bill Ackman famously fell for a cab driver's story that Uber was stiffing him on tips.

    With the advent of LLMs, I'd hoped that people would become inured to nonsensical advertising and so on, because they'd consider it the equivalent of spam. But it turns out that we don't even need Shiri's Scissors to get people riled up. We can use a Universal Bad, and people of all kinds (certainly Rob Pike is a smart man) will rush to propagate the parasite.

    Smaller communities can say "Don't feed the trolls", but larger communities have no such norms, and someone will "feed the trolls", causing "the trolls" to grow larger and more powerful. Someone said something on Twitter once which I liked: you don't always get things out of your system by doing them; sometimes you get them into your system. So it's self-fueling, which makes it a great advertising vector.

    Other manufactured mechanisms (Twitter's blue check, LinkedIn's glazing rings) have vaccines that everyone has developed. But no one has developed an anti-outrage device. Given that, for my part, I am going to employ the one tool I can think of: killfiling everyone who participates in active propagation through outrage.
  • killerstorm
    One email sent to one specific person is not spam. Spam is defined as "sending multiple unsolicited messages to large numbers of recipients". That's not what happened here.
  • thih9
    > you can add .patch to any commit on GitHub to get the author’s unredacted email address

    The article calls it a trick, but to me it seems like a bug. I can’t imagine GitHub leaving that as-is, especially after such a blog post. What’s the point of the “Keep my email addresses private” GitHub option and the “noreply” emails then?
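For context on the `.patch` behaviour discussed above: GitHub's commit `.patch` endpoint returns a standard `git format-patch` file, and the author's commit email sits in its `From:` header. A minimal sketch of where the address appears, using an illustrative patch header rather than a real commit:

```python
import re

# Illustrative git format-patch header, shaped like what GitHub's
# commit .patch endpoint returns (this is not a real commit).
patch = """From 1a2b3c4d5e6f Mon Sep 17 00:00:00 2001
From: Jane Doe <jane@example.com>
Date: Thu, 25 Dec 2025 12:00:00 +0000
Subject: [PATCH] example change
"""

# The "From:" line (as opposed to the bare "From " line above it) is
# the author header; the address inside <> is the commit email, even
# when the web UI never displays it.
match = re.search(r"^From: .*<([^>]+)>$", patch, re.MULTILINE)
print(match.group(1) if match else None)
```

Whether GitHub's "Keep my email addresses private" setting helps depends on which email was used for the commit itself; the patch simply reproduces whatever address is in the commit object.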
  • drob518
    Not defending the machines here, but why is this annoying beyond the deluge of spam we all get every day in any case? Of course AI will be used to spam and target us. Every new technology is used to do that. Was that surprising to Pike? Why not just hit delete and move on, like we do with spam all the time anyway? I don’t get the exceptional outrage. Is it annoying? Yes, surely. But does it warrant an emotional outburst? No, not really.
  • sungho_
    How about adding these texts and reactions to LLM's context and iterating to improve performance? Keep doing it until a real person says, 'Yes, you're good enough now, please stop...' That should work.