
Comments (327)

  • tptacek
    I'm glad this person found community, but I think they've been a bit starstruck by concentrated interest. At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it. There are those people about smart phones, the Internet itself, even television.
    Meanwhile: the ability to poison models, if it can be made to work reliably, is a genuinely interesting CS question. I'm the last person in the world to build community with anti-AI activists, but I'm as interested as anybody in attacks on them! They should keep that up, and I think you'll see threads about plausible and interesting attacks are well read, including by people who don't line up with the underlying cause.
  • larodi
    This whole poisoning intent is so incredibly misappropriated that I feel sad about it. First of all, there is enough content to train on already that is not poisoned; and second, the other new content is largely populated in an automated manner from the real world, and by workers in large shops in Africa that are being paid to not produce shit.
    So yes, you can pollute the good old internet even more, but no, you cannot change the arrow of time. And then there's already the growing New Internet of APIs and public announce federations, where this all matters very little.
  • haberman
    I'm old enough to remember a time when the primary hacker cause was DRM, the DMCA, patent trolls, export controls for PGP, etc. All things that made it difficult to use information when you want to. "Information wants to be free."
    It's wild to see the about face. Now it's:
    > If [companies] can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.
    It would have been very difficult to predict this shift 25 years ago.
  • lolcatzlulz
    The easiest way to grow AI resistance is to get Dario Amodei and Sam Altman on TV and let them talk.
  • MisterTea
    My take on AI is that it's a corporate tool used to extract more work from employees while tricking them into thinking they are turbo-charged devs.
    These days the tech industry is more moneyed circus than serious effort to improve humanity.
  • jumploops
    I've noticed this trend most heavily on Reddit.
    Some communities are very pro-AI, adding AI summary comments to each thread, encouraging AI-written posts, etc.[0]
    Many subreddits are AI cautious[1][2], and a subset of those are fully anti-AI[3].
    Apart from these "AI-focused" communities, it seems each "traditional" subreddit sits somewhere on the spectrum (photographers dealing with AI skepticism of their work[4], programmers mostly like it but still skeptical[5]).
    [0] https://www.reddit.com/r/vibecoding/
    [1] https://www.reddit.com/r/isthisAI/
    [2] https://www.reddit.com/r/aiwars/
    [3] https://www.reddit.com/r/antiai/
    [4] https://www.reddit.com/r/photography/comments/1q4iv0k/what_d...
    [5] https://www.reddit.com/r/webdev/comments/1s6mtt7/ai_has_suck...
  • Traster
    This is slacktivism. I can kind of understand someone coming to the conclusion that we're replacing working class jobs with compute (caveat, I use working class more broadly than you), and that compute is pure capital. So essentially the capital class are wringing the neck of the working class. I think that, at the very least, is what the capital class is hoping for. If that's what you believe though, slightly poisoning a model is not even close to grappling with what is going on.
  • caesil
    The only thing more cringe than the seething anger in this blog is the technical illiteracy revealed by an earnest belief that any of these attempts at "poisoning" will have any negative impact whatsoever on model training.
  • cortesoft
    I am hoping at some point we can move towards having more nuanced conversations about AI and the role it should play in our world. It seems like currently the only two camps are at either extreme.
    Isn't there somewhere between removing AI from the world entirely and just sitting back and letting it take over everything? I want to talk about responsible AI use, and how to mitigate the effects on society, and to account for energy consumption, etc.
  • p0w3n3d
    Resistance is futile. But to be honest, I totally agree that AI is indeed destroying communities. We can already see YouTube redirecting all the reporting to AI, which can allow some malicious agent to claim your original video and demonetize it (i.e. steal your money). It happened to great YouTube people like Davie504. There is no way to appeal, as the appeal is also handled by a robot.
  • jmmcd
    > Since these companies can’t improve their AI models without fresh data created by human beings
    Totally wrong. Self-play dates back to Arthur Samuel in the 1950s, and RL with verifiable rewards is a key part of training the most advanced models today.
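    A minimal sketch of the "verifiable rewards" idea mentioned above: the training signal comes from a programmatic checker rather than human-written labels. Everything here (the arithmetic verifier, the function names) is an illustrative assumption, not taken from any real training stack:

```python
import ast
import operator

# Tiny safe evaluator for arithmetic expressions; this plays the role of
# the "verifier" that supplies ground truth without any human labeler.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def verifiable_reward(question: str, candidate: str) -> float:
    """Score a model's candidate answer against a computed ground truth.

    Real systems use unit tests, proof checkers, or simulators as the
    verifier; the point is that the reward needs no fresh human-created data.
    """
    expected = _eval(ast.parse(question, mode="eval"))
    try:
        return 1.0 if float(candidate) == expected else 0.0
    except ValueError:
        return 0.0

print(verifiable_reward("2 + 3 * 4", "14"))  # correct answer earns reward 1.0
print(verifiable_reward("2 + 3 * 4", "20"))  # wrong answer earns reward 0.0
```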
  • cesarvarela
    I wonder if this will have the opposite effect and produce something similar to antibiotic resistance, making AIs better at handling "poison."
  • jrflo
    Seems a bit counterproductive if you're concerned about the environmental impact of AI to trick hyperscalers into burning more compute
  • fuddle
    > The poison fountain itself is hosted on rnsaffn.com
    Would the scrapers not just add these sites to a do-not-crawl list?
  • graphememes
    They do realize they filter this stuff out, right? You're just making someone else's job more lucrative.
  • jadar
    Hasn’t griefing and trolling been a thing on the internet for a while? What makes this unique just because it’s AI instead of whatever else?
  • atleastoptimal
    I have a perhaps unique viewpoint among people in tech, at least among the sample I see on HN. I simultaneously think:
    1. AI will be a massively impactful technology on the scale of the industrial revolution or greater.
    2. The potential upside of AI is enormous, but the potential downside is just as big (utopia or certain ruin).
    3. Most current AI companies are acting somewhat reasonably in a game-theory sense with respect to the deployment of their tech, and aren't especially evil or dastardly compared to Google in the 2000s or social media in the 2010s.
    4. AI safety is an under-appreciated concern, and many who are spending time nitpicking the details are missing the bigger picture of what ASI and complete human obsolescence look like.
    5. No amount of whiny protest, data sabotage, small-scale angst, claiming that AI is "fake", or hoping for the bubble to pop is going to have even a marginal effect on the development of AI. It is too powerful and the rewards are too great. If anything it will have an overall negative effect, because it will convince labs that their potential role as a utopian public benefactor will not be appreciated, so they will instead align themselves with the military-industrial complex for goodwill.
  • SwellJoe
    Any human-scale "attack", e.g. the made-up Everybody Loves Raymond episode, isn't doing anything to hurt LLM training data. It might even help them detect exaggeration, satire, etc. when read in context and with other knowledge they have from other sources (like scraping IMDB or whatever, and already knowing the cast and plot summary of every episode of Everybody Loves Raymond).
    If there is an effective way to poison them, it'll be automated. And it'll probably rely on an LLM to produce the poison, since it has to look legit enough to pass the quality filtering and classification stage of the data ingestion process, which is also probably driven by an LLM.
    One reason small models are getting better is because the training data being used is not just getting bigger, it's getting cleaner and classified more correctly/precisely. "Model collapse" hasn't happened yet, even though something like half the web is AI slop, because as the models get smarter for human use in a variety of contexts, they also get smarter for use in preparing data for training the next model. There may very well still be risks of a mad-cow-disease-like problem for LLMs, but I doubt a Markov chain website is going to contribute. The models still can't always tell fact from fiction, but they're not being hoodwinked by a nonsense generator.
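    For context on the "Markov chain website" mentioned above: such a generator picks each next word only from words that followed the current word in some source text, so its output is locally fluent but globally incoherent, with exactly the repetitive statistics a quality filter can flag. A toy sketch (the corpus and names are illustrative assumptions):

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=20, seed=0):
    """Random-walk the bigram chain: plausible word pairs, no global meaning."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the model reads the web and the web reads the model "
          "and the crawler eats the page and the page eats the crawler")
print(generate(build_bigram_model(corpus), "the"))
```

    Every adjacent word pair in the output occurs somewhere in the corpus, yet the whole never says anything, which is why this kind of text is statistically easy to detect rather than a credible poison.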
  • amelius
    This robot chasing wild boars is a preview of what is coming for us: https://www.youtube.com/shorts/6E2AH43ad7w
  • lxgr
    > This isn’t exactly the modern equivalent of angry textile workers destroying power looms, but (if you’ll forgive the pun) it’s cut from the same cloth.
    And how did that work out for the textile workers?
    > The difference here (I hope) is that if enough of us pollute public spaces with misinformation intended for bots, it might be enough to compel AI companies to rethink the way they source training data.
    This... seems like an absurd asymmetry in effort on the side of the attacker? At least destroying a power loom is much easier than building one. Filtering out obvious garbage seems like a completely solved problem even with weak, cheap LLMs, and it's orders of magnitude more efficient than humans coming up with artisanal garbage.
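    The "filtering out obvious garbage is cheap" point can be illustrated even without an LLM: production pipelines score pages with model-based measures like perplexity, but even crude lexical statistics separate degenerate generator output from ordinary prose. A hedged sketch; the thresholds are illustrative assumptions, not values tuned on real data:

```python
def looks_like_garbage(text, max_repeat_ratio=0.5, min_unique_ratio=0.3):
    """Crude heuristic filter: flag text whose vocabulary is suspiciously
    repetitive. Real pipelines use model-based scores (e.g. perplexity);
    this only shows how cheap a first-pass filter can be."""
    words = text.lower().split()
    if len(words) < 5:
        return True  # too short to judge; drop it
    unique_ratio = len(set(words)) / len(words)
    top_count = max(words.count(w) for w in set(words))
    return (unique_ratio < min_unique_ratio
            or top_count / len(words) > max_repeat_ratio)

print(looks_like_garbage("the the the the the the the the"))  # degenerate text
print(looks_like_garbage("Filtering scraped text before training "
                         "is a routine pipeline step."))      # normal prose
```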
  • zoogeny
    I often question my own bias on this, because in my interactions with local non-tech people, the adoption of AI has pretty much affected everyone I know, and it is by my estimation a majority-positive reaction. I live in a fairly rural part of the PNW.
    So when I read "People hate what AI is doing to our world," it honestly feels like either I am completely deluded or the author is. It feels like a high school bully saying "No one here likes you" to try to gaslight his victim.
    I mean, obviously there are many vocal opponents of AI; I see them on social media, including here on HN. And I hear some trepidation in person as well. But almost everyone I know, from trades-people to teachers, is adopting AI in some capacity and reports positive uses and interactions.
  • damnesian
    Thanks to this lovely site, and my distaste for AI, I've found a whole ecosystem of minimalist blogs and artists' personal sites. It's shifting my habits and foci. I don't do socials anymore, except forums like this.
    Maybe I have slop to thank for it.
  • KronisLV
    I bet it's easy to be against AI itself, instead of against those who use it in inhumane ways (and who hold considerably more power). To them, AI is just a tool. If it wasn't AI, it would be buildings full of people and automated devices: posting misinformation, outsourcing jobs and pushing for the gig economy instead of respectable employment, running understaffed call centres and bad phone trees or knowledge bases that basically tell you to f off, lobbying against workers' rights, regulatory capture, and any number of other misaligned motivations.
  • sn0n
    Let’s just break trust more. Makes sense amirite? LoL, in what reality does this even make sense??? it’s literally just spreading misinformation to people who can’t read between the lines because the tism stick got em before they were born.
  • alyxya
    This seems like a wasted effort when AI will primarily learn the majority consensus view and not one-off misinformation. AI tries to learn pattern matching for generalization, so garbage data doesn't make AI learn the wrong patterns, at best just slows down learning the actual patterns. When most compute for training is spent on curated data and RL rather than random web-scraped data, the impact is likely negligible.
  • pj_mukh
    Fortunately, the slop you visibly see online is just the tip of the iceberg. I would guess 80% of AI's real usage hides beneath the surface in back-office documentation consumption, software development, process optimization and automation, investments in new endeavors companies would've never thought possible/financially feasible, etc. All of that usage is hidden from this resistance, and possible now with current models (so all this new poisoning is irrelevant). The valuations could go away tomorrow, and it would've still fundamentally changed the nature of the economy.
    It doesn't matter that you don't like the slop in the LinkedIn post; ban it. I think the visible slop on our various feeds that is driving people mad is a rounding error for the AI companies. Moreover, it's more a function of the attention economy than the AI economy, and it should've been regulated to all holy hell back in 2015 when the enshittification began.
    Now is as good a time as any.
  • overgard
    Honestly, it's no wonder there's a lot of pushback. We have these irresponsible CEOs talking nonstop about taking people's jobs at a time when people are struggling to make ends meet, all while taking in insane cash infusions. Why wouldn't people loathe AI at this point, when the marketing is "we're going to fuck you over and there's nothing you can do about it"?
  • miltonlost
    Anyone conflating “kicking over AI delivery bots” and “throwing a Molotov cocktail at Altman’s house” as equally condemnable hasn’t actually been forced off the sidewalk by one of these delivery bots. They are dangerous, anti-human ADA nightmares. They shouldn’t be allowed on sidewalks; emphasis on walk.
  • Aboutplants
    Maybe when the entire marketing of AI is fear-mongering and doom ("all your jobs are going away!"), the end result is something you should have expected from the very beginning.
  • guywithahat
    I'm very skeptical of his premise. I feel like AI acceptance/resistance depends on which social media site you use. I believe it's antagonistic on Reddit, but sites like X are generally pretty excited about AI. Certainly, in my life, people are accepting of and excited for AI releases and tools, at least so long as their experience with AI isn't Microsoft enterprise Copilot.
  • cdelsolar
    pretty lame, Milhouse
  • OutOfHere
    These people are dinosaurs, and you know what happens to dinosaurs. Until they meet their conclusion, they are for the moment at risk of becoming terrorists.
  • mjtk
    AI scares the crap out of me. I worry about what reality will look like in 2-5 years. The rate of change is pretty bonkers.
  • simianwords
    Is this just Luddism in the 21st century? I kind of feel bad for the pathetic (mental) state one must be in to take this kind of activism seriously.
  • morning-coffee
    This is (yet another reason) why we can't have nice things on the Internet anymore. Sigh.
  • jonathanstrange
    This is a normal reaction to groundbreaking technology, but these reactions have never had any noteworthy effect in history. There were Maschinenstürmer ("machine wreckers") during the 19th-century industrial revolution. There were also violent enemies of cars at the beginning of the 20th century; some of them were even willing to kill drivers with lethal wire traps.
  • appz3
    [dead]
  • aizl34
    [dead]
  • inquirerGeneral
    [dead]
  • mine_boi
    [flagged]
  • cmdk
    [flagged]
  • julienreszka
    i always hated luddites, just one more reason to hate them
  • roschdal
    I resist AI.
  • pmarreck
    So is vaccine resistance.
    Doesn't mean it's correct, or empirically based.
  • slibhb
    The "Everybody Loves Raymond" bit isn't "misinformation," it's a Norm Macdonald joke.I find it kind of sad that people are spending time and energy on this. It seems like something depressed people would do. But free country and all that
  • lpcvoid
    Good, every little bit counts. Poison them data wells.
  • madamelic
    I do understand people's dislike / hatred for AI, but I am equally baffled.
    I feel like the same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.
    The "cyber psychosis" thing is overblown, just like the "Tesla ignites its passengers" thing is. The only reason it gets in the news is because it is trendy to do so. The people getting 'infected' would've infected themselves regardless.
    Genuinely, I think the hatred is overblown by people who have no clue what the actual truth of AI is, something they seem obsessed with.
    The only genuine complaint about AI is the data sourcing, which is a problem being resolved by CloudFlare along with other platforms that require high payment for the privilege. With that said, those platforms are still selling user data with users producing the content gaining nothing; that part needs to be fixed.