
Comments (126)

  • bobosola
    I dunno... it feels like the same approach as those people who tell you gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced.

    Also, inserting hidden or misleading links is specifically a no-no for Google Search [0], who have this to say:

    > We detect policy-violating practices both through automated systems and, as needed, human review that can result in a manual action. Sites that violate our policies may rank lower in results or not appear in results at all.

    So you may well end up doing more damage to your own site than to the bots by using dodgy links in this manner.

    [0] https://developers.google.com/search/docs/essentials/spam-po...
  • tasuki
    > If you have a public website, they are already stealing your work.

    I have a public website, and web scrapers are stealing my work. I just stole this article, and you are stealing my comment. Thieves, thieves, and nothing but thieves!
  • aldousd666
    This is ultimately just going to give them training material for how to avoid this crap. They'll have to up their game to get good code. The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing by your other content getting scraped. The bottom has always been threatening to fall out of the ads-for-eyeballs economy, and nobody could anticipate the trigger for the downfall. Looks like we found it.
  • Art9681
    Can't we simply parse out and remove any style="display: none;", aria-hidden="true", and tabindex="1" attributes before the text is processed, and get around this trick? What am I missing?
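    The stripping this comment describes is doable with just the standard library. A minimal sketch, assuming well-formed non-void nesting and only the inline `display: none` / `aria-hidden="true"` markers mentioned above (real poisoning tools may hide content in other ways, e.g. CSS classes or off-screen positioning, which this won't catch):

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect only text outside elements hidden via inline style or aria-hidden."""

    def __init__(self):
        super().__init__()
        self.hidden_depth = 0  # how many enclosing hidden elements we are inside
        self.parts = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        style = (a.get("style") or "").lower().replace(" ", "")
        hidden = "display:none" in style or a.get("aria-hidden") == "true"
        # once inside a hidden element, everything nested is hidden too
        if self.hidden_depth or hidden:
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth and data.strip():
            self.parts.append(data.strip())

def visible_text(html: str) -> str:
    p = VisibleText()
    p.feed(html)
    return " ".join(p.parts)

print(visible_text('<p>Real text</p><div style="display: none">decoy</div>'))
# → Real text
```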
  • CrzyLngPwd
    Way back in the day I had a software product with a basic system to prevent unauthorised sharing, since there was a small charge for it.

    Every time I released an update, a new crack would appear. For the next six months I worked on improving the anti-copying code, until I stumbled across an article by a coder in the same boat as me.

    He realised he was now playing a game with some other coders: he would make the copy protection better, but the cracker would then have fun cracking it. It was a game of whack-a-mole.

    I removed the copy protection, as he did, and got back to my primary role of serving good software to my customers.

    I feel like trying to prevent AI bots, or any bots, from crawling a public web service is a similar game of whack-a-mole, but one where you may also end up damaging your service.
  • dwa3592
    Love it. Thanks for doing this work. Not sure why people are criticizing this. Also, an insane amount of work has been done to improve scraping - which in my mind is just absolute bonkers, and I didn't see people complaining about that.
  • effnorwood
    certainly don't allow anyone to access your content. perhaps shut the site down just to be safe.
  • madeofpalk
    Is there any evidence or hints that these actually work?

    It seems pretty reasonable that any scraper would already have mitigations for things like this as a function of just being on the internet.
  • kristopolous
    I did a related approach: a toll-charging gateway for LLM scrapers - a modification to robots.txt to add price sheets in the comment field, like a menu.

    This was for a hackathon, by forking certbot. Cloudflare has an enterprise version of this, but this one would be self-hosted.

    I think it has legs, but I need to get pushed and goaded, otherwise I tend to lose interest...

    It was for the USDC company btw, so that's why there's a crypto angle - this might be a valid use case! I'm open to crypto not all being hustles and scams.

    Tell me what you think? https://github.com/kristopolous/tollbot
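    A price sheet in robots.txt comments could look something like this. This is a purely hypothetical format for illustration - the actual syntax tollbot uses may well differ:

```
# toll: /articles/  0.002 USDC per request
# toll: /archive/   0.010 USDC per request
# payment-endpoint: https://example.com/.well-known/toll
User-agent: *
Disallow: /articles/
Disallow: /archive/
```

    Comments are ignored by compliant robots.txt parsers, so a scheme like this stays backward-compatible: ordinary crawlers see only the Disallow rules, while paying scrapers can read the menu.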
  • bluepeter
    A related technique used to work so well for search engine spiders. I had some software I wrote called 'search engine cloaker'... this was back in the early 2000s... one of the first if not the first to do the shadowy "cloaking" stuff! We'd spin dummy content from lists of keywords and it was just piles and piles. We made it a bit smarter using Markov chains to make the sentences somewhat sensible. We'd auto-interlink and get 1000s of links. It eventually stopped working... but it took a long while for that to happen. We licensed the software to others. I rationalized it because I felt, hey, we have to write crappy copy for this stupid "SEO" thing, so let's just automate that and we'll give the spiders what they seem to want.
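    The Markov chain spinning described here fits in a few lines. A minimal word-level sketch for illustration (not the commenter's actual software): record which words follow each word in a seed corpus, then walk the chain so every adjacent pair in the output is one that actually occurred somewhere in the source:

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 12, seed: int = 0) -> str:
    """Random-walk the chain; every bigram emitted appeared in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nexts = chain.get(out[-1])
        if not nexts:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(nexts))
    return " ".join(out)

corpus = "the spider crawls the web and the web feeds the spider"
print(generate(build_chain(corpus), "the"))
```

    Locally each word pair is plausible, which is what made it look "somewhat sensible" to early spiders; it only falls apart over longer spans, which a first-order chain never checks.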
  • hmokiguess
    Could this lead to something like the Streisand effect? I imagine these bots work at a scale where humans in the loop only act when something deviates from the standard, so if a bot flags something up about your website, then you're now on a list you previously weren't. Now don't ask me what they do with those lists, but I guess you will make the cut.
  • holysoles
    If anyone is looking for a way to actually send traffic to a tool like this, I wrote a Traefik plugin that can block or proxy requests based on user agent: https://github.com/holysoles/bot-wrangler-traefik-plugin
  • theandrewbailey
    Or you can block bots with these (until they start using them) https://developer.mozilla.org/en-US/docs/Glossary/Fetch_meta...
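    The `Sec-Fetch-*` header names below are real Fetch metadata request headers; the policy itself is an assumed heuristic for illustration, and it has the caveat the comment notes plus one more: browsers older than roughly 2020 don't send these headers at all, so absence only means "not a recent browser":

```python
def looks_like_browser_navigation(headers: dict) -> bool:
    """Heuristic: modern browsers attach Fetch metadata headers to requests;
    most scraping scripts do not. Absence is treated as 'probably a bot'."""
    site = headers.get("Sec-Fetch-Site")
    mode = headers.get("Sec-Fetch-Mode")
    if site is None or mode is None:
        return False  # no Fetch metadata at all: likely a script, or an old client
    # a top-level page load driven by the user (typed URL, clicked link, etc.)
    return mode == "navigate" and site in ("none", "same-origin", "same-site", "cross-site")
```

    A server could route requests failing this check to Miasma-style decoy pages instead of real content, though spoofing these headers is trivial once scrapers bother to.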
  • eliottre
    The data poisoning angle is interesting. Models trained on scraped web data inherit whatever biases, errors, and manipulation exist in that data. If bad actors can inject corrupted data at scale, it creates a malign incentive structure where model training becomes adversarial. The real solution is probably better data provenance -- models trained on licensed, curated datasets will eventually outcompete those trained on the open web.
  • ninjagoo
    Isn't this a trope at this point? That AI companies are indiscriminately training on random websites?

    Isn't it the case that AI models learn better and are more performant with carefully curated material, so companies do actually filter for quality input?

    Isn't it also the case that the use of RLHF and other refinement techniques essentially 'cures' the models of bad input?

    Isn't it also, potentially, the case that the AI scrapers are mostly looking for content based on user queries, rather than as training data?

    If the answers to the questions lean a particular way (yes to most), then isn't the solution rate-limiting incoming web queries rather than (presumed) well-poisoning? Is this a solution in search of a problem?
  • nosmokewhereiam
    "My asthmar"

    I'm assuming this is a reference to Lord of the Flies.
  • ninjagoo
    This is essentially machine-generated spam. The irony of machine-generated slop to fight machine-generated slop would be funny, if it weren't for the implications.

    How long before people start sharing AI-spam lists, both pro-AI and anti-AI? Just like with email, at some point these share-lists will be adopted by the big corporates, and just like with email will make life hard for the small players.

    Once a website appears on one of these lists, legitimately or otherwise, what'll be the reputational damage hurting appearance in search indexes? There have already been examples of Google delisting or dropping websites in search results. Will there be a process to appeal these blacklists? Based on how things work with email, I doubt this will be a meaningful process. It's essentially an arms race, with the little folks getting crushed by juggernauts on all sides.

    This project's selective protection of the major players reinforces that effect; from the README:

    "Be sure to protect friendly bots and search engines from Miasma in your robots.txt!

    User-agent: Googlebot
    User-agent: Bingbot
    User-agent: DuckDuckBot
    User-agent: Slurp
    User-agent: SomeOtherNiceBot
    Disallow: /bots
    Allow: /"
  • meta-level
    Isn't posting projects like this the most visible way to report a bug and get it fixed as soon as possible?
  • jijji
    why not just try to block them at the door instead of feeding them poisoned food...
  • superkuh
    Of course Googlebot, Bingbot, Applebot, Amazonbot, YandexBot, etc from the major corps are HTTP useragent spiders that will have their downloaded public content used by corporations for AI training too. Might as well just drop the "AI" and say "corporate scrapers".
  • snehesht
    Why not simply blacklist or rate limit those bot IP’s ?
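    Per-IP rate limiting is simple to sketch. A minimal in-memory token bucket, illustrative only - real deployments do this at the proxy or CDN layer, and the reason it tends not to be enough on its own is that scraper fleets rotate through large residential IP pools, so each address stays under any per-IP budget:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow up to `rate` requests/second per client IP, with bursts up to `burst`."""

    def __init__(self, rate: float = 2.0, burst: float = 10.0):
        self.rate, self.burst = rate, burst
        # ip -> (tokens remaining, timestamp of last check); new IPs start full
        self.state = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, ip: str) -> bool:
        tokens, last = self.state[ip]
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the burst size
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.state[ip] = (tokens, now)
            return False  # over budget: block, delay, or serve decoys
        self.state[ip] = (tokens - 1.0, now)
        return True
```

    Usage would be a one-line check per request, e.g. `if not bucket.allow(request_ip): return 429`.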
  • foxes
    Wonder if you can just avoid hiding it, to make it more believable.

    Why not have a Library of Babel-esque labyrinth visible to normal users on your website? Like anti-surveillance clothing, or something they have to sift through.
  • rob
    "/brainstorming git checkout this miasma repo source code and implement a fix to prevent the scraper from not working on sites that use this tool"
  • imdsm
    Applied model collapse
  • Imustaskforhelp
    I wish there were some regulation that could force companies who scrape for profit to reveal who they are to the end websites. Many new AI companies don't seem to respect any decision made by the person who owns the website and shares their knowledge for other humans, only for it to get distilled for a few cents.
  • rvz
    > Be sure to protect friendly bots and search engines from Miasma in your robots.txt!

    Can't the LLM scrapers just ignore that, or spoof their user agents anyway?
  • obsidianbases1
    I know there are real world problems to deal with, but at least I got one over on that evil open claw instance /s
  • GaggiX
    These projects are the new "To-Do List" app.
  • obsidianbases1
    Why do this though? It's like if someone was trying to "trap" search crawlers back in the early 2000s. Seems counterproductive.
  • splitbrainhack
    -1 for the name
  • jstanley
    If you want to ruin someone's web experience based on what kind of thing they are, rather than the content of their character, consider that you might be the baddies.