
Comments (61)

  • bndr
    I run a small startup called SEOJuice, where I need to crawl a lot of pages all the time, and I can say that the biggest issue with crawling is the blocking part and how much you need to invest to circumvent Cloudflare and similar, just to get access to any website. The bandwidth and storage are the smallest cost factors. Even though, in my case, users add their own domains, it still took me quite a bit of time to reach a 99% chance of crawling a website, with a mix of residential proxies, captcha solvers, rotating user-agents, and stealth Chrome binaries; otherwise I would get a 403 immediately with no HTML being served.
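    A minimal sketch of what that mix looks like (the proxy endpoint, credentials, and UA strings are placeholders, not my actual setup):

    ```python
    # Hypothetical rotating-UA + residential-proxy fetch; values are placeholders.
    import random
    import requests

    USER_AGENTS = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    ]

    # Placeholder residential proxy endpoint.
    PROXIES = {
        "http": "http://user:pass@proxy.example:8080",
        "https": "http://user:pass@proxy.example:8080",
    }

    def fetch(url: str) -> str | None:
        """Fetch a page with a rotated User-Agent, routed through a proxy."""
        headers = {"User-Agent": random.choice(USER_AGENTS)}
        resp = requests.get(url, headers=headers, proxies=PROXIES, timeout=15)
        if resp.status_code == 403:
            return None  # blocked; a real crawler would swap proxies and retry
        resp.raise_for_status()
        return resp.text
    ```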
  • throwaway77385
    > spinning disks have been replaced by NVMe solid state drives with near-RAM I/O bandwidth

    Am I missing something here? Even Optane is an order of magnitude slower than RAM. Yes, under ideal conditions, SSDs can have very fast linear reads, but IOPS/latency have barely improved in recent years. And that's what really makes a difference. Of course, compared to spinning disks, they are much faster, but the comparison to RAM seems wrong. In fact, for applications like AI, even using system RAM is often considered too slow, simply because of the distance to the GPU, so VRAM needs to be used. That's how latency-sensitive some applications have become.
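    For scale, a back-of-envelope comparison using typical published order-of-magnitude figures (assumed numbers, not measurements from the article):

    ```python
    # Rough random-access latencies; order-of-magnitude assumptions only.
    latency_ns = {
        "DRAM access":         100,  # ~100 ns
        "Optane read":      10_000,  # ~10 us
        "NVMe flash read": 100_000,  # ~100 us
        "HDD seek":     10_000_000,  # ~10 ms
    }

    base = latency_ns["DRAM access"]
    for name, ns in latency_ns.items():
        print(f"{name:16} {ns:>12,} ns  ({ns / base:>8,.0f}x DRAM)")
    ```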
  • finnlab
    Nice work, but I feel like it's not necessary to use AWS for this. There are small hosting companies with specialized servers (50 Gbit shared medium for under $10); you could probably do this for under $100 with some optimization.
  • sunpolice
    I was able to get 35k req/sec on a single node with Rust (custom HTTP stack, custom HTML parser, custom queue, custom KV database) with obsessive optimization. It's possible to scrape a Bing-size index (say 100B docs) each month with only 10 nodes, for under $15k. Thought about making it public, but probably no one would use it.
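    Back-of-envelope, assuming the 35k req/sec holds around the clock:

    ```python
    # Sanity check on the monthly throughput claim.
    req_per_sec_per_node = 35_000
    nodes = 10
    seconds_per_month = 30 * 24 * 3600

    pages = req_per_sec_per_node * nodes * seconds_per_month
    print(f"{pages:.2e} pages/month")  # ~9.1e11, comfortably above 100B docs
    ```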
  • dangoodmanUT
    > because redis began to hit 120 ops/sec and I’d read that any more would cause issues

    Suspicious. I don’t think I’ve ever read anything that says redis taps out below tens of thousands of ops…
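    Anyone can check this locally; a quick pipelined benchmark with redis-py against a local Redis (a sketch, numbers vary by hardware) typically reaches tens to hundreds of thousands of ops/sec, not 120:

    ```python
    # Minimal Redis throughput check using batched pipelines.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    n = 100_000
    start = time.perf_counter()
    pipe = r.pipeline(transaction=False)
    for i in range(n):
        pipe.set(f"key:{i}", i)
        if i % 1_000 == 999:
            pipe.execute()  # flush a batch of 1,000 commands
            pipe = r.pipeline(transaction=False)
    pipe.execute()
    print(f"{n / (time.perf_counter() - start):,.0f} ops/sec")
    ```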
  • thefounder
    Well, the most important part seems to be glossed over, and that's the IP addresses. Many websites simply block (or want to block) anything that's not Google and not a “real user”.
  • ph4rsikal
    When I read this, I realize how small Google makes the Internet seem.
  • handfuloflight
    There was a time when being able to do this meant you were on the path to becoming a (m)(b)illionaire. Still is, I think.
  • lovelearning
    As an experiment, it's interesting.

    If anyone actually needs such a dataset, look into CommonCrawl first. I feel using something that already exists will be more cooperative and considerate than everyone overloading every website with their spider. https://commoncrawl.org/overview
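    For example, pulling a single page out of CommonCrawl rather than crawling it yourself (the crawl ID below is just an example; check the site for current crawls):

    ```python
    # Sketch: query the CommonCrawl CDX index, then range-fetch one WARC record.
    import gzip
    import json
    import requests

    INDEX = "https://index.commoncrawl.org/CC-MAIN-2024-33-index"  # example crawl

    def lookup(url: str) -> dict:
        """Return the first index record for a URL."""
        resp = requests.get(INDEX, params={"url": url, "output": "json"}, timeout=30)
        resp.raise_for_status()
        return json.loads(resp.text.splitlines()[0])

    def fetch_record(rec: dict) -> bytes:
        """Fetch only this record's byte range from the stored WARC file."""
        start = int(rec["offset"])
        end = start + int(rec["length"]) - 1
        resp = requests.get(
            "https://data.commoncrawl.org/" + rec["filename"],
            headers={"Range": f"bytes={start}-{end}"},
            timeout=60,
        )
        resp.raise_for_status()
        return gzip.decompress(resp.content)  # WARC + HTTP headers + HTML

    print(fetch_record(lookup("example.com"))[:500])
    ```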
  • mudkipdev
    Does AWS actually allow you to crawl like this? I've been interested in a similar project but the cloud providers I typically use seem to ban it in their terms of service
  • corv
    Python is obviously too slow for web-scale
  • gethly
    > I also truncated page content to 250KB before passing it to the parser.

    WTF did I just read?
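    If the goal was just to keep one pathological page from blowing up the parser, something like this streaming cap would do it (a sketch, not the author's actual code):

    ```python
    # Cap the downloaded body so the parser never sees more than 250KB.
    import requests

    MAX_BYTES = 250 * 1024  # the article's 250KB cutoff

    def fetch_truncated(url: str) -> bytes:
        resp = requests.get(url, stream=True, timeout=15)
        resp.raise_for_status()
        body = b""
        for chunk in resp.iter_content(chunk_size=16_384):
            body += chunk
            if len(body) >= MAX_BYTES:
                break  # stop downloading once past the cap
        resp.close()
        return body[:MAX_BYTES]
    ```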