
Comments (39)

  • andai
    In uni I built a simple web scraper in JavaScript. It just ran in the browser. It would fetch a page, extract the links, then fetch those pages. You could watch it run in realtime and it would build out a tree of nested links as it crawled. It was a lot of fun to watch! (More fun than CLI-based crawlers for sure.)
    The only issue I had was not being able to fetch pages from the front end due to CORS, so I just added a "proxy" to my server in 1 line of PHP. There, now it's secure? ;)
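The one-line PHP proxy itself is elided in the comment, but the crawl loop it describes might look roughly like this in browser JavaScript (the function names and the `/proxy.php` path are illustrative, not from the comment):

```javascript
// Pull href values out of an HTML string. A crude regex is fine for a toy crawler.
function extractLinks(html) {
  const links = [];
  const re = /href="([^"]+)"/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    links.push(m[1]);
  }
  return links;
}

// A direct cross-origin fetch() from the browser is blocked by CORS, which is
// why the commenter routed every request through a same-origin "proxy" endpoint.
async function crawl(url, depth = 0, maxDepth = 2) {
  if (depth > maxDepth) return;
  const res = await fetch("/proxy.php?url=" + encodeURIComponent(url)); // hypothetical proxy path
  const html = await res.text();
  for (const link of extractLinks(html)) {
    crawl(link, depth + 1, maxDepth); // fire-and-forget, so the link tree grows live on screen
  }
}
```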
  • hansvm
    I really appreciate how, out of all the security models they could've chosen, we ended up with the one which prevents you from writing better client-side frontends for incumbents or otherwise participating in a free and open ecosystem, while simultaneously being too confusing to use securely without a fair amount of explicit coaching and extremely careful code review for literally 100% of the junior devs I've met.
    TFA is just a manifestation of the underlying problem. You thought you were publishing your thoughts to a world wide web of information, but that behavior is opt-in rather than opt-out.
  • alessivs
    Let's get this behavior to be the default in Wordpress and Laravel for public sections—that would cover a lot of ground. I regularly encounter and suffer from unmodified instances of Laravel's default session cookie timeout of 120 minutes. If a more relaxed CORS policy were the default, it wouldn't be an inconvenience and would likely be just as widespread.
  • travisvn
    Hey folks, I'm the developer working on Blogs Are Back. WakaTime has me clocked in at over 900 hours on this project so far... If CORS weren't an issue, it could've been done in 1/10th of that time. But if that were the case, there would've already been tons of web-based RSS readers available.
    Anyway, the goal of this project is to help foster interest in indie blogs and help a bit with discovery. Feel free to submit your blog if you'd like! If anyone has any questions, I'd be happy to answer them.
  • RandomGerm4n
    It's really cool that you can simply get the full text from sites that refuse to offer the entire text in their RSS feed, without having to go to their site. However, there are a few things that don't work so well. When you add feeds from YouTube, the video is not embedded. Even if the feature is out of scope, it would be good if the title and a link to the video were displayed instead. Bluesky posts also lack embedded content. Furthermore, a maximum of 100 feeds is clearly not enough. If you add things like YouTube, Reddit, Lemmy, Bluesky, etc., you will reach the limit very quickly. Even if this isn't content that you actually read in the reader, it would be annoying to have two different RSS apps just for that reason.
  • mike-cardwell
    I have done this. I also relaxed my Cross-Origin-Embedder-Policy header - https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
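For anyone wanting to do the same on their own feed, a minimal sketch of the header changes involved, assuming a Node-style response object (`relaxFeedHeaders` is an illustrative name, not from the comment). `Access-Control-Allow-Origin: *` lets any origin read the feed, and `unsafe-none` is the relaxed (browser-default) value for the embedder policy the commenter mentions:

```javascript
// Set response headers on a feed endpoint so browser-based readers
// can fetch it cross-origin. Header names and values are standard.
function relaxFeedHeaders(res) {
  // Allow any origin to read this resource with a plain GET.
  res.setHeader("Access-Control-Allow-Origin", "*");
  // Relax the embedder policy: don't require cross-origin isolation.
  res.setHeader("Cross-Origin-Embedder-Policy", "unsafe-none");
}
```

In an Express app this could run as middleware on the feed route; the wildcard origin is safe here precisely because a public feed carries no per-user credentials.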
  • arjie
    Huh, that's a pretty interesting request. And it makes sense to me. I've enabled it on my RSS feed. I wanted to add my blog feed to test it out, but when I went to do so I had to install a Chrome extension for your app first. All right, if someone wants my blog for whatever reason that badly, they can now do it.
  • dmje
    Isn't the core audience of BaB people who already have a feed reader? I know I am.
  • impure
    I have noticed some sites block cross origin requests to their feeds. It’s annoying but I just use a server now so I don’t care. I very much recommend RSS readers to use a server as it means you get background fetch and never miss a story on sites with a lot of stories like HN.
  • hvb2
    This feels like such a weird ask? Why would anyone do this, so their content can be easily read elsewhere, potentially with a load of ads surrounding it? This seems to reason through only the happy path, ignoring bad actors, and there'll always be bad actors.