
Comments (34)

  • max_k
    I agree with the article: FastCGI is better than HTTP for these things. Though I'd like to make another protocol known: Web Application Socket (WAS). I designed it 16 years ago at my day job because I thought FastCGI still wasn't good enough.

    Instead of packing bulk data inside frames on the main socket, WAS has a control socket plus two pipes (raw request and response bodies). Both the WAS application and the web server can use splice() to operate on a pipe, for example. No framing needed. Also, requests are cancellable, and the three file descriptors can always be recovered.

    Over the years, we used WAS for many of our internal applications, and for our web hosting environment I even wrote a PHP SAPI for WAS. Quite a large number of web sites operate with WAS internally.

    It's all open source:
    - library: https://github.com/CM4all/libwas
    - documentation: https://libwas.readthedocs.io/en/latest/
    - non-blocking library: https://github.com/CM4all/libcommon/tree/master/src/was/asyn...
    - our web server: https://github.com/CM4all/beng-proxy
    - WebDAV: https://github.com/CM4all/davos
    - PHP fork with WAS SAPI: https://github.com/CM4all/php-src
  • nzoschke
    I've rediscovered plain old CGI as a great way for users to "vibe code" custom pages on our platform. [1]

    The scenario: we have our first-party task lists and data viewers, but users often want to customize them heavily. Say, building a Kanban view or a custom dashboard with data filters and charts.

    The box has a coding agent, which means the user can code anything, versus us building traditional report-builder tools.

    Go's stdlib has good support on both the server side and in user space. The coding agent writes a page-name/main.go that talks CGI, and the server delegates requests to it.

    It's all "person scale" data and page views, so there's no real need to even optimize with FastCGI.

    What's old is new again for agents!

    [1] https://housecat.com
  • apitman
    > That said, using a vintage technology has some downsides. It was never updated to support WebSockets

    With widespread browser support for WHATWG streams, it's pretty easy to implement your own WebSockets over long-lived HTTP requests. Basically you just send a byte stream and prepend each message with a header, which in many cases can just be a size.

    Advantages over WebSockets:

    * No special path in your server layer like you need for WebSocket.
    * Backpressure.
    * You get to take advantage of HTTP/2/3 improvements for free.
    * Lower framing overhead.

    Unfortunately, AFAIK it's still not supported to keep streaming your request body while receiving the response, so you need a pair of requests for full bidirectional streaming.
  • nostrademons
    This is quite an interesting article for its omissions.

    I remember the great FastCGI vs. SCGI vs. HTTP wars: I was founding a Web 2.0 startup right when these technologies were gaining adoption, and so was responsible for setting up the frontend stack. HTTP won because of simplicity: instead of needing to introduce another protocol into your stack, you can just use HTTP, which you already needed to handle at the gateway. All sorts of complex network topologies then became trivial: you could introduce multiple levels of reverse proxies if you ran out of capacity; you could have servers that specialized in authentication or session management or SSL termination or DDoS filtering or all the other cross-cutting concerns, without them needing to know their position in the request chain; and you could use the same application servers for development, with a direct HTTP connection, as you did in production, where they'd sit behind a reverse proxy that handled SSL and authentication and abuse detection.

    It also helped that nginx was much faster than most FastCGI/SCGI modules of the time, and more robust. I'd initially set up my startup's stack as HTTP -> Lighttpd -> FastCGI -> Django, but it was way slower than just using nginx.

    The use of HTTP was basically the web equivalent of the End-to-End Principle [1] for TCP/IP: the idea that the network and its protocols should be agnostic to what's being transmitted, with application logic living in the end nodes and the network simply filtering and redirecting packets. This has been a very powerful principle and shouldn't be discarded lightly.

    The observation the article makes is that for security, it's often better to follow the Principle of Least Privilege [2] rather than blindly passing information along: allowlist your communications to only what you expect, so that you aren't unwittingly contributing to a compromise elsewhere in the network.

    And the article is highlighting - not explicitly, but it's there - the tension between these two principles. E2E gives you flexibility, but with flexibility comes the potential for someone to use that flexibility to cause harm. PoLP gives you security, but at the cost of inflexibility, where your system can only do what you designed it to do and cannot easily adapt to new requirements.

    [1] https://en.wikipedia.org/wiki/End-to-end_principle
    [2] https://en.wikipedia.org/wiki/Principle_of_least_privilege
  • tombert
    Interesting. Most of the stuff I've done with reverse proxies has been pretty straightforward, just using what's built into Nginx, but I have to admit it wouldn't even have occurred to me to use FastCGI if I needed something more elaborate.

    I used FastCGI a bit about ten years ago to "convert" some C++ code I wrote to work on the web, but admittedly I haven't used it much since then.
  • athrowaway3z
    This seems like really bad advice, or am I missing something?

    Using FastCGI requires you to write your app to serve FastCGI.

    The upside of serving HTTP/1.1 instead of FastCGI is that devs can instantly use their browser to test things, instead of having to set up a reverse proxy on their machine.

    The bad parts of HTTP/1.1 are fixed equally well by both HTTP/2 and FastCGI. So just use HTTP/2 and you get proper framing as well as browser support.
  • chasil
    The PHP/Apache configuration distributed in the Red Hat family is the "FastCGI Process Manager" (FPM). I don't know if anything else in the RHEL distributions uses FastCGI.

    $ rpm -qi php-fpm | grep ^Summary
    Summary     : PHP FastCGI Process Manager
  • Tepix
    I've had good experiences with FastCGI back when Perl was popular. These days, WebTransport is the new sexy thing. Probably not a real FastCGI replacement.
  • simonw
    As I understand it FastCGI doesn't handle websockets, which is a shame. It should be able to handle SSE though since that's effectively just a regular slow-loading/streaming HTTP response.
  • sscaryterry
    I've fought many battles with Perl + Windows + Apache + FastCGI in a previous life. No thank you.
  • scotty79
    What I'd like to see is someone creating a local caching proxy for the modern HTTPS-infested world. I'm fed up with downloading the same packages 100 times.
  • shevy-java
    I'd love for CGI to be updated: merge what works, and don't worry too much about what doesn't. Getting a .cgi file to work on Linux is really easy. Naturally you get more leverage with e.g. Rails, but there's also a lot more complexity, and I really hate intrinsic complexity.
  • marlburrow
    [flagged]