
Comments (174)

  • nickjj
Docker Compose was production ready in 2015 and it still is today. I've lost track of how many projects I've deployed with it and never really ran into a single issue where Docker Compose was at fault. It's super solid.

    Some time ago I wrote about my experiences using it in production: https://nickjanetakis.com/blog/why-i-like-using-docker-compo.... Not just for my own projects but for $500 million companies and more.
  • stackskipton
SRE here, my thought is "Sure, Docker Compose is great for production assuming your needs are light and Docker Compose works well for you."

    K8s at small scale is overkill for sure, but make sure you don't fall into this trap: https://www.macchaffee.com/blog/2024/you-have-built-a-kubern...
  • 2ndorderthought
    Should you have a turkey sandwich for lunch in 2026? I don't know buddy just do whatever. There are ten thousand other sandwiches you could eat surely, but does turkey sound good for you?
  • noodlesUK
    I think many of these issues are also solved by Podman and systemd depending on what kind of "production" you're building for. If you're building a linux-y appliance and you need to run a few containers I think Podman is a much better and more ergonomic way of doing so. I think perhaps that's less true for running a web service (where the linux environment is just a means to that end).
  • merpkz
How do you guys who run Docker in production deal with managing the nftables firewall on hosts running containers? By design the docker daemon creates and manages a set of firewall rules to forward traffic between containers and ingress traffic into containers, and it also masquerades outgoing container traffic. That is all well until the admin needs to alter the host's firewall to allow or deny other traffic unrelated to docker - restarting nftables, or even just applying new rules (usually "flush ruleset" in /etc/nftables.conf), purges all the docker-created rules and effectively breaks everything until the docker daemon is restarted and the rules re-created.

    I have partially solved this by using nftables filter chains with different names - admin_input/admin_output - and using the input hook with a negative priority, so that traffic I choose to block is evaluated before docker's rules are applied. That feels a bit like a hack, but so far it's the only way I have found. It is good practice in this day and age to run local firewalls on all hosts with a default-deny policy, so that only explicitly allowed traffic can pass; that can severely limit the blast radius during a compromise.
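    The approach described above can be sketched as an nftables fragment. This is illustrative only: the table and chain names, priority value, and ports are assumptions, and note that an input-hook chain only filters traffic addressed to the host itself, not traffic forwarded into containers. The key trick is keeping the admin rules in their own table and flushing only that table on reload, so Docker's tables are never purged:

    ```nftables
    # /etc/admin.nft -- hypothetical host policy kept in its own table so that
    # reloading it never touches the tables the Docker daemon manages.
    # Flush only this table; `flush ruleset` would purge Docker's rules too.
    table inet admin {}
    flush table inet admin

    table inet admin {
        chain admin_input {
            # Negative priority: evaluated before hooks registered at priority 0.
            type filter hook input priority -10; policy drop;
            ct state established,related accept
            iif "lo" accept
            tcp dport { 22, 443 } accept
        }
    }
    ```

    Reloading with "nft -f /etc/admin.nft" then replaces only the admin table in place.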
  • __jonas
I like running docker compose for my simple needs because it consolidates pretty much all the config in one declarative file, and docker manages 'everything'. By now I know how to handle the handful of caveats listed in this article. Beyond what's listed there, I'd also give a mention to the way port publishing works (the fact that it ignores firewalls), as that's something that still trips people up if they don't know about it.

    > docker compose pull && docker compose up -d is a fine command if you are SSH’d into the host. At customer scale—dozens of self-managed environments behind firewalls, each with its own change-control process—that manual process doesn’t scale.

    No idea what this 'customer scale' operation is, but it seems like a pretty clear-cut candidate for not using docker compose. I also don't think watchtower should be listed there; it's been archived and was never recommended for production usage anyway.
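    The port-publishing caveat mentioned above can be mitigated by binding published ports to loopback in the compose file (a minimal sketch; the service name and port are made up):

    ```yaml
    services:
      db:
        image: postgres:16
        ports:
          # "5432:5432" alone would be reachable from outside the host, because
          # Docker inserts its NAT rules ahead of most host firewall setups.
          # Prefixing the host address keeps the published port host-local.
          - "127.0.0.1:5432:5432"
    ```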
  • stevefan1999
I really want something that is Docker Compose but for Kubernetes. I mean a simple way of declaring resources, just like Docker Compose, but running the environment in Kubernetes so that I can test the behavior when multiple copies of the software run together. I rely on Kubernetes heavily for distributed and networked software deployment, so it would be even better if we could emulate things like latency or bursty packet loss for controlled chaos testing of reliability. I tried Skaffold, Tilt, Devspaces and Devpod/Coder v2; none of them are really simple like Compose.
  • tontony
Compose is great, but a couple of things always created friction for me when using it for non-local setups:

    * Lack of a user-friendly way of managing a Docker Compose installation on a remote host. SSH-forwarding the docker socket is an option, but needs wrappers and discipline.
    * Growing beyond one host (and not switching to something like Kubernetes) would normally mean migrating to Swarm, which is its own can of worms.
    * Boilerplate needed to expose your services with TLS.

    Uncloud [1] fixed all those issues for me and is (mostly) Compose-compatible.

    [1] https://github.com/psviderski/uncloud/
  • faangguyindia
I am using systemd + go binary deploys. Running 10+ years in production. Meanwhile docker-based setups fail every now and then. And kubernetes? Well, forget about it.
  • raphinou
I'm very happy using docker swarm on a single host with traefik as reverse proxy, using the setup described here: https://dockerswarm.rocks/

    Super easy deployment of additional apps, defined completely in one file (incl. setup on host, backups, reverse proxy config, etc). Never found a reason to migrate away. Swarm was already considered dead when I started using it in 2022 [1], but the investment was so low and the benefits so big that it was the right choice for me. I think a lot of people are replicating swarm features with compose, losing a lot of time. But hey, to each their own choice!

    1: https://www.yvesdennels.com/posts/docker-swarm-in-2022/
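    A single-host swarm setup like the one linked above boils down to a stack file along these lines (a rough sketch, not the dockerswarm.rocks config itself; image, hostname, resolver name and port are illustrative):

    ```yaml
    # Deployed with: docker stack deploy -c app.yml app
    services:
      web:
        image: myorg/web:1.4.2        # hypothetical image
        networks: [traefik-public]
        deploy:
          labels:
            - traefik.enable=true
            - traefik.http.routers.web.rule=Host(`app.example.com`)
            - traefik.http.routers.web.tls.certresolver=le
            - traefik.http.services.web.loadbalancer.server.port=8000

    networks:
      traefik-public:
        external: true   # created once, shared by traefik and all apps
    ```

    The appeal is that the reverse-proxy wiring and TLS live entirely in labels on each app, so adding another app is just another file of the same shape.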
  • kjuulh
I am using docker-compose everywhere. I really enjoy using it. I have a single thing that is annoying for normal production deployments: it isn't super easy to do a rolling deployment. I just need two replicas for zero-downtime deployment, and I don't really want docker swarm. I think it is the networking which breaks at that point, and you have to have a more involved setup - and at that point I'd just use kubernetes, as I know how that works. Could I survive with 10 seconds of downtime? Probably, but I'd really like it if I could avoid it.
  • jstanley
> How Do You Handle Deployments?

    This section misses the one thing I was interested in: how do you avoid downtime in a deployment? I like to write web applications with Perl and Mojolicious, and a deployment is just "hypnotoad app"; then hypnotoad gracefully starts up new worker processes to handle new requests and lets the old ones exit once they've finished handling their in-flight requests. When I switched to Docker I found that there was no good way to handle this.
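    One common workaround for this zero-downtime gap is a blue/green toggle in front of a reverse proxy. A minimal sketch, assuming two otherwise identical compose services named app_blue and app_green and a /health endpoint on port 8080 (all of these names are assumptions, not anything from the comment or article):

    ```shell
    #!/bin/sh
    # Blue/green deploy sketch for docker compose: bring up the idle color,
    # wait until it is healthy, switch the proxy, then stop the old color.

    other_color() {
        # Given the live color, return the idle one.
        if [ "$1" = "blue" ]; then echo "green"; else echo "blue"; fi
    }

    deploy() {
        live="$1"
        next=$(other_color "$live")
        docker compose up -d --no-deps "app_$next" || return 1
        # Poll the new color's health endpoint before taking traffic.
        until docker compose exec "app_$next" \
              wget -qO- http://localhost:8080/health >/dev/null 2>&1; do
            sleep 1
        done
        # ...repoint the reverse proxy at app_$next here...
        docker compose stop "app_$live"
    }
    ```

    The same idea is what swarm's rolling updates (or a k8s Deployment) give you out of the box; this just does it by hand for exactly two replicas.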
  • jackconsidine
> Every docker compose pull keeps the previous image on disk. Every container with the default json-file log driver writes unbounded JSON to /var/lib/docker/containers/<id>/<id>-json.log. On a busy host this is one of the most common reasons for an outage: the disk fills and Docker stops being able to write anything

    I ran docker compose in development a lot. Just an easy way to turn on/off 5 different services at once for a project. Over time this was filling up my machine's storage (like 1 TB). Every few months I needed to run docker system prune and see 600 GB free up.
  • meander_water
    Surprised they didn't mention docker compose secrets - https://docs.docker.com/reference/compose-file/secrets/
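    For reference, the secrets mechanism linked above looks like this in a compose file (a sketch; the file path and names are made up). Compose mounts each secret at /run/secrets/<name> inside the container, which keeps credentials out of environment variables and the image:

    ```yaml
    services:
      db:
        image: postgres:16
        environment:
          # The official postgres image reads the password from this file.
          POSTGRES_PASSWORD_FILE: /run/secrets/db_password
        secrets:
          - db_password

    secrets:
      db_password:
        file: ./secrets/db_password.txt   # mounted at /run/secrets/db_password
    ```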
  • perarneng
If you love docker compose then you would love k3s. A single server with k3s is basically docker compose plus the possibility to use helm to install all kinds of open source projects, such as monitoring, and it just works.
  • Sarky
I prefer Portainer to manage my docker composes. It is simple and can do it all instead of using the CLI. Added benefit if you have multiple hosts and want to manage them from one place. And you can extend the whole setup with git for version control.
  • jpalomaki
Kubernetes sounds like overkill, but I've been running microk8s for a few standalone servers. This feels like a pretty good match when working with agents. Codex can also manage the cluster over ssh: schedule new pods, check statuses, logs, etc.
  • philipallstar
Very cool article. Wish it didn't have silly AI-isms:

    > This is the shape Distr lands on
  • rob
I do this via Dokploy on a hosted Linode VPS and absolutely love it. Super easy to set up and maintain for tons of little side projects that don't require tons of resources.

    Seems like an ad for whatever "Distr" is, though; I haven't run into any of these issues with Dokploy and everything's been running fine for months.
  • chuckadams
    Sure, it's stable enough, just keep in mind you won't get any autoscaling (or manual for that matter). Swarm is still supported by a third party, but that party has been loudly signaling that they intend to kill it off this year or next. Kubernetes isn't too big a leap, but damn are all those yaml manifests annoying to maintain. I usually just copy and tweak them from another project.
  • TheCapeGreek
Somewhat adjacent in how I look at using Docker at all in prod, here's what I always wonder: is using Docker/Compose "just" as the layer for installing & managing the runtime environment and services correct? Especially for languages like PHP?

    I.e. am I holding it wrong if I run my "build" processes (npm, composer, etc.) on the server at deploy time, same as I would without containers? In that sense Docker Compose becomes more like Ansible for me - the tool I use to build the environment, not the entire app. For the purpose of my question, let's assume I'm building normal CRUD services that can go a little tall or a little wide on servers without caring about hyperscale.
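    The usual alternative to building on the server at deploy time is to bake the build into the image itself with a multi-stage Dockerfile, so the runtime container ships only artifacts. A sketch for a PHP app (base images and paths are illustrative, not a recommendation from the thread):

    ```dockerfile
    # Stage 1: run composer at image build time, not on the server.
    FROM composer:2 AS vendor
    WORKDIR /app
    COPY composer.json composer.lock ./
    RUN composer install --no-dev --no-scripts

    # Stage 2: the runtime image contains no build tooling at all.
    FROM php:8.3-fpm-alpine
    WORKDIR /var/www/html
    COPY --from=vendor /app/vendor ./vendor
    COPY . .
    ```

    With this shape, the compose file only pins and runs the image; the environment is reproduced by rebuilding, not by configuring the host.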
  • marginalia_nu
    One big thing I think is whether you want some sort of non-trivial network configuration, such as multiple external IPs via ipvlan. That's technically possible off docker, but not in a responsible way as anything in the ipvlan will be accessible to the public internet. Overall the implementations for this are very janky and occasionally enter tilted states that are close to impossible to recover from short of a restart of the docker daemon.
  • INTPenis
    Sure why not, it just never fits into my model of how I design infrastructure.Docker compose assumes all your services can reach each other over docker, which I find horribly insecure.I separate all my services by user account at least, maybe even by VM, and I run them all in rootless podman containers. So it just doesn't fit my style, but I'm sure it works fine.
  • Havoc
    I really like developing against compose because it's light but gives you that escape hatch of translating to k8s if later circumstances call for it.Very few separate ecosystem transfers are quite that frictionless.
  • fabian2k
    My experience with docker-compose is a bit outdated, but my impression some years ago was that it was too sensitive and fragile. I encountered bugs or incompatibilities that broke the docker-compose setup often enough to be forced to pin the specific docker and docker-compose versions.And the error handling was terrible. Most of these problems resulted in a Python stack trace in some docker-compose internals instead of a readable error message. Googling the stack trace usually lead to a description of the actual problem, but that's really not something that inspires confidence.
  • TheChaplain
    TIL about limiting logs. Very useful, I had no idea.
  • tacker2000
Yes, of course. I'm running a few production projects with it. Granted, it's B2B SaaS with not many users, maybe 100 concurrent. 80% of workloads don't need the complexity of Kubernetes and run fine with compose.
  • hmontazeri
A couple of months ago I published the way I use docker compose in production with mushak.sh, and it's really convenient to deploy this way.
  • xenophonf
    Yes.It's nice to get an easy question every once in a while.
  • wewewedxfgdf
    Use Nspawn. It's on every machine.
  • dzonga
Well written guide. Even their follow-up - Docker Compose vs Kubernetes. Docker compose for me has been great - no complexity.
  • DeathArrow
I am doing just this. Running docker compose on a server. When there are too many microservices, we will move them to managed Kubernetes on a cloud platform, or Nomad if any cloud platform offers it.
  • _nhh
    Yes. It's perfectly fine.
  • ksk23
    Yes and no :)
  • ViewTrick1002
Personally I have moved to k3s, although only after learning a bit too deeply how k8s operates while writing custom controllers at the day job. Docker/containers are great, especially for local development. But I feel the docker compose model quickly becomes a lot of messy, brittle squeeze for little gain when multiple containers need to integrate. Better, then, to just take the plunge for the "real deal" and set up a non-HA k8s/k3s cluster with the interactions between the workloads clearly specified.

    In other words, I care more about the interactions being declaratively spelled out than the "scale to the moon" HA, auto-scaling, replicas or whatever people get sold on. And LLMs make this even easier. If you love reviewing yaml manifests....
  • pm90
    No
  • kurtis_reed
    No
  • mdrzn
    AI article with 27 occurrences of dashes —
  • rho4
    I really liked the specific actionable steps in the TLDR.
  • justsomehnguy
> if you close the operational gaps it leaves: cleanup, healing, image pinning, socket security, and updates.

    I.e. you need a sysadmin. Oops, you fired them all 10 years ago when the agile devopsing became the best thing after the pumpkin latte.