
Comments (117)

  • miyuru
    > We cannot issue an IPv4 address to each machine without blowing out the cost of the subscription. We cannot use IPv6-only as that means some of the internet cannot reach the VM over the web. That means we have to share IPv4 addresses between VMs.

    Give users the option of IPv6-only, add legacy IP as an additional cost for those who need it, and move on. Trying to keep v4 at the same cost level as v6 is not a problem we can solve. If it were, we wouldn't need v6.
  • morpheuskafka
    They are saying they want to directly SSH into a VM/container based on the web hostname it serves. But that's not how the HTTP traffic flows either. With only one routable IP for the host, all traffic on a port shared by VMs has to go to a server on the host first (unless you route based on port or source IP with iptables, but that is not hostname based).

    The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which then reads it and proxies it to the correct VM. The client can't ever send TCP packets directly to the VM, HTTP or otherwise. That doesn't just magically happen because HTTP has a Host header; it only happens because nginx is on the host.

    What they want is a reverse proxy for SSH, and doesn't SSH already have that via jump/bastion hosts? I feel like this could be implemented with a shell alias, so that:

    ssh user@vm1.box1.tld

    becomes:

    ssh -J jumpusr@box1.tld user@vm1

    And just make jumpusr have no host permissions and a shell set to only allow ssh.
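    The rewrite described above can be sketched as a small shell function. Hostnames and the restricted jump user are illustrative, and this dry-run version echoes the ssh command it would run instead of executing it:

    ```shell
    # Sketch of the alias idea from the comment above. Hostnames and the
    # restricted jump user "jumpusr" are illustrative; this dry-run version
    # prints the ssh command it would run instead of executing it.
    vssh() {
        target="$1"            # e.g. user@vm1.box1.tld
        user="${target%%@*}"   # -> user
        fqdn="${target#*@}"    # -> vm1.box1.tld
        vm="${fqdn%%.*}"       # -> vm1      (the VM behind the box)
        box="${fqdn#*.}"       # -> box1.tld (the host with the public IP)
        echo ssh -J "jumpusr@$box" "$user@$vm"
    }
    vssh user@vm1.box1.tld   # -> ssh -J jumpusr@box1.tld user@vm1
    ```

    Dropping the `echo` makes it real; the parsing assumes one level of VM label in front of the box's domain.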
  • dlenski
    SSH is an incredibly versatile and useful tool, but many things about the protocol are poorly designed, including its essentially made-up-as-you-go-along wire formats for authentication negotiation, key exchange, etc.

    In 2024-2025, I did a survey of millions of public keys on the Internet, gathered from SSH servers and users in addition to TLS hosts, and discovered—among other problems—that it's incredibly easy to misuse SSH keys, in large part because they're stored "bare" rather than encapsulated in a certificate format that can provide some guidance as to how they should be used and for what purposes they should be trusted: https://cryptographycaffe.sandboxaq.com/posts/survey-public-....
  • c45y
    I would love it if more systems just understood SRV records: hostname.xyz = 10.1.1.1:2222.

    So far it feels like only LDAP really makes use of them, at least with the tech I interact with.
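    For reference, an SRV record advertising SSH on a non-standard port might look like the illustrative zone fragment below (names and addresses are examples; note that stock OpenSSH clients do not consult SRV records, so this only helps software that opts in):

    ```
    ; _service._proto.name   TTL  class type priority weight port target
    _ssh._tcp.hostname.xyz.  3600 IN    SRV  0        5      2222 host.hostname.xyz.
    host.hostname.xyz.       3600 IN    A    10.1.1.1
    ```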
  • krautsauer
    SSH waits for the server key before it presents the client keys, right? Does this mean that different VMs from different users have the same key? (Or rather, all VMs have the same key? A quick look shows s00{1,2,3}.exe.xyz all having the same key.) So this is full MitM?
  • thaumaturgy
    Yeah, I ran into this problem too. I tried a few different hacky solutions and then settled on using port knocking to sort inbound ssh connections into their intended destinations. Works great.

    I have an architecture with a single IP hosting multiple LXC containers. I wanted users to be able to ssh into their containers as you would for any other environment. There's an option in sshd that allows you to run a script during a connection request, so you can almost juggle connections according to the username -- if I remember right, it's been several years since I tried that -- but it's terribly fragile, tends to not pass TTYs properly, and basically everything hates it.

    But: set up knockd, generate a random knock sequence for each individual user, automatically update your knockd config with that, and each knock sequence then (temporarily) adds a NAT rule that connects the user to their destination container.

    When adding ssh users, I also provide them with a client config file that includes the ProxyCommand incantation that makes it work on their end.

    Been using this for a few years and no problems so far.
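    The per-user knockd setup described above might look something like this illustrative /etc/knockd.conf fragment (the knock sequence, container address, and user name are all examples):

    ```
    [user-alice]
        sequence      = 7001,8002,9003
        seq_timeout   = 10
        # On a correct knock, NAT this client's ssh traffic to alice's container.
        start_command = iptables -t nat -A PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to-destination 10.0.3.11:22
        cmd_timeout   = 30
        stop_command  = iptables -t nat -D PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to-destination 10.0.3.11:22
    ```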
  • dspillett
    The workaround I use for my own stuff is to have a single jump host that listens on the public IPv4 address and from there connect to the others. I can still just ssh username@namedhost (which could be username@www.websitehostedonthevm.tld, though I usually give short aliases in .ssh/config) without extra command-line options, with the one-time config of adding a host entry in .ssh/config listing the required jump host and internal IP address. Connecting this way (rather than alternatives like manual multi-hop) means all my private keys stay local rather than needing to be on the jump host, without needing to muck around with a key agent.

    I even do this despite having a small range of routable IPv4s pointing at home, so I don't really need to most of the time. As an obscurity measure the jump/bastion host can only be contacted by certain external hosts too, though this does still leave my laptop as a potential single point of security failure (and of course adds latency), and anyone or any bot trying to get in needs to jump through a few hoops to do so.
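    The one-time client config described above might look like this illustrative ~/.ssh/config entry (the host alias, internal address, and bastion name are examples):

    ```
    Host namedhost
        HostName  10.0.0.5                    # internal address of the target VM
        User      username
        ProxyJump username@bastion.example.tld
    ```

    With this in place, a plain `ssh namedhost` transparently hops through the bastion, and private keys never leave the local machine.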
  • elric
    Two options I use:

    1. Client side: ProxyJump, by far the easiest.
    2. Server side: use ForceCommand, either from within sshd_config or .ssh/authorized_keys, based on username or group, and forward the connection that way. I wrote a blogpost about this back in 2012 and I assume this still mostly works, but it probably has some escaping issues that need to be addressed: https://blog.melnib.one/2012/06/12/ssh-gateway-shenanigans/
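    A minimal sketch of the server-side ForceCommand idea (not the exact scheme from the linked post; the user name and internal address are illustrative):

    ```
    # sshd_config on the gateway: this user is forwarded straight to
    # their internal VM instead of getting a shell on the gateway.
    Match User alice
        ForceCommand ssh -q alice@10.0.3.11
    ```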
  • otterley
    This is a clever trick, but I can’t help but wonder where it breaks. There seems to be an invariant that the number of backends a public key is mapped to cannot exceed the number of proxy IPs available. The scheme probably works fine if most people are only using a small number of instances, though. I assume this is in fact the case.

    Another thing that just crossed my mind: the proxy IP cannot be reassigned without the client popping up a warning. That may alarm security-conscious users and impact usability.
  • binarin
    In kinda the same situation, I was using the username for host routing. The real user was determined by the principal in the SSH certificate, so the proxy didn't even need to know the concrete certificates for users; it was even easier than keeping track of user SSH keys.

    Certificate signing was done by a separate SSH service, which you connected to with SSH agent forwarding enabled, passed a 2FA challenge, and got a signed cert injected into your agent.
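    The signing step in that flow is standard OpenSSH certificate machinery. A minimal illustration (key names and the principal "alice" are examples; in the setup described above, the CA would live on the separate signing service):

    ```shell
    # Minimal illustration of OpenSSH certificate signing as described above.
    # Key file names and the principal "alice" are examples.
    cd "$(mktemp -d)"
    ssh-keygen -q -t ed25519 -f ca   -N '' -C 'signing CA'
    ssh-keygen -q -t ed25519 -f user -N '' -C 'user key'
    # Sign the user key, valid 1 day; the principal is what a proxy can route on.
    ssh-keygen -q -s ca -I alice-cert -n alice -V +1d user.pub
    # Inspect the resulting certificate (written to user-cert.pub).
    ssh-keygen -L -f user-cert.pub
    ```

    A routing proxy then only needs to trust the CA key and read the principal out of the presented certificate, rather than tracking every user's public key.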
  • loktarogar
    I'm building something that has to share a pool of phone numbers for SMS between many businesses with many clients, and the architecture I had planned out looks a lot like this - a client gets assigned a phone number from the pool for all its interactions with a certain business.

    Good write-up of a tricky problem, and I'm glad to get some real-world validation of the solution I was considering.
  • ulrikrasmussen
    Wouldn't a much simpler approach be to have everyone log in to a common server which sits on a VPN with all the VMs? It introduces an extra hop, but this is a pretty minor inconvenience and can be scripted away.
  • 3r7j6qzi9jvnve
    I wonder if it's something like https://github.com/cea-hpc/sshproxy that sits in the middle (with decryption and everything) or if they could do this without setting up a session directly with the client.

    Well, we're implicitly trusting the host when running a VM anyway (most of the time), but it's something I'd want to check before buying into the service.

    EDIT: Ah, it's probably https://github.com/boldsoftware/sshpiper - will try to remember to look later.
  • niobe
    I had to reread the first paragraph several times before I understood - the author was misusing a term.

    > unexpected-behaviour.exe.dev

    That is not a URL; it's a fully qualified domain name (FQDN), often referred to as just 'hostname'.
  • gorgoiler
    Hosting DNS on the same machine as your application opens up all sorts of nice hacks. For example, you can add domain names to nf_conntrack by noticing the client resolving example.com to 10.0.0.1, then making a connection to 10.0.0.1 tcp/443. This was how I made my own “little snitch” like tool.
  • thomashabets2
    While not transparent to users, I'd just use SSH ProxyCommand like I did in https://github.com/ThomasHabets/huproxy

    Not exactly what I built it for, but it'll do the job here too, and it's able to connect to private addresses on the server side.
  • dwedge
    This is a problem I've come up against a few times. Enforcing a different key per server would also help solve it in their case, but really I just want a haproxy plugin that allows selecting a backend based on the public key
  • hamandcheese
    This would be a great use case of SSH over HTTP/3[0]. Sadly it doesn't seem to have gained traction.[0]: https://www.ietf.org/archive/id/draft-michel-ssh3-00.html
  • ksk23
    Once hooked into PAM to have a central „ssh box“ mount remote boxes' filesystems on user connect. Just need to have a lookup table: which username belongs to which customer's server(s). Ezpz.
  • Eikon
    I'm not sure I understand what this achieves compared to just assigning an IP + port per VM?
  • est
    Jump servers - they're a thing, and a good security measure.
  • jamesvzb
    surprised this isn't talked about more
  • spwa4
    True, BUT you can use ProxyCommand in your ssh config, along with wildcard matches, to make this sort of thing very practical, at the cost of a single config change.
  • TZubiri
    It's hard to think of a clearer example of the concept of Developer Experience.

    One similar example of SSH-related UX design is GitHub. We mostly take git clone git@github.com:author/repo for granted, as if it were a standard git thing that existed before. But if you ever go broke and have to implement GitHub from scratch, you'll notice the beauty in its design.
  • XorNot
    The solution to this is TLS SNI redirecting.

    You can front a TLS server on port 443 and then redirect, without decrypting the connection, based on the SNI name to your final destination host.
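    As an illustration, SNI-based passthrough routing might look like this HAProxy fragment (hostnames and backend addresses are examples; the TLS session is never terminated at the proxy):

    ```
    frontend tls_in
        mode tcp
        bind :443
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
        use_backend vm1 if { req_ssl_sni -i app.vm1.example.tld }
        default_backend fallback

    backend vm1
        mode tcp
        server vm1 10.0.3.11:443
    ```

    This works because the SNI hostname travels in cleartext in the ClientHello; plain SSH has no equivalent field, which is the root of the problem being discussed.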
  • YooLc
    Why not include the header in the username field :)

    Take a look at this repo: https://github.com/mrhaoxx/OpenNG

    It allows you to connect to multiple hosts using the same IP, for example:

    ssh alice+hostA@example.com -> hostA
    ssh alice+hostB@example.com -> hostB
  • fcpk
    I mean, it works... but it's really hacky. You have to handle username collisions (or enforce unique usernames). IPv4 should be non-free, and that'd cover the costs...
  • snvzz
    The solution is ipv6.
  • charcircuit
    You don't need SSH. Installing an SSH server on such a VM is a holdover from how UNIX servers worked. It puts you in the mindset of treating your server as a pet and doing things for a single VM instead of having proper server management in place. I would reconsider whether offering ssh is an actual requirement here, or whether users could be better served by a proper control panel to manage and monitor the VMs.