When I run
FOO=$(ssh -L 40000:localhost:40000 root@1.2.3.4 cat /foo)
I get the contents of /foo, but then it disconnects.
What I'd like to do is somehow get the content of /foo and keep the connection open, so that port 40000 is still forwarded to the same server. Is this possible?
You might ask: why not just issue two ssh connections, like this?
FOO=$(ssh root@1.2.3.4 cat /foo)
ssh -L 40000:localhost:40000 root@1.2.3.4 -f -N
In my situation, the reason I can't do this is that the IP (1.2.3.4) is a load balancer that forwards each connection to one of a number of random backends. Each time I ssh to 1.2.3.4 I get a different machine, and the contents of /foo are different on every machine. Moreover, the data I send over the forwarded port (40000) depends on the contents of /foo. If I grab the contents of /foo on machine A and then send data over port 40000 to machine B, things don't work.
Best Answer
What you are describing is known as SSH multiplexing.
I use this setup in a devops context to cache my connections to VMs.
That way I reuse the same connection for up to 30 minutes without renegotiating the entire SSH connection (and re-authenticating the user) for each new command.
It gives a huge speed boost when sending multiple commands in a row to a VM/server.
The setup is done on the client side; for a cache of 30 minutes, it can be configured in /etc/ssh/ssh_config (or per-user in ~/.ssh/config).
Note that the MaxSessions parameter, set on the server side in sshd_config, defines how many multiplexed sessions are allowed over a single network connection; the default value is 10. If you need more simultaneous cached sessions, you might want to raise it, for instance to MaxSessions 20.
For more information, see OpenSSH/Cookbook/Multiplexing
Also see Using SSH Multiplexing
Lastly, since multiplexing keeps a single TCP connection open between the client and the server, you are guaranteed to be talking to the same machine behind the load balancer for as long as the cached connection stays active.
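Applied to the original question, the same effect is available without any config file changes, using ssh's explicit control-socket options. A sketch (the socket path ~/.ssh/ctl.sock is just an example, and this of course needs a reachable server):

```shell
# Open a master connection in the background; it also sets up the port forward.
# -M: master mode, -S: control socket path, -f -N: go to background, run no command.
ssh -M -S ~/.ssh/ctl.sock -f -N -L 40000:localhost:40000 root@1.2.3.4

# This reuses the master's TCP connection, so it reaches the SAME backend:
FOO=$(ssh -S ~/.ssh/ctl.sock root@1.2.3.4 cat /foo)

# ... now send data over localhost:40000; it goes to that same machine ...

# Tear down the master when finished:
ssh -S ~/.ssh/ctl.sock -O exit root@1.2.3.4
```

Because both the `cat /foo` command and the forwarded port ride on the one multiplexed connection, the load balancer is only ever traversed once.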