What you did
You used three ssh commands:
While inside a B console you did:

ssh -4 -N -f -R 18822:localhost:22 <user>@<vps>

This commands sshd (the server) to open a remote port, vps:18822, connected to localhost (B) port 22.
While at a vps console you did:

ssh -g -f -N -L 0.0.0.0:18888:localhost:18822 <user>@localhost

This commands ssh (the client) to open port 18888, available as an external (0.0.0.0) port on the vps, that connects to internal port 18822. That opens an internet-visible port, vps:18888, that redirects traffic to 18822 which, in turn, redirects to B:22.
While at an A console (and this is the only connection in which A participates):

Connect from Client A directly to Client B at vps:18888.
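That last step could look something like this (the user name on B and the VPS address are placeholders, not values from your setup):

```shell
# Connect from A to B's sshd, which the VPS exposes on port 18888.
# <user-on-B> and <vps> are placeholders for your actual values.
ssh -p 18888 <user-on-B>@<vps>
```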
What matters is this last connection.
The security of the whole SSH setup depends on the authentication of A to B.
What it means
The SSH protocol
SSH provides a secure channel over an unsecured network by using end-to-end encryption:
End-to-end encryption (E2EE) is a system of communication where only the communicating users can read the messages. In principle, it prevents potential eavesdroppers – including telecom providers, Internet providers, and even the provider of the communication service – from being able to access the cryptographic keys needed to decrypt the conversation.
End-to-end encryption is a concept. SSH is a protocol. SSH implements end-to-end encryption. So can HTTPS, or any number of other protocols with encryption.
If the protocol is strong, and the implementation is correct, the only parties that know the encryption keys are the two authenticated (end) parties.
Not knowing the keys, and not being able to break the security of the protocol, any other party is excluded from the contents of the communication.
If, as you describe, you connect from Client A directly to Client B, i.e. you are authenticating directly to system B, then only Client A and Client B have the keys. No one else.
Q1
Case A: The VPS itself is not tampered with, but traffic and files are monitored completely.
Only the fact that a communication is taking place (day, time, endpoint IPs, etc.) and the amount of traffic (kbytes, Mbytes) could be monitored, but not the actual contents of what was communicated.
Q2
Case B: The VPS is completely compromised, filesystem content can be altered.
It doesn't matter. Even if the communication is re-routed through other sites/places, the only two parties that know the keys are A and B; that is, provided the authentication at the start of the communication was between A and B.
Optionally, check the validity of the IP to which A is connecting; then use public-key authentication (a private-public key pair that only A and B know, used for this purpose only), and you are done.
Understand that you must ensure that the public key is carried securely to system B. You cannot trust the same channel to carry the keys and then carry the encrypted traffic: there are man-in-the-middle attacks that could break the protocol.
Q3
If I now send a file from Client A to Client B over SFTP, would it be possible for the company hosting the VPS to "intercept" it and read the file's (unencrypted) content?
No. If the public keys were safely placed on both ends, there is a vanishingly small probability of that happening.
Walk the disk with the public key over to the other side to install it, and never worry again.
Comment
From your comment:
Q1
So, basically the VPS in my setup does nothing but forward the ports, and is not involved in the actual SSH connection or authentication happening from Client A to B, correct?
Kind of. Yes, the VPS should not be involved in the authentication. But it is "In-The-Middle": it receives packets from one side and delivers them (if it is working correctly) to the other side. There is an alternative, though: the VPS (or anything In-The-Middle) could choose to lie and perform a "Man-In-The-Middle attack". It could lie to Client A pretending to be Client B, and lie to Client B pretending to be Client A. That would reveal everything inside the communication to the "Man-In-The-Middle". That is why I stress the word should above.
I should also say that:
...there are no tools implementing MITM against an SSH connection authenticated using public-key method...
Password-based authentication is not the public-key method.
If you authenticate with a password, you could be subject to a Man-In-The-Middle attack. There are several other alternatives, but they are out of scope for this post.
Basically, use ssh-keygen to generate a pair of keys (let's assume on side A) and, for correct security, carry the public part on a disk to side B and install it in the authorized_keys file. Do not use the network to install the public key; that is, do not use ssh-copy-id over the network unless you really know exactly what you are doing and you are capable of verifying side B's identity. You need to be an expert to do this securely.
Q2
About the public key though, isn't it, well, public?
Yes, it's public.
The entity that generated the public-private pair could publish the public part to anyone (everyone) and lose no secrets. If anybody encrypts with its public key, only it can decrypt the message, using the matching (and secret) private key.
SSH encryption.
By the way, the SSH payload encryption is symmetric, not asymmetric (public-key). The authentication is asymmetric (either DH (Diffie-Hellman) key exchange for passwords, or RSA, DSA, Ed25519 and others for public keys); a symmetric key is then derived from that exchange and used as the communication encryption key.
Used for authentication.
But to SSH, the public key (generated with ssh-keygen) carries an additional secret: it authenticates the owner of the public key.
If you receive a public key from the internet, how do you know to whom it belongs? Do you trust whatever that public key claims to be? You should not!
That is why you should carry the public key file to the remote server (in a secure way) and install it there. After that, you could trust that (already verified) public key as a method to authenticate you to log-in to that server.
Q3
I've connected from the VPS, mostly for testing, to Client B before too, doesn't that exchange the public key already?
It exchanges one set of public keys (a set of DH-generated public keys) used for encryption, not the authentication public key generated with ssh-keygen. The key used in that communication is erased and forgotten once the communication is closed.
Well, you also accepted (and used) a key to authenticate the IP of the remote server. Ensuring that an IP is secure gets even more complex than simple(?) public-key authentication.
My impression was that the public key can be shared, but the private key or passphrase must be kept safe.
And your (general) impression is correct, but the devil is in the details ...
Whoever generated a key pair can publish the public key without any decrease in security.
Whoever receives a public key must independently confirm that the public key belongs to whom they believe it belongs.
Otherwise, the receiver of a public key could be communicating with an evil partner.
Generate your key
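A sketch of that procedure (the file names, the Ed25519 key type, and the mount path of the disk are illustrative choices, not mandated above):

```shell
# On side A: generate a key pair (Ed25519 chosen here as an example type).
# You will be prompted for a passphrase protecting the private key.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_ab -C "A-to-B key"

# Copy only ~/.ssh/id_ed25519_ab.pub to a removable disk and carry it
# to side B. Then, on side B, install it:
cat /media/disk/id_ed25519_ab.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

The private key (id_ed25519_ab, without the .pub suffix) never leaves side A.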
Best Answer
Here's a solution using OpenSSH >= 6.7 + socat:
OpenSSH >= 6.7 can use Unix domain socket forwarding
That means the reverse tunnel endpoint will be a UNIX listening socket instead of a traditional TCP listening socket. You can then more easily manage the flotilla of RPIs with an easy naming scheme: the socket's name will be the RPI's chosen (and fixed) name, like OfficeDevice1991. It could even be a unique property of the RPI, as long as it's a valid filename (since unix socket names adhere to filename conventions): for example its hostname, or the MAC address of its ethernet or wifi card...

SSH can handle unix sockets for tunnels, but not for connecting itself. It will need the help of a ProxyCommand to be able to work as a unix-socket client. socat can handle many kinds of connections, including unix sockets.

UPDATE:
There is also a specific issue to handle: the unix socket file is not deleted on clean exit, nor would it have been deleted after, for example, a crash. This requires the option StreamLocalBindUnlink=yes. I didn't initially find that, as the name perhaps implies, this option must be set on the node creating the unix socket. So in the end it's set on the client with a local forward (-L), or else on the server (in sshd_config) with a remote forward (-R). OP found it there. This solution uses a remote forward.

Configuration on VPS:
(as root) edit the sshd_config file (/etc/ssh/sshd_config). It requires this additional option:

StreamLocalBindUnlink yes

Depending on default options it might also require:

AllowStreamLocalForwarding yes
UPDATE2:

Also set in sshd_config the parameters ClientAliveInterval and ClientAliveCountMax, thus allowing a disconnect to be detected in a reasonable time, e.g. within about 10 minutes. Stale ssh connections should then be detected earlier on the VPS, and the corresponding sshd process will then exit.
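One plausible pair of values giving roughly that 10-minute detection window (these specific numbers are my choice for illustration):

```
# /etc/ssh/sshd_config on the VPS: probe the client every 60 s and
# give up after 10 unanswered probes, i.e. detect a dead client
# after about 600 s.
ClientAliveInterval 60
ClientAliveCountMax 10
```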
Usage on RPI:
In a config file this would be similar to this:
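A sketch of what that RPI-side configuration could look like (the host alias, host name, user, and socket path are assumptions; the RemoteForward target is the unix socket on the VPS, named after the RPI):

```
# ~/.ssh/config on the RPI (all names and paths are illustrative)
Host vps
    HostName vps.example.com
    User tunnel
    # Reverse tunnel: unix socket on the VPS -> this RPI's sshd on port 22
    RemoteForward /home/tunnel/OfficeDevice1991 localhost:22
    ExitOnForwardFailure yes
    ServerAliveInterval 300
    ServerAliveCountMax 3
```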
Repeating it again: the StreamLocalBindUnlink yes option, set on sshd on the VPS side, is important: without it the socket just created is not removed, even upon normal exit. This option ensures that the socket is removed if it already exists before use, thus allowing it to be reused for further reconnections. This also means one can't consider the mere presence of the socket as meaning the RPI is connected (but see later).

Now this allows to do on VPS:
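For example, something along these lines (the socket path and remote user are assumptions; the host name after @ is only a label here, since the ProxyCommand decides where the connection actually goes):

```shell
# On the VPS: reach the RPI's sshd through its unix socket,
# using socat as the unix-socket client (paths are illustrative).
ssh -o ProxyCommand='socat - UNIX-CLIENT:/home/tunnel/OfficeDevice1991' \
    pi@OfficeDevice1991
```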
As a configuration file, considering for example RPIs have all a name starting with OfficeDevice:
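Such an entry could be sketched like this (the socket directory is my assumption; %h is expanded by ssh to the host name given on the command line):

```
# ~/.ssh/config on the VPS (socket directory is illustrative)
Host OfficeDevice*
    ProxyCommand socat - UNIX-CLIENT:/home/tunnel/%h
```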
To keep the link, just use a loop

The RPI can run a loop reconnecting ssh to the VPS whenever the connection ends. For this it mustn't use the background mode (no -f). A keepalive mechanism should also be used: TCPKeepAlive (system level) or ServerAliveInterval (application level) are available. I think TCPKeepAlive is useful only on the server (the side receiving the connection), so let's rather use ServerAliveInterval. Its value (as well as ServerAliveCountMax's) should probably be adapted depending on various criteria: a firewall dropping inactive connections after a certain time, the wished recovery delay, not generating useless traffic, ... let's say 300s here.
On the OfficeDevice1991 RPI:
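The reconnecting loop described above could be sketched like this (the "vps" host alias is an assumption; it stands for a ~/.ssh/config entry carrying the RemoteForward to the unix socket, but the same options could be given inline):

```shell
#!/bin/sh
# On the RPI: keep the reverse tunnel up, reconnecting whenever it ends.
# No -f: ssh must stay in the foreground so the loop notices the exit.
while true; do
    ssh -N \
        -o ServerAliveInterval=300 \
        -o ServerAliveCountMax=3 \
        -o ExitOnForwardFailure=yes \
        vps
    sleep 10   # avoid hammering the VPS on repeated failures
done
```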
Even if the remote side didn't yet detect the previous connectivity failure, and for a little while still has the old ssh connection running, StreamLocalBindUnlink yes will anyway forcefully refresh the unix socket to the new connection.

It's already handled by 1.
There's no custom command search needed. With the right settings set in 1., just using ssh OfficeDevice1991 will connect to OfficeDevice1991.

If needed on the VPS, as the root user only, listing the listening unix sockets can show which RPIs are currently connected (of course except those that recently lost the connection before detection). It won't show the stale unix socket files, because there's no process tied to them.
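One way to do that listing (the use of ss and the socket naming scheme are my assumptions):

```shell
# On the VPS, as root: list listening unix sockets with their owning
# processes; sockets held by an sshd process are live RPI tunnels.
ss -lxp | grep OfficeDevice
```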