Your error message "Can't open display: 192.168.0.76:0.0" doesn't sound like SSH X11 forwarding is in use. Instead, programs on the remote host are trying to connect directly to C1's X server, which won't work for quite a few reasons (Xorg does not listen for TCP connections by default; your firewall may block them; the Xauth data was not sent correctly...). If X11 forwarding were in effect, $DISPLAY would point back at the same machine (localhost), and the display number would be :10 or higher.
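To illustrate the distinction, here is a small shell sketch (the helper name classify_display and the sample values are my own, purely for illustration) that tells the three cases apart by the shape of $DISPLAY:

```shell
#!/bin/sh
# Classify a DISPLAY value by its shape:
#   ":N"             -> local X server via UNIX socket
#   "localhost:10+"  -> typical of SSH X11 forwarding (display numbers start at 10)
#   "host:N"         -> direct TCP connection to a remote X server
classify_display() {
  case "$1" in
    localhost:1[0-9]*) echo forwarded ;;
    :*)                echo local ;;
    *:*)               echo direct-tcp ;;
    *)                 echo unknown ;;
  esac
}

classify_display ":0"                # local
classify_display "localhost:10.0"    # forwarded
classify_display "192.168.0.76:0.0"  # direct-tcp
```

The last value is the shape from your error message: a direct TCP connection attempt, not a forwarded one.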
First, check the $DISPLAY value on C1, and make sure clients on C1 can connect to the X server.
Then retry the connection C1→S1 using ssh -X -vv S1 and make sure there are no error messages regarding X11 forwarding; it might be disabled on the server. (If you connect with PuTTY, Ctrl+right-click the console and select "Event Log".)
To see the server-side logs, add LogLevel DEBUG2 to /etc/ssh/sshd_config and restart sshd.
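For example (paths assume a typical Linux install; the restart command varies by init system):

```
# /etc/ssh/sshd_config
LogLevel DEBUG2
X11Forwarding yes    # must be enabled on the server for ssh -X to work

# then, e.g.:  systemctl restart sshd   (or: service ssh restart)
```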
Problem: ssh's LocalCommand is executed on the local (client) side, not on the remote as you wish. There is no RemoteCommand option (one was only added much later, in OpenSSH 7.6), but you can hack the functionality into your config file. Note that all of these options assume the remotehost:.gnupg directory already exists.
Option 1: Use two separate host specifications in your ~/.ssh/config:
Host remote
    HostName remotehost
    PermitLocalCommand yes
    LocalCommand ssh -f %r@%n-mount -p %p sshfs -p 10000 %u@localhost:%d/.gnupg .gnupg

Host remote-mount
    HostName remotehost
    ForwardAgent yes
    RemoteForward 10000 localhost:22
Downside: both entries need to exist for each host on which you want this mount point.
Option 2: Combine ssh options and port forwarding into LocalCommand:
Host remote
    HostName remotehost
    PermitLocalCommand yes
    LocalCommand ssh -f %r@%h -o RemoteForward="10000 localhost:22" -o ForwardAgent=yes -p %p sshfs -p 10000 %u@localhost:%d/.gnupg .gnupg
Note the subtle difference between the two LocalCommand lines: the first example uses %n where the second uses %h. This will work, but it makes one huge ASSUMPTION: you NEVER ssh to a host by its true name, only via "short names" that exist in your .ssh/config file; otherwise you'll end up with an infinite loop of ssh connections trying to execute your LocalCommand.
Option 3: Use SSH multiplexing to set up only one connection to the remote:
Host remote
    HostName remotehost
    PermitLocalCommand yes
    LocalCommand ssh -f %r@%h -o RemoteForward="10000 localhost:22" -o ForwardAgent=yes -p %p sshfs -p 10000 %u@localhost:%d/.gnupg .gnupg
    ControlMaster auto
    ControlPersist 30m
    ControlPath ~/.ssh/controlmasters/%r@%h:%p
I think that's the only winning solution: it can even work in Host * rules, and it doesn't suffer from the downsides above. It also solves the repeated-mount issue, since second ssh sessions to the same host reuse the master connection and will NOT attempt to remount the same directory via sshfs.
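A rough usage sketch, assuming the Host remote entry above (the ssh invocations are shown as comments since they require a live remote host):

```shell
# The ControlPath directory must exist before the first connection:
mkdir -p ~/.ssh/controlmasters

# ssh remote            # first session: becomes the control master and runs LocalCommand
# ssh remote            # later sessions: reuse the master; LocalCommand is not re-run
# ssh -O check remote   # ask the master whether it is still alive
# ssh -O exit remote    # tear down the shared connection
```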
Caveat: One final issue I've not bothered to resolve: your remote sshfs mount will persist long after you log out of the remote host. In fact, it will never unmount unless your localhost goes offline or the connection is otherwise broken. You could look at some other option to umount that sshfs mount as you log out of the remote host, perhaps using ideas such as this. Or you could play games with the LocalCommand to execute something that watches and self-unmounts after it sees some trigger event occur, but that seems fragile at best.
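As a sketch of that "watch and self-unmount" idea (entirely hypothetical: the function name, the polling interval, and using the login shell's PID as the trigger event are my own assumptions, and it is just as fragile as noted above):

```shell
#!/bin/sh
# Poll until a given process (e.g. the remote login shell, $$) exits,
# then unmount the sshfs mount point.
watch_and_umount() {
  mount_point=$1
  watch_pid=$2
  while kill -0 "$watch_pid" 2>/dev/null; do
    sleep 1                      # poll until the watched process is gone
  done
  fusermount -u "$mount_point" 2>/dev/null || umount "$mount_point"
}

# Typical use on the remote host, backgrounded from a login script:
#   watch_and_umount "$HOME/.gnupg" $$ &
```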
Another option would be to wrap your ssh commands in some shell script, or perhaps use ProxyCommand to do something tricky, but I'll leave that as an exercise for the reader.
Best Answer
You have to set the DISPLAY environment variable to point at the X server you want your program to connect to.
For example xterm could be started with:
DISPLAY=:0 xterm