ssh user@host "ls --color=auto"
ls only outputs colors when it is writing to a terminal. When you specify a command for ssh to run on the remote host, ssh doesn't allocate a TTY (a terminal interface) by default. So when you run the above command, ssh doesn't allocate a terminal on the remote system, ls sees that it isn't writing to a terminal, and it doesn't output colors.
You can run ssh with the -t option to make it allocate a terminal. The following should print colors:
ssh -t user@host "ls --color=auto"
If ssh is being run non-interactively, and its own local output isn't going to a terminal, then it will ignore a single -t flag. In this case, you can specify -t more than once to force ssh to allocate a TTY on the remote system:
ssh -tt user@host "ls --color=auto"
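If you want to verify whether a TTY actually gets allocated, the standard tty utility reports it; without -t it should say it is not writing to a terminal, and with -tt it should print a pseudo-terminal device:

ssh user@host tty      # should print: not a tty
ssh -tt user@host tty  # should print something like: /dev/pts/0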
So basically, after 40 or so hours of trial and error, I did manage to find a solution. Shadowcoder's and Gaurav's answers were helpful, but on their own they didn't solve it.
I honestly have no idea why this answer works, but it does. The problem lies somewhere in the fact that I was running a bash script that existed on one machine, via SSH, on another machine. That script was calling another script and had to exit SSH while leaving the remote script running. It sounds complicated, but it actually seemed reasonable to me. At least it sounded reasonable when I started writing it.
Let me explain things a bit further (this didn't seem important when I was writing the question).
My CI/CD software was running builds when it detected a push to the master branch. On a successful build it should automatically call the deploy.sh script in order to push changes to the staging server. Fairly standard. That script tells the remote server to pull changes, stop the server and restart it again. The server can be started in 3 modes (staging, production and development) with a single script called start.sh.
So the deployment script's main goal is to stop the server, pull changes and run the start script again.
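For context, the CI side boils down to a single SSH call that feeds the local deploy.sh to a shell on the staging server; the user and host names below are placeholders, not the actual setup:

# run the local deploy.sh on the remote machine over SSH (hypothetical names)
ssh deploy@staging.example.com 'bash -s' < deploy.sh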
So at first I tried to use nohup.
My deploy.sh script looked like this:
cd project
killall node;
# detach start.sh from the session: no stdin, discard all output, run in background
nohup ./start.sh -s < /dev/null >/dev/null 2>&1 &
exit;
Looking at my CI logs I saw that no matter what I did, the server actually ran smoothly and I could see my changes on the staging server, however this script never ended! The SSH connection remained open forever. Then strange things happened. When I closed it manually in CI, the server remained active, which was the desired behavior - but I did not want to have to intervene manually whenever a build was triggered. I did notice one peculiar thing in my CI logs: it was as if the script was waiting for ANOTHER command after exit. So in desperation I added one more exit. My script was now:
cd project
killall node;
nohup ./start.sh -s < /dev/null >/dev/null 2>&1 &
exit;
exit;
This did in fact work, but for some reason the start.sh process was killed when the SSH session exited. Back to square one.
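A quick way to check whether start.sh actually survived the SSH exit is to ask the server afterwards; this assumes pgrep is available there and user@host stands in for the real address:

ssh user@host "pgrep -af start.sh || echo 'start.sh is not running'"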
A couple of (more like ten) hours later, I noticed that the screen solution Shadowcoder proposed (although it did not work at first) might be the way to go after all.
My start.sh script was booting up my server. That included database migrations, seeding of initial data, building the UI, setting up an in-memory cache on tmpfs, and eventually calling node index.js. Since I was trying (and failing) to reproduce the issue so I could update the question with information for Shadowcoder, I tried to screen only the last command (node index.js).
That is how I arrived at the solution.
The deploy.sh script now looks like this:
./start.sh -s;
exit;
exit; # I have no idea why the hell I need 2 exits but it works
The start.sh script looks like hell, but the important part is this line:
screen -dmS server_name node index.js &
Please notice that I still had to end the command with &. It just would not work without it; I literally have no idea why.
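For what it is worth, the detached session can be inspected later with screen's usual commands (server_name being the session name used above):

screen -ls             # list running screen sessions
screen -r server_name  # reattach to the server session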
So the solution seems hackish, and I am still sure I missed something and that things do not have to be this way, but it does work. I am not sure why it works now or why it did not work before, but if this can save someone else's time - the solution is here. I would also like to know the explanation for this behavior.
Best Answer
Already answered for example here on Serverfault:
You can set up ~/.ssh/config with connection-sharing options (a sketch follows below). Then make sure you mkdir ~/.ssh/controlmasters/, and from that time your connections to machine1 will persist for 10 minutes, so you can issue more sessions or data transfers during one connection. Then it will Just Work™.
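A minimal sketch of the kind of options being described, assuming machine1 as the host and the 10-minute persistence mentioned above:

Host machine1
    ControlMaster auto
    ControlPath ~/.ssh/controlmasters/%r@%h:%p
    ControlPersist 10m

Once the first connection has established the master, a plain ssh machine1 (or scp/sftp to the same host) reuses it instead of opening a new connection.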
If you have some reason not to use the config, then you can also do it on the command line.
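Something like the following should be the equivalent one-off invocation, passing the same (assumed) options with -o:

ssh -o ControlMaster=auto \
    -o ControlPath=~/.ssh/controlmasters/%r@%h:%p \
    -o ControlPersist=10m \
    machine1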