I am writing a tutorial on setting up Ceph the hard way, shell all the way.
I am not happy with the number of ssh commands it takes to connect to a remote host as root, create a new user, then scp keys over… there must be a smarter, simpler way, especially on Ubuntu.
Here is the exact problem:
Local user FOO, who has access to a remote server as user ROOT, needs to set up keys (and possibly the user as well) for user CEPH. Repeat n times for n remote hosts.
Any clever one-liners I am missing?
Current steps:

```shell
scp -i digitalocean id_rsa.pub storage-1:/root
ssh -i digitalocean storage-1
useradd ceph
mkdir ~ceph/.ssh
cat id_rsa.pub >> ~ceph/.ssh/authorized_keys
chmod 700 ~ceph/.ssh
chmod 600 ~ceph/.ssh/authorized_keys
chown ceph:ceph ~ceph/.ssh/authorized_keys
chown ceph:ceph ~ceph/.ssh/
rm id_rsa.pub
```
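These steps can at least be driven entirely from the local side. Here is a sketch that reproduces them in one ssh session per host, streaming `id_rsa.pub` over stdin instead of a separate `scp` (the `digitalocean` key file and `storage-N` hostnames are just the examples from above; the helper name is made up):

```shell
# Sketch: run the manual steps in a single remote session, feeding the
# local public key to the remote "cat" via ssh's stdin.
setup_ceph_manual() {
  local host=$1
  ssh -i digitalocean "root@$host" '
    useradd ceph &&
    mkdir -p ~ceph/.ssh &&
    cat > ~ceph/.ssh/authorized_keys &&
    chmod 700 ~ceph/.ssh &&
    chmod 600 ~ceph/.ssh/authorized_keys &&
    chown -R ceph:ceph ~ceph/.ssh
  ' < id_rsa.pub
}

# Repeat for n hosts:
# for host in storage-1 storage-2 storage-3; do setup_ceph_manual "$host"; done
```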
Best Answer
Part of your problem lies in the creation of `.ssh`. What I'd do is use `ssh-keygen`, which will create it if it doesn't exist and set its permissions properly (and, of course, create a key pair for the user).

Notes:

- `adduser` instead of `useradd`: it creates a skeleton home directory, from `/etc/skel`. `--gecos ""` and `--disabled-password` are used to avoid prompting; if you don't mind prompts for name and password, you can skip these options.
- `ssh-keygen` can create `.ssh` with the right permissions. `-N ""` and `-f ~ceph/.ssh/id_rsa` are used to avoid prompts. You can skip these if you don't mind prompts for the key location (for which the default is fine) and if you wish to set a passphrase.
- Neither `.ssh` nor `.ssh/authorized_keys` needs to have `700` as its mode. As long as only the owner can write to them, it's fine (`755` for `.ssh` and `644` for `.ssh/authorized_keys` is just fine).
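Putting those notes together, the whole per-host setup collapses into one remote command. A sketch (hostnames and the `digitalocean` key file are the question's examples; running `ssh-keygen` and `tee` as the ceph user via `sudo` is my addition, so the generated files end up owned by ceph):

```shell
# Sketch: create the user, generate its key pair, and install the local
# public key, all in a single ssh session per host.
provision_ceph() {
  local host=$1
  ssh -i digitalocean "root@$host" '
    adduser --gecos "" --disabled-password ceph &&
    sudo -u ceph ssh-keygen -N "" -f ~ceph/.ssh/id_rsa &&
    sudo -u ceph tee -a ~ceph/.ssh/authorized_keys > /dev/null
  ' < id_rsa.pub
  # ssh-keygen creates ~ceph/.ssh with the right permissions, and tee
  # reads the local public key from ssh's forwarded stdin.
}

# for host in storage-1 storage-2 storage-3; do provision_ceph "$host"; done
```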