Why root over SSH is bad
There are a lot of bots out there which try to log in to your computer over SSH.
These bots work in the following way: they execute something like ssh root@$IP and then try standard passwords like "root" or "password123". They keep doing this for as long as they can, until they find the right password.
On a server that is reachable from the whole internet you will see a lot of such entries in your log files; it can go up to 20 per minute or more.
If the attackers get lucky (or have enough time) and find a password, they have root access, and that means you are in trouble.
But when you disallow root from logging in over SSH, the bot first needs to guess a user name and then the matching password.
So let's say the list of plausible passwords has N entries and the list of plausible users has M entries. The bot then has a set of N*M combinations to test, which makes it a little harder for the bot compared to the root case, where the set has only N entries. For example, with 1,000 plausible passwords and 50 plausible user names, the bot faces 50,000 combinations instead of 1,000.
Some people will say that this additional M isn't a real gain in security, and I agree that it is only a small enhancement. But I think of it more like those little padlocks which are not secure in themselves, yet still keep a lot of people from walking right in. This of course only holds if your machine has no other standard user names, like tor or apache.
The better reason not to allow root logins is that root can do a lot more damage on the machine than a standard user can. So if by dumb luck they find your password, the whole system is lost, while with a standard user account the attacker could only manipulate the files of that user (which is still very bad).
In the comments it was mentioned that a normal user could have the right to use sudo, and that if this user's password were guessed the system would be totally lost too.
In summary, I would say that it doesn't matter which user's password an attacker gets. Once they guess one password, you can't trust the system anymore. An attacker could use the rights of that user to execute commands with sudo, or could exploit a weakness in your system and gain root privileges. If an attacker has had access to your system, you can't trust it anymore.
The thing to remember here is that every user in your system that is allowed to log in via SSH is an additional weakness.
By disabling root you remove one obvious weakness.
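If you want to disable root logins, here is a minimal sketch for a stock OpenSSH server; the config path and the service name may differ on your distribution:

# in /etc/ssh/sshd_config set:
PermitRootLogin no

$ sudo systemctl reload sshd   # on Debian/Ubuntu the unit is named ssh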
Why passwords over SSH are bad
The reason to disable passwords is really simple.
- Users choose bad passwords!
The whole idea of trying passwords only works when the passwords are guessable.
So when a user has the password "pw123", your system becomes insecure.
Another problem with passwords chosen by people is that they are never truly random, because a truly random password would be hard to remember.
Users also tend to reuse their passwords, using the same one to log into Facebook or their Gmail account and for your server. So when a hacker gets hold of this user's Facebook password, he could get into your server as well; the user could easily lose it through phishing, or the Facebook servers might get hacked.
But when you use a certificate to log in, the user doesn't choose the password. The certificate is based on a very long random string, from 1024 bits up to 4096 bits (roughly a 128 to 512 character password). Additionally, this certificate is only used to log into your server and is not shared with any outside services.
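A minimal sketch of setting this up with OpenSSH (your-server is a placeholder; only turn off password logins after you have confirmed that the key works):

$ ssh-keygen -t rsa -b 4096      # generate a 4096-bit key pair
$ ssh-copy-id user@your-server   # install the public key on the server

# then, on the server, set in /etc/ssh/sshd_config:
PasswordAuthentication no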
Monitoring root access
A comment from @Philip Couling which really should have been an answer:
There's an administrative reason for disabling root. On commercial servers you always want to control access by person. root is never a person. Even if you allow some users to have root access, you should force them to login via their own user and then su - or sudo -i so that their actual login can be recorded. This makes revoking all access to an individual much simpler so that even if they have the root password they can't do anything with it. – Philip Couling
I would also add that it allows the team to enforce the principle of least privilege with a proper sudo configuration (though writing one sounds easier than it is). This lets the team hand out non-critical privileges more freely, without giving away the keys to the castle.
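To make that concrete, here is a hedged sketch of a least-privilege sudo rule; the group name webteam and the allowed command are invented for illustration, and such files should always be edited with visudo:

# /etc/sudoers.d/webteam (hypothetical; edit with: sudo visudo -f /etc/sudoers.d/webteam)
# members of the webteam group may restart the web server, and nothing else
%webteam ALL=(root) /usr/bin/systemctl restart apache2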
Links
http://bsdly.blogspot.de/2013/10/the-hail-mary-cloud-and-lessons-learned.html
This article comes from the comments and I wanted to give it a more prominent position, since it goes a little deeper into the matter of botnets that try to log in via SSH: how they do it, what the log files look like, and what one can do to stop them. It was written by Peter Hansteen.
There are two possible solutions to your problem, each addressing a slightly different scenario.
The first one would be using umask. The umask is a value which tells the kernel which access bits to clear on newly created files. This primarily affects the open(2) and creat(2) system calls; you would still be able to set access bits »forbidden« by the umask value through an explicit chmod(2) call on the freshly created file. Thus, if you set the umask to 0, all access bits requested by the program creating the file are set. If you set it to 0777, all bits are cleared.
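To make the arithmetic concrete: the kernel computes the effective mode as the requested mode ANDed with the complement of the umask. With the usual requested mode of 0666 for new files and a umask of 027, a file ends up with 0666 & ~027 = 0640 (rw-r-----); a directory requested with 0777 ends up with 0750 (rwxr-x---).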
Most of the time, the default umask value is 022, meaning that write permission is cleared for group and others.
So to solve your problem, you could set the umask value to 027, which would result in every created file having write access removed for the group and all bits cleared for others:
$ umask
022
$ touch testfile.022
$ ls -l testfile.022
-rw-r--r-- 1 user user 0 May 15 18:56 testfile.022
$ umask 027
$ touch testfile.027
$ ls -l testfile.027
-rw-r----- 1 user user 0 May 15 18:57 testfile.027
To set this in every shell you start, put the appropriate umask call into one of your shell's startup files (~/.profile or ~/.bashrc being a good place to start).
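For example, a minimal sketch that appends the setting to ~/.profile (pick whichever startup file your shell actually reads):

$ echo 'umask 027' >> ~/.profile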
Another approach would be using a default Access Control List (ACL) on the directory you want those bits cleared in. This approach has two advantages compared to using umask: first, it adds more granularity, namely at the directory level, and second, it works for everyone (you won't have to alter the global umask value):
$ umask
022
$ mkdir acldir
$ cd acldir
$ getfacl .
# file: .
# owner: user
# group: user
user::rwx
group::r-x
other::r-x
$ ls -la
total 12
drwxr-xr-x 2 user user 6 May 15 19:11 .
drwxr-xr-x 83 user user 8192 May 15 19:08 ..
$ touch testfile.noacl
$ ls -l testfile.noacl
-rw-r--r-- 1 user user 0 May 15 19:14 testfile.noacl
$ setfacl -m 'user::rwx,group::r-x,other::---' .
$ setfacl -d -m 'user::rwx,group::r-x,other::---' .
$ getfacl .
# file: .
# owner: user
# group: user
user::rwx
group::r-x
other::---
default:user::rwx
default:group::r-x
default:other::---
$ touch testfile.acl
$ ls -l testfile.acl
-rw-r----- 1 user user 0 May 15 19:16 testfile.acl
As you can see, with the default ACL in place, testfile.acl has the access bits for others cleared, even with the umask value set to 022.
For a deeper understanding of ACLs, have a look at the acl(5) man page.
Edit: If you want to prevent the file's owner from doing an explicit chmod(2) on the file, you would have to bring out the heavy artillery; I think it's not possible in a standards-compliant way (in terms of POSIX). There are several security frameworks out there which might allow you to intercept and filter system calls, but they need additional configuration effort. Some of them are:
- SELinux (I haven't worked with this yet, but it's widely available; Red Hat uses it by default, for example)
- RSBAC (heavy configuration effort, but it gives you really fine-grained control over what specific users may and may not do)
- AppArmor (I think Ubuntu uses this)
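If you're unsure whether one of these frameworks is already active on your machine, a quick sketch for checking (each command only exists where the respective framework is installed):

$ getenforce       # SELinux: prints Enforcing, Permissive, or Disabled
$ sudo aa-status   # AppArmor: lists the loaded profiles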
Best Answer
This is not fundamentally any different from the recommendation to prevent other users from reading any other user's home directory.
If the default is world readable, there will be a window of opportunity when you are saving a new file which you intend to keep private. There is always a chance that somebody could copy it before you can chmod go-r it.
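One way to close that window is to create the file with restrictive permissions from the start, for example with a strict umask set in a subshell so your default umask stays untouched (a sketch; the file name is made up):

$ (umask 077; touch secret.txt)
$ ls -l secret.txt
-rw------- 1 user user 0 May 15 19:20 secret.txt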