I want to run small untrusted programs, but block them from accessing any files outside their own folder, from using the network, and from everything else they don't really need. What is the simplest way to achieve this?
Ubuntu – How to sandbox applications
Related Solutions
The specific attack you've expressed concern about is:
> often an attacker will just fool a gullible user into running an executable by downloading and clicking.
At least in the common case where the file is downloaded in a web browser, this should already be prevented in Ubuntu by the browser's adherence to the Execute-Permission Bit Required policy. The most directly relevant parts of that policy are:
> Applications, including desktops and shells, must not run executable code from files when they are both:
>
> - lacking the executable bit
> - located in a user's home directory or temporary directory.
>
> Files downloaded from a web browser, mail client, etc. must never be saved as executable.
So if a user is told to download a program in a web browser, does so, and attempts to run the file by double-clicking on it, it won't run. This applies even if the file downloaded is a shell script or even a .desktop file. (If you've ever wondered why .desktop files in your home directory have to be marked executable even though they're not really programs, that's why.)
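This behavior is easy to see from a terminal. A minimal demonstration (the file name and path are just examples):

```shell
# Simulate a downloaded script: browsers save files without the execute bit.
printf '#!/bin/sh\necho "it ran"\n' > /tmp/downloaded.sh
chmod 644 /tmp/downloaded.sh

# Direct execution is refused because the execute bit is missing:
/tmp/downloaded.sh 2>/dev/null || echo "refused: not executable"

# Only after the user explicitly marks it executable does it run:
chmod +x /tmp/downloaded.sh
/tmp/downloaded.sh
```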
It is possible for users to alter this behavior through configuration changes. Most will not, and while those who do probably shouldn't, that's not really what you have to worry about. The bigger concern is the more complex attack that I think you're already worried about, in which a malicious person (or bot) instructs the user to download a specific file, mark it executable themselves (through their file browser or with `chmod`), and then run it.
Unfortunately, restricting a user's ability to set the execute bit on a file or to execute files other than those on some whitelist wouldn't noticeably mitigate the problem. Some attacks will already work, and those that don't could be trivially modified so that they do. The fundamental issue is that the effect of running a file can be achieved even if the file doesn't have executable permissions.
This is best illustrated by example. Suppose `evil` is a file in the current directory that, if given executable permissions (`chmod +x evil`) and run (`./evil`), would do something evil. Depending on what kind of program it is, the same effect may be achieved by one of the following:

- `. ./evil` or `source ./evil` runs the commands in `evil` in the currently running shell.
- `bash ./evil` runs `evil` in `bash`.
- `python3 evil` runs `evil` in `python3`.
- `perl evil` runs `evil` in `perl`.
- ...and in general, `interpreter evil` runs `evil` in the interpreter `interpreter`.
- On most systems, `/lib64/ld-linux-x86-64.so.2 ./evil` runs the binary executable `evil`.
None of those, not even the last one, require that the file have executable permissions or even that the user be able to give the file executable permissions.
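The bypass is easy to verify with a harmless stand-in for `evil` (the path is just an example):

```shell
# A script that never receives the execute bit:
printf 'echo "ran without +x"\n' > /tmp/evil
chmod 644 /tmp/evil

# Direct execution fails...
/tmp/evil 2>/dev/null || echo "direct execution refused"

# ...but handing the file to an interpreter works anyway,
# as does sourcing it into the current shell:
sh /tmp/evil
. /tmp/evil
```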
But the malicious instructions don't even have to be that complicated. Consider this non-malicious command, which is one of the officially recommended ways to install or update NVM:
```shell
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
```
The reason that's not malicious is that NVM isn't malware, but if the URL instead pointed to someone's script that does evil when run, that command would download and run it. At no point would any file need to be given executable permissions. Downloading and running the code in a malicious file with a single command like this is, I believe, a pretty common action that attackers trick users into taking.
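A safer habit than piping a download straight into `bash` is to save the script, inspect it, and only then run it. A sketch of that pattern, with a local file standing in for the download (for real you would fetch it first with something like `wget -qO install.sh <url>`):

```shell
# A local file stands in here for the downloaded installer script:
printf 'echo "installer ran"\n' > install.sh

# Inspect it before running (e.g. open it in less), and if the project
# publishes a checksum, compare against it:
sha256sum install.sh

# Run it only once you are satisfied with what it does:
bash install.sh
```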
You might think of trying to restrict what interpreters are available for the users to run. But there isn't really a way to do this that doesn't substantially impact the ordinary tasks you presumably want users to be able to do. If you're setting up an extremely restricted environment on which nearly everything a user would think of to do on a computer is disallowed, like a kiosk that only runs a couple programs, then this might provide some measure of meaningful protection. But it doesn't sound like that's your use case.
So the approximate answer to your question is, "No." The fuller answer is that you could probably manage to prevent users from executing any files except those that you supply on a whitelist. But that's in the strict, technical sense of "execute," which is not needed to achieve the full effect of running most programs or scripts. To prevent that, you could try to make the whitelist very small, so it didn't list any interpreters except those that could be highly restricted. But even if you managed that, users couldn't do much, and if you made it so small they couldn't hurt themselves, they probably couldn't do anything. (See Thomas Ward's comment.)
If your users can hurt themselves, they can be fooled into hurting themselves.
You may be able to restrict specific programs from being used or otherwise behaving in ways that are likely to be harmful, and if you're looking at specific patterns ransomware tends to follow, you may be able to prevent some specific common cases. (See AppArmor.) That might provide some value. But it won't give you anything close to the comprehensive solution you're hoping for.
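As a sketch of what AppArmor can express for the original question, a profile along these lines confines programs under one directory, denying all network access and any file writes outside it. The profile name, attachment path, and file location are hypothetical, and the details would need adjusting to your AppArmor version:

```
# /etc/apparmor.d/sandbox.untrusted  (hypothetical path)
abi <abi/3.0>,
include <tunables/global>

profile untrusted /home/*/sandbox/* {
  include <abstractions/base>

  # No network access at all:
  deny network,

  # Read and write only inside the sandbox directory:
  owner /home/*/sandbox/** rw,
}
```

Load it with `sudo apparmor_parser -r` on the profile file; programs launched from the matched path are then confined by the kernel regardless of how they were started.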
Whatever technical measures (if any) you end up taking, your best bet is to educate users. This includes telling them not to run commands they don't understand and not to use downloaded files in situations where they wouldn't be able to explain why it's reasonably safe to do so. But it also includes things like making backups, so that if something does go wrong (due to malware or otherwise), the harm done will be as little as possible.
Best Answer
If the programs are really untrusted and you want to be sure, set up a separate box, either physically or virtually.
Further, if you are being thorough, don't put that box on the same network as your important stuff. In all of these solutions, set up a separate user with no rights, so as not to hand too many tools to a would-be attacker.
If you are bound to running it on the same box, one option is `chroot`. It is the default choice for lots of people, and for non-specific threats it might even work. But it is NOT a security boundary, and can be broken out of rather easily. I'd suggest using it as intended, i.e. not for security.

In the end you may need to set up a dedicated sandboxing mechanism, without the hassle of virtualization or separate boxes, or the still-at-risk situation of `chroot`. I doubt this is what you meant, but look at this link for some more in-depth information.
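For illustration, a minimal `chroot` jail looks like this. The paths are examples, it requires root, and it assumes a statically linked `busybox` binary is available to populate the jail; again, treat this as containment against accidents, not a security boundary:

```shell
# Build a tiny jail containing just a shell
# (a static busybox is the simplest way to populate it):
sudo mkdir -p /srv/jail/bin
sudo cp /bin/busybox /srv/jail/bin/sh   # assumes a static busybox binary

# Run a shell with / remapped to the jail; nothing outside
# /srv/jail is visible to it through the filesystem:
sudo chroot /srv/jail /bin/sh
```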