Unprivileged LXC containers are those that make use of user namespaces (userns), i.e. a kernel feature that allows a range of UIDs on the host to be mapped into a namespace inside which a user with UID 0 can exist again.
Contrary to my initial perception of unprivileged LXC containers, this does not mean that the container has to be owned by an unprivileged host user. That is only one possibility.
What is relevant is:
- that a range of subordinate UIDs and GIDs is defined for the host user (usermod [-v|-w|--add-subuids|--add-subgids])
- ... and that this range is mapped in the container configuration (lxc.id_map = ...)
So even root can own unprivileged containers, since the effective UIDs of container processes on the host will end up inside the range defined by the mapping.
However, for root you have to define the subordinate IDs first. Unlike users created via adduser, root will not have a range of subordinate IDs defined by default.
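For example, this is how root could be given a subordinate ID range on the host (the exact range is an assumption; pick one that is unused on your system):

```shell
# Allocate 300000 subordinate UIDs and GIDs to root,
# starting at 100000 (enough for the three-container example below):
usermod --add-subuids 100000-399999 root
usermod --add-subgids 100000-399999 root

# The ranges end up in these files; verify:
grep root /etc/subuid /etc/subgid
```

These commands must be run as root on the host.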
Also keep in mind that the full range you give is at your disposal, so you could have 3 containers with the following configuration lines (only UID mapping shown):
lxc.id_map = u 0 100000 100000
lxc.id_map = u 0 200000 100000
lxc.id_map = u 0 300000 100000
NB: as per a comment, recent LXC versions call this lxc.idmap instead.
assuming that root owns the subordinate UIDs between 100000 and 400000. All documentation I found suggests using 65536 subordinate IDs per container; some use 100000 to make it more human-readable, though.
In other words: You don't have to assign the same range to each container.
With over 4 billion (~ 2^32) possible subordinate IDs, you can be generous when dealing out subordinate ranges to your host users.
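For the three-container example above, root's entries in /etc/subuid and /etc/subgid would each have to cover the whole range, e.g. (start at 100000, count 300000, i.e. IDs 100000 through 399999):

```
root:100000:300000
```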
Unprivileged container owned and run by root
To rub that in again: an unprivileged LXC guest does not have to be run by an unprivileged user on the host.
Configuring your container with a subordinate UID/GID mapping like this:
lxc.id_map = u 0 100000 100000
lxc.id_map = g 0 100000 100000
where the user root on the host owns that given subordinate ID range, will allow you to confine guests even better.
However, there is one important additional advantage in such a scenario (and yes, I have verified that it works): you can auto-start your container at system startup.
Usually when scouring the web for information about LXC you will be told that it is not possible to autostart an unprivileged LXC guest. However, that is only true by default for those containers which are not in the system-wide storage for containers (usually something like /var/lib/lxc). If they are (which usually means they were created by root and are started by root), it's a whole different story.
Putting
lxc.start.auto = 1
into your container config will do the job quite nicely.
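Two related config keys and a way to inspect the autostart set, sketched below (the delay value is illustrative):

```shell
# In the container config:
#   lxc.start.auto  = 1   # start this container at boot
#   lxc.start.delay = 5   # wait 5 s before starting the next container
#   lxc.start.order = 10  # position in the autostart sequence

# On the host, list the containers affected by autostart settings:
lxc-autostart --list
```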
Getting permissions and configuration right
I struggled with this myself a bit, so I'm adding a section here.
In addition to the configuration snippet included via lxc.include, which usually goes by the name /usr/share/lxc/config/$distro.common.conf (where $distro is the name of a distro), you should check if there is also a /usr/share/lxc/config/$distro.userns.conf on your system and include that as well. E.g.:
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
Furthermore, add the subordinate ID mappings:
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
which means that the host UID 100000 is root inside the user namespace of the LXC guest.
Now make sure that the permissions are correct. If the name of your guest is stored in the environment variable $lxcguest, you'd run the following:
# Directory for the container
chown root:root $(lxc-config lxc.lxcpath)/$lxcguest
chmod ug=rwX,o=rX $(lxc-config lxc.lxcpath)/$lxcguest
# Container config
chown root:root $(lxc-config lxc.lxcpath)/$lxcguest/config
chmod u=rw,go=r $(lxc-config lxc.lxcpath)/$lxcguest/config
# Container rootfs
chown 100000:100000 $(lxc-config lxc.lxcpath)/$lxcguest/rootfs
chmod u=rwX,go=rX $(lxc-config lxc.lxcpath)/$lxcguest/rootfs
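To verify the mapping actually took effect, you can check from the host that a file created by the container's root shows up under the shifted UID. A sketch, reusing the $lxcguest variable from above (the file path is an arbitrary example):

```shell
# Start the guest and create a file inside it as container-root:
lxc-start -n "$lxcguest"
lxc-attach -n "$lxcguest" -- touch /root/proof

# From the host, the file is owned by the mapped UID (100000), not 0:
ls -ln "$(lxc-config lxc.lxcpath)/$lxcguest/rootfs/root/proof"
```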
This should allow you to run the container after your first attempt may have given some permission-related errors.
Running unprivileged containers is the safest way to run containers in a production environment. Containers get bad publicity when it comes to security, and one of the reasons is that some users have found that if a user gets root in a container, there is a possibility of gaining root on the host as well. Basically, what an unprivileged container does is mask the user ID from the host. With unprivileged containers, non-root users can create containers; a user will appear inside the container as root, but will appear on the host as, for example, user ID 100000 (whatever you map the user IDs as). I recently wrote a blog post on this based on Stephane Graber's blog series on LXC (one of the brilliant minds/lead developers of LXC and someone to definitely follow).
From my blog:
From the container:
lxc-attach -n ubuntu-unprived
root@ubuntu-unprived:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 04:48 ? 00:00:00 /sbin/init
root 157 1 0 04:48 ? 00:00:00 upstart-udev-bridge --daemon
root 189 1 0 04:48 ? 00:00:00 /lib/systemd/systemd-udevd --daemon
root 244 1 0 04:48 ? 00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid
syslog 290 1 0 04:48 ? 00:00:00 rsyslogd
root 343 1 0 04:48 tty4 00:00:00 /sbin/getty -8 38400 tty4
root 345 1 0 04:48 tty2 00:00:00 /sbin/getty -8 38400 tty2
root 346 1 0 04:48 tty3 00:00:00 /sbin/getty -8 38400 tty3
root 359 1 0 04:48 ? 00:00:00 cron
root 386 1 0 04:48 console 00:00:00 /sbin/getty -8 38400 console
root 389 1 0 04:48 tty1 00:00:00 /sbin/getty -8 38400 tty1
root 408 1 0 04:48 ? 00:00:00 upstart-socket-bridge --daemon
root 409 1 0 04:48 ? 00:00:00 upstart-file-bridge --daemon
root 431 0 0 05:06 ? 00:00:00 /bin/bash
root 434 431 0 05:06 ? 00:00:00 ps -ef
From the host:
lxc-info -Ssip --name ubuntu-unprived
State: RUNNING
PID: 3104
IP: 10.1.0.107
CPU use: 2.27 seconds
BlkIO use: 680.00 KiB
Memory use: 7.24 MiB
Link: vethJ1Y7TG
TX bytes: 7.30 KiB
RX bytes: 46.21 KiB
Total bytes: 53.51 KiB
ps -ef | grep 3104
100000 3104 3067 0 Nov11 ? 00:00:00 /sbin/init
100000 3330 3104 0 Nov11 ? 00:00:00 upstart-udev-bridge --daemon
100000 3362 3104 0 Nov11 ? 00:00:00 /lib/systemd/systemd-udevd --daemon
100000 3417 3104 0 Nov11 ? 00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
100102 3463 3104 0 Nov11 ? 00:00:00 rsyslogd
100000 3516 3104 0 Nov11 pts/8 00:00:00 /sbin/getty -8 38400 tty4
100000 3518 3104 0 Nov11 pts/6 00:00:00 /sbin/getty -8 38400 tty2
100000 3519 3104 0 Nov11 pts/7 00:00:00 /sbin/getty -8 38400 tty3
100000 3532 3104 0 Nov11 ? 00:00:00 cron
100000 3559 3104 0 Nov11 pts/9 00:00:00 /sbin/getty -8 38400 console
100000 3562 3104 0 Nov11 pts/5 00:00:00 /sbin/getty -8 38400 tty1
100000 3581 3104 0 Nov11 ? 00:00:00 upstart-socket-bridge --daemon
100000 3582 3104 0 Nov11 ? 00:00:00 upstart-file-bridge --daemon
lxc 3780 1518 0 00:10 pts/4 00:00:00 grep --color=auto 3104
As you can see, processes are running as root inside the container, but from the host they appear not as root but as UID 100000.
So to sum up: Benefits: added security and added isolation. Downside: a little confusing to wrap your head around at first, and not for the novice user.
Is this a new project, or do you have a choice? Why not use LXD instead of LXC? It is much easier to use and you get to the same place. I started out with LXC and quickly made the switch because I was interested in running unprivileged containers, which is not easy in LXC but is the default in LXD.
Take a look here to start: https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24
It's been a few months since I last installed/used it, but here are my notes on installation:
As LXD evolves quite rapidly, we recommend Ubuntu users use our PPA:
The package creates a new “lxd” group which contains all users allowed to talk to lxd over the local unix socket. All members of the “admin” and “sudoers” groups are automatically added. If your user isn’t a member of one of these groups, you’ll need to manually add your user to the “lxd” group.
Because group membership is only applied at login, you then either need to close and re-open your user session or use the “newgrp lxd” command in the shell you’re going to interact with lxd from.
newgrp lxd
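If your user is not already in one of those groups, a sketch of adding it (requires sudo; the membership applies at next login, or immediately via newgrp):

```shell
# Add the current user to the lxd group:
sudo usermod -aG lxd "$USER"

# Apply the new group membership in the current shell:
newgrp lxd
```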
https://blog.ubuntu.com/2015/03/20/installing-lxd-and-the-command-line-tool (accessed 2018/10/22)
To the best of my knowledge you can even run LXD in a virtual machine so you can give it a quick try without messing up whatever system you are working on.
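A minimal quick-try, assuming LXD is already installed (the image alias is an assumption; substitute a current release):

```shell
# One-time setup with sane defaults:
sudo lxd init --auto

# Launch a container (unprivileged is the default in LXD):
lxc launch ubuntu:18.04 test

# Inside the container you are root; on the host its processes
# run under a shifted UID, just as with unprivileged LXC:
lxc exec test -- id
lxc list
```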
Not exactly the answer to the question you asked, but I hope you find it a helpful alternative.