Since the release of 0.9, Docker no longer uses LXC by default; it ships its own execution environment, libcontainer. Your question's a bit old, but I guess my answer still applies to the version you are using.
Quick Answer: To understand the permissions of volumes, think of them as the equivalent of mount --bind Host-Dir Container-Dir. So to fulfill your requirement you can use any of the traditional methods for managing permissions; I guess ACLs are what you need.
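If the analogy helps, here is roughly what it looks like outside Docker; the paths below are just placeholders:

# A bind mount makes the same files visible at a second path;
# ownership, permission bits and ACLs are shared, not copied.
mount --bind /host/dir /container/dir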
Long Answer: As in your example, we have a container named dock with a volume /data.
docker run -tid --name dock -v /usr/container/Databases/:/data \
centos:latest /bin/bash
Inside the container, our MySQL server has been configured to use /data as its data directory, so our databases live in /data inside the container. Outside the container, on the host OS, this /data volume is mounted from /usr/container/Databases/, and we assign a normal user, bob, to take backups of the databases. From the host machine we'll configure ACLs for user bob.
useradd -u 3000 bob                                   # the backup user on the host
chmod -R o-rwx /usr/container/Databases/              # cut off "other" users entirely
setfacl -R -m u:bob:rwx /usr/container/Databases/     # grant bob access to existing files
setfacl -R -d -m u:bob:rwx /usr/container/Databases/  # default ACL so new files inherit it
To test it out, let's take a backup as user bob.
su - bob
tar -cvf container-data.tar /usr/container/Databases/
tar lists every file as it archives it, so you can see that our user was able to read them all.
Now, from inside the container, if you check with getfacl, you will notice that it shows 3000 instead of bob. This is because bob's UID is 3000 and no such user exists in the container, so getfacl simply displays the UID it reads from the filesystem metadata. If you now create a user inside the container with useradd -u 3000 bob, you will notice that getfacl shows the name bob instead of 3000.
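To see that for yourself, something along these lines should work; this is a sketch assuming your Docker version has docker exec and the acl package (for getfacl) is installed in the image:

docker exec dock getfacl /data          # ACL entries show the raw UID 3000
docker exec dock useradd -u 3000 bob    # create a matching user inside the container
docker exec dock getfacl /data          # the same entries now resolve to "bob"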
Summary: The user permissions you assign from either inside or outside the container are reflected in both environments, because they are stored as numeric UIDs. So to manage the permissions of volumes, keep the UIDs on the host machine in sync with the UIDs in the container.
OpenVPN wouldn't start with that Dockerfile because there's nothing to start it :-). Your entrypoint is sh; that's all it will run.
If you want to start two daemons inside Docker, your entrypoint needs to be a program that starts both of them. A lot of people use supervisord for this. Note that Docker is relatively opinionated software, and running multiple daemons in one container is not considered idiomatic.
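As a rough sketch of the supervisord route; the package name, file paths, and the second daemon here are assumptions, not something taken from your Dockerfile:

# supervisord.conf -- ";" starts a comment in this format
[supervisord]
; keep supervisord in the foreground, as Docker expects
nodaemon=true

[program:openvpn]
command=/usr/sbin/openvpn --config /etc/openvpn/server.conf

[program:sshd]
; just a stand-in for whatever second daemon you want to run
command=/usr/sbin/sshd -D

# Dockerfile excerpt (Debian-ish base assumed)
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
ENTRYPOINT ["/usr/bin/supervisord", "-n"]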
If this is just about debugging, there's no problem. Just don't run openvpn with --daemon or --log. It will write to stdout (allegedly, though I wouldn't be surprised to see stderr), which is great for debugging if you start it manually: you'll see all the log messages immediately in the terminal.
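For example, something like this; the image name, config path, and device/capability flags are assumptions for a typical OpenVPN setup:

docker run --rm -it --cap-add=NET_ADMIN --device /dev/net/tun my-vpn-image \
    openvpn --config /etc/openvpn/server.conf    # no --daemon, no --log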
If you set up the entrypoint and manually start the container in interactive mode, it's the same deal. If you start it as a background (detached) container, the output will be captured for docker logs. It's the same technique favored by modern init systems like systemd (and the systemd "journal" logging system).
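In practice that looks something like this (the container name is made up):

docker run -d --name vpn my-vpn-image    # detached; stdout/stderr are captured
docker logs -f vpn                       # follow the captured output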
Once you have the daemon set up how you want, you may be interested in more customized systems for capturing logs, like the other answers.
Docker has pluggable logging drivers, according to the manpage for docker logs. There's a "syslog" driver which writes to the host's syslog. The manpage says docker logs won't work with it, but I don't expect that's a problem for you.
WARNING: docker logs does work if you use the journald logging driver. However, with Debian's defaults, my assumption is that those logs would be lost on reboot, because Debian doesn't set up a persistent journal. It's not hard to change, though, if that's what you want.
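A sketch of that combination, if you go the journald route; the container name is made up, and the mkdir trick relies on journald's default Storage=auto setting:

docker run -d --log-driver=journald --name vpn my-vpn-image
mkdir -p /var/log/journal            # with Storage=auto, this enables a persistent journal
systemctl restart systemd-journald
journalctl CONTAINER_NAME=vpn        # the driver tags entries with the container name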
The other logging driver which supports the docker logs command is called "json-file". I expect that one is persistent, but you might prefer one of the other solutions.
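json-file is the default driver, and on newer Docker versions you can also cap its growth; this assumes your version supports --log-opt:

docker run -d --log-driver=json-file \
    --log-opt max-size=10m --log-opt max-file=3 my-vpn-image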
The "why" question
The point is that Docker containers don't necessarily work the same as the OS they're based on. Docker isn't OS virtualization like LXC, systemd-nspawn, or a virtual machine. Although Docker was derived from LXC, it was specifically designed for "application containers" that run a single program.
Current server distributions are designed as a combination of several cooperating programs, so you can't take a package from one and expect it to behave exactly the same way inside one of these application containers. Communication with a logging daemon is a great example. Nothing there is going to change, except that people will become more familiar with the concept of application containers, and decide whether that's what they actually want to use :). I suspect a lot of sysadmins would be more interested in a mashup of LXC (OS containers) with something like NixOS to share packages between containers; it just hasn't been written yet AFAIK. Or just a better LXC.
Actually, Docker doesn't do any virtualization itself; it's just a tool that handles images and uses LXC container virtualization to run them. I guess you're really looking for LXC and its capabilities here. LXC can do virtual networking, and MySQL can be accessed over the network; the only thing you need is to connect the building blocks together ;).
In a typical setup, each host has its own IP address and its own set of open ports, and each host can access the other hosts' TCP/IP services over the virtual network. Security is handled by the Linux kernel. One way to handle it is the good old iptables-based firewall, but there may be other approaches based on SELinux labeling.
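As a sketch of the iptables approach (the addresses and bridge subnet are made-up examples): on the host, you could allow only one container to reach the MySQL container's port 3306.

# Allow 10.0.3.11 to reach MySQL at 10.0.3.10, drop everyone else.
iptables -A FORWARD -p tcp -d 10.0.3.10 --dport 3306 -s 10.0.3.11 -j ACCEPT
iptables -A FORWARD -p tcp -d 10.0.3.10 --dport 3306 -j DROP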