Debian – How to configure logging inside a Docker container

containers, debian, docker, rsyslog, syslog

I was trying to dockerize an OpenVPN server into Debian 8.2 (yes, I know such containers already exist), but something went wrong inside the container and the server failed to start.

I decided to inspect the logs, but /var/log/syslog (where OpenVPN logs on my host machine) was missing inside the container.

I thought that rsyslog was not installed, so I added its installation to the Dockerfile before the OpenVPN installation. But this had no effect; the syslog file was still missing.

My Dockerfile is:

FROM debian:8.2
USER root
EXPOSE 53/udp
EXPOSE 1194/udp
EXPOSE 443/tcp
RUN apt-get update
RUN apt-get install -y rsyslog
RUN apt-get install -y openvpn
# ...
# Some configuration stuff
# ...
ENTRYPOINT service openvpn start && sh

The questions are:

  • Why does OpenVPN log to syslog after a default installation on my host Debian 8.2, but not inside a container? I didn't configure anything on my host machine to force OpenVPN to log to syslog; that was the default behavior.

  • How do I configure logging for the OpenVPN server running inside a Docker container?

Best Answer

OpenVPN isn't really being managed by that Dockerfile. The entrypoint runs "service openvpn start", which just fires the init script and returns, then drops into sh; sh is the only process Docker actually supervises :-). And nothing starts rsyslog at all, which is why /var/log/syslog never appears inside the container.

If you want to start two daemons inside Docker, your entrypoint needs to be a program that starts both of them. A lot of people use supervisord for this. Note that Docker is relatively opinionated software, and running multiple daemons in one container is not considered idiomatic.
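
For example, a minimal sketch of the supervisord route might look like this (the config filename, program paths, and the OpenVPN config path are assumptions, not something taken from your Dockerfile):

# supervisord.conf (hypothetical example)
[supervisord]
nodaemon=true

[program:rsyslog]
command=/usr/sbin/rsyslogd -n

[program:openvpn]
command=/usr/sbin/openvpn --config /etc/openvpn/server.conf

And in the Dockerfile:

RUN apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

With nodaemon=true, supervisord stays in the foreground as the container's main process and keeps both daemons running.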

If this is just about debugging, there's no problem. Just don't run openvpn with --daemon or --log. It will write to stdout (allegedly, though I wouldn't be surprised to see stderr). This is great for debugging if you start it manually. You'll see all the log messages immediately in the terminal.
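
As a rough sketch of that (the image name and config path are placeholders, not anything from your setup):

docker run --rm -it my-openvpn-image /bin/bash
# inside the container: run OpenVPN in the foreground, without --daemon or --log
openvpn --config /etc/openvpn/server.conf

All of its log output lands straight in your terminal.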

If you set up the entrypoint that way and start the container manually in interactive mode, it's the same deal. If you start it as a detached (background) container, the output is captured and available through docker logs. It's the same technique favored by modern init systems like systemd (and the systemd "journal" logging system).
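
A sketch of that, assuming the entrypoint runs OpenVPN in the foreground (the image and container names are placeholders):

# start the container detached; Docker captures its stdout/stderr
docker run -d --name vpn my-openvpn-image

# read (and follow) the captured output later
docker logs -f vpn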

Once you have the daemon set up how you want, you may be interested in more customized systems for capturing logs, like the ones described in the other answers.

Docker has pluggable logging drivers, according to the manpage for docker logs. There's a "syslog" driver, which is documented as writing to the host's syslog. The manpage says docker logs won't work with that driver, but I don't expect that's a problem for you.
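
Choosing the driver is a per-container option (or a daemon-wide default). A minimal sketch, with the image and container names again as placeholders:

# send the container's output to the host's syslog instead of the default log
docker run -d --name vpn --log-driver=syslog my-openvpn-image

The messages then show up in the host's /var/log/syslog, tagged with the container ID.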

WARNING: docker logs does work if you use the journald logging driver. However, with Debian's defaults my assumption is that those logs would be lost on reboot, because Debian doesn't set up a persistent journal. That's not hard to change, though, if that's what you want.
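
Roughly, the combination would look like this; the mkdir is the usual way to enable a persistent journal on Debian, since journald keeps logs on disk once /var/log/journal exists (image/container names are placeholders):

# on the host: make the systemd journal survive reboots
mkdir -p /var/log/journal
systemctl restart systemd-journald

# run the container with the journald logging driver
docker run -d --name vpn --log-driver=journald my-openvpn-image

# both of these should now show the same messages
docker logs vpn
journalctl CONTAINER_NAME=vpn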

The other logging driver which supports the docker logs command is called "json-file". I expect that's persistent, but you might prefer one of the other solutions.
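
json-file is the default driver, so normally you get it without asking for it. If you select it explicitly anyway, you can cap its size so the log can't grow without bound (the values here are arbitrary):

docker run -d --name vpn --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 my-openvpn-image
docker logs vpn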

The "why" question

The point is that Docker containers don't necessarily work the same as the OS they're based on. Docker isn't OS virtualization like LXC, systemd-nspawn, or a virtual machine. Although Docker was derived from LXC, it was specifically designed for "application containers" that run a single program.

Current server distributions are designed as a combination of several cooperating programs (an init system, a syslog daemon, and so on), so you can't take a package from one of them and expect it to behave exactly the same way inside one of these application containers.

Communication with a logging daemon is a great example. None of that is going to change, except that people will become more familiar with the concept of application containers, and with whether that's what they actually want to use :). I suspect a lot of sysadmins would be more interested in a mashup of LXC-style OS containers with something like NixOS to share packages between containers; as far as I know that just hasn't been written yet. Or just a better LXC.
