To get all errors for running services using journalctl:

$ journalctl -p 3 -xb

where -p 3 means priority "err", -x provides extra message information, and -b means since the last boot.
The problem is actually with buffering from the Flask application and not with how systemd or journald are ingesting those logs.
This can be counter-intuitive since, as you mentioned, running python3 run.py directly on the command line works and shows logs properly, and the timestamps on those logs look correct.
The former happens because Unix/Linux will typically set up stdout to be unbuffered when connected to a terminal (since it's expecting interaction with a user), but buffered when connected to a file (in the case of StandardOutput=file:...) or to a pipe (in the case where you're logging to the journal, which is the default).
The latter is because the Python/Flask logger is adding timestamps, so even though it's buffering that output, when it finally issues it into the logs, all the timestamps are there.
Some applications know this is a common pitfall and set up stdout buffering appropriately when using it for logs, but that doesn't seem to be the case with the Python/Flask setup you are using.
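You can see Python make this terminal-vs-pipe distinction directly (a quick illustration, not specific to Flask):

```shell
# On a terminal, Python sees a tty and keeps stdout line-buffered.
python3 -c 'import sys; print(sys.stdout.isatty())'
# Through a pipe -- as when journald captures the output -- it does not,
# and stdout becomes block-buffered, so lines sit in the buffer.
python3 -c 'import sys; print(sys.stdout.isatty())' | cat
```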
In Python, it's fairly easy to globally change stdout to unbuffered mode, which you can do by:

- Passing the -u flag to python3 in your command.
- Setting PYTHONUNBUFFERED=1 in your environment (which you can do in the systemd service unit with an additional Environment=PYTHONUNBUFFERED=1 line).
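As a sketch, either fix can go straight into the service unit (the unit name and paths here are hypothetical):

```ini
# myapp.service (hypothetical) -- two equivalent ways to disable buffering:
[Service]
# Option 1: pass -u to the interpreter
ExecStart=/usr/bin/python3 -u /opt/myapp/run.py
# Option 2: leave ExecStart alone and set the environment variable instead
#Environment=PYTHONUNBUFFERED=1
```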
You confirmed this worked for your specific case, so that's great!
For non-Python applications suffering from similar issues, there are command-line tools such as unbuffer and stdbuf which can often solve this same problem. Solutions are usually specific to the kind of application, which is somewhat unfortunate, but once you know buffering is the issue, searching the web or looking for other answers on Stack Exchange will usually lead you to a useful suggestion.
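For instance, stdbuf can force line-buffered stdout on an arbitrary program even when its output goes to a pipe (the command here is just an illustrative stand-in for a real service binary):

```shell
# Without stdbuf, many programs fully buffer stdout when piped; -oL forces
# line buffering, so each log line reaches the pipe (and journald) immediately.
stdbuf -oL sh -c 'echo "first log line"; echo "second log line"' | cat
```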
Best Answer
The problem that systemd has with logger is the same problem that it has with its own systemd-notify tool. The protocol is datagram-based and asynchronous, and the tools do one job. The invoker forks a process to run the tool; it fires off a datagram and forgets about it; and the process exits.

systemd server processes for the logging and readiness notification protocols want to know what service the sender belongs to: the former in order to add the right service name field to the log entry, and the latter in order to know what service is being talked about. They get the process ID of the datagram sender from Linux, and then they go to the process table to look up what control groups the process belongs to, and hence what service it belongs to.

If the sending process has done its job and immediately exited, this does not work (subject to a race condition). The process is no longer in the process table. systemd-notify notifications fail; logger information does not get marked as belonging to the relevant service. Switching to a stream protocol (e.g. with logger's --tcp option) wouldn't fix this unless the logging protocol itself were also altered so that the client awaited a response from the server before closing the stream and exiting, which it does not. RFC 5426 has no server acknowledgements sent back to clients.

So although the log information is in the journal, it isn't tagged with the service name and isn't pulled out when you query by service name. (These aren't separate logs as you think, by the way. journalctl is simply applying filters to one big log; -u is a filter.)

This is a long-standing, widely-known bug.
The systemd people describe this as a deficiency of Linux. It doesn't have proper job objects that can be used to encapsulate and track sets of processes; nor does its AF_LOCAL datagram socket mechanism transport such information. If it did, systemd could put all service processes into a job, and its logging and readiness notification servers could pull out the client-end job information whenever they receive a datagram, even if the client process had since exited.

There is an idiosyncratic protocol specific to systemd-journald, which some versions of logger even speak. No, _SYSTEMD_UNIT is a "trusted field" set at the server end, and attempts by clients to set it will be ignored; and that's a datagram-based, asynchronous protocol without acknowledgements too. It has exactly the same problem.

To reliably have log entries tagged with the right service, write to standard error. That connection is long-lived and more reliably connectable to the service unit name at the server end. Yes, you cannot specify the old facilities and priorities; that's the trade-off that you have to make.
Further reading