A pull request implementing the feature is currently under review.
In the meantime, if you control the script run by this unit, you can use the python-systemd module to send messages from your script to the journal with whatever priority and options you wish.
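For instance, a minimal sketch assuming the python-systemd package is installed (the message text and identifier here are made up; PRIORITY takes the usual syslog numeric levels):

```python
import syslog
from systemd import journal  # provided by the python-systemd package

# Send a structured message straight to the journal, bypassing stdout
# entirely. PRIORITY uses syslog levels (syslog.LOG_INFO == 6), and
# SYSLOG_IDENTIFIER controls the name shown by journalctl.
journal.send("Service started",
             PRIORITY=syslog.LOG_INFO,
             SYSLOG_IDENTIFIER="myscript")
```

Because the message goes through the journal's native protocol rather than a pipe, none of the stdout buffering issues below apply to it.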
The problem is actually with buffering in the Flask application, not with how systemd or journald ingest those logs.
This can be counter-intuitive since, as you mentioned, running python3 run.py directly on the command line works and shows logs promptly, and the timestamps on the logs look correct.
The former happens because Unix/Linux typically sets up stdout to be unbuffered when connected to a terminal (since interaction with a user is expected), but buffered when connected to a file (in the case of StandardOutput=file:...) or to a pipe (in the case where you're logging to the journal, which is the default.)
The latter is because the Python/Flask logger adds the timestamps itself, so even though the output is buffered, by the time it is finally flushed to the logs all the timestamps are already there.
Some applications know this is typically an issue and will set up buffering on stdout appropriately when using it for logs, but that doesn't seem to be the case with the particular Python/Flask setup you are using.
In Python, it's fairly easy to globally change stdout to unbuffered mode, which you can do by:
- Passing a -u flag to python3 in your command.
- Setting PYTHONUNBUFFERED=1 in your environment (which you can do in the systemd service unit with an additional Environment=PYTHONUNBUFFERED=1 line.)
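Either option can be applied directly in the unit; a sketch of the relevant [Service] lines (the ExecStart path is illustrative, not taken from your unit — either one of the two lines suffices):

```ini
[Service]
# Option 1: pass -u on the interpreter's command line.
ExecStart=/usr/bin/python3 -u /opt/myapp/run.py
# Option 2: set the environment variable instead.
Environment=PYTHONUNBUFFERED=1
```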
You confirmed this worked for your specific case, so that's great!
For non-Python applications suffering from similar issues, there are command-line tools such as unbuffer and stdbuf which can often solve the same problem.
Solutions are usually specific to the kind of application, which is somewhat unfortunate, but once you know buffering is the issue, searching the web or other Stack Exchange answers will usually lead you to a useful suggestion.
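For instance, stdbuf from GNU coreutils can force line-buffered stdout on an arbitrary child process; the command below is just a stand-in for your application:

```shell
# -oL = make the child's stdout line-buffered, so each line is
# emitted as soon as it is printed rather than held in a block buffer.
stdbuf -oL sh -c 'echo line1; echo line2'
```

This works for programs that use C stdio's default buffering; programs that configure their own buffering (as Python can) are unaffected by it.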
Best Answer
Yes. It's dependent from how much log information is generated, but eventually the boot information will scroll off the beginning of both the kernel's ring buffer and the systemd journal. It's no guide to how long it takes on anyone else's systems, but I have systems which have uptimes in the hundreds of days whose boot log data have long since scrolled off the top of the systemd journal. This is one of the disadvantages of having one giant combined log stream that everything fans into and then fans back out from again.
So take a leaf from FreeBSD and NetBSD and their derivatives. They all have services that run once, at bootstrap just after local filesystems have mounted, that simply do:
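The heart of those services is a single redirection, roughly like this (a sketch, not the verbatim BSD rc script; the path follows their /var/run convention):

```shell
# Capture the kernel ring buffer once, right after local
# filesystems are mounted, before anything scrolls off.
dmesg > /var/run/dmesg.boot
```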
Thus a snapshot of the kernel log as it was at bootstrap is available in /var/run/dmesg.boot even if it has since scrolled off the actual logs.
You simply need to write a systemd service that does the same. Use the shell for redirection, or use something like Laurent Bercot's redirfd or the nosh toolset's fdredir.
Substitute journalctl -k if you want to snapshot the systemd journal rather than just the kernel's log, and make this a Type=oneshot service. Either make it wanted by multi-user.target or make it a DefaultDependencies=no service that is wanted by basic.target. Note that it does not have to be ordered after local filesystem mounts (i.e. local-fs.target). That ordering is necessary for FreeBSD and OpenBSD because /var/run could be a disc filesystem with them. On systemd operating systems /run is an "API filesystem" that is created at bootstrap before any services.
(The approach that I personally prefer is not to have the giant central log stream in the first place. A dedicated service feeds off the kernel log feed alone and logs to a private log directory. That takes a lot longer to reach the point where last-bootstrap information scrolls off the top. And it also contains boot logs from prior boots.
However, this is a lot more complex to set up in a systemd world than a oneshot that writes a /run/dmesg.boot. It is simple in a daemontools family world, though. It's a trivial exercise in the use of tools such as fifo-listen and klog-read, or socklog. Piping the output through a log dæmon that writes to a private, reliably size-capped, auto-rotated log directory comes as standard with a daemontools/runit/s6/nosh/perp-managed service.)
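The oneshot described earlier might look roughly like this (the unit name, snapshot path, and ordering lines are illustrative; adjust to taste):

```ini
# /etc/systemd/system/dmesg-boot.service
[Unit]
Description=Snapshot of the boot-time journal
DefaultDependencies=no
After=systemd-journald.service
Before=basic.target

[Service]
Type=oneshot
# Use the shell for the redirection, as described above.
ExecStart=/bin/sh -c 'journalctl -k > /run/dmesg.boot'

[Install]
WantedBy=basic.target
```

No ordering after local-fs.target is needed, since /run is already available before any services start.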