The traditional way to log and track user CPU time is process accounting. On Linux, install the GNU accounting utilities, typically provided by a package called acct. I'm not sure how accurate it will be at keeping track of the time spent in very short-lived processes, but it'll at least list all the processes ever executed.

Run lastcomm to get a list of all the commands executed by any user and the time spent in each (rounded to ~10ms, so for short-lived processes expect to see a lot of 0.00). Run sa to display various sums and statistics. In particular, sa -m displays per-user totals. The statistics accumulated by sa run from the last rotation of the accounting logs (typically located in /var/log/account/).
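If you want to see the raw data behind those tools, the accounting log is just a sequence of fixed-size records. Below is a minimal sketch in C that dumps per-process user/system time from it; it assumes a kernel built with CONFIG_BSD_PROCESS_ACCT_V3 (so the log contains struct acct_v3 records) and the Debian-style log path, and is meant as an illustration rather than a replacement for lastcomm or sa.

/* pacct_dump.c - dump user/system CPU time from raw accounting records.
 * Assumes v3 records (CONFIG_BSD_PROCESS_ACCT_V3) and the Debian/Ubuntu
 * default log path; reading the log usually requires root.
 */
#include <linux/acct.h>
#include <stdio.h>

/* comp_t is a 13-bit mantissa with a 3-bit base-8 exponent. */
static unsigned long decode_comp_t(comp_t c)
{
    unsigned long v = c & 0x1fff;
    int e = (c >> 13) & 7;
    while (e--)
        v <<= 3;
    return v;
}

int main(void)
{
    const double ticks = 100.0;   /* ac_utime/ac_stime are in 1/100 s units */
    FILE *f = fopen("/var/log/account/pacct", "r");
    if (!f) { perror("pacct"); return 1; }

    struct acct_v3 rec;
    while (fread(&rec, sizeof rec, 1, f) == 1)
        printf("%-16.16s uid=%u user=%.2fs sys=%.2fs\n",
               rec.ac_comm, (unsigned)rec.ac_uid,
               decode_comp_t(rec.ac_utime) / ticks,
               decode_comp_t(rec.ac_stime) / ticks);
    fclose(f);
    return 0;
}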
Note that you aren't going to catch all processes by sampling at intervals, not by a long shot. You'll miss almost all short-lived processes and the last few seconds of long-running processes. Process accounting, by contrast, does list all past processes.
In /proc/$pid/stat, the user time is the time the process spends executing its own code, while the system time is the time the kernel spends working on the process's behalf (in system calls, e.g. doing I/O). Which one to count depends on what you want to do with the information.
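As an illustration, here is a minimal sketch in C that pulls both values out of /proc/<pid>/stat; per proc(5), utime and stime are fields 14 and 15, expressed in clock ticks:

/* cputime.c - read utime/stime for one process from /proc/<pid>/stat.
 * Field numbering follows proc(5): utime is field 14, stime is field 15,
 * both in clock ticks (sysconf(_SC_CLK_TCK) ticks per second).
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *pid = argc > 1 ? argv[1] : "self";
    char path[64], buf[4096];
    snprintf(path, sizeof path, "/proc/%s/stat", pid);

    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';

    /* The comm field may contain spaces and parentheses, so start
     * parsing after the last ')'. */
    char *p = strrchr(buf, ')');
    if (!p) return 1;

    unsigned long utime, stime;
    /* Skip state (3), ppid..tpgid (4-8), flags..cmajflt (9-13),
     * then read utime (14) and stime (15). */
    if (sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
               &utime, &stime) != 2)
        return 1;

    long hz = sysconf(_SC_CLK_TCK);
    printf("user: %.2fs  system: %.2fs\n",
           (double)utime / hz, (double)stime / hz);
    return 0;
}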
Counting all the PIDs is right. I don't know what the parent PID has to do with this.
On the system side, your description of /proc/uptime seems wrong. Wikipedia has it right as I write: the first field is the real time elapsed since the system booted, minus any time spent suspended or hibernating. The second field is the cumulative time spent in the idle task on all CPUs. I'm not sure what that really means; it's certainly not the total idle time on my machine. In the kernel, the value is summed in uptime_proc_show from variables updated in account_idle_time.
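A quick way to look at the two fields yourself; the division by the CPU count is just a sanity check, since the second field is summed over all CPUs and can therefore exceed the first:

/* uptime.c - read the two fields of /proc/uptime: seconds since boot and
 * the cumulative idle time summed over all CPUs.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    double up, idle;
    FILE *f = fopen("/proc/uptime", "r");
    if (!f || fscanf(f, "%lf %lf", &up, &idle) != 2) {
        perror("/proc/uptime");
        return 1;
    }
    fclose(f);

    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    printf("uptime: %.2fs  idle (all CPUs): %.2fs  idle per CPU: %.2fs\n",
           up, idle, idle / ncpu);
    return 0;
}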
First, you cannot simply convert the addresses to just 8 digits. On a 64-bit system, memory addresses can and will take values much larger than can be represented with 8 hex digits.
The reason why memory addresses are represented in /proc/pid/maps the way they are can be found around line 283 of fs/proc/task_mmu.c (or task_nommu.c) in a recent kernel source tree:
283         seq_printf(m, "%08lx-%08lx %c%c%c%c %08llx %02x:%02x %lu ",
284                    start,
285                    end,
286                    flags & VM_READ ? 'r' : '-',
287                    flags & VM_WRITE ? 'w' : '-',
288                    flags & VM_EXEC ? 'x' : '-',
289                    flags & VM_MAYSHARE ? 's' : 'p',
290                    pgoff,
291                    MAJOR(dev), MINOR(dev), ino);
What this boils down to is that any memory address whose hex representation is shorter than 8 digits gets padded with leading zeros, while any larger value is printed in full rather than truncated to 8 digits. That's just how the kernel's printf-style formatting, used here by seq_printf(), works: in %08lx the 8 is a minimum field width, not a maximum.
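You can reproduce the behaviour with plain printf and the same conversion specification (the addresses below are made up; this assumes a 64-bit unsigned long):

#include <stdio.h>

int main(void)
{
    unsigned long small = 0x400000UL;        /* fewer than 8 hex digits */
    unsigned long large = 0x7ffff7a54000UL;  /* a typical 64-bit mapping address */

    printf("%08lx\n", small);   /* 00400000: zero-padded to the minimum width */
    printf("%08lx\n", large);   /* 7ffff7a54000: printed in full, not truncated */
    return 0;
}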
Now, what to make of all this? You should probably take a minute to think about why you would want to truncate memory addresses to 8 digits in the first place. What do you think the benefit of doing so would be?
The documentation says they are in "jiffies", but the documentation is outdated. Try running a CPU-intensive task and sampling the counters a few seconds apart, and you'll see they increment too quickly to feasibly be in jiffies.
The documentation became wrong with the adoption of the Completely Fair Scheduler (CFS), which is the default on modern kernels; the values are now reported in nanoseconds, so divide by 1000000000 to convert to seconds.
https://lkml.org/lkml/2019/7/24/906
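For instance, assuming the counters in question are the ones in /proc/<pid>/schedstat (which requires schedstats support in the kernel), here is a sketch of that check: burn CPU for a few seconds and see whether the first counter grows by roughly 3e9 (nanoseconds) rather than roughly 300 (jiffies at HZ=100).

/* schedstat_check.c - sample the first field of /proc/self/schedstat
 * (time spent on the CPU) before and after a CPU-bound busy loop, to
 * confirm the unit is nanoseconds rather than jiffies.
 */
#include <stdio.h>
#include <time.h>

static unsigned long long read_oncpu(void)
{
    unsigned long long ns = 0;
    FILE *f = fopen("/proc/self/schedstat", "r");
    if (f) {
        if (fscanf(f, "%llu", &ns) != 1)
            ns = 0;
        fclose(f);
    }
    return ns;
}

int main(void)
{
    unsigned long long before = read_oncpu();

    /* Busy-loop for ~3 seconds of wall-clock time. */
    time_t end = time(NULL) + 3;
    volatile unsigned long x = 0;
    while (time(NULL) < end)
        x++;

    unsigned long long delta = read_oncpu() - before;
    printf("counter delta: %llu  (as seconds if in ns: %.2f)\n",
           delta, delta / 1e9);
    return 0;
}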