An advanced question: I think my load averages are too high compared to a Linux system. I see around 0.40 for the 1-minute average with essentially no CPU use (0-1%), and even spread over 4 cores that still equals roughly 0.10 = 10% CPU use, which can't be right. I've since learned that the load average takes not only CPU use into account but also disk and network I/O. I've therefore tried to find the I/O wait value, but it doesn't seem to be available on the Mac for some reason. `iostat` gives me `us`, `sy`, and `id`, of course, but no sign of an I/O wait % (called `wa` on Linux, if I remember correctly).
Everything works fine, and I see the same load averages on my other Macs. What I'm after here is understanding WHY the averages are calculated this way (this high), and how I can analyze it further.
I've googled a good two hours on the topic, but there is little or nothing written about it. Any ideas?
Best Answer
The load is the average number of runnable processes, as described in `man 3 getloadavg`. You can also obtain the same information by running `sysctl vm.loadavg`.
Assuming Mac OS X 10.7.2, the `getloadavg` function calls into the kernel's sysctl handler (search the xnu sources for the second occurrence of `sysctl_loadavg`), which, in essence, returns the current value of `averunnable`, defined in the scheduler sources.
The same file also defines `compute_averunnable`, which computes the new weighted value of `averunnable`.
The scheduler header file sched.h declares it as `extern`, and all scheduler implementations in `xnu-1699.24.8/osfmk/kern/sched_*.c` periodically call it via `compute_averages` in `sched_average.c`.
The argument to `compute_averunnable` is `sched_nrun` in `sched_average.c`, which gets its value from `sched_run_count` in `sched.h`. This number is modified by the macros `sched_run_incr` and `sched_run_decr`, used exclusively in the file `sched_prim.c`, which contains the scheduling primitives responsible for unblocking, dispatching, etc. of threads.
So, to recap:
It simply uses the number of runnable threads, sampled at 5-second intervals, to compute an exponentially weighted load average.
While the systems are totally different, I find it hard to believe that Linux always has lower loads than OS X. In fact, it appears that Linux simply shows a different value.
Judging from the Wikipedia article on load averages, Linux really uses the number of runnable processes, as opposed to XNU's runnable threads.
Since every runnable process has at least one runnable thread, and assuming an equivalent load average calculation (which I didn't bother to check), the load average values on OS X will always be at least as large, because the counts they're based on are different.