What Does Load Average Mean on Unix/Linux?

Tags: cpu-usage, linux, unix, uptime

If I run uptime, I get something like this:

10:50:30 up 366 days, 23:27,  1 user,  load average: 1.27, 2.06, 1.54

What do those numbers on the end mean? The man page tells me it's "the load average of the system over the last 1, 5, and 15 minutes." But what is the scale? Is 1.27 high? Low? Does it depend on my system?

Best Answer

Load average is a gauge of how many processes are, on average, concurrently demanding CPU attention.
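
On Linux the same figures are exposed directly by the kernel in /proc/loadavg; a quick look there (Linux-specific; the output shown is illustrative and reuses the numbers from the question) makes the three averages easy to see:

# Read the raw load figures straight from the kernel (Linux-specific)
$ cat /proc/loadavg
1.27 2.06 1.54 2/345 12345
# First three fields: the 1-, 5- and 15-minute load averages.
# Fourth field: runnable entities / total entities; fifth: PID of the most recently created process.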

Generally, if you have one process running at 100%, and it just sits like that for all eternity, you can expect all three values to approach 1.0.
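
If you want to watch that happen, a throwaway busy loop will do it (a minimal sketch: yes simply burns one CPU writing to /dev/null, and you kill it when you're done):

# Peg one core with a CPU-bound busy loop in the background
yes > /dev/null &
# Check back after a minute or so; the 1-minute average creeps toward 1.0
uptime
# Stop the background job
kill %1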

Generally, that is about as efficient as computing gets: no losses due to context switches.

However, on a modern multitasking OS there is more than one thing that needs CPU attention, so under a moderate amount of load from a single process the load average will typically float between 0.8 and 2.

If you decide to do something insane, like build a kernel with make -j 60 despite having only one logical processor, then the load average will rush towards 60 and the machine will be practically useless to you (death by context switching).
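
If you do want a parallel build, a more sensible starting point is to match the job count to the number of logical processors (a common heuristic, not a hard rule; nproc is from GNU coreutils):

# Run roughly one compile job per logical CPU
make -j"$(nproc)"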

Note also that this metric is independent of how many cores/CPUs there are. On a two-core system, running one process that consumes a whole core (leaving the other idle) results in a load average of 1.0. To judge how loaded a system is, you need to know the number of cores and do the division yourself.
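
Here is a small sketch of that division, assuming a Linux box with /proc/loadavg and nproc available:

# Divide the 1-minute load average by the number of logical CPUs
awk -v cores="$(nproc)" '{printf "load per core: %.2f\n", $1 / cores}' /proc/loadavg
# Well under 1.0 per core means spare capacity; sustained values above 1.0 mean work is queueing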
