Why real time can be lower than user time


I have a script that converts video files. I ran it on a server on test data and measured how long it took with the time command. The result was:

real    2m48.326s
user    6m57.498s
sys     0m3.120s

Why is the real time so much lower than the user time? Does this have something to do with multithreading, or is there another explanation?

Edit: And as far as I could tell, the script really did run for about 2m48s of wall clock time.

Best Answer

The output you show is a bit odd, since real time would usually be bigger than the other two.

  • Real time is wall clock time (what we could measure with a stopwatch).
  • User time is the amount of CPU time spent in user mode within the process.
  • Sys time is the amount of CPU time spent in the kernel within the process.

So I suppose if the work was done by several processors concurrently, the CPU time would be higher than the elapsed wall clock time.
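
For the numbers in the question that would fit: (user + sys) / real ≈ (417.5 s + 3.1 s) / 168.3 s ≈ 2.5, i.e. on average about two and a half cores' worth of CPU time was consumed for every second of wall clock time.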

Was this a concurrent/multi-threaded/parallel type of application?
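
If it was, the pattern is easy to reproduce. Here is a minimal sketch, assuming bash (whose time keyword can time a whole compound command) and a machine with more than one core: a few CPU-bound loops run in parallel, and the group is timed as a whole.

# Four CPU-bound shell loops run in parallel; the enclosing subshell is timed.
# Their CPU usage adds up across cores, so user time should exceed real time.
time ( for i in 1 2 3 4; do
         sh -c 'n=0; while [ "$n" -lt 1000000 ]; do n=$((n+1)); done' &
       done
       wait )

With more free cores than jobs, real time should stay close to that of a single loop, while user time ends up at roughly four times that.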

By contrast, this is what I get on my Linux system when I issue the time find . command. As expected, the elapsed real time is much larger than the other two for this single-threaded process, which spends most of its time waiting on the filesystem.

real    0m5.231s
user    0m0.072s
sys     0m0.088s

The rule of thumb is:

  • real < user: The process is CPU bound and takes advantage of parallel execution on multiple cores/CPUs.
  • real ≈ user: The process is CPU bound and takes no advantage of parallel execution.
  • real > user: The process is I/O bound. Execution on multiple cores would be of little to no advantage (see the example below).
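
The last case is easy to see in its most extreme form by timing something that spends its life waiting rather than computing:

# sleep does essentially no computation; it just blocks for 3 seconds,
# so real comes out around 3s while user and sys stay near zero.
time sleep 3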