The jiffy does not depend on the CPU speed directly. It is the basic time unit used to count time intervals in the kernel, and its length is selected at kernel compile time (the HZ configuration option). More about this: man 7 time.
One of the fundamental uses of jiffies is process scheduling. One jiffy is the period of time the scheduler will allow a process to run before attempting to reschedule and swap the process out to let another process run.
For slow processors it is fine to have 100 jiffies per second, but kernels for modern processors are usually configured for a higher tick rate.
I think the reason you don't see a syscall happening is that some Linux system calls (especially those related to time, like gettimeofday(2) and time(2)) have special implementations through the vDSO, which contains somewhat optimized implementations of some syscalls:
The "vDSO" (virtual dynamic shared object) is a small shared library
that the kernel automatically maps into the address space of all
user-space applications.
There are some system calls the
kernel provides that user-space code ends up using frequently, to the
point that such calls can dominate overall performance. This is due
both to the frequency of the call as well as the context-switch
overhead that results from exiting user space and entering the
kernel.
Now, the manual mentions that the required information is simply placed in memory so that a process can access it directly (the current time isn't much of a secret, after all). I don't know the exact implementation, and can only guess about the role of the CPU's time stamp counter in it.
So it's not really glibc doing an optimization, but the kernel. It can be disabled by setting vdso=0 on the kernel command line, and it should be possible to compile it out. I can't find whether it's possible to disable it on the glibc side, however (at least without patching the library).
There's a bunch of other information and sources on this question on SE.
You said in the question:
After reviewing the latest POSIX draft, part of the answer is clear: there is a way to request the clock from the CPU, but GNU glibc has wrongly forced this implementation on its users.
Which I think is a rather bold statement. I don't see any evidence of anything being "wrongly forced" on users, at least not to their disadvantage. The vDSO implementation is used by almost every Linux process running on current systems, so if it didn't work correctly, some very loud complaints would already have been heard. Also, you said yourself that the time received is correct.
The quote you give from the clock_gettime manual only seems to mention that the call must support the clock IDs returned by clock_getcpuclockid, not anything about the behaviour of CLOCK_REALTIME or gettimeofday.
Best Answer
If what you have in mind is the usage of the time shell builtin, then 3-digit precision is the design limitation, as documented in man bash.
For seemingly more precise measurements, you can try using date with customized output formatting to show nanoseconds. However, notice that in both cases (time and date) you have some overhead, as the system takes some time to run the starting/stopping commands. The overhead is smaller for time, thanks to it being a builtin. So the three-digit limit is there for a reason: the rest is just random garbage.
See also this similar question, though the accepted answer currently contains a little bug.