I have found the solution thanks to the tip given by Nils and a nice article.
Tuning the ondemand CPU DVFS governor
The ondemand governor has a set of parameters that control when it kicks in the dynamic frequency scaling (DVFS, for dynamic voltage and frequency scaling). These parameters are located under the sysfs tree /sys/devices/system/cpu/cpufreq/ondemand/.
One of these parameters is up_threshold, which, as the name suggests, is a threshold (in % of CPU; I have not found out whether this is per core or across all cores) above which the ondemand governor kicks in and starts changing the frequency dynamically.
To change it to 50% (for example) using sudo is simple:
sudo bash -c "echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold"
If you are root, an even simpler command is possible:
echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
Note: these changes will be lost after the next host reboot. To make them permanent, add them to a configuration file that is read during boot, such as /etc/init.d/rc.local on Ubuntu.
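For persistence, here is a minimal sketch of what such a boot snippet could look like. The value 50 and the sysfs path are just the example from above; the write is guarded in case the ondemand governor is not active:

```shell
#!/bin/sh
# Example /etc/init.d/rc.local snippet: restore the ondemand up_threshold at boot.
# The value 50 and the sysfs path follow the example above; adjust as needed.
SYSFS=/sys/devices/system/cpu/cpufreq/ondemand/up_threshold
if [ -w "$SYSFS" ]; then
    echo 50 > "$SYSFS"
fi
```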
I found out that my guest VM, although consuming a lot of CPU on the host (80-140%), was distributing the load across both cores, so no single core went above 95% and the CPU, to my exasperation, stayed at 800 MHz. With the above change, the CPU now adjusts its frequency per core much faster, which suits my needs better; 50% seems a better threshold for my guest usage, but your mileage may vary.
Optionally, verify if you are using HPET
Some applications that implement timers incorrectly might be affected by DVFS. This can be a problem in the host and/or guest environment, although the host can use some convoluted algorithms to try to minimise it. However, modern CPUs have newer TSCs (Time Stamp Counters) that are independent of the current CPU/core frequency: constant (constant_tsc), invariant (invariant_tsc) or non-stop (nonstop_tsc); see this Chromium article about TSC resynchronisation for more information on each. So if your CPU is equipped with one of these TSCs, you don't need to force HPET. To verify whether your host CPU supports them, use a command like the following (change the grep parameter to the corresponding CPU feature; here we test for the constant TSC):
$ grep constant_tsc /proc/cpuinfo
If you do not have one of these modern TSCs, you should either:
- Activate HPET, as described below; or
- Not use CPU DVFS if any application in the VM relies on precise timing (this is the option recommended by Red Hat).
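To check the relevant flags in one go, here is a small sketch. Note that on Linux the invariant TSC typically shows up as the nonstop_tsc flag in /proc/cpuinfo, so only those flag names are tested:

```shell
#!/bin/sh
# Report which frequency-independent TSC flags the host CPU advertises.
for flag in constant_tsc nonstop_tsc; do
    if grep -qw "$flag" /proc/cpuinfo 2>/dev/null; then
        echo "$flag: present"
    else
        echo "$flag: absent"
    fi
done
```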
A safe solution is to enable the HPET timer (see below for more details). It is slower to query than the TSC (the TSC sits in the CPU, whereas the HPET is on the motherboard) and perhaps not as precise (HPET >10 MHz; TSC often at the maximum CPU clock), but it is much more reliable, especially in a DVFS configuration where each core could be running at a different frequency. Linux is clever enough to use the best available timer: it relies on the TSC first, but if that proves too unreliable it falls back to the HPET. This works well on host (bare-metal) systems, but because the hypervisor does not export all the necessary information, it is harder for a guest VM to detect a badly behaving TSC. The trick is then to force the guest to use HPET, although you will need the hypervisor to make this clock source available to the guests!
Below you can find how to configure and/or enable HPET on Linux and FreeBSD.
Linux HPET configuration
HPET, or high-precision event timer, is a hardware timer found in most commodity PCs since 2005. It can be used efficiently by modern OSes (the Linux kernel has supported it since 2.6; FreeBSD introduced support in 6.3, with stable support since the 9.x releases) to provide consistent timing regardless of CPU power management. It also makes tickless scheduler implementations easier to build.
Basically, HPET acts as a safety net: even if the host has DVFS active, host and guest timing events are less affected.
There is a good article from IBM regarding enabling HPET, it explains how to verify which hardware timer your kernel is using, and which are available. I provide here a brief summary:
Checking the available hardware timer(s):
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
Checking the current active timer:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
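You can also switch the active timer at runtime, without a reboot (root is required, and the change does not survive a reboot). A guarded sketch:

```shell
#!/bin/sh
# Select HPET as the active clocksource if the kernel lists it as available.
CS=/sys/devices/system/clocksource/clocksource0
if grep -qw hpet "$CS/available_clocksource" 2>/dev/null; then
    echo hpet > "$CS/current_clocksource" 2>/dev/null \
        || echo "need root to change the clocksource"
else
    echo "hpet is not in the available clocksource list"
fi
```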
To force the usage of HPET persistently, if you have it available, modify your boot loader configuration (supported since kernel 2.6.16). This configuration is distribution dependent, so please refer to your distribution's documentation to set it properly. You should add hpet=enable or clocksource=hpet to the kernel boot line (which one works depends on the kernel version and distribution; I did not find any coherent information). This makes sure the guest uses the HPET timer.
Note: on my kernel 3.5, Linux seems to pick up the hpet timer automatically.
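On GRUB 2 based distributions such as Ubuntu, persisting the boot option usually means editing /etc/default/grub. The file path and variable name are Debian/Ubuntu conventions, so treat this as a sketch and adapt it to your distribution:

```shell
# /etc/default/grub -- append the clock option to the default kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash clocksource=hpet"
# then regenerate the boot configuration and reboot:
#   sudo update-grub
```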
FreeBSD guest HPET configuration
On FreeBSD one can check which timers are available by running:
sysctl kern.timecounter.choice
The currently chosen timer can be verified with:
sysctl kern.timecounter.hardware
FreeBSD 9.1 seems to automatically prefer HPET over the other timer providers.
Todo: how to force HPET on FreeBSD.
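For the Todo above, here is a hedged sketch using FreeBSD's standard sysctl mechanism; check that the timer name matches what kern.timecounter.choice reports on your system:

```shell
# Select HPET as the timecounter at runtime (FreeBSD, as root):
sysctl kern.timecounter.hardware=HPET
# To persist across reboots, add the following line to /etc/sysctl.conf:
#   kern.timecounter.hardware=HPET
```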
Hypervisor HPET export
KVM seems to export HPET automatically when the host supports it. However, Linux guests will prefer the other automatically exported clock, kvm-clock (a paravirtualised version of the host TSC). Some people report trouble with this preferred clock; your mileage may vary. If you want to force HPET in the guest, refer to the section above.
VirtualBox does not export the HPET clock to the guest by default, and there is no option to do so in the GUI. You need to use the command line and make sure the VM is powered off. The command is:
./VBoxManage modifyvm "VM NAME" --hpet on
If the guest keeps selecting a source other than HPET after the above change, refer to the section above on how to force the kernel to use the HPET clock as a source.
"Occasional guests need more memory" sounds like a good application of overcommitting memory. The idea is you assign each guest a large amount of memory (more than you can actually give out) because they're generally not using it. Then you do the math to ensure you have enough swap space that the guests can actually swap out to disk in the worst-case-scenario where they all actually do use all that memory.
The swap space goes on the host machine, and it needs to obey
host swap space = sum of all guest memory + recommended host swap space
in order for it to be safe.
So if you have 10 guests and 2 GiB of RAM on the host, you could experiment with something like
- 512 MiB RAM per guest (512 * 10 = 5120 MiB total)
- 2GiB swap on the host
Meaning your host swap space should be at least 512 * 10 + 2048 = 7168 MiB
to handle this safely, assuming you can dedicate 2GiB of swap to the host (for that little host memory, this is recommended).
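The arithmetic above can be written down mechanically; here is a small sketch with the example numbers (MiB throughout):

```shell
#!/bin/sh
# host swap = sum of all guest memory + recommended host swap (all in MiB)
GUESTS=10
GUEST_RAM=512
HOST_SWAP=2048
echo "required host swap: $(( GUESTS * GUEST_RAM + HOST_SWAP )) MiB"   # 7168 MiB
```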
Always test these kinds of setups first to make sure your machine can handle them. Benchmarking them is even better, and will allow you to experiment with different loadouts and choose the one that works best.
Best Answer
Being cynical, I could say that this is "normal" for Windows guests but not for Linux guests (at least I have never seen one behave like that).
With Windows it depends a lot on the applications running (near idle). A plain XP or W2K (I have no experience with newer versions in KVM yet) causes 10% to 20% load on the host (while showing about 0% inside), but MS SQL Server easily pushes this above 30%. This seems to be related to timer access and/or ACPI somehow. But even in a non-ACPI VM I never got Windows below 10% on the host.
Edit 1 (integrating comments)
What is the output of
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
(in the guest)? That should be kvm-clock. Check your kernel config (/proc/config.gz) for CONFIG_PARAVIRT_CLOCK and CONFIG_KVM_CLOCK. This is a list of kernel config options relevant to KVM.
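A quick way to run both checks from the guest; the || true guards are there because /proc/config.gz only exists when the kernel was built with CONFIG_IKCONFIG_PROC:

```shell
#!/bin/sh
# Show the guest's active clocksource and the KVM-related kernel config options.
cat /sys/devices/system/clocksource/clocksource0/current_clocksource 2>/dev/null || true
zgrep -E 'CONFIG_PARAVIRT_CLOCK|CONFIG_KVM_CLOCK' /proc/config.gz 2>/dev/null || true
```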