Linux – Enabling IOMMU in the kernel for graphics card pass-through

Tags: grub2, linux-kernel, pci, virtualbox

Short question:

How can I turn on the intel_iommu setting in the Linux kernel? I run a Debian host using the GRUB 2 bootloader. The documentation I've seen says to edit /boot/grub/menu.lst, which seems to be relevant only to GRUB 1.x, as I don't have that file.
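My guess, so far unverified, is that on GRUB 2 this goes through /etc/default/grub instead; a minimal sketch of what I'd expect to need, assuming a standard Debian layout:

# /etc/default/grub: append intel_iommu=on to whatever options are already there
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# then regenerate /boot/grub/grub.cfg and reboot
sudo update-grub
sudo reboot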

It is my understanding (and the last option I can think of) that changing this boot option might get rid of the following error message in /var/log/kern.log:-

vboxpci: No IOMMU domain (attach)

Long question:

Giving a guest OS direct access to a graphics card

I recently realised that it's possible to pass a PCI Express device through to guest OSes running in VirtualBox. Cool, I thought! I've got two NVIDIA Quadro FX graphics cards (with the SLI bridge in place, which I hope isn't causing the grief) and would like to dedicate the second graphics card to the guest OS, so that I can use OpenGL features within Photoshop et al.

NVIDIA markets this "SLI Multi-OS" configuration, which is basically what I've wanted to set up for ages, but I don't want to spend over a grand on the virtualisation software (Parallels Workstation Extreme) when I have been using VirtualBox quite happily for years.

Host System

I'm running linux-3.5.0-19 from the Debian repositories, on quite high-end workstation equipment (Asus P6T7 WS Supercomputer mobo w/ Intel ICH10R chipset and Xeon W3680 CPU) and would like to turn on IOMMU support in the kernel, preferably without having to compile it myself.

BIOS

In the BIOS settings, I have VT-x and VT-d support enabled. I couldn't see anything specifically mentioning IOMMU, though.
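A quick way to check whether the kernel actually sees the VT-d hardware, regardless of what the BIOS menu calls it, is to grep the boot messages for DMAR/IOMMU entries:

# DMAR table / IOMMU messages should appear here if VT-d is visible to the kernel
dmesg | grep -e DMAR -e IOMMU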

Attaching the PCI device

This was surprisingly simple! The official VirtualBox documentation is here. What I did, which I found less ambiguous, was to open nvidia-settings, select the secondary graphics card and note the Bus ID ("PCI:5:0:0" in my case). Then, from the host's command line:-

VBoxManage modifyvm "Windows Guest" --pciattach 05:00.0

(When I first ran this, there was an error because VirtualBox was emulating a PIIX chipset; it said that PCI pass-through only works with the ICH9 chipset. So I changed the chipset to ICH9 in the VM's System settings, which can also be done with VBoxManage as sketched below, and started the guest so it could install the necessary new drivers. A reboot later everything was working fine, so I shut down the guest and re-ran the command.)
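For reference, VBoxManage modifyvm accepts a --chipset option, so the same change can be made from the command line:

# switch the emulated chipset from PIIX3 to ICH9 (the VM must be powered off)
VBoxManage modifyvm "Windows Guest" --chipset ich9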

Re-running the pciattach command produced no output, and I was returned to the command line almost immediately.
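The bus address can also be cross-checked from the host side with lspci, which should report the same slot along with the card's vendor:device ID (the 10de:05ff that also shows up in the kernel log below):

# confirm the slot and the vendor:device ID of the card being passed through
lspci -nn -s 05:00.0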

Using host GPU from the guest

Before starting the guest, I first rebooted the host machine, in case something undocumented needed to happen in the kernel via virtualbox-dkms. As I ran the previous command without sudo privileges, though, I doubt any changes were made.
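Whether the VirtualBox kernel modules built by virtualbox-dkms are actually loaded can be checked with lsmod; vboxpci in particular is the one involved in pass-through:

# vboxdrv, vboxnetflt, vboxnetadp and vboxpci should all be listed
lsmod | grep vbox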

When I next started the guest, Windows Update started doing its thing and automatically detected and installed the correct NVIDIA drivers. All looking good so far. Before I could use the device though, I had to reboot the guest…

Problem

Now that the graphics card drivers are installed on the guest and the PCI device is attached, I can't get into the Windows desktop. I get to the Windows login screen, but after logging in, the screen freezes, just saying "Welcome", with a should-be-spinning-but-isn't blue circle next to it.

In /var/log/kern.log, the last messages printed are:-

vboxpci: vboxPciOsDevInit: dev=500
vboxpci: detected device: 10de:05ff at 05:00.0, driver pci-stub
vboxpci: vboxPciOsDevInit: dev=500 pdev=ffff88061bea0000
pci-stub 0000:05:00.0: irq 76 for MSI/MSI-X
vboxpci: enabled MSI
500: linux vboxPciOsDevGetRegionInfo: reg=0
got mmio region: fa000000:16777216
500: linux vboxPciOsDevGetRegionInfo: reg=1
got mmio region: d0000000:268435456
500: linux vboxPciOsDevGetRegionInfo: reg=3
got mmio region: f8000000:33554432
500: linux vboxPciOsDevGetRegionInfo: reg=5
got pio region: 8c00:128
500: linux vboxPciOsDevGetRegionInfo: reg=6
got mmio region: fb980000:524288
got PCI IRQ: 76
device eth0 entered promiscuous mode
power state: 0
vboxpci: No IOMMU domain (attach)

Any idea how to fix this?

UPDATE:

I've got the kernel booting with intel_iommu=on now, but things still aren't working fully. After rebooting the host, the guest starts, logs in fine, and everything seems as it was before I started any of this. My second graphics card isn't outputting anything.
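To confirm the option actually reached the kernel, the running command line can be checked:

# should now include intel_iommu=on
cat /proc/cmdline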

In Device Manager, there is an exclamation mark next to the Quadro FX device, and the device properties show error code 12 with the message "This device cannot find enough free resources". There is a further description on technet.microsoft.com.
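Error code 12 usually points at a resource assignment problem; on the host side, the memory regions (BARs) the card needs can be inspected, which lines up with the MMIO regions in the kernel log below (including the 256 MB region at d0000000):

# show the I/O and memory regions the card requires
sudo lspci -v -s 05:00.0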

In the host kernel log, it looks promising:-

vboxpci: detected device: 10de:05ff at 05:00.0, driver pci-stub
vboxpci: vboxPciOsDevInit: dev=500 pdev=ffff88061baa0000
pci-stub 0000:05:00.0: irq 76 for MSI/MSI-X
vboxpci: enabled MSI
500: linux vboxPciOsDevGetRegionInfo: reg=0
got mmio region: fa000000:16777216
500: linux vboxPciOsDevGetRegionInfo: reg=1
got mmio region: d0000000:268435456
500: linux vboxPciOsDevGetRegionInfo: reg=3
got mmio region: f8000000:33554432
500: linux vboxPciOsDevGetRegionInfo: reg=5
got pio region: 8c00:128
500: linux vboxPciOsDevGetRegionInfo: reg=6
got mmio region: fb980000:524288
got PCI IRQ: 76
created IOMMU domain ffff88058377c9a0
device eth0 entered promiscuous mode
power state: 0
vboxpci: iommu_attach_device() success

If I start the guest OS a second time, without rebooting the host, the display freezes again at the "Welcome" stage. It definitely finishes the log-in stage, though, as I could use Windows keyboard shortcuts to shut down the machine without forcing a shutdown.

Now I'm kind of out of ideas… Any suggestions to get this working? Any more info I can provide?

UPDATE 2:

dmesg contains some more interesting errors, but I don't know what I can do about them:

IOMMU 0 0xfbfff000: using Queued invalidation
IOMMU 1 0xfbffe000: using Queued invalidation
------------[ cut here ]------------
WARNING: at /build/buildd/linux-3.5.0/drivers/iommu/intel-iommu.c:4254 init_dmars+0x39b/0x74f()
Hardware name: System Product Name

Your BIOS is broken; DMA routed to ISOCH DMAR unit but no TLB space.

BIOS vendor: American Megatrends Inc.; Ver: 0811   ; Product Version: System Version
...
Your BIOS is broken; RMRR ends before it starts!
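For completeness, one kernel option that comes up a lot in VT-d pass-through guides is iommu=pt (pass-through mode for host-owned devices); I don't know whether it is relevant to these firmware warnings, but as a sketch of something still untried here:

# untested here: intel_iommu=on plus IOMMU pass-through mode for host devices
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
sudo update-grub && sudo reboot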

Best Answer

I got VGA pass-through working with an NVIDIA GTX 760 using KVM as the hypervisor with vfio-vga; I have never tried it with VirtualBox. It was a pain, but it works well after getting the configuration right. KVM is just as convenient as VirtualBox for quick VMs from your desktop, so you might consider it as another option.
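I won't reproduce my whole configuration here, but the general shape of the VFIO approach is to unbind the card from its host driver, bind it to vfio-pci, and hand it to QEMU. A rough sketch, reusing the 05:00.0 address and 10de:05ff ID from the question as placeholders:

# unbind the card from whatever host driver currently owns it
echo 0000:05:00.0 | sudo tee /sys/bus/pci/devices/0000:05:00.0/driver/unbind

# bind it to vfio-pci by vendor:device ID
sudo modprobe vfio-pci
echo "10de 05ff" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id

# hand the card to the guest via QEMU's vfio-pci device (x-vga for primary VGA)
qemu-system-x86_64 -enable-kvm -m 4096 \
  -device vfio-pci,host=05:00.0,x-vga=on \
  windows.img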

This thread has tons of information on lots of different configurations and troubleshooting steps, and was really helpful: https://bbs.archlinux.org/viewtopic.php?id=162768
