Linux – Dual-boot AND Virtualize Both Windows 8 and Ubuntu

linux, multi-boot, Ubuntu, virtualization, windows 8

Dual-booting Windows and Linux is well-documented, as is running one of the bootable OSes as a VM inside the other (e.g., dual-boot Windows & Ubuntu; in Windows, host Ubuntu in a VM using VMware Workstation/Player, etc.).

I am planning out a build and I want to have the best of both worlds: to be able to boot into Windows 8 and run Ubuntu in a VM, then to boot into the same Ubuntu install and run the same Windows 8 install in a VM, with both OSes sharing a partition for project files, etc. It seems that it should be possible.

  • Are there any caveats that would make this impossible / extremely difficult / unstable?
  • Are there any suggested disk configurations (two physical drives, etc)?

UPDATE:

I found some information on getting Windows 7 & 8 running in this setup. It's quite hacktastic and there's a good chance you'll bork your Windows install, but in case anyone is more committed to this than I am: http://geekery.amhill.net/2010/01/27/virtualbox-with-existing-windows-partition/

Best Answer

This part is key:

run the same Windows 8 install in a VM

Yes, there are some serious caveats that would make this extremely difficult and possibly unstable. Virtual machine monitors (VMMs) like VirtualBox, VMware Workstation/Player, Virtual PC, etc. typically present different virtualized hardware than what your actual system has.

QEMU-based VMMs typically present a PIIX3 chipset to the guest OS. VirtualBox offers a choice between PIIX3 and ICH9. VMware Workstation and VMware Player also emulate a chipset that your PC almost certainly doesn't have.
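For example, in VirtualBox the emulated chipset is selectable per VM via `VBoxManage` (the VM name "Win8" here is just a placeholder; either choice will still differ from your physical board's chipset):

```shell
# Show which chipset the VM currently emulates
VBoxManage showvminfo "Win8" | grep -i chipset

# Switch the emulated chipset between PIIX3 (the default) and ICH9
VBoxManage modifyvm "Win8" --chipset ich9
```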

In previous versions of Windows, there was a Hardware Profiles feature that might have let you pull this off safely, but that's probably not its intended use. There isn't really a good way to choose the profile automatically, either.

Other features which would drastically differ between your real hardware and the virtualized hardware include:

  • Video adapter – any accelerated graphics will need a paravirtualized driver, and that will likely fail.
  • Disk controller – you might be able to pull this one off if you have the real hardware for it (like an LSI 1068e or a real BusLogic card), or, most likely, you can set your disk controller to present the AHCI SATA interface and convince your VMM to present a bootable AHCI disk controller.
  • USB – if you have an Intel board, you probably have a UHCI USB 1.x controller instead of the OHCI controller that VirtualBox presents.
  • Audio – SoundBlaster 16 and Ensoniq AudioPCI are commonly emulated, but both are outdated hardware.
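If you do go the raw-disk route the question's update points at, VirtualBox can wrap a physical disk in a VMDK and present it behind its emulated AHCI controller. A sketch (the VM name, file path, and `/dev/sda` device are assumptions for this example; pointing this at the wrong disk can destroy data, so double-check the device before running anything):

```shell
# Wrap the physical disk holding the Windows install in a raw VMDK
# (needs sufficient privileges; /dev/sda is an assumed device path)
VBoxManage internalcommands createrawvmdk \
    -filename ~/win8-raw.vmdk -rawdisk /dev/sda

# Give the VM an AHCI (Intel HBA) SATA controller and attach the raw disk
VBoxManage storagectl "Win8" --name "SATA" --add sata --controller IntelAhci
VBoxManage storageattach "Win8" --storagectl "SATA" \
    --port 0 --device 0 --type hdd --medium ~/win8-raw.vmdk
```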

Doing this on Linux isn't so bad, since most of the hardware configuration presented to the OS is detected at boot time. The exception is things like udev rules, so you may end up with oddly named network interfaces.
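To illustrate the udev wrinkle: persistent-net rules pin interface names to MAC addresses, so the virtual NIC's different MAC gets its own name. A hypothetical `/etc/udev/rules.d/70-persistent-net.rules` (both MAC addresses here are made up) could accumulate entries like:

```
# Physical NIC, seen when booting on real hardware
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"

# Virtual NIC, seen when the same install boots inside the VM
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:12:34:56", NAME="eth1"
```

Harmless, but scripts or firewall rules that hard-code `eth0` will misbehave in whichever environment got the other name.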

The real danger, I think, lies in the VMM's paravirtualized drivers, which perform hypercalls using operations that are undefined on a real platform. If a paravirtualized driver blindly assumes the VMM is present without checking, it will very likely trigger an unexpected fault, which translates to a kernel panic on Linux and a BSoD on Windows (if you don't trigger the triple-fault shutdown first). How likely is this? I've actually run into it on Linux, with the paravirtualized VMware display drivers triggering a hang under Virtual PC.
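The check those drivers ought to perform is cheap: CPUID leaf 1 sets the "hypervisor present" bit (ECX bit 31) under a VMM, and Linux exposes it as the `hypervisor` CPU flag. A minimal sketch of detecting which environment you booted into, assuming an x86 Linux system:

```shell
#!/bin/sh
# Report whether this kernel booted under a hypervisor, using the
# "hypervisor" CPU flag Linux derives from CPUID leaf 1, ECX bit 31.
if grep -qw hypervisor /proc/cpuinfo; then
    echo "virtualized"
else
    echo "bare metal"
fi
```

A boot script could use a check like this to pick the right X configuration or module list for each environment.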
