Short answer - no, not on Windows. None of the VM software that runs inside Windows supports VT-d, and I'm not sure it can as long as it continues to run inside Windows (rather than Windows and everything else running inside of it). However, if your goal is simply to run Windows and Linux side by side, each with access to a GPU, the effect can be achieved using a hypervisor that supports VT-d, such as Xen, KVM, or ESX. Sadly, while Hyper-V is a hypervisor-type VM platform like Xen, KVM, and ESX, it does not support VT-d like they do and won't hand PCI devices to anything except the main installation of Windows.
Beyond that, there are other concerns, listed below, that will affect your system unless something changes significantly in the near future. In summary: not all motherboards support VT-d, and cheap AMD GPUs are much easier than cheap NVidia GPUs to pass through to a virtual machine.
First, I highly recommend you tell us your motherboard model. VT-d also has to be supported by the chipset and then by the BIOS/UEFI, and not all models do that, even when they technically have the correct chipset and CPU combination. Asus doesn't make a single board that works with VT-d; ASRock and Gigabyte have support in most of their Z77/H77/Q77 boards, especially ASRock. I haven't looked into MSI, Intel, or other companies' levels of support.
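If you're already on Linux, a quick way to confirm that the CPU, chipset, and firmware all agree on VT-d is to check the kernel's IOMMU messages and groups. A sketch; the exact messages vary by kernel version and vendor, and `dmesg` may need root:

```shell
# Look for DMAR (Intel VT-d) or AMD-Vi initialization messages in the kernel log.
# Output is empty if the IOMMU is off or hidden by the firmware.
dmesg 2>/dev/null | grep -i -e DMAR -e 'AMD-Vi' || echo "no IOMMU messages found"

# When the IOMMU is active, passthrough-capable devices are sorted into groups here.
ls /sys/kernel/iommu_groups/ 2>/dev/null || true
```

If the second command lists numbered directories, the kernel has an IOMMU up and running and passthrough is at least possible in principle.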
Secondly, passing a VGA card to a VM seems to be a bit more complex than passing through a simpler sound card, USB host adapter, NIC, or SATA adapter (all of which I did, and they worked without any issues). I've only heard of VGA passthrough being done on hypervisors like Xen, KVM, and ESXi; Hyper-V does not support VT-d, and thus can't support VGA passthrough either. AMD graphics cards have had a much higher success rate than NVidia. My experience is with Xen; from what I gathered at the time, KVM support was less mature, and I didn't try ESX.
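For those simpler devices, passthrough under Xen largely amounts to making the device assignable in dom0 and listing its PCI address in the guest config. A sketch; the address `01:00.0` and the config path are placeholder examples, so substitute the address `lspci` reports for your device:

```
# On the dom0 command line, make the device assignable (example address):
#   xl pci-assignable-add 01:00.0
#
# Then, in the guest's config file (e.g. /etc/xen/myguest.cfg), hand it over:
pci = [ '01:00.0' ]
```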
My Radeon HD 6950s and a Radeon HD 3750 worked without any issues, but each VM could only take one at a time (so no hope for Crossfire). My NVidia GTX 480s, on the other hand, refused to work at all, and others have also found it difficult to get NVidia cards other than high-end Quadros to function. The steps involved compiling specific revisions of Xen from source with modified code, pulling the GPU BIOS off the card so Xen could load it manually from the hard drive at VM startup, and finding out what memory ranges the NVidia card was using and forcing the VM to use those ranges, since it failed to do this automatically. Hopefully NVidia cards have become easier to deal with, but I wouldn't count on it.
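The BIOS-pulling step can be done from dom0 through sysfs. A minimal sketch, assuming the GPU sits at the example address `0000:01:00.0` (check with `lspci`) and you are running as root:

```shell
# Dump the video BIOS of a PCI GPU so the hypervisor can load it for the guest.
# 0000:01:00.0 is an assumed example address; substitute your own.
dev=/sys/bus/pci/devices/0000:01:00.0
if [ -e "$dev/rom" ]; then
    echo 1 > "$dev/rom"                 # enable reads of the option ROM
    cat "$dev/rom" > /root/gpu-bios.bin # save the BIOS image
    echo 0 > "$dev/rom"                 # disable reads again
else
    echo "no ROM file at $dev (device not present?)"
fi
```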
To the best of my knowledge, VirtualBox is the only major hypervisor not offering nested virtualization: KVM, VMware ESXi, and Xen all use Intel's VT-x extensions to implement it. (On AMD, the equivalent feature is called SVM, part of AMD-V.)
This is especially useful for studying the behaviour of hypervisors (yours, or your competitors') in a safe and controlled environment.
Xen has had the feature since version 4.4 (the only one I have direct experience with). You can find an introduction to the topic here. The ever-useful Arch Linux Wiki provides a discussion of the feature for KVM. For VMware ESXi, you can find the relevant information on their web page, here.
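On a Linux KVM host, for instance, you can check whether nested virtualization is enabled through a module parameter (a sketch; the file reads `Y` or `1` when the feature is on):

```shell
# The parameter lives under kvm_intel on Intel CPUs and kvm_amd on AMD.
# Nothing is printed on machines without the KVM modules loaded.
for m in kvm_intel kvm_amd; do
    p=/sys/module/$m/parameters/nested
    if [ -r "$p" ]; then
        echo "$m nested: $(cat "$p")"
    fi
done
```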
If you run a virtualization system that uses a single process per machine, such as VirtualBox or VMware Server, you can set that process's affinity to a particular processor.
This guide shows you how:
http://www.cyberciti.biz/tips/setting-processor-affinity-certain-task-or-process.html
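On Linux, the tool that guide uses is `taskset` from util-linux. A minimal sketch, pinning the current shell (`$$`) to CPU 0 as an example:

```shell
# Pin a running process to CPU 0; -p operates on a PID, -c takes a CPU list.
taskset -cp 0 $$

# Show the resulting affinity mask for the same process.
taskset -cp $$

# You can also launch a program already pinned to a CPU:
taskset -c 0 echo "running pinned to CPU 0"
```

Once the VM's process is pinned, every guest vCPU thread inside it inherits that affinity.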