Ubuntu – How to use CUDA with NVIDIA Prime

cuda, nvidia, nvidia-optimus, nvidia-prime

I've found half a dozen posts on this all over the web, but none of them really answers the question.

I want to set up my NVIDIA GPU to only do computations, not drive the display. But when I switch to the Intel GPU in the nvidia-prime configuration, I can no longer load the nvidia module:

modprobe: ERROR: could not insert 'nvidia_352': No such device

Without the module, CUDA doesn't work, obviously.

So what exactly is nvidia-prime doing that makes it impossible to load the module? It's not blacklisted. There's no xorg.conf file, so how does the system know to use the Intel GPU instead of the discrete one?

I'm on a Dell 5510 Precision with Ubuntu 14.04 factory installed, and my GPU is Quadro M1000M.

Some suggest using bumblebee, but that shouldn't be necessary for pure compute loads.

Also, apparently bumblebee is able to load the module. So what exactly is it doing?

Update: Why is it that I always seem to find the answer right after I finally post a question, after hours of trying to figure it out? This is actually only a partial answer, but I'm on to something.

So far I've determined that prime does at least two things:

  • Switches the GPU off using bbswitch.
  • Changes the alternatives for /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf.
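Both effects can be inspected from a shell. A minimal sketch — it assumes the bbswitch /proc interface and the `x86_64-linux-gnu_gl_conf` alternative name used by Ubuntu 14.04's packaging, so adjust to what your system actually exposes:

```shell
# Check whether bbswitch has the discrete GPU powered off
state=$( [ -e /proc/acpi/bbswitch ] && cat /proc/acpi/bbswitch \
         || echo "bbswitch not loaded" )
echo "$state"

# Check which GL config the alternatives system currently points at
# (alternative name assumed from the Ubuntu 14.04 nvidia packaging)
alt=$(update-alternatives --display x86_64-linux-gnu_gl_conf 2>/dev/null) \
    || alt="alternative not registered on this system"
echo "$alt"
```

If the bbswitch line ends in OFF, that explains the modprobe failure: with the card powered down, the nvidia module has no device to bind to.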

By using bbswitch to turn the GPU back on, I'm now able to load the NVIDIA module.
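A sketch of that step, assuming the standard bbswitch /proc interface and the `nvidia_352` module name from the error above:

```shell
# Power the discrete GPU back on via bbswitch, then load the nvidia module
if [ -w /proc/acpi/bbswitch ]; then
    echo ON | sudo tee /proc/acpi/bbswitch   # turn the card back on
    sudo modprobe nvidia_352                  # now the module has a device to bind
    status="module load attempted"
else
    status="bbswitch interface not available on this machine"
fi
echo "$status"
```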

But the question still remains: What's the best way to configure the system to use the NVIDIA card only for computations?

Should I set nvidia-prime to use the Intel GPU, and try to manually unravel what that did to get CUDA working?

How do I ensure that the system still uses the Intel GPU for the display?

How would I go about simply disabling NVIDIA prime, and configuring it all manually?

Or should I just give in and use Bumblebee and optirun? What are the disadvantages of this, if any?

Any recommendations?

Best Answer

In my case I found that the NVIDIA card was not actually turned off, and the only thing I actually needed to do to run CUDA code was:

export LD_LIBRARY_PATH=/usr/lib/nvidia-352

in the shell where I want to run it (I'm assuming that globally changing the alternatives setting would break Compiz, etc.).
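A sketch of the per-shell setup; the library path is the one from the answer, and the `deviceQuery` sample is only an example of a CUDA binary you might run:

```shell
# Put the NVIDIA driver libraries on the loader path for this shell only,
# preserving any existing LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/lib/nvidia-352${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"

# Then run your CUDA program in the same shell, e.g.:
#   ./deviceQuery
```

Because the export is local to the shell, the display stack keeps using the Intel GL libraries selected by the alternatives system.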

To get to this point (on a Dell Optiplex 7010, with Ubuntu 14.04, CUDA 7.5, and a GTX 980) I believe the steps were:

  1. Use the PRIME Profiles tab to select Intel
  2. Reboot, and select Intel as the default in the BIOS
  3. Shut down the computer
  4. Plug the monitors into the onboard video :)

Everything seems to be working fine so far (nvidia-smi sees the card, the CUDA samples run, Theano uses the card, etc.).
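The checks above can be sketched as a small script; it degrades gracefully on a machine where the driver isn't installed:

```shell
# Verify the compute-only setup: does the driver see the card?
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi                       # lists the GPU and any compute processes
    result="nvidia-smi ran"
else
    result="nvidia-smi not installed"
fi
echo "$result"
```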
