Given your requirements, it seems likely that you don't need to write your own driver. You may be able to use llvmpipe, which I believe meets your requirements. In particular, it is a "real driver" by some definitions of the word, and it does not require that X11 be running.
llvmpipe creates what might be called a virtual GPU that interprets OpenGL shaders by converting them on the fly to machine code for your CPU and running them. It uses parts of the LLVM infrastructure to accomplish this.
However, it might not meet your needs, since what is actually going on is that llvmpipe is linked against by the binaries calling it. In other words, this is not a real, live, running-in-the-kernel driver. Instead, it provides an alternative libGL.so which interprets your OpenGL shaders.
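If you want to try this out, Mesa provides the documented environment variable LIBGL_ALWAYS_SOFTWARE, which forces a software rasterizer such as llvmpipe for any program you start. A quick sketch, assuming a Mesa-based libGL and the glxinfo utility (package mesa-utils on Debian/Ubuntu) are installed:

```shell
# Force Mesa to pick a software rasterizer (llvmpipe on most builds)
# for everything started from this shell.
export LIBGL_ALWAYS_SOFTWARE=1

# glxinfo reports the active renderer; on llvmpipe the renderer string
# contains "llvmpipe". Guard in case glxinfo is not installed.
if command -v glxinfo >/dev/null 2>&1; then
    glxinfo | grep "OpenGL renderer" || echo "no display available to query"
else
    echo "glxinfo not found; install mesa-utils to verify the renderer"
fi
```

Running your own program with this variable set is enough to route its OpenGL calls through llvmpipe; no recompilation is needed as long as it links against Mesa's libGL.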
If you're not able to compile your 3D graphics accelerated program from source, you probably cannot use llvmpipe to good effect. (But you want this to help with debugging your own programs, so that shouldn't be a problem.)
You might want to add more information to your question about what you need. In particular:

- Why do you need to debug your code from the driver side? Why can't you put the necessary debugging code in your program itself (or in your programs themselves)?
- Both the X libraries and the OpenGL libraries provide information about what went wrong when a call fails. Why can't you use that information, plus kernel messages, in your program to facilitate debugging?
- Why would you expect that debugging information obtained on the driver side, with a virtual driver implemented in the kernel, would correspond to what actually happens on real computers? More importantly, why would you assume that if your program produces low-level problems, those problems would be the same for different GPUs and drivers when it's run in the real world?

You may have perfectly good answers to these questions (and maybe I'm missing something), but I think it would be easier to answer your question if you explained this.
(By the way, one interesting application of llvmpipe is to enable graphical user interfaces to be written only in 3D-accelerated versions, but still run on computers without 3D acceleration. Theoretically this should facilitate running GNOME Shell without 3D acceleration, though some development work might be necessary to make it actually work; I think GNOME Shell makes some assumptions relating to compositing that might not be automatically fulfilled. Also, there are apparently some performance problems. A real-world instance of this that actually works is Unity, which in Ubuntu 12.10 will come in just one version, and be able to run on top of llvmpipe instead of there being a separate "Unity 2D" implementation.)
Without at least the threat of harm you cannot force a student to boot your Exam OS. Skip to the horizontal rule for how I would achieve your overall goal. This is how I would fool your system:
I would boot into my own Linux, where I always have root privileges and everything I may need. Then I would mount the Exam OS root filesystem; it is on the CD you gave me, so I will always be able to mount it. Then I would open a new terminal, chroot into Exam OS, and fire up Exam OS inside that chroot jail. If you based Exam OS on Debian or one of its derivatives, I'd run
/etc/init.d/rc S
/etc/init.d/rc 2
In effect I will have your Exam OS, which means I have everything you want to give me and at the same time I have my own Linux for everything you don't want to give me.
Maybe I'm overshooting a little by running /etc/init.d/rc S
and probably should restrict myself to running /etc/init.d/rc 2
only. Every distribution has a similar, easy-to-find magic incantation.
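As a sketch only: assuming a Debian-live-style Exam OS CD sitting in /dev/sr0 whose root filesystem is a squashfs at live/filesystem.squashfs (both the device name and the path are assumptions about your build), the whole attack boils down to a handful of commands run as root:

```shell
# Hypothetical walkthrough of the chroot attack; device and paths are
# assumptions and the privileged steps only run as root with the CD present.
if [ "$(id -u)" -ne 0 ] || [ ! -b /dev/sr0 ]; then
    echo "demo skipped: needs root and the Exam OS CD in /dev/sr0"
    status=skipped
else
    mkdir -p /mnt/examcd /mnt/examroot
    mount -o ro /dev/sr0 /mnt/examcd                              # the Exam OS CD
    mount -o loop,ro /mnt/examcd/live/filesystem.squashfs /mnt/examroot
    mount --bind /dev  /mnt/examroot/dev                          # give the jail device nodes
    mount --bind /proc /mnt/examroot/proc
    chroot /mnt/examroot /bin/bash    # now "inside" Exam OS; run /etc/init.d/rc 2
    status=chrooted
fi
```

Nothing here requires any special tooling beyond mount and chroot, which is exactly why the CD alone cannot enforce anything.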
Yes, you can go to war and make this cost me valuable exam time, but for every minute you cost me you have to invest several hours! Also, you have to make Exam OS refuse to boot inside a virtual machine, as that's another option to fool your system.
So much for the destructive part, let's get constructive. :-)
If you don't want students doing something, tell them, and fail them if they break that rule. That has worked for centuries! I would hand out Exam OS CDs and tell them that using another OS during this exam is considered cheating. Now we have that problem solved. All we need to do now is catch them cheating (which they most likely won't do anyway).
To create Exam OS I'd modify some existing live CD. Which one is almost irrelevant; however, it will be easiest if it is some "full scale" Linux, so Ubuntu and Fedora are the first that come to mind (there is an incredibly large list of alternatives). I'd pick the one with the best documentation on how to create/rebuild the live CD! The finished Exam OS would have to fulfil the following requirements:
1. students have quick access to all the programs I want them to have
2. the desktop has a distinctive design/skin so I can tell from a distance with 80% certainty whether somebody is not using Exam OS
3. no username/password authentication is required by default
4. to log in as another user, su, or sudo, you need a USB stick (the one on your keychain) to authenticate
5. funky network settings that restrict Internet and local communication to fit my needs
6. a customised kernel that doesn't even have support for Bluetooth, IrDA, or any subsystem other than Ethernet that could enable communication; USB needs to be restricted as well!
1 basically means I put desktop shortcuts to everything I want them to be able to use. That also makes it easy to test that everything works.
To achieve 2 I would skin the desktop with the school colors and an occasional logo so that I recognise the Exam OS desktop when I see it. During the exam, when I see odd colors or pictures, or miss something I know should be there, I can investigate more closely.
3 is easy: it makes Exam OS uncomplicated, and when I think somebody is cheating I can close the laptop, turn it around, and open it up again. If I see an Exam OS I apologise; if I don't, or even see a locked screen, I have caught someone cheating.
4 means hardening the system so that you can only become root by having my USB key. There is a PAM module for that, so the USB-stick authentication is not hard to do (google pam_usb). With the basic OS of a live CD and that PAM module in place for any service that can be used to gain superuser privileges (i.e., login, su, sudo, and maybe the display manager and sshd), it should be sufficiently hard to become root. After that, one still has to make sure that the normal user cannot edit important configuration files and directories!
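To give an idea of how little configuration that takes: on a Debian-based live CD, pam_usb is wired in by putting it ahead of the normal password check in the common auth stack. A minimal sketch (the exact file name assumes a Debian-style PAM layout):

```
# /etc/pam.d/common-auth (Debian-style layout; an assumption about the base distro)
# Accept the registered USB stick as sufficient proof of identity...
auth    sufficient    pam_usb.so
# ...and otherwise fall back to the ordinary password check.
auth    required      pam_unix.so nullok_secure
```

Since login, su, sudo, and most display managers include common-auth, this one fragment covers all the privilege-granting services at once.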
5 means no nameserver entry in /etc/resolv.conf, but entries in /etc/hosts for all the websites I want the users to be able to visit. Also, there needs to be a restrictive firewall that drops all incoming and outgoing packets; for each host I want the students to be able to reach, an exception by IP address has to be defined.
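The two pieces above could look roughly like this; the hostname and address are made up (192.0.2.10 is a documentation address) and stand in for whatever exam server you actually allow:

```
# /etc/hosts: one hypothetical allowed site, resolved without any nameserver
192.0.2.10   exam.school.example

# Firewall ruleset for iptables-restore: default-drop, one exception per host
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A INPUT  -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -d 192.0.2.10 -p tcp --dport 80 -j ACCEPT
-A INPUT  -s 192.0.2.10 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
COMMIT
```

Loading it with iptables-restore at boot keeps everything dropped except the listed exceptions; add one OUTPUT/INPUT pair per allowed host.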
6 means just that: take the config from the generic kernel of the live CD you chose and rip out all the subsystems you don't want: sound, Video4Linux, Bluetooth, everything that is not needed during the exam. Rip out everything USB that is not usb-hid or usb-storage. Very important w.r.t. USB sticks: remove all the filesystems that you do not need, and keep only those you do. For the authentication stick I would choose some odd filesystem that nobody uses anymore, maybe even an experimental one that is not part of standard Linux.
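For illustration, the trimmed kernel configuration might contain fragments like the following (option names are assumptions drawn from kernels of that era; verify them against the .config of the kernel you actually rebuild):

```
# Fragment of a stripped-down kernel .config (hypothetical; check your tree)
# CONFIG_BT is not set            # no Bluetooth
# CONFIG_IRDA is not set          # no IrDA
# CONFIG_SND is not set           # no sound
# CONFIG_VIDEO_DEV is not set     # no Video4Linux
CONFIG_USB_HID=y                  # keyboards/mice keep working
CONFIG_USB_STORAGE=y              # the authentication stick keeps working
# CONFIG_USB_SERIAL is not set    # no USB serial adapters
```

Anything compiled out this way cannot be re-enabled by a student without replacing the kernel, which the USB-key root lock already prevents.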
That's a rough guide only, sorry :D
A virtual machine can give you the highest security without a reboot, but the lowest performance.
Another option, for even higher security than a virtual machine: boot a "live" CD/DVD/pendrive without access to the hard drive (temporarily disable the HDD in the BIOS; if you can't, at least do not mount the drive, or unmount it if it is mounted automatically - but this is much less secure).
A Docker container is a slightly less secure alternative to a full virtual machine. Probably the crucial difference (in terms of security) between the two is that systems running in Docker actually use the kernel of your host system.
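As a sketch of that trade-off (image name and command are assumptions; any small image would do), a throwaway container cut off from the network looks like this:

```shell
# Hypothetical example: run a command in a disposable, network-less,
# read-only container. Guarded so it degrades gracefully without Docker.
ran=no
if command -v docker >/dev/null 2>&1; then
    # --network none: no network access; --read-only: immutable root fs;
    # --rm: discard the container afterwards.
    docker run --rm --network none --read-only debian:stable id || true
    ran=tried
else
    echo "docker not installed; skipping the demonstration"
    ran=skipped
fi
```

Note that even with these flags the contained process still talks to your host's kernel, which is exactly the attack surface the paragraph above refers to.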
There are programs such as isolate that will create a special, secured environment - this is generally called a sandbox. Those are typically chroot-based, with additional supervision; find one that fits you.
A simple chroot will be least secure (especially with regard to executing programs), though maybe a little faster, but... you'll need to build/copy a whole separate root tree and use bind mounts for /dev etc. (see Note 1 below!). So in general, this approach cannot be recommended, especially if you can use a more secure, and often easier to set up, sandbox environment.

Note 0: On the aspect of a "special user", like the nobody account: this gives hardly any security, much less than even a simple chroot. A nobody user can still access files and programs that have read and execute permissions set for "other". You can test it with
su -s /bin/sh -c 'some command' nobody
And if you have any configuration/history/cache file accessible to anybody (by mistake or through a minor security hole), a program running with nobody's permissions can access it, grep it for confidential data (like "pass=" etc.), and send it over the net or whatever in many ways.

Note 1: As Gilles pointed out in a comment below, a simple chroot environment gives very little security against exploits aiming at privilege escalation. A sole chroot makes sense security-wise only if the environment is minimal, consisting of security-confirmed programs only (though there still remains the risk of exploiting potential kernel-level vulnerabilities), and all the untrusted programs running in the chroot run as a user who does not run any process outside the chroot. What chroot does prevent (with the restrictions mentioned here) is direct system penetration without privilege escalation. However, as Gilles noted in another comment, even that level of security might be circumvented, allowing a program to break out of the chroot.