I've heard that NTFS compression can reduce performance due to extra
CPU usage, but I've read reports that it may actually increase
performance because of reduced disk reads.
Correct. Suppose your CPU, using some compression algorithm, can compress at C MB/s and decompress at D MB/s, while your hard drive writes at W MB/s and reads at R MB/s. As long as C > W, you get a performance gain when writing, and as long as D > R, you get a performance gain when reading. The write case rests on a fairly drastic assumption, since the Lempel-Ziv algorithm (as implemented in software) has a non-deterministic compression rate (although it can be constrained with a limited dictionary size).
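To make the rule of thumb concrete, here is a tiny sketch with made-up throughput figures (they are placeholders, not measurements of any real CPU or disk):

```python
# Hypothetical throughput figures in MB/s -- placeholders, not measurements.
C, D = 150, 300   # CPU compression / decompression rate
W, R = 110, 120   # disk write / read rate

# The rule of thumb above: compression helps whenever the CPU
# out-runs the disk for the operation in question.
print("write gain expected:", C > W)   # True -> compressed writes should be faster
print("read gain expected:", D > R)    # True -> compressed reads should be faster
```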
How exactly does NTFS compression affect system performance?
Exactly by way of the inequalities above: as long as your CPU can sustain a compression/decompression rate above your HDD's write speed, you should see a speed gain. However, this does have an effect on large files, which may end up heavily fragmented (due to the algorithm) or not compressed at all.
This may be because the Lempel-Ziv algorithm slows down as compression progresses (the dictionary keeps growing, requiring more comparisons as new data comes in). Decompression, on the other hand, proceeds at almost the same rate regardless of file size, since the dictionary can simply be addressed with a base + offset scheme.
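NTFS actually uses its own LZ77 variant (LZNT1), which is not reproduced here; the toy sketch below only illustrates why LZ-style decompression stays cheap regardless of file size: every back-reference is just a base + offset copy into output that already exists.

```python
# Toy LZ77-style decompressor (NOT the NTFS LZNT1 format).
# Each token is either a literal byte string or an (offset, length)
# back-reference into the data already produced -- a base + offset copy.

def lz_decompress(tokens):
    out = bytearray()
    for tok in tokens:
        if isinstance(tok, bytes):            # literal data
            out += tok
        else:                                 # (offset, length) back-reference
            offset, length = tok
            start = len(out) - offset
            for i in range(length):           # byte-by-byte so copies may overlap
                out.append(out[start + i])
    return bytes(out)

# "abc" emitted once, then 6 bytes copied from 3 bytes back, then "d".
print(lz_decompress([b"abc", (3, 6), b"d"]))  # b'abcabcabcd'
```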
Compression also affects how files are laid out on disk. By default, a single "compression unit" is 16 times the cluster size (so most NTFS filesystems with 4 kB clusters store files in 64 kB chunks), and it does not grow past 64 kB. This can increase fragmentation and on-disk space requirements.
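For what it's worth, here is the arithmetic from that paragraph as a tiny sketch (the 16-cluster figure and the 64 kB ceiling are taken from the description above, not queried from the filesystem):

```python
# Compression-unit size as described above: 16 clusters, never more than 64 kB.
def compression_unit_bytes(cluster_size):
    return min(16 * cluster_size, 64 * 1024)

for cluster in (512, 1024, 2048, 4096):
    print(f"{cluster:>4} B clusters -> {compression_unit_bytes(cluster) // 1024} kB compression units")
```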
As a final note, latency is another point worth discussing. While compressing the data does introduce latency, with CPU clock speeds in the gigahertz range (i.e. each clock cycle is less than 1 ns) the added latency is negligible compared to hard drive seek times, which are on the order of milliseconds, or millions of clock cycles.
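A quick back-of-the-envelope calculation (with illustrative, not measured, numbers) shows the scale of the difference:

```python
# Illustrative numbers only: ~3 GHz CPU vs. a ~10 ms HDD seek.
clock_hz = 3.0e9        # one cycle is about 0.33 ns
seek_s   = 10e-3        # typical HDD seek time

cycles_per_seek = seek_s * clock_hz
print(f"one seek costs roughly {cycles_per_seek:,.0f} clock cycles")  # ~30,000,000
```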
To see whether you'll actually experience a speed gain, there are a few things you can try. The first is to benchmark your system with a Lempel-Ziv based compression/decompression algorithm. If you get good results (i.e. C > W and D > R), then you should try enabling compression on your disk.
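NTFS's own LZ77 variant (LZNT1) isn't exposed to user code, so one rough stand-in is to time an LZ-family codec such as zlib on a file that resembles your workload; treat the numbers only as an order-of-magnitude estimate of C and D. A minimal sketch:

```python
# Rough estimate of compression (C) and decompression (D) throughput using
# zlib (LZ77-based) as a stand-in for NTFS compression. Pass a sample file
# that is representative of your data.
import sys, time, zlib

def mb_per_s(nbytes, seconds):
    return nbytes / (1024 * 1024) / seconds

data = open(sys.argv[1], "rb").read()

t0 = time.perf_counter()
compressed = zlib.compress(data, level=1)    # fast setting, closer to NTFS's goal
t1 = time.perf_counter()
zlib.decompress(compressed)
t2 = time.perf_counter()

print(f"ratio: {len(compressed) / len(data):.2f}")
print(f"C ~ {mb_per_s(len(data), t1 - t0):.0f} MB/s")
print(f"D ~ {mb_per_s(len(data), t2 - t1):.0f} MB/s")
```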
From there, you might want to run more benchmarks on actual hard drive performance. A truly telling benchmark (in your case) would be to time how fast your games load and how fast your Visual Studio projects compile.
TL;DR: Compression might be viable for a filesystem holding many small files that need high throughput and low latency. Large files are (and should be) left untouched due to performance and latency concerns.
Avoiding fragmentation
The secret is to not write uncompressed files on the disk to begin with.
Indeed, if you compress an already existing large file, it will become horrendously fragmented due to the nature of NTFS's in-place compression.
Instead, you can avoid this drawback altogether by making the OS compress a file's content on the fly, before it is written to the disk. That way compressed files are written to the disk like any normal file - without unintentional gaps. To do this, create a compressed folder. (The same way you mark files to be compressed, you can mark folders to be compressed.) All files written to that folder afterwards will be compressed on the fly (i.e. written as streams of compressed blocks). Files compressed this way can still end up somewhat fragmented, but it will be a far cry from the mess that in-place NTFS compression creates.
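One way to set this up from a script: create the folder while it is still empty, mark it compressed with the built-in compact.exe, and only then copy data into it. The folder path below is just an example; adjust it to your setup.

```python
# Create an empty folder, mark it compressed, then fill it -- so files are
# compressed as they are written instead of being squeezed in place later.
import os, subprocess

folder = r"C:\Data\CompressedStuff"    # hypothetical target folder
os.makedirs(folder, exist_ok=True)

# /c sets the compression attribute; files added to the folder afterwards
# are compressed on the fly by NTFS.
subprocess.run(["compact", "/c", folder], check=True)

# ...now copy or install the large files into `folder`...
```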
Example
NTFS compression reduced a 232 MB system image to 125 MB:
- In-place compression created a whopping 2,680 fragments!
- On-the-fly compression created 19 fragments.
Defragmentation
It's true that NTFS-compressed files can pose a problem for some defragmentation tools. For example, a tool I normally use can't handle them efficiently - it slows to a crawl. Fret not: the trusty old Contig from Sysinternals defragments NTFS-compressed files quickly and effortlessly!
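Assuming contig.exe is on your PATH, a quick way to check the damage and then fix it (the file path below is just an example):

```python
# Report fragmentation, then defragment, using Sysinternals Contig.
import subprocess

target = r"C:\Images\system_image.vhd"   # hypothetical NTFS-compressed file

subprocess.run(["contig", "-a", target], check=True)   # -a: analyze / report fragments
subprocess.run(["contig", target], check=True)         # defragment the file in place
```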
Best Answer
According to the book "Microsoft Windows Server 2003: Delta Guide" (top of page 33, compressed folders), compression and decompression are done on the server:
I've not (yet) found a Microsoft web page that states it that clearly.