SQL Server – Splitting TempDB into Multiple Files Equal to Number of CPUs

Tags: disk-structures, sql-server, sql-server-2008-r2, storage, tempdb

The article "SQL Server tempdb Best Practices Increase Performance" suggests that I should split tempdb into a number of files equal to the number of cores. So for 4 cores you get 4 files.

By having the larger number of files, you can increase the number of
physical I/O operations that SQL Server can push to the disk at any
one time. The more I/O that SQL Server can push down to the disk
level, the faster the database will run. With standard databases, SQL
Server can cache a large amount of the data that it needs into memory.
Because of the high-write nature of the tempdb, the data needs to be
written to the disk before it can be cached back up into memory.

Though it sounds good in theory, is it really that effective as a general optimisation? Is it something that may only apply to specific systems where tempdb I/O is very high?

Best Answer

The long-standing recommendation has been a number of TempDB data files equal to 1/4 to 1/2 the number of machine cores...

But there's now even better guidance. At PASS in 2011, my good friend Bob Ward, who’s the top guy in SQL Product Support, espoused a new formula: if you have fewer than 8 cores, use #files = #cores. If you have more than 8 cores, use 8 files, and if you’re seeing in-memory contention, add 4 more files at a time. [link]
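On a 4-core machine, that formula means ending up with 4 equally sized data files. A sketch of how the extra files would be added (the logical names, drive path, and sizes here are placeholders, not a prescription — adjust them for your environment):

```sql
-- tempdb ships with one data file (tempdev); add three more so that
-- #files = #cores on a 4-core box. Keep SIZE and FILEGROWTH identical
-- across all files so SQL Server's proportional-fill algorithm spreads
-- allocations evenly across them.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```

Adding files takes effect immediately without a restart; you would also resize the original tempdev file (ALTER DATABASE tempdb MODIFY FILE) so all files match, since unequal sizes skew the proportional-fill distribution.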

That last sentence has always been the key point: if you're not seeing contention, why add more files? To play it safe, most will provision 2-4 data files as a starting point for the majority of builds, but beyond that, measure and react.
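"Measure and react" here usually means watching for PAGELATCH contention on tempdb's allocation pages. A sketch of the usual checks (this is point-in-time, so it only catches contention occurring at the moment you run it):

```sql
-- How many data files does tempdb currently have, and are they equally sized?
-- size is stored in 8 KB pages, so convert to MB.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- Tasks currently waiting on tempdb pages (database_id 2).
-- PAGELATCH_* waits on pages 2:N:1 (PFS), 2:N:2 (GAM), or 2:N:3 (SGAM)
-- indicate the in-memory allocation contention that extra files relieve.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```

If those allocation-page waits show up repeatedly under load, that's the signal to add files in batches of 4, per the guidance above; if they never appear, extra files are unlikely to help.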