SQL Server – More CPU cores vs faster disks

Tags: configuration, hardware, performance, sql-server

I'm part of a small company so, as usual, I cover a number of different roles. The latest of these is procuring a dedicated SQL Server box for our .NET web app. We've been quoted on a dual Xeon E5-2620 (six-core) 2.00 GHz CPU configuration (12 cores in total), with 32 GB of RAM. This has left us with a limited budget for the disk array, which would essentially consist of two 2.5" SAS 300 GB drives (15k RPM) in a RAID 1 config.

I know that the disk setup is sub-optimal for SQL Server, and I'd really like to push for RAID 10 so we can put the database, log files and tempdb on their own drives. To stay within budget, should I consider reducing the number of CPU cores? Or would I get better bang for buck by keeping the cores and using fewer drives, perhaps four in a dual RAID 1 setup?
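To put rough numbers on that trade-off, here is a back-of-the-envelope comparison of random IOPS for the two disk options. The ~175 IOPS per 15k SAS drive and the write penalty of 2 for mirrored arrays are common rules of thumb, not measurements of this particular hardware:

```python
# Rule-of-thumb random-IOPS comparison for the two candidate arrays.
# Assumed figures: ~175 random IOPS per 15k SAS drive, write penalty of 2
# for RAID 1 / RAID 10 (each logical write hits two disks).

def effective_iops(drives, per_drive_iops, read_fraction, write_penalty=2):
    """Effective random IOPS for a mirrored array under a read/write mix."""
    raw = drives * per_drive_iops
    write_fraction = 1 - read_fraction
    return raw / (read_fraction + write_penalty * write_fraction)

# 80/20 read/write mix, as described for this workload.
raid1 = effective_iops(drives=2, per_drive_iops=175, read_fraction=0.8)
raid10 = effective_iops(drives=4, per_drive_iops=175, read_fraction=0.8)

print(f"RAID 1  (2 x 15k SAS): ~{raid1:.0f} IOPS")   # ~292
print(f"RAID 10 (4 x 15k SAS): ~{raid10:.0f} IOPS")  # ~583
```

Under these assumptions the four-drive RAID 10 roughly doubles the random I/O budget of the quoted two-drive RAID 1, which is worth keeping in mind when weighing disks against cores.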

Here are some additional stats:

  • The SQL Server database is tilted towards reads, probably 80% reads vs 20% writes. The current DB size is around 10 GB, growing at a rate of 250 MB per month.

  • Currently running SQL Server 2008 R2 Standard on a single quad-core Xeon box shared with the web server (12 GB RAM, 2 x 10k 300 GB SAS drives in RAID 1); looking to move to SQL Server 2012 Standard.

  • Database serves approx 100-150 concurrent users with some background scheduling tasks thrown in. Reading this, I'm thinking that 12 cores is serious overkill!
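Given the stats above, a quick projection (using only the figures from the question, and ignoring logs, tempdb, indexes and backups) suggests raw capacity is not the constraint here, so the disk decision really is about I/O performance, not space:

```python
# Back-of-the-envelope check of how long a 300 GB RAID 1 array would last
# at the stated growth rate. Figures taken from the question above.

db_size_gb = 10             # current database size
growth_gb_per_month = 0.25  # ~250 MB/month
usable_gb = 300             # usable capacity of a 300 GB RAID 1 pair

months_to_fill = (usable_gb - db_size_gb) / growth_gb_per_month
print(f"~{months_to_fill:.0f} months (~{months_to_fill / 12:.0f} years) "
      f"before the data outgrows the array")
```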


I deployed the whole application to an Azure cloud service (2 small instances) linked to a SQL Azure DB. Although performance was reasonable when testing (under almost zero load), I lost the courage to use it in production due to the unpredictability I'd read so much about. It might work better with a scale-out approach, but with just a 10 GB database I can probably get away with scaling up for now and save some cash.

I initially overlooked the licensing costs and didn't realise that SQL Server 2012 licensing is based on the number of cores. I have a BizSpark MSDN subscription with a SQL Server 2012 Standard licence, so I'd need to read up on how many cores that would cover out of the box.
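For anyone else doing this maths: under SQL Server 2012's per-core model, core licences are sold in two-core packs with a minimum of four core licences per physical processor (verify against Microsoft's current licensing guide before budgeting). A small sketch of the pack count:

```python
# Sketch of SQL Server 2012 per-core licensing arithmetic: two-core packs,
# minimum of four core licences per physical processor. Check Microsoft's
# licensing guide for current rules and pricing before relying on this.

def core_license_packs(sockets, cores_per_socket):
    licensed_cores = sockets * max(4, cores_per_socket)
    return licensed_cores // 2  # licences are sold in packs of two

# Dual six-core E5-2620 as quoted: 12 licensed cores -> 6 packs.
print(core_license_packs(sockets=2, cores_per_socket=6))
```

Note that dropping below four cores per socket doesn't reduce the bill, which is another argument against trimming cores purely to free up disk budget.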

Best Answer

Speaking from experience that is humble but, I think, worth sharing (Sybase and SQL Server in my case), the major bottleneck with SQL databases is storage.

But I think it's only fair that you first benchmark your setup before making any wrong assumptions. In my case, CPU usage has never risen high enough to justify upgrading the CPU any time soon. Instead, I upgraded from a single drive to RAID 1 and then to RAID 10, plus a bump from 8 GB to 16 GB of RAM.

All these RAID upgrades reduced previous wait times by a factor of 2 to 6, and I suspect upgrading to SSDs would do even better. If you think about it, it can all be reduced to (theoretical) bandwidth. The combination of RAM speed, RAM size and the memory controller's link to the CPU has a bandwidth limit, which matters most for reads when your data is always cached; your particular (RAID) storage has its own bandwidth limit, which affects reads on cache misses and writes when flushing, or when many clients write lots of data.

Normalize all those limits as much as possible (bring them closer together so that no resource sits idle) and raise them where needed, and only if needed; don't buy capacity the system can't use because some other bottleneck is in the way. In the end, your worst bottleneck will be the least performing (lowest-bandwidth) subsystem in your particular configuration.

I might also add that, in the process of upgrading, I created separate RAID arrays for the database files and the database log files. The reason is that log files tend to be write-intensive: the log is used to recover a database after a crash, and it is always written to immediately as a transaction commits, before any data is written to the database file.
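That "log first, data later" ordering is the essence of write-ahead logging, and a toy sketch makes it concrete (this is a deliberate simplification of what SQL Server actually does; the names below are invented for illustration):

```python
# Toy illustration of write-ahead logging: the log record is appended
# (a sequential write) before the data page is updated, so committed work
# can be replayed from the log alone after a crash. Heavily simplified.

log, data = [], {}

def commit(key, value):
    log.append((key, value))  # 1. append log record (sequential write)
    # 2. a real engine flushes (fsyncs) the log here, before...
    data[key] = value         # 3. ...the data page is modified in place

def recover():
    """Rebuild the data purely by replaying the log after a crash."""
    rebuilt = {}
    for key, value in log:
        rebuilt[key] = value
    return rebuilt

commit("balance", 100)
assert recover() == data  # the log alone reconstructs committed state
```

Because the log side is append-only and the data side is random-access, putting them on separate spindles lets each array do the access pattern it is best at.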

A log file is also read by some database replication servers, but most replication is not done instantly, so the read-performance impact is minimal here; at least I think so. I did only minimal benchmarking while making this upgrade, so again, I recommend you first benchmark your different configurations, and upgrade storage first, then RAM and network link, before thinking about upgrading the CPUs.

After more extensive upgrades on more than five servers, I came back to share my experience. I still definitely advocate upgrading storage first, then RAM, then CPU. The reason is the disparity of bandwidths between storage, RAM and CPU, in order from lowest to highest. So I upgraded a bunch of servers from RAID 10 and dual RAID 1 to SSDs.

Because of cost concerns (I have 20 more servers to upgrade), I moved the database data and object files to an SSD (yes, just one SSD in RAID 0) and moved the transaction log plus tempdb to a 4-disk HDD RAID 10 configuration. I tested with tempdb on the SSD as well, with great results (sometimes more than 15 times faster queries, turning some reports that used to take minutes into seconds), but later moved tempdb to the HDD RAID 10 because of write-wear concerns for the SSD.

So now I have observed 10–15 times faster response times for some of the longest queries. The SSD is great for reading data into RAM quickly, because SQL Server does not bring data into RAM until it is requested, and data must first be loaded into RAM before the CPU can process it (and later the L1/L2/L3 caches), so SSDs cut that initial wait time by a huge factor. SSDs also reduce the cost of churning the buffer cache, clearing out RAM and loading new data, especially if your database is larger than will fit into RAM.

All in all, we're very pleased, and we've been running like this for several months as a deliberately slow migration, letting the servers run so I can collect wear-level information before switching all my servers to this configuration. And this is just SQL Server Express! :D Just make sure your SSDs can deliver sustained IOPS, because that's another thing that makes a huge difference (just google it). That's why I chose Intel DC (Data Center) series SSDs.