SQL Server – Help with Cybernetics SAN Configuration

hardware, sql-server, sql-server-2008-r2, storage

There are tons of articles out there recommending RAID/LUN/drive configurations for SQL Server, but most leave me still questioning because they usually conclude with "it depends." For that reason, I'd like to provide some specifics and get more direct recommendations:

I have a client who already installed SQL Server on a Cybernetics SAN and here are the specifics:

• SQL Server 2008 R2 installed on a virtual server

• Storage:

 o  Cybernetics miSAN-D8/T16 iSCSI SAN storage appliance
 o  4+1 GbE ports
 o  8 GB SAN controller cache
 o  USB and eSATA ports
 o  Two miniSAS devices
 o  8 drive bays, all filled with 600 GB SATA-II 7.2K drives (4.8 TB raw)
 o  All 8 drives set up in RAID 6
 o  Three logical drives:
      1. C (156 GB) – for system and apps (IIS, web app, SQL Server, SSIS, CA ARCserve backup, VIPRE antivirus)
      2. E (1.64 TB) – for data, logs, and tempdb
      3. F (1.80 TB) – for backups

The SQL Server is read-intensive: data is bulk loaded nightly, data-warehouse style, and the web app is used for looking up and reporting on that data.

Periodically the client experiences sluggishness with the web application, sometimes severe, for short periods. I've also noticed that Brent Ozar's Blitz script reveals slow writes to the E drive at times.
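If anyone wants to confirm that kind of latency themselves, here's a minimal sketch of a per-file latency query against sys.dm_io_virtual_file_stats (a standard DMV on 2008 R2); nothing here is specific to this SAN:

    -- Average I/O stall per read/write for every database file,
    -- grouped by the drive letter of the physical file.
    SELECT
        LEFT(mf.physical_name, 1)                             AS drive,
        DB_NAME(vfs.database_id)                              AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id     = vfs.file_id
    ORDER BY avg_write_ms DESC;

As a rough rule of thumb, sustained write averages in the tens of milliseconds line up with what Blitz flags as slow.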

I want to suggest that the client reconfigure their SAN as follows:

  1. C (156 GB, RAID 1) – for system and app files.
  2. D (1200 GB, RAID 10 or 5) – for SQL data files.
  3. L (444 GB, RAID 1) – for log files.
  4. T (1200 GB, RAID 5) – for tempdb data files.
  5. Backup to a different SAN or network location and archive to tape.

But at the very least I believe the client should do the following:

  1. Move backups to a different SAN or network location, in order to free up the F drive for SQL use as well as for better DR.
  2. Move log files and tempdb data files to the F drive.
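For the tempdb part of step 2, a minimal sketch using ALTER DATABASE ... MODIFY FILE, assuming the default logical file names (tempdev/templog) and a hypothetical F:\TempDB folder; the move takes effect on the next SQL Server restart:

    -- Point tempdb's files at the new location; the files are
    -- recreated there on the next service restart.
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'F:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDB\templog.ldf');

Moving a user database's log file works the same way, except the database has to be set offline and the .ldf physically copied to F: before bringing it back online.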

Suggestions?

Best Answer

SQL Server 2008 R2 installed on a virtual server

IMO, the SQL Server VM should live with the rest of the VMs. If this device goes kaput (the Cybernetics website says nothing about the number of controllers in this model range, so presumably there's only one), you don't want all your eggs in one basket. Hopefully there are other, better protected devices available.

The SQL Server is read-intensive: data is bulk loaded nightly, data-warehouse style, and the web app is used for looking up and reporting on that data.

You haven't said how big the data warehouse is or what the load-time SLA is, but I strongly suspect RAID 5/6 is not going to be appropriate for that, particularly since 7.2K RPM drives are involved. High drive latency combined with RAID 5/6 is two feet in the grave for write performance.
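To put rough numbers on that (ballpark figures, assuming the usual 75–100 random IOPS per 7.2K SATA drive and the standard RAID 6 write penalty of 6, since each random write costs roughly 3 reads plus 3 writes): 8 drives give maybe 600–800 raw IOPS, which collapses to on the order of 100–130 random write IOPS for the entire array. That's easily saturated by a nightly bulk load competing with anything else on the box.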

The client can also look at making the bulk loading process more efficient; it's likely there are inefficiencies in this area.
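For example (a hedged sketch with hypothetical table and file names, not the client's actual process), minimally logged loads cut log writes considerably when the database is in the SIMPLE or BULK_LOGGED recovery model:

    -- TABLOCK enables minimal logging for the load (given a heap
    -- or empty clustered index and SIMPLE/BULK_LOGGED recovery).
    BULK INSERT dbo.FactSales                  -- hypothetical target table
    FROM 'F:\Staging\factsales.csv'            -- hypothetical source file
    WITH (
        TABLOCK,
        BATCHSIZE       = 100000,
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n'
    );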

  1. Move backups to a different SAN or network location, in order to free up the F drive for SQL use as well as for better DR.
  2. Move log files and tempdb data files to the F drive.

Move the backups for DR: Yes, definitely. If there's space available on another device, use it.

Since the storage is all one big pool of RAID 6, the performance characteristics will be the same regardless of which logical volume is used, so I don't see any point in moving things around unless more space is needed.

I want to suggest that the client reconfigure their SAN as follows:

Aside from moving the backups off this SAN, this configuration will probably make things worse:

  • Using RAID 1 for a single database log file is fine, but as soon as you have more than one active log file on the same spindles, the access pattern becomes random, and performance will nosedive.

  • Reads from and writes to the data files were distributed over all 8 drives before; under the proposed layout, the same load lands on just 3 drives.

I would recommend using all 8 drives in one RAID 10 pool, which is the only RAID level that makes sense given the workload and the available hardware, and it costs only 1 drive's worth of usable space compared to your proposed configuration. There's just not enough wiggle room to micromanage the physical pools here: the whole thing will need to be redone if the initial sizing is wrong, or if the sizing changes drastically over time. One pool gives far more flexibility.
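For the capacity math (assuming C and L share one RAID 1 pair, and D and T are each a 3-drive RAID 5 set, which is the only way the proposed sizes fit on 8 × 600 GB): the proposal yields 600 + 1200 + 1200 = 3.0 TB usable, while RAID 10 across all 8 drives yields 4 × 600 GB = 2.4 TB, exactly one drive's worth less.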

Now, having said all of this, depending on how big the data set is, buying more RAM is usually a really good ROI: SQL Server needs to do fewer physical reads in the first place, which increases performance. It won't help with the write workload, of course.
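One quick way to sanity-check whether more RAM would help (a sketch using a standard DMV; the old "300 seconds" rule of thumb is dated, so treat the value as a trend to watch rather than a hard threshold):

    -- Page life expectancy: how long pages survive in the buffer
    -- pool. Persistently low values suggest memory pressure.
    SELECT object_name, counter_name, cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND object_name LIKE '%Buffer Manager%';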