SQL Server – What throughput should I expect with MPIO?

sql server

Dell PowerEdge 2950 with two 1 Gbps NICs going to two 1 Gbps ports on a switch, which then goes to a NetApp with four 1 Gbps NICs that present as one virtual interface. 24 drives, 7,200 RPM SATA, NetApp RAID-DP. I've mapped each host NIC to the NetApp using MPIO in the Microsoft iSCSI initiator. Testing with SQLIO, my write throughput appears reasonable at about 200 MB/s, but my reads are closer to 100 MB/s.

Shouldn't my reads be closer to 200 MB/s, just like my writes? Is this a configuration problem, or is there a fundamental storage issue I don't understand?

[screenshot: SQLIO throughput results]

Update:
Here are IOPS for the random workload. The reads make sense; I'm not sure what to make of 20,000 for the writes, though. SAN cache is 3.2 GB. SQLIO tests are against a 25 GB file.

[screenshot: SQLIO random-workload IOPS]
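For reference, an IOPS figure only translates into MB/s once you fix an I/O size. A minimal sketch of that conversion, assuming an 8 KB block size (a common SQLIO test size; the actual size isn't stated above):

```python
# Convert an IOPS figure to throughput for a given I/O size.
# The 8 KB default here is an assumption, not taken from the test above.
def iops_to_mb_per_s(iops, block_kb=8):
    return iops * block_kb / 1024  # KB/s -> MB/s

print(iops_to_mb_per_s(20_000))  # 156.25 MB/s at 8 KB I/Os
print(iops_to_mb_per_s(2_200))   # ~17 MB/s at 8 KB I/Os
```

So 20,000 write IOPS at 8 KB would already be in the same ballpark as the ~200 MB/s sequential write result.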

Best Answer

Disk writes are actually going to memory (NVRAM) on the filer, to be flushed to disk later. On an idle filer these will be incredibly fast, and 20,000 IOPS is quite believable (you'll see similar figures from most SSDs).

Reads, on the other hand, have to come from disk unless the blocks already happen to be in the filer's read cache (which, unlike the NVRAM write cache, is volatile memory).

It's hard to pin storage vendors down on IOPS for spinning disks, but for a 7,200 RPM drive, 80-120 IOPS is quite believable. Considering that you've probably lost a couple of disks to NetApp's RAID-DP and/or spares, 2,200 IOPS is close to what you could expect from 22 disks delivering around 100 IOPS each.
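The arithmetic behind that estimate can be sketched as follows; the two-disk parity overhead and the 100 IOPS-per-spindle figure are the assumptions named above, not measured values:

```python
# Back-of-envelope random-IOPS estimate for the aggregate.
# Assumptions: 2 disks lost to RAID-DP double parity,
# ~100 random IOPS per 7,200 RPM SATA spindle.
TOTAL_DISKS = 24
PARITY_DISKS = 2
IOPS_PER_DISK = 100

data_disks = TOTAL_DISKS - PARITY_DISKS
aggregate_iops = data_disks * IOPS_PER_DISK
print(aggregate_iops)  # 2200
```

That lines up with the ~2,200 random-read IOPS in the screenshot, which is why the read numbers "make sense".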

This may not fully explain your read speeds (your disks may not be delivering the full 2,200 IOPS during a sequential read), but it should at least help explain your write performance.