If doing a hardware upgrade, is the number of disks key to performance?
Yes, because a hard disc - even a SAS one - has a head that takes time to move.
Want a HUGE upgrade?
Kill the SAS discs and go to SATA: plug in enterprise-level SATA SSDs, like the Samsung 843T.
Result? You can get around 60,000 IOPS per drive.
This is why SSDs are killers in the database space, and so much cheaper per IOPS than any SAS drive: a physical spinning disc simply cannot keep up with the IOPS an SSD delivers.
Your SAS discs were a mediocre choice to start with - too large to deliver many IOPS, where for a heavily used database more, smaller discs would have meant a lot more IOPS - but in the end SSDs are the game changer here.
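For a rough sense of the gap, here is a back-of-envelope figure (assumed typical numbers, not measurements of your hardware): a 15K RPM drive has about 2 ms of average rotational latency (half of a 4 ms revolution) plus roughly 3-4 ms of average seek, so each random I/O costs around 5-6 ms, which works out to roughly 170-200 IOPS per spindle. Against the ~60,000 IOPS of a single enterprise SSD, you would need hundreds of spindles just to break even.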
Regarding software / kernel: any decent database will do a lot of IOPS and flush its buffers. The log file needs to be WRITTEN for the basic ACID guarantees to hold. The only kernel tunes you could apply would compromise your transactional integrity - and you can partially get away with that. A RAID controller in write-back mode does exactly this - it confirms a write as flushed even when it is not - but it can do so because the BBU is assumed to save the day when power fails. Anything you do higher up in the kernel, you had better know you can live with the negative side effects.
In the end, databases need IOPS, and you may be surprised to see how tiny your setup is compared to some others here. I have seen databases with 100+ discs just to get the IOPS they needed. But really, today you buy SSDs and size them for capacity - they are so superior in IOPS that it makes no sense to fight this game with SAS drives.
And yes, your IOPS numbers do not look bad for the hardware. Within what I would expect.
Transaction log writes are sequential. Only one of the log files will ever be written to at any one time, so having multiple files - in and of itself - can't possibly change your I/O patterns for that database.
Unless you are getting lucky. For example, you've added a second log file to an SSD or otherwise faster or less busy disk, or split the log files across multiple disks and have done so for multiple databases, and you are observing better I/O now because the log has switched to that file on the faster disk, or is more isolated from your other data/log files. In other words, I believe any observed I/O difference is due entirely to other factors and is merely a coincidence, not due to the fact that you added log files alone. SQL Server is explicitly designed to only use one log file at a time - so how could multiple log files possibly improve log write performance, unless the current log file is on faster / more isolated disk? I think you need to provide better empirical evidence (and in doing so you may discover for yourself the true cause of the improved performance).
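If you want to check that for your own databases, a quick look at where each log file physically lives usually settles it (a sketch; 'YourDatabase' is a placeholder name):

    -- List the log files for the database and where they sit on disk,
    -- to see whether the "new" log file is simply on faster or less busy storage.
    SELECT name,
           physical_name,
           size * 8 / 1024 AS size_mb   -- size is reported in 8 KB pages
    FROM sys.master_files
    WHERE database_id = DB_ID('YourDatabase')
      AND type_desc = 'LOG';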
Please read these posts in full - they were written by a pretty smart guy who worked on the SQL Server storage team for quite some time, so I don't think he's making any of this stuff up for fun:
Also Kimberly Tripp touches on this in a worthwhile article:
Note that none of the 8 steps involves adding a transaction log file. In fact she recommends against it.
There are other perils to having multiple log files, particularly if they are large (think RTO) - and there really is nothing to gain.
Best Answer
As far as not caring about redundancy goes, I think you're crazy. Note that a modern Dell PERC (LSI Logic) controller can get >1 GB/s of sequential transfer off a set of 6 SATA SSDs in RAID 5, and the random rates are also in the hundreds of MB/s if you keep your I/O queue depth deep enough. Note also that there's a huge difference between high-end SSDs and low-end SSDs, particularly for random writes (i.e. probably your update process).
This is a general performance tuning question, so all the normal performance tuning advice applies. Run Profiler on your selects during the day, look at reads, writes, duration, CPU. Use the SSMS-based Standard Report for Top Queries by Total CPU and Total IO during the day. Tune your queries and data structure, starting with the highest impact and working your way down.
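If you prefer pulling those numbers with T-SQL rather than through the report, the plan cache gives a reasonable approximation - a sketch, ordered by total I/O:

    -- Approximate "Top Queries by Total IO" from the plan cache.
    SELECT TOP (20)
           qs.total_logical_reads + qs.total_logical_writes AS total_io,
           qs.total_worker_time AS total_cpu,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY total_io DESC;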
Do the same thing with the update, but separately.
If you think I/O really is the issue, look at io_stall in sys.dm_io_virtual_file_stats, or use Perfmon (Avg. Disk sec/Read and Avg. Disk sec/Write).
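Something along these lines shows the cumulative stalls per file since the instance started (read the stall times relative to the read/write counts):

    -- Cumulative I/O stalls per database file since instance startup.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;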
As a general shotgun approach, if during the day it's 100% read only (selects), you can ALTER DATABASE x SET READ_ONLY to reduce possible locking. Set it to READ_WRITE before the update, and back to READ_ONLY afterwards.
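A minimal sketch of that toggle around the nightly update ('YourDatabase' is a placeholder):

    -- Open the database for the update window, then lock it back down.
    ALTER DATABASE YourDatabase SET READ_WRITE WITH ROLLBACK IMMEDIATE;
    -- ... run the update here ...
    ALTER DATABASE YourDatabase SET READ_ONLY WITH ROLLBACK IMMEDIATE;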
As part of your update, are you rebuilding indexes (which also updates their statistics with FULLSCAN as part of the rebuild)? Or making sure stats are up to date?
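If not, either of these is easy to bolt onto the end of the update job (table name is a placeholder; the rebuild already refreshes that table's index statistics with a full scan):

    -- Rebuild all indexes on the table (also refreshes their statistics).
    ALTER INDEX ALL ON dbo.YourTable REBUILD;

    -- Or, lighter weight: just refresh the statistics.
    UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;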
If you're really having IO issues on pure reads (selects), and you're unwilling or unable to tune your queries and schema, then there's always buying more RAM or increasing the maximum amount of RAM allocated to this instance of SQL Server.
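Raising the instance's cap is an sp_configure change - the 16384 MB below is purely illustrative, size it to your server and leave headroom for the OS:

    -- Raise SQL Server's memory ceiling (value in MB).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 16384;
    RECONFIGURE;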