I have an SSD that, in an IOMeter test, shows throughput of over 200 MB/s. However, when I run any SQL query from the local machine, Windows Resource Monitor never shows disk IO above 7 MB/s. This holds true even for queries that take over 2 minutes to run. What could the bottleneck be that limits it to only 7 MB/s from the SSD?
I'm running:
- Windows Server 2012 Standard
- SQL Server 2008 R2
- Intel i7-3820
- 32 GB of RAM
- SanDisk SSD
Best Answer
From the comments chain, it looks like you're interpreting
ASYNC_NETWORK_IO
waits to mean that the problem is related to the network. It (typically) isn't. As @MartinSmith hinted at (twice), the most likely explanation is that SSMS, or whichever application you're using, is not consuming the results as fast as SQL Server is serving them. Follow either of the suggested methods to remove the consumption of the rows from your measurement and you'll get a true(r) picture of the maximum IO throughput.
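One such method is to assign the columns to variables, so every row is read server-side but nothing is sent to the client. A minimal sketch (the table and column names here are placeholders for your own schema):

```sql
-- Read every row but discard the results server-side, so client
-- rendering speed (and ASYNC_NETWORK_IO waits) drop out of the test.
-- dbo.SomeTable, Col1 and Col2 are hypothetical placeholders.
DECLARE @c1 INT, @c2 VARCHAR(100);

SELECT @c1 = Col1,
       @c2 = Col2
FROM dbo.SomeTable;
```

With the client out of the picture, the elapsed time reflects how fast SQL Server can actually read and process the rows.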
Just in case you haven't already, you'll obviously need to run
DBCC DROPCLEANBUFFERS
to ensure the data is actually read from disk rather than from the buffer cache. The usual caveats of "test environments only, don't do this in an active live environment" etc. apply. Homing in on a couple of your other comments:
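For reference, a minimal sketch of that step; running a CHECKPOINT first flushes dirty pages so that dropping the clean buffers actually leaves the buffer pool empty:

```sql
-- Flush dirty pages to disk, then empty the buffer pool so the
-- next query has to read from the SSD. Test environments only!
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
```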
What exactly are we testing here, how and why? If your 9 million row query is anything other than a
SELECT * FROM dbo.SomeTable
then there are 1,001 factors that come into play, other than just raw IO throughput. Your Intel i7-3820 is a 4-core processor. If your test query doesn't generate a parallel plan, I'd be surprised if you could push more than 20% CPU utilisation out of the system.
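You can check whether the plan went parallel by looking at the actual execution plan, or nudge the optimiser with a MAXDOP hint. Note the hint only *allows* parallelism up to that degree; the optimiser still decides based on cost. A sketch (the table name is a placeholder):

```sql
-- Allow the query to use up to all 4 cores. Whether the optimiser
-- actually produces a parallel plan still depends on its costing.
SELECT *
FROM dbo.SomeTable  -- hypothetical table name
OPTION (MAXDOP 4);
```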
The 3 minutes to return 9 million rows is very suspicious and suggests we're not getting a full picture of what you're testing. My guess would be that this is a case of a sub-optimal (non-parallel) query plan, stuffed full of nested-loop operators pulling millions of rows, i.e. not just a single-table
SELECT
. To verify the IO consumption, I suggest a plain
SELECT *
against a single table, to test just the IO through SQL Server.
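Putting the pieces together, one possible end-to-end IO test might look like the following; dbo.SomeTable and its columns are placeholders for a large table of your own:

```sql
-- 1. Start from a cold cache (test server only).
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

-- 2. Report logical/physical reads and timings in the Messages tab.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- 3. Scan the table, discarding rows into variables so neither
--    SSMS rendering nor the network limits the measurement.
DECLARE @c1 INT, @c2 VARCHAR(100);
SELECT @c1 = Col1, @c2 = Col2
FROM dbo.SomeTable;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

If the physical reads reported here still only run at ~7 MB/s with no client in the loop and no complex plan, then the disk subsystem genuinely becomes the suspect; otherwise the bottleneck was elsewhere all along.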