SQL Azure – Performance of New Servers

azure-sql-database

I have a fairly small database (~200MB) which, when running in the Web edition of SQL Azure, is lightning fast. I switched it to the Basic edition and all of a sudden a group of queries which took less than 1 second to execute is now taking ~25 seconds.
Switching it to a D1 roughly halved the time, to ~12.5 seconds.

Running on-premises on an average server, the same query took ~1 second.

In both tests I was the only one using the database.
I'm shocked that it's running so slowly. I know the new databases work slightly differently, but I wasn't expecting a performance difference of this size.

Does anyone have any good suggestions on how to optimise a database design to improve performance when running in SQL Azure?

If anyone has any suggestions for things to check, I'd love to hear them. My guess is that the problem is the number of queries being executed in one hit.

I'm about to create a view to do all the querying in one hit; it's something I've already run against the database, and even on the Basic edition it's instant.
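
As a rough illustration of the sort of thing I mean (the table and column names below are made up for this example, not my real schema), the idea is to collapse the individual lookups into a single view so the app issues one round trip instead of many:

    -- Hypothetical view consolidating several per-row lookups into one query
    CREATE VIEW dbo.vOrderSummary AS
    SELECT o.OrderId,
           o.OrderDate,
           c.CustomerName,
           SUM(l.Quantity * l.UnitPrice) AS OrderTotal
    FROM dbo.Orders o
    JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
    JOIN dbo.OrderLines l ON l.OrderId = o.OrderId
    GROUP BY o.OrderId, o.OrderDate, c.CustomerName;

    -- The app then runs one query against the view rather than one query per row
    SELECT OrderId, OrderDate, CustomerName, OrderTotal
    FROM dbo.vOrderSummary
    WHERE OrderDate >= '20140101';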

This is only a little test app, so it's easy for me to alter, but I'm concerned that our production apps may suffer from the same issue.

Best Answer

I am the author of the blog post mentioned above by Andrew.

The new service tiers scale (i.e. limit) I/O, CPU and memory by tier; higher tiers provide more resources. There are no documented limits per resource type (only the rather vague DTU figures), though my tests provide some indicative I/O numbers.

UPDATE: I have now tested the memory limits by Service Tier, see: http://cbailiss.wordpress.com/2014/11/11/azure-sql-database-memory-limits-by-service-tier/

In terms of optimising the design, I would first find out which resource you are hitting the limit on. The sys.resource_stats view in the master database has columns that show the percentage of each resource (e.g. CPU) you are using over time in that service tier. If you are able to run a test for at least half an hour, check that view; if one of those columns is maxed out at 100%, you have an idea of where you are bound. Resolving that may or may not be easy or even possible, but it gives you a starting point. sys.dm_db_resource_stats is a newer DMV that holds data for the last hour at 15-second intervals, so it is more useful for shorter tests.
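
For example, something like the following (a sketch only; run the first query in the user database itself and the second in master, and note that the exact column list differs slightly between the two views):

    -- Last hour at 15-second granularity (run in the user database)
    SELECT end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;

    -- Longer history at roughly 5-minute granularity (run in master)
    SELECT start_time,
           end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent
    FROM sys.resource_stats
    WHERE database_name = 'MyTestDb'   -- substitute your own database name
    ORDER BY start_time DESC;

If one of those averages sits at or near 100% for the duration of your test, that is the resource you are bound on.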

If the I/O is the limiting factor, then for a small database I would expect performance to increase dramatically once the data is in the buffer pool. That was a very clear pattern in the tests I undertook and blogged, e.g. see the dramatic performance increase in this sequential read test on Basic Edition: http://cbailiss.files.wordpress.com/2014/07/2014-07-04-seqsel-metrics1.png taken from this post: http://cbailiss.wordpress.com/2014/07/06/microsoft-azure-sql-database-performance-tests-sequential-selects/
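
One rough way to check whether the data is already cached (assuming sys.dm_os_buffer_descriptors is exposed on your database, which I believe it is on newer versions but have not verified on every tier) is to count the buffer pool pages held for the current database:

    -- Approximate amount of the current database sitting in the buffer pool (8KB pages)
    SELECT COUNT(*) * 8 / 1024.0 AS buffer_pool_mb
    FROM sys.dm_os_buffer_descriptors
    WHERE database_id = DB_ID();

For a ~200MB database, if that number is close to the database size, repeated runs of the same queries should be much faster than the first, cold run.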

Of course, if/when the buffer pool is cleared, performance will drop off again. In theory that should happen less with the new tiers, since the guaranteed performance levels are supposed to prevent the noisy-neighbour effects and load-balancing database failovers that happen in Web/Business.
However, as far as I know, there are no documented numbers or guarantees for the buffer pool size allocated to each database in the new service tiers.

Hope that helps

Chris