SQL Server – is there a way to configure Azure SQL databases for high-insert scenarios?

azure-sql-database, sql-server

I'm working on a legacy application that uses SQL Server as a backing store. There's a portion of the app that does a ridiculous number of inserts to a single table. Re-architecting this portion of the app represents a large time investment, probably two weeks for one developer. It works as is, so we're, well, "happy" isn't the right word; I guess we grudgingly accept it.

However, that portion of the app performs terribly when connected to an Azure database. I'm talking about a sudden 4x increase in the time it takes to insert records. Each cycle inserts about 4,000 rows into this table via calls to a stored proc (4,000 separate calls; it's awful, I know). On a local SQL Server this takes about 5 minutes, but on Azure it spikes to over 15 minutes, and our users find that unacceptable.
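For context, the pattern is roughly the one sketched below (the procedure and parameter names are made up; the real proc isn't shown here):

```sql
-- Hypothetical illustration of the current pattern: one stored procedure call
-- per row, repeated ~4,000 times per cycle. By default each call is its own
-- auto-committed transaction.
EXEC dbo.usp_InsertWidget @ParentId = 42, @IsActive = 1, @Quantity = 7; -- row 1
EXEC dbo.usp_InsertWidget @ParentId = 42, @IsActive = 0, @Quantity = 3; -- row 2
-- ... and so on, ~4,000 calls per cycle ...
```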

The table has about 20 columns, mostly bits and ints. It has one primary key (int) and no other indexes. It has 2 foreign keys. Including the full CREATE script here seems excessive, but I can if needed.
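For a rough sense of the shape, here's a trimmed-down, made-up sketch (not the real script):

```sql
-- Hypothetical approximation of the table described above; all names and the
-- referenced parent tables are invented for illustration.
CREATE TABLE dbo.Widget
(
    WidgetId int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Widget PRIMARY KEY,
    ParentId int NOT NULL CONSTRAINT FK_Widget_Parent REFERENCES dbo.Parent (ParentId),
    StatusId int NOT NULL CONSTRAINT FK_Widget_Status REFERENCES dbo.Status (StatusId),
    IsActive bit NOT NULL,
    Quantity int NOT NULL
    -- ... roughly 15 more bit/int columns ...
);
```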

What I'm wondering is if there's any way to "tune" an Azure database to handle large numbers of inserts better. Altering the code isn't a great option right now, but it is the right choice down the road.

This is Azure SQL Database v12. We use S2 or S3 instances and don't see high CPU or high DTU usage, other than spikes for a few seconds here and there. The databases are all under 1 GB.

Best Answer

Table compression is now supported in V12. You can compress the table to reduce the IO (as you mentioned, CPU is not a problem). Also try to batch the requests, since each commit has overhead. You are only talking about roughly 20 * 4 = 80 bytes per row, so 80 * 4,000 ≈ 320 KB of data per cycle, well under half a megabyte. That should be doable on both S2 and S3 if you tune the app. And since, as you mentioned, you are not hitting the DTU limits, concurrent connections can help too.
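If it helps, enabling compression is a one-line rebuild against the (hypothetical) table from the question; whether ROW or PAGE wins is something to test on a copy first:

```sql
-- Rebuild the table with compression to cut the IO per page.
-- PAGE usually compresses more than ROW but uses a bit more CPU.
ALTER TABLE dbo.Widget REBUILD WITH (DATA_COMPRESSION = PAGE);
```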
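On the batching side, a minimal sketch (object names are invented): the smallest code change is to wrap each cycle's existing calls in one explicit transaction, so there is a single commit instead of ~4,000; the bigger but better change is to send the whole batch in one call using a table-valued parameter.

```sql
-- Option 1: one commit per cycle instead of one per row.
BEGIN TRANSACTION;
    EXEC dbo.usp_InsertWidget @ParentId = 42, @IsActive = 1, @Quantity = 7;
    -- ... the rest of the cycle's ~4,000 calls ...
COMMIT TRANSACTION;

-- Option 2: pass all rows at once via a table-valued parameter
-- and do a single set-based insert.
CREATE TYPE dbo.WidgetRow AS TABLE
(
    ParentId int NOT NULL,
    IsActive bit NOT NULL,
    Quantity int NOT NULL
    -- ... remaining columns ...
);
GO
CREATE PROCEDURE dbo.usp_InsertWidgetBatch
    @Rows dbo.WidgetRow READONLY
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Widget (ParentId, IsActive, Quantity)
    SELECT ParentId, IsActive, Quantity
    FROM @Rows;
END;
```

From the client, Option 2 is a single round trip per cycle instead of ~4,000, on top of avoiding the per-commit overhead.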