I have two partitioned tables (32 partitions / 16 files / 8 filegroups): the first table has 56 columns and the second has 5 columns. One requirement of the application is to test the stored procedures against a future workload. I did this in JMeter, putting all the procedures in one transaction controller:
GetSomeDataFromNonPartitioneTable1
GetSomeDataFromNonPartitioneTable2
InsertNewTransaction (inserts a new row into both partitioned tables)
GetOneRecordFromPartitionedTable
UpdatePartitionedTable
At the beginning of the test (0 rows in the partitioned tables) I got a performance of around 1300 operations per second ("one operation" means the set of procedures above running one after the other). I ran the same test (with the same number of virtual users) once there were around 4 000 000 records in the partitioned tables (4 000 000 per table) and got around 1050 operations per second. Going deeper, the average response times were:
Test 1 (0 rows in the partitioned tables):
GetSomeDataFromNonPartitioneTable1 – 22 ms
GetSomeDataFromNonPartitioneTable2 – 22 ms
InsertNewTransaction (inserts a new row into both partitioned tables) – 160 ms
GetOneRecordFromPartitionedTable – 22 ms
UpdatePartitionedTables – 136 ms
Test 2 (around 4 000 000 rows in the partitioned tables):
GetSomeDataFromNonPartitioneTable1 – 25 ms
GetSomeDataFromNonPartitioneTable2 – 25 ms
InsertNewTransaction (inserts a new row into both partitioned tables) – 217 ms
GetOneRecordFromPartitionedTable – 23 ms
UpdatePartitionedTables – 177 ms
My questions:
- Why does the bigger table decrease performance so much?
- What can I do to avoid it? The bigger partitioned table has two nonclustered indexes plus the clustered index, while the smaller one has only the clustered index.
Best Answer
Assuming you're using Microsoft SQL Server (as your tag implies): if you're using JMeter, make sure your tests run directly against SQL Server and don't involve any middleman that could affect the results. I'm not personally familiar with JMeter, but the biggest difference between your two tests is about 57 milliseconds, which is small enough to be affected by almost any variable in your process.
Although the difference between your two InsertNewTransaction tests is about 35%, it's still only 57 milliseconds, which is very small. It could even come down to the difference between a single run of one test and its second run, or to outside factors like temporary hardware blips. When you're dealing with such granular time measurements, you'd have to run a large number of instances of each test and average them to really compare.
That being said, some general advice:
- INSERTs take longer when there are more indexes on a table. So, when applicable, reducing the number of low-use or unused indexes can help your INSERT performance.
- Partitioning isn't meant to improve performance from a SELECT or INSERT perspective. Rather, it gives you better granularity for maintaining the data and can be used smartly to improve the performance of maintenance tasks (such as index maintenance).
- If the table with 56 columns is being slow, then normalizing it, if possible, can help your INSERT performance (especially if you're not always inserting data for all 56 fields) and can also help your SELECT performance.
- If you need further performance-critical changes, you can look into whether Memory-Optimized Tables fit your use cases.
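As a starting point for the first suggestion (finding low-use indexes), a rough sketch of the kind of query you could run is below. It uses the real DMV sys.dm_db_index_usage_stats joined to sys.indexes; the 10:1 write-to-read threshold is just an illustrative assumption, and note that the DMV's counters reset on instance restart, so judge the results against how long the server has been up:

```sql
-- Sketch: nonclustered indexes whose maintenance cost (writes) far
-- outweighs their usefulness (reads) since the last instance restart.
-- The 10x threshold is an arbitrary example value, not a rule.
SELECT  o.name AS table_name,
        i.name AS index_name,
        s.user_seeks + s.user_scans + s.user_lookups AS reads,
        s.user_updates AS writes
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
JOIN    sys.objects AS o
        ON o.object_id = i.object_id
WHERE   s.database_id = DB_ID()          -- current database only
        AND i.type_desc = 'NONCLUSTERED' -- keep the clustered index out of scope
        AND s.user_updates > 10 * (s.user_seeks + s.user_scans + s.user_lookups)
ORDER BY s.user_updates DESC;
```

Indexes that show up here repeatedly over a representative workload window are candidates for dropping, which should directly reduce the cost of your InsertNewTransaction and UpdatePartitionedTables procedures.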