Sql-server – SQL Server Latching from Multithreading/Parallel Inserts and Hash Partitioning

concurrency, database-design, hashing, partitioning, sql-server

We are performing parallel, multithreaded inserts into SQL Server tables and want to reduce latching.

What are the disadvantages of using hash partitioning to reduce latching? Is the main one slower queries, because queries essentially have to touch all of the split-up partitions?

The workload is around 120 table inserts per second; this is a financial system.

Other notes: SQL Server 2016. The system will use around 50 GB of SSD space per year.
The server currently has 50 processor cores and 150 GB of RAM.

The platform is not built yet, so there is no baseline to test against, but I need to develop a testing plan and strategies.

Hash Partitioning Example:
http://www.madeiradata.com/how-to-solve-the-tail-insert-problem-2/

--9 boundary values (0..8) create 10 partitions; HashId = Id % 9 only
--ever produces 0..8, so inserts cycle round-robin across 9 of them
CREATE PARTITION FUNCTION pf_hash (TINYINT) 
AS RANGE LEFT FOR VALUES (0,1,2,3,4,5,6,7,8);

--map every partition to the same filegroup
CREATE PARTITION SCHEME ps_hash 
AS PARTITION pf_hash ALL TO ([PRIMARY]);

CREATE TABLE dbo.UserEntries_RegularWithHash 
(   
    Id BIGINT IDENTITY NOT NULL,
    UserId INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL,
    --persisted computed column used as the partitioning key
    HashId AS CAST(Id % 9 AS TINYINT) PERSISTED NOT NULL,
    CONSTRAINT PK_UserEntries_RegularWithHash 
        PRIMARY KEY CLUSTERED (Id, HashId)
) 
ON ps_hash(HashId);
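
To sanity-check that inserts really do spread across the partitions, one option (a sketch; the table is the one above, and the sample values and row count are purely illustrative) is to load some test rows and group by the $PARTITION function:

--load some illustrative test rows
INSERT INTO dbo.UserEntries_RegularWithHash (UserId, CreatedDate)
SELECT TOP (1000) ABS(CHECKSUM(NEWID())) % 100, SYSUTCDATETIME()
FROM sys.all_objects;

--count rows per partition
SELECT $PARTITION.pf_hash(HashId) AS partition_number,
       COUNT(*) AS row_count
FROM dbo.UserEntries_RegularWithHash
GROUP BY $PARTITION.pf_hash(HashId)
ORDER BY partition_number;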

Best Answer

Queries that don't specify the partitioning column will need to touch all partitions. This is particularly an issue with a table partitioned on a computed hash column, because the hash value isn't something queries commonly specify. You can add the HashId predicate yourself, but it's not natural to do so. Examples:

--touches all 10 partitions
SELECT *
FROM dbo.UserEntries_RegularWithHash
WHERE Id = @Id;

--touches 1 partition
SELECT *
FROM dbo.UserEntries_RegularWithHash
WHERE Id = @Id
AND HashId = CAST(@Id % 9 AS TINYINT);

Latch contention against a non-partitioned table with an incremental key generally occurs only at very high insert rates.
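
If you want to confirm during load testing whether last-page insert contention is actually happening, one way (a sketch, not the only approach) is to watch for PAGELATCH waits while the insert threads run:

--sessions currently waiting on page latches; resource_description
--identifies the database:file:page being contended
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%';

--cumulative latch wait statistics since the last restart or clear
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'
ORDER BY wait_time_ms DESC;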

Latch contention will not happen on a healthy machine at 120 inserts per second. David Browne-Microsoft suggested 10,000 inserts per second as the point at which latch contention becomes a concern. That's probably as good a SWAG as any, since it will depend on factors such as row size and hardware. I think one can assume a rate of several thousand inserts per second will be needed before latch contention is a consideration. At that point you might consider In-Memory OLTP tables and/or bulk/batch inserts.
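
If the insert rate ever does reach that level, a memory-optimized table avoids page latches entirely because In-Memory OLTP uses latch-free structures. A minimal sketch, assuming SQL Server 2016 or later; the filegroup name, file path, table name, and bucket count are illustrative, not recommendations:

--the database needs a memory-optimized filegroup and container first
ALTER DATABASE CURRENT
ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE CURRENT
ADD FILE (NAME = 'imoltp_data', FILENAME = 'D:\Data\imoltp_data')  --illustrative path
TO FILEGROUP imoltp_fg;

CREATE TABLE dbo.UserEntries_InMemory
(
    Id BIGINT IDENTITY NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    UserId INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);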

I've heard it said that premature optimization is the root of all evil. In this case, I would certainly not introduce partitioning solely to avoid latch contention.