SQL Server Object Concurrency

Tags: concurrency, sql-server, sql-server-2014

Background: the database is SQL Server 2014, and the design is an invoice order system.

I have a few questions to confirm my thoughts on SQL Server concurrency. As a brief background, the developer who created this model built every order inventory table (the items that are part of an order) and order table (the orders themselves) with an InUse field. When a salesperson updates an order, for example by correcting the name of an ordered item (say a book listed as Harry Poter that needs to be renamed to Harry Potter), anyone else who tries to read that data while the update is in progress cannot access the object.

The actual database design works like this: the SalesOrder table has a SalesPersonId and an InUse column; when InUse is set to 1, no one else may access that OrderId. This means that for every read, update, insert, or delete there are three transactions: first an update to set InUse to 1 so that no one can access the specific OrderId, then the actual transaction, then a final update to set InUse back to 0 so people can access that OrderId again. That means more reads and writes, not fewer: hello performance nightmare as it scales!
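
To make the pattern concrete, here is a minimal T-SQL sketch of that three-step dance. The SalesOrder table and its InUse and OrderId columns come from the question; the SalesOrderItem table, its columns, and the variable values are assumptions for illustration.

    -- Sketch only: SalesOrderItem and the variable values are assumed.
    DECLARE @OrderId int = 1, @OrderItemId int = 1;

    -- Step 1: flag the order as in use so no other session touches it.
    UPDATE dbo.SalesOrder
    SET    InUse = 1
    WHERE  OrderId = @OrderId;

    -- Step 2: the actual change, e.g. correcting the item name.
    UPDATE dbo.SalesOrderItem
    SET    ItemName = 'Harry Potter'
    WHERE  OrderId = @OrderId
      AND  OrderItemId = @OrderItemId;

    -- Step 3: clear the flag so others can access the order again.
    UPDATE dbo.SalesOrder
    SET    InUse = 0
    WHERE  OrderId = @OrderId;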

The developer explained that this solves concurrency issues when two users try to write to, or update items in, an order. Huh?

From my knowledge of SQL Server, if two users try to write to the same order, for example each adds a new item, both rows will be inserted: each insert is written to the transaction log and then to the data pages on disk. If the inserts are exactly the same, we should design the tables with unique constraints to prevent duplicates (to be fair, this developer uses no constraints and no foreign keys at all), so the second insert would be rejected. In other words, two users can insert into the same table concurrently, and inserts generally complete quickly.
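
For example, a unique constraint like the following would reject a duplicate item on the same order. The table and column names here are assumed for illustration, since the question does not give the exact schema.

    -- Hypothetical constraint; the real table/column names may differ.
    ALTER TABLE dbo.SalesOrderItem
    ADD CONSTRAINT UQ_SalesOrderItem_OrderId_ItemName
        UNIQUE (OrderId, ItemName);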

In addition, if two users update the same record, say OrderItemId 1 (an item of a specific OrderId), changing the name from "Harry Poter" to "Harry Potter", both updates will run, but the second doesn't matter because it writes the same value. Only when two different updates hit the same row, say update one writes "Harry Potter" and update two writes "Hary Potter", is there a real conflict, because the results differ. However, blocking access to the object (his design) doesn't solve this problem either: user one's update may be the wrong one, and it locks out user two (or vice versa).
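
For reference, this is roughly what SQL Server itself does when two sessions update the same row: the second session's UPDATE waits on the first session's row lock, and whichever commits last wins. The sketch below reuses the assumed SalesOrderItem schema from above.

    -- Session 1 (assumed schema)
    BEGIN TRANSACTION;
    UPDATE dbo.SalesOrderItem
    SET    ItemName = 'Harry Potter'
    WHERE  OrderItemId = 1;
    -- A second session running the same UPDATE with 'Hary Potter'
    -- now blocks on this row's exclusive lock...
    COMMIT TRANSACTION;
    -- ...then proceeds, and its value overwrites this one (a lost update).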

I don't get this design; to me, it reflects a complete misunderstanding of SQL Server concurrency and of how SQL databases handle transactions, and most of what he has built is already handled by SQL Server itself (in addition to not solving the problem of wrong updates). However, maybe I'm missing something, and someone can enlighten me as to whether this is standard practice with SQL Server (I've never seen this design before).

Best Answer

This is a perfectly acceptable way to implement pessimistic locking without affecting concurrency.

You need to ensure, at the logical level, that whenever one session edits a record (which can translate to multiple rows in multiple tables at the physical level), no other session can modify any component of that logical record -- this is necessary to ensure consistency of your business data. To enable this you need to place some sort of a lock on all affected table rows. This, however, presents a problem in an interactive application: humans are inherently slower than computers, and they tend to look out of the window, make phone calls, go for a smoke or out for lunch before finishing the edit, which can leave records locked for a long time.

If you were to rely on physical U or X row locks in a pessimistic locking scenario, you might sometimes prevent other concurrent transactions from reading (or even reading past) the locked rows, thus affecting overall application performance. A safer alternative, from the concurrency point of view, is to logically mark certain rows as being edited by some session (by setting the InUse flag in your example) without placing physical locks on them.
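
As a rough illustration (not necessarily how the application in the question does it), such a logical lock can be claimed atomically with a single UPDATE, so two sessions cannot mark the same order at once. The SalesOrder/InUse/OrderId names follow the question; the variable value and messages are assumed.

    -- Sketch: claim the logical lock in one statement.
    DECLARE @OrderId int = 1;

    UPDATE dbo.SalesOrder
    SET    InUse = 1
    WHERE  OrderId = @OrderId
      AND  InUse   = 0;              -- succeeds only if no one else holds it

    IF @@ROWCOUNT = 0
        PRINT 'Order is already being edited by another session.';
    ELSE
        PRINT 'Order claimed; set InUse back to 0 when the edit is finished.';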

The cost of these extra updates to InUse in an interactive application is negligible since, as noted, the human component is still much slower.

This, of course, only works if all applications accessing the database know about and honour these logical locks.