I experimented with various configurations of SQL Server databases and ended up setting READ_COMMITTED_SNAPSHOT to ON while leaving ALLOW_SNAPSHOT_ISOLATION set to OFF.
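For reference, both options are set per database with ALTER DATABASE, and their current state can be checked in sys.databases (the database name MyDatabase below is a placeholder):

```sql
-- Enable statement-level row versioning for READ COMMITTED queries (RCSI).
-- Requires exclusive access: no other open connections to the database.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Leave transaction-level snapshot isolation disabled.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION OFF;

-- Verify both settings.
SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM   sys.databases
WHERE  name = 'MyDatabase';
```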
I noticed that when enabling this, many queries got a lot faster. I'm still using the default READ COMMITTED
isolation level to connect to the database.
What is actually happening here? I would think that when ALLOW_SNAPSHOT_ISOLATION is OFF, setting READ_COMMITTED_SNAPSHOT to ON wouldn't have any effect… I'm still not actually using snapshot isolation, or am I? Can anybody please explain? I'm confused.
I tried researching this topic online, but whenever I see READ_COMMITTED_SNAPSHOT being used, it's always together with ALLOW_SNAPSHOT_ISOLATION, which I didn't enable.
Best Answer
Dan noted this:

In addition, READ_COMMITTED_SNAPSHOT (henceforth called RCSI) will only give you some properties of optimistic concurrency.

RCSI and Snapshot Isolation (SI) are two distinct features, and only the first is in play here. With RCSI turned on, all queries running under the default READ COMMITTED isolation level are promoted to RCSI, where readers don't block writers and writers don't block readers. The speed-up you're seeing is most likely due to reduced waits on blocking: those queries now read row versions from the optimistic concurrency version store instead of blocking and waiting on locks. This may or may not be giving you the desired results in your queries.
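As a sketch of why the blocking disappears under RCSI (the table dbo.Accounts and its values are purely illustrative), consider two concurrent sessions:

```sql
-- Session 1: open a transaction that modifies a row and holds its lock.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
-- (transaction left open; exclusive lock held on the row)

-- Session 2: a plain READ COMMITTED query against the same row.
-- With READ_COMMITTED_SNAPSHOT OFF, this SELECT blocks until session 1
-- commits or rolls back; with it ON, the SELECT returns immediately,
-- reading the last committed version of the row from the version store.
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

-- Session 1: finish.
COMMIT;
```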
See above: you are using row versioning because RCSI is turned on (the option you did in fact enable), but SI is never used automatically, even when ALLOW_SNAPSHOT_ISOLATION is ON. To use SI, you need to explicitly run SET TRANSACTION ISOLATION LEVEL SNAPSHOT (or equivalent). For more information, see Row Versioning-based Isolation Levels in the SQL Server Database Engine in the product documentation.
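A minimal sketch of explicitly opting in to SI (again with placeholder database and table names) would look like this; note that ALLOW_SNAPSHOT_ISOLATION must be ON first, which is exactly the option the question left OFF:

```sql
-- Snapshot isolation must be allowed at the database level first.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then each session that wants transaction-level snapshot semantics
-- must request it explicitly before starting the transaction.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Every statement in this transaction sees the same snapshot of
    -- committed data as of the moment the transaction started.
    SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;
COMMIT;
```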
Other useful details about gotchas and caveats when using RCSI/SI can be found in these articles: