SQL Server – Why is SQL Server waiting for acknowledgment

Tags: blocking, sql-server, sql-server-2012, wait-types

I have an application that often requests a large set of rows (about 20K) from a SQL view. While it does, other applications can't update the table the data comes from; their UPDATE queries get blocked by the SELECT from the first application.

The wait type is "ASYNC_NETWORK_IO", which I found described as exactly what is happening to me. I quote:

This is usually where SQL Server is waiting for a client to finish
consuming data. It could be that the client has asked for a very
large amount of data or just that it’s consuming it really slowly
because of poor programming.

Clients often process one row at a time, instead of caching the data on the client and
acknowledging to SQL Server immediately.

My question is: why does SQL Server wait for the client to finish consuming that SELECT before allowing updates on the table? Is there a setting or configuration I could change?

Best Answer

SQL Server by default adheres to the ACID database rules. The A in ACID stands for atomicity, meaning that the whole transaction happens, or none of it does. The I stands for isolation, meaning that transactions are isolated from one another, so either all the results from one transaction are visible to another transaction, or none of them are. By default, SQL Server accomplishes this by using locks for both reads and writes.

Reads (SELECTs) obtain a read lock, which prevents write (UPDATE or INSERT) operations on those rows (or pages, or the whole table, depending on what SQL Server's black-magic lock escalation algorithm decides on). In order to preserve isolation, the read locks for a query must be held until all results have been sent or buffered. This prevents the writes from obtaining their locks (or rather, causes them to wait to obtain them), unless the writes are otherwise isolated (see the other answer about snapshot isolation). If the locks were released earlier, some of the results from a subsequent write could end up in the results of a query that started before the write.
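
To make the blocking concrete, here is a minimal sketch, assuming the default READ COMMITTED isolation level and a hypothetical table dbo.Orders standing in for the view's base table (neither name is from the question). The first batch is the slowly consumed read, the second is the write it blocks, and the third shows how the wait and the blocking chain can be observed:

    -- Session 1: a large read that the client consumes slowly.
    -- While results are being streamed, SQL Server can sit in ASYNC_NETWORK_IO
    -- and still hold shared locks on what it is reading.
    SELECT *
    FROM   dbo.Orders;           -- ~20K rows, consumed row by row on the client

    -- Session 2: blocked until the shared locks from session 1 are released.
    UPDATE dbo.Orders
    SET    Status = 'Shipped'
    WHERE  OrderId = 42;

    -- Session 3: observe the wait type and who is blocking whom.
    SELECT session_id, wait_type, blocking_session_id, wait_time
    FROM   sys.dm_exec_requests
    WHERE  session_id > 50;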

There are several solutions for this, each with trade-offs:

  1. Buffer results on the client side (read all rows immediately as quickly as possible and do further processing later). This may require lots of client-side memory if there are lots of results, and will still block updates until the client is done buffering (which is why it must be completely buffered before any subsequent processing).
  2. Buffer results on the server side. You can do this by selecting the data into a temp table and then returning the temp table instead of querying the base table directly (see the sketch after this list). This requires server-side memory and/or disk and could therefore interfere with other server-side operations.
  3. Use WITH (NOLOCK) on the SELECT of the read operation (also shown in the sketch after this list). This tells SQL Server to ignore isolation for this read, which means you could get the original version of one row early in the results and a modified version of another row later in the results, even though both rows were modified by the same transaction and the two versions returned were never in the database at the same time (i.e. the rows were modified during the query, after the first row was returned but before the second was). This option has the least performance cost, but produces non-ACID results by breaking isolation and/or atomicity (depending on how you look at it).
  4. Use snapshot isolation (see the other answer--I'm not an expert with snapshot isolation; there is a brief setup sketch at the end of this answer). Note that I've had significant performance issues with snapshot isolation, and it requires plenty of server-side resources as well.
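
As a rough sketch of options 2 and 3, again using the hypothetical dbo.Orders (adapt the names to your view), you can either copy the result set into a temp table so locks on the base table are only held for the duration of the copy, or accept dirty reads with the NOLOCK hint:

    -- Option 2: buffer server-side, then stream to the client from the copy.
    -- Locks on dbo.Orders are held only for the INSERT...SELECT, not for the
    -- (possibly slow) network transfer to the client.
    SELECT *
    INTO   #OrderBuffer
    FROM   dbo.Orders;

    SELECT * FROM #OrderBuffer;   -- the client can consume this as slowly as it likes

    DROP TABLE #OrderBuffer;

    -- Option 3: read without taking shared locks (dirty reads are possible).
    SELECT *
    FROM   dbo.Orders WITH (NOLOCK);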

There may be other solutions as well, but these are the most common.
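
For completeness, a minimal sketch of option 4. The database name MyDb and table dbo.Orders are placeholders, and row versioning has a tempdb/version-store cost, so test it under your workload first:

    -- One-time database setting: allow the SNAPSHOT isolation level.
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Optionally make READ COMMITTED use row versioning for all queries instead:
    -- ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    -- In the reading session: readers no longer take shared locks,
    -- so the UPDATE in the other application is not blocked.
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    BEGIN TRANSACTION;
        SELECT * FROM dbo.Orders;   -- reads a consistent snapshot from the version store
    COMMIT;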