There's no need to check for disk space when allocating a new extent to an object. SQL Server already owns that space on disk: it knows which pages within its data files are allocated and which aren't, so there's no need to verify that we own the pages within the extent, because we know for a fact that we already do. SQL Server simply allocates space in the buffer pool and writes the data into memory, then overwrites whatever is in that space on disk when a checkpoint occurs.
The only time SQL Server bothers to check how much free space is on the disk is when it is actually expanding the data file.
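To illustrate the distinction, a query like the following sketch (using sys.database_files and FILEPROPERTY, run in the context of the database of interest) shows how much of each data file SQL Server has already allocated internally; only growth beyond the file's current size touches new disk space:

```sql
-- Sketch: compare each data file's size with the space allocated inside it.
-- Both size and the 'SpaceUsed' property are reported in 8 KB pages,
-- so dividing by 128 converts to MB.
SELECT
    name,
    size / 128 AS file_size_mb,
    FILEPROPERTY(name, 'SpaceUsed') / 128 AS allocated_within_file_mb
FROM sys.database_files;
```

Extent allocations consume the gap between these two numbers without any operating-system disk-space check.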
If the UPDATE statement is shown in Profiler with no error, does this mean that it should have updated the database?
Yes, but whether the change is visible to other database connections depends on whether it has been committed, and on the transaction isolation level in use by the other connection. To be clear, the SQL:BatchCompleted Profiler event fires when a T-SQL batch completes; it does not mean that any containing transaction has also committed.
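For example, a batch like the following (table, column, and key names are placeholders) completes without error, so Profiler shows SQL:BatchCompleted for it, yet nothing has been committed:

```sql
BEGIN TRANSACTION;

UPDATE dbo.SomeTable          -- hypothetical table name
SET SomeColumn = N'new value' -- hypothetical column
WHERE SomeKey = 1;

-- No COMMIT here: the batch ends, SQL:BatchCompleted fires,
-- but the transaction stays open and its exclusive lock is still held.
```

Until the connection issues COMMIT (or ROLLBACK, or disconnects), other readers either block on that lock or see the pre-update row version, depending on their isolation level.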
What could be causing the SELECT statement to abort (considering that it runs fine when I try it in SQL Server Management Studio)?
In your case, the update was performed inside a transaction that was not committed. The SELECT statement was issued on a separate database connection under an isolation level that does not permit reading uncommitted data. From the description given, the SELECT was blocked by the exclusive lock held by the uncommitted change.
The fact that you were able to issue the SELECT and have it return the pre-update version of the row means your Management Studio session was using a row-versioning isolation level, almost certainly READ COMMITTED SNAPSHOT ISOLATION (RCSI).
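One way to confirm this (a quick check against sys.databases) is to look at whether the READ_COMMITTED_SNAPSHOT database option is enabled, which makes READ COMMITTED reads use row versions instead of blocking:

```sql
-- 1 means READ COMMITTED requests in this database use row versioning (RCSI).
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = DB_NAME();  -- or name the database you are investigating
```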
The SELECT issued from the application must be running under a different isolation level, one that does not use row versioning and that blocks when it encounters an exclusive lock, perhaps SERIALIZABLE. You should check the application code to determine this.
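You can also check from the server side. As a sketch, DBCC USEROPTIONS reports the effective isolation level for the current session, and sys.dm_exec_sessions exposes it for any other session (the session_id below is a placeholder for the application's session):

```sql
-- Effective SET options, including isolation level, for the current session:
DBCC USEROPTIONS;

-- Isolation level for another session (replace 53 with the app's session_id;
-- the numeric value maps 1=Read Uncommitted, 2=Read Committed,
-- 3=Repeatable Read, 4=Serializable, 5=Snapshot):
SELECT session_id, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = 53;
```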
If you know of a way to show uncommitted update calls in Profiler (rather than an error-free update followed by an abort on a select), could you add this as an answer please? It would help a lot when I am debugging this code.
There are lots of options here. You might also find tracing the SQLTransaction event class sufficient for your needs. Alternatively, there are many system views and commands to help you see uncommitted transactions. You may find a query like the following useful to show details of open user transactions:
SELECT
    DES.session_id,
    DES.[host_name],
    DES.[program_name],
    DES.login_name,
    DES.last_request_start_time,
    DES.reads,
    DES.writes,
    transaction_isolation_level =
        CASE DES.transaction_isolation_level
            WHEN 0 THEN 'Unspecified'
            WHEN 1 THEN 'Read Uncommitted'
            WHEN 2 THEN 'Read Committed'
            WHEN 3 THEN 'Repeatable Read'
            WHEN 4 THEN 'Serializable'
            WHEN 5 THEN 'Snapshot'
        END,
    DES.open_transaction_count,
    DTDT.database_transaction_begin_time,
    transaction_state =
        CASE DTDT.database_transaction_state
            WHEN 1 THEN 'Not initialized'
            WHEN 3 THEN 'Initialized'
            WHEN 4 THEN 'Open and has generated log records'
            WHEN 5 THEN 'Prepared'
            WHEN 10 THEN 'Committed'
            WHEN 11 THEN 'Aborted'
            WHEN 12 THEN 'Being committed'
        END
FROM sys.dm_exec_sessions AS DES
JOIN sys.dm_tran_session_transactions AS DTST
    ON DTST.session_id = DES.session_id
JOIN sys.dm_tran_database_transactions AS DTDT
    ON DTDT.transaction_id = DTST.transaction_id
WHERE
    DES.is_user_process = 1
    AND DTST.is_user_transaction = 1
    AND DTDT.database_transaction_type = 1 -- read/write transactions
    AND DES.open_transaction_count > 0;
The system views used in that code are all documented in Books Online:
sys.dm_exec_sessions
sys.dm_tran_session_transactions
sys.dm_tran_database_transactions
You may also find DBCC OPENTRAN useful as a quick way to see the oldest active transaction.
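As a usage sketch, DBCC OPENTRAN reports the oldest active transaction, including the session (SPID) that owns it and when it started:

```sql
-- Oldest active transaction in the current database:
DBCC OPENTRAN;

-- Or name a database explicitly ('YourDatabase' is a placeholder):
DBCC OPENTRAN ('YourDatabase');
```

If it reports a transaction that has been open for a long time with no corresponding activity, that is a likely candidate for a missing COMMIT in the application.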
Best Answer
Here's the pre-Extended Events way to do this.
Instead of using Profiler interactively, use it to generate a server-side SQL Trace that writes to a file. After running the trace for a while, you can query the trace files directly and/or load them into a table and query that. Find a session that includes the rollback, then query for all the events for that session, in order.
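The load-and-query step can be sketched with sys.fn_trace_gettable (the file path and session id below are placeholders; DEFAULT tells it to read all rollover files in the set):

```sql
-- Load the server-side trace file(s) into a table for querying:
SELECT *
INTO dbo.TraceResults
FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);

-- Then pull all events for the suspect session, in order:
SELECT StartTime, EventClass, TextData, SPID
FROM dbo.TraceResults
WHERE SPID = 53          -- placeholder: the session that rolled back
ORDER BY StartTime, EventSequence;
```

Reading the session's events in sequence lets you see the uncommitted UPDATE, the blocked or aborted SELECT, and the eventual rollback in context.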