Have you considered using snapshot isolation? Enabling READ_COMMITTED_SNAPSHOT on the database makes all reads (SELECTs) lock-free:
alter database [...] set read_committed_snapshot on;
No application changes are required. Some semantics change under snapshot isolation and an application can react badly to them, but that is the exception rather than the norm. The vast majority of applications notice no difference; they just get a free performance boost.
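As an illustration of the kind of semantic change to watch for (the table `t`, its columns, and its contents here are hypothetical): under READ_COMMITTED_SNAPSHOT a reader no longer blocks behind a writer, it sees the last committed row version instead:

```sql
-- Session 1: update a row inside an open, uncommitted transaction
begin transaction;
update t set a = 2 where id = 1;

-- Session 2: under classic READ COMMITTED this SELECT waits for
-- Session 1 to commit or roll back; under READ_COMMITTED_SNAPSHOT
-- it returns immediately with the last committed value of a
select a from t where id = 1;
```

An application that relied on that SELECT blocking (say, as a crude synchronization mechanism) would behave differently after the switch.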
Anyway, I thought I'd also answer the original question: how to detect (and possibly kill) a long-running query. In fact, the engine already does that for you. An event is raised when a threshold is passed: the Blocked Process Report event class. The threshold is configured via the blocked process threshold option. Any trace event can be turned into an event notification, and event notifications can activate procedures. Connect the dots and you have on-demand activated code that runs whenever the engine detects a query that has crossed an execution-time threshold. No polling, no monitoring. Note, though, that the notification is asynchronous: by the time you process it the query may have completed, so that has to be taken into account.
Here is an example:
use msdb;
go
create queue blocked_process_report_queue;
go
create service blocked_process_report_service
on queue blocked_process_report_queue
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
create event notification blocked_process_report_notification
on server
for BLOCKED_PROCESS_REPORT
to service N'blocked_process_report_service',
N'current database';
go
sp_configure 'show advanced options', 1 ;
GO
RECONFIGURE ;
GO
sp_configure 'blocked process threshold', 20 ;
GO
RECONFIGURE ;
Now, in a new query window, set up a WAITFOR expecting a notification:
use msdb;
waitfor(
receive cast(message_body as xml), *
from blocked_process_report_queue);
And go ahead and cause some blockage. I used a process that created a table and didn't commit, then from another query window I tried to select from the table. After 20 seconds (the threshold I configured above) I got the blocking report:
<blocked-process-report>
  <blocked-process>
    <process id="process1b5a24ca8" ...>
      <executionStack>
        <frame line="1" stmtstart="-1"... />
      </executionStack>
      <inputbuf>
select * from t </inputbuf>
    </process>
  </blocked-process>
  <blocking-process>
    <process status="sleeping" spid="51" ...>
      <executionStack />
      <inputbuf>
begin transaction
create table t (a int) </inputbuf>
    </process>
  </blocking-process>
</blocked-process-report>
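For reference, the two sessions that produced this report can be reproduced like so:

```sql
-- Session 1: CREATE TABLE inside an open transaction holds a
-- schema lock on t until commit or rollback
begin transaction;
create table t (a int);
-- (deliberately no commit)

-- Session 2 (a separate query window): blocks on Session 1's
-- schema lock; once 'blocked process threshold' seconds elapse,
-- the engine raises the BLOCKED_PROCESS_REPORT event
select * from t;
```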
I'll leave the task of wrapping this up into an automated process as an exercise for the reader. And yes, the queue/service/activated procedure must be in [msdb].
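As a starting point for that exercise, here is a sketch of an internally activated procedure attached to the queue created above. The procedure name is my own, the XPath assumes the report arrives embedded in the event notification payload, and proper conversation handling (EndDialog/Error messages) is omitted; the KILL is left commented out on purpose. Treat this as a sketch, not production code:

```sql
use msdb;
go
create procedure dbo.process_blocked_process_report
as
begin
    declare @mb varbinary(max), @report xml, @blocker int;
    while (1 = 1)
    begin
        select @mb = null, @blocker = null;
        -- wait briefly for the next notification message
        waitfor (
            receive top (1) @mb = message_body
            from blocked_process_report_queue), timeout 5000;
        if (@@rowcount = 0)
            return;  -- queue drained, let the activation wind down
        set @report = cast(@mb as xml);
        -- pull the blocking session id out of the report
        select @blocker = @report.value(
            '(//blocking-process/process/@spid)[1]', 'int');
        if (@blocker is not null)
        begin
            -- remember the notification is asynchronous: verify the
            -- blockage still exists before taking drastic action
            -- exec ('kill ' + cast(@blocker as varchar(10)));
            print 'blocking spid: ' + cast(@blocker as varchar(10));
        end
    end
end
go

alter queue blocked_process_report_queue
    with activation (
        status = on,
        procedure_name = dbo.process_blocked_process_report,
        max_queue_readers = 1,
        execute as owner);
```

With activation on, the engine launches the procedure whenever a message lands on the queue, so the detection code runs only when there is actually something to do.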
Best Answer
Well, if the backend sits on an external server, your network speed now plays a major part in the response time. Accessing local files off a hard disk is fast; accessing them over an internet connection is slower.
Try testing your app over a faster internet connection and see if there is a difference. If the difference is minimal, then it could be the server hosting your app that is slowing things down, but my main bet is on the network.