Step 1: Create a service to receive the notifications and a queue for it:
use msdb;
go
-- the queue that will hold the incoming notification messages
create queue dbm_notifications_queue;
-- the target service; it must accept the PostEventNotification contract
create service dbm_notification_service
    on queue dbm_notifications_queue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
go
-- the server-level event notification, delivered to the service above
create event notification dbm_notifications
    on server
    for database_mirroring_state_change
    to service N'dbm_notification_service', N'current database';
go
Note that I'm using msdb; this is not an accident. Because server-level event notifications are sent from msdb, it is much better to create the opposite conversation endpoint (the target) in msdb as well, which implies that the destination service and the queue must also be deployed in msdb.
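As a quick sanity check (not part of the original steps), you can confirm that Service Broker is enabled in msdb and that all three objects exist; these are the standard catalog views for that:
use msdb;
go
-- 1 means Service Broker is enabled in msdb
select is_broker_enabled from sys.databases where name = N'msdb';
-- the queue, the service and the server-level event notification
select name from sys.service_queues where name = N'dbm_notifications_queue';
select name from sys.services where name = N'dbm_notification_service';
select name from sys.server_event_notifications where name = N'dbm_notifications';
go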
Step 2: create the event notification processing procedure:
use msdb;
go
-- audit table for errors raised while processing notifications
create table dbm_notifications_errors (
incident_time datetime not null,
session_id int not null,
has_rolled_back bit not null,
[error_number] int not null,
[error_message] nvarchar(4000) not null,
[message_body] varbinary(max));
create clustered index cdx_dbm_notifications_errors
on dbm_notifications_errors (incident_time);
go
-- audit table for the mirroring state change events themselves
create table mirroring_alerts (
alert_time datetime not null,
start_time datetime not null,
processing_time datetime not null,
database_id smallint not null,
database_name sysname not null,
[state] tinyint not null,
[text_data] nvarchar(max),
event_data xml not null);
create clustered index cdx_mirroring_alerts
on mirroring_alerts (alert_time);
go
create procedure dbm_notifications_procedure
as
begin
declare @dh uniqueidentifier, @mt sysname, @raw_body varbinary(max), @xml_body xml;
begin transaction;
begin try
receive top(1)
@dh = conversation_handle,
@mt = message_type_name,
@raw_body = message_body
from dbm_notifications_queue;
if N'http://schemas.microsoft.com/SQL/Notifications/EventNotification' = @mt
begin
set @xml_body = cast(@raw_body as xml);
-- shred the XML and process it accordingly
-- IMPORTANT! IMPORTANT!
-- DO NOT LOOK AT sys.database_mirroring
-- The view represents the **CURRENT** state
-- This message refers to an **EVENT** that has occurred;
-- the current state may or may not be relevant for this **PAST** event
declare @alert_time datetime
, @start_time datetime
, @processing_time datetime = getutcdate()
, @database_id smallint
, @database_name sysname
, @state tinyint
, @text_data nvarchar(max);
set @alert_time = @xml_body.value (N'(//EVENT_INSTANCE/PostTime)[1]', 'DATETIME');
set @start_time = @xml_body.value (N'(//EVENT_INSTANCE/StartTime)[1]', 'DATETIME');
set @database_id = @xml_body.value (N'(//EVENT_INSTANCE/DatabaseID)[1]', 'SMALLINT');
set @database_name = @xml_body.value (N'(//EVENT_INSTANCE/DatabaseName)[1]', 'SYSNAME');
set @state = @xml_body.value (N'(//EVENT_INSTANCE/State)[1]', 'TINYINT');
set @text_data = @xml_body.value (N'(//EVENT_INSTANCE/TextData)[1]', 'NVARCHAR(MAX)');
insert into mirroring_alerts (
alert_time,
start_time,
processing_time,
database_id,
database_name,
[state],
text_data,
event_data)
values (
@alert_time,
@start_time,
@processing_time,
@database_id,
@database_name,
@state,
@text_data,
@xml_body);
end
else if N'http://schemas.microsoft.com/SQL/ServiceBroker/Error' = @mt
begin
set @xml_body = cast(@raw_body as xml);
declare @error int
    , @description nvarchar(4000);
-- @xml_body is already xml, so no extra cast is needed to shred it
with xmlnamespaces ('http://schemas.microsoft.com/SQL/ServiceBroker/Error' as ssb)
select @error = @xml_body.value('(//ssb:Error/ssb:Code)[1]', 'INT'),
    @description = @xml_body.value('(//ssb:Error/ssb:Description)[1]', 'NVARCHAR(4000)');
insert into dbm_notifications_errors(
incident_time,
session_id,
has_rolled_back,
[error_number],
[error_message],
[message_body])
values (
getutcdate(),
@@spid,
0,
@error,
@description,
@raw_body);
end conversation @dh;
end
else if N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog' = @mt
begin
end conversation @dh;
end
commit;
end try
begin catch
declare @xact_state int = xact_state(),
@error_number int = error_number(),
@error_message nvarchar(4000) = error_message(),
@has_rolled_back bit = 0;
if @xact_state = -1
begin
-- Doomed transaction, it must rollback
rollback;
set @has_rolled_back = 1;
end
else if @xact_state = 0
begin
-- transaction was already rolled back (deadlock?)
set @has_rolled_back = 1;
end
insert into dbm_notifications_errors(
incident_time,
session_id,
has_rolled_back,
[error_number],
[error_message],
[message_body])
values (
getutcdate(),
@@spid,
@has_rolled_back,
@error_number,
@error_message,
@raw_body);
if (@has_rolled_back = 0)
begin
commit;
end
end catch
end
go
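Once alerts start arriving, a quick query over mirroring_alerts makes the audit readable. The decoding below is a sketch; the numeric codes come from the DATABASE_MIRRORING_STATE_CHANGE event class documentation and should be double-checked against Books Online for your version:
use msdb;
go
select alert_time,
    database_name,
    [state],
    case [state]
        when 7 then N'Manual Failover'
        when 8 then N'Automatic Failover'
        when 9 then N'Mirroring Suspended'
        when 10 then N'No Quorum'
        else N'Other state change'
    end as state_description
from mirroring_alerts
order by alert_time desc;
go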
Writing a Service Broker procedure is not your run-of-the-mill coding task. One must follow certain patterns, and it is very easy to stray into quicksand territory. This code shows some good practices:
- wrap the message dequeue and the processing in a transaction. A no-brainer, obvious.
- always check the message type received. A good Service Broker procedure must handle Error and EndDialog messages appropriately by ending the dialog from its side. Not doing so results in conversation handle leaks (sys.conversation_endpoints grows).
- always check whether a message was actually dequeued by RECEIVE. Some samples check @@rowcount after RECEIVE, which is perfectly OK (see the sketch after this list). This sample relies on the message type check instead (no message implies a NULL message type name) and handles that case implicitly.
- create a processing errors table. The background nature of SSB activated procedures makes it really difficult to troubleshoot errors if the messages simply vanish without a trace.
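For reference, here is a minimal sketch of the @@rowcount variant mentioned above; it is an alternative skeleton, not the code from this answer, and the dispatch on @mt would be identical to the procedure shown earlier:
declare @dh uniqueidentifier, @mt sysname, @raw_body varbinary(max);
while (1 = 1)
begin
    begin transaction;
    -- wait up to one second for a message before giving up
    waitfor (receive top(1)
        @dh = conversation_handle,
        @mt = message_type_name,
        @raw_body = message_body
        from dbm_notifications_queue), timeout 1000;
    if (@@rowcount = 0)
    begin
        -- nothing was dequeued: close the empty transaction and exit
        commit;
        break;
    end
    -- ... dispatch on @mt exactly as in dbm_notifications_procedure ...
    commit;
end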
Besides these, the code also follows some good practices with regard to the task at hand (monitoring DBM):
- differentiate between post_time (when was the notification sent?), start_time (when did the action that triggered the notification start?) and processing_time (when was the notification processed?). post_time and start_time will likely be identical or very close, but processing_time can be seconds, hours, even days apart from post_time. The interesting one for audit purposes is usually post_time.
- since post_time and processing_time differ, it should be obvious that a DBM monitoring task in an event notification activated procedure has no business looking at the sys.database_mirroring view. That view shows the current state at the moment of processing, which may or may not be related to the event being processed. If the processing occurs a long time after the event was posted (think maintenance downtime) the problem is obvious, but it can also surface during 'healthy' processing if DBM changes state very quickly and posts two (or more) events in a row (which happens frequently): in that situation the processing, as in the code you posted, would audit the events as they occur, but would record the same current, final state for both. Reading such an audit could be very confusing later.
- always audit the original XML event. This way you can later query the XML for any information that was not 'shredded' into columns of the audit table.
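For example, the SPID and the server name are standard trace columns that should be present in the event XML even though they are not shredded into columns above; assuming the usual EVENT_INSTANCE layout, they can be recovered later like this:
select alert_time,
    event_data.value(N'(//EVENT_INSTANCE/ServerName)[1]', 'SYSNAME') as server_name,
    event_data.value(N'(//EVENT_INSTANCE/SPID)[1]', 'INT') as spid
from mirroring_alerts;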
Step 3: attach the procedure to the queue:
-- attach the activated procedure; Service Broker launches it when messages arrive
alter queue dbm_notifications_queue
with activation (
status=on,
procedure_name = [dbm_notifications_procedure],
max_queue_readers = 1,
execute as owner);
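If alerts stop showing up, check the activation plumbing first; run this in msdb to see whether the queue monitor is watching the queue (state is INACTIVE, NOTIFIED or RECEIVES_OCCURRING) and whether an activated task is currently running:
select qm.[state], q.name
from sys.dm_broker_queue_monitors qm
join sys.service_queues q
    on qm.queue_id = q.object_id
    and qm.database_id = db_id()
where q.name = N'dbm_notifications_queue';
select * from sys.dm_broker_activated_tasks;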
Repeating the above steps on all instances involved in mirroring ensures you get notified no matter which instance is the principal.
Best Answer
A trigger would not be the most appropriate solution for you, particularly for a large table that could see lots of hits.
Write a simple stored procedure and have a SQL Server Agent job run it periodically (as often as is required for your business purposes).
Something simple like:
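(As a minimal sketch, assuming a hypothetical dbo.Subscriptions table with an Active flag and an ExpiryDate column; the names are invented for illustration, adjust them to your schema.)
create procedure dbo.DeactivateExpiredSubscriptions
as
begin
    set nocount on;
    update dbo.Subscriptions
    set Active = 0
    where Active = 1                -- the 'first half' of the WHERE clause
        and ExpiryDate < getdate(); -- the subscription has lapsed
end
go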
That would work. I don't think the first half of the WHERE clause is strictly necessary, but it doesn't hurt. You could stick the code straight into the job, but it will be more easily maintained as a stored procedure, and it can be run manually that way as well.
This could easily be modified to delete instead of update, or even to delete/update a different row in the same table (by joining the table with itself), although if that is the case then best practice would dictate normalising your db and splitting the tables up.
Even with a large table, this should be a fairly quick statement to run if it's executed frequently enough; obviously, the more changes it has to make at once, the longer it will take. Make sure you run it manually yourself the first time, and add/update any indexes if necessary.
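If the table is large and mostly inactive rows, a filtered index (SQL Server 2008 and later) keeps that statement cheap; again using the hypothetical column names from the sketch above:
create nonclustered index IX_Subscriptions_ExpiryDate
    on dbo.Subscriptions (ExpiryDate)
    where Active = 1; -- only index rows the update will ever examine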
You can do this across multiple tables. I won't get into it here since it's beyond the scope of your question (and I know nothing about your DB), but my first impression, purely from what you say in your question, is that holding customer and subscription data in a single table could lead to lots of denormalisation; that might be something to bear in mind.