Wouldn't you want to ensure the secondary logs are restored to the secondary server at certain intervals? With log shipping, I thought you ship the logs over to your secondary DB and then restore from those to keep the secondary as close to up-to-date with your primary as possible.
Do you see any reason why you would not want to automate the log shipping restore on your secondary DB in the log shipping configuration?
Once those log files are applied to the secondary, the job will purge the logs that are no longer needed, since their changes are already reflected in the secondary DB at that point.
Test it out and see how it goes.
EDIT: @nulldotzero
Okay, since you cannot run restore operations on the primary or secondary in an AlwaysOn configuration, and since you are also copying the TRN log files to the secondary as well as to the DR site for redundancy, you could just set up a SQL Agent job on the secondary instance to run the below for however many hours of files you feel need to be purged.
```sql
-- Build the datetime stamp to pass to xp_delete_file: 72 hours before the time the job runs.
-- xp_delete_file will then delete all TRN files older than that, from the parent folder specified all the way down.
DECLARE @DeleteDate nvarchar(50);
DECLARE @DeleteDateTime datetime;

SET @DeleteDateTime = DATEADD(hh, -72, GETDATE());
SET @DeleteDate = REPLACE(CONVERT(nvarchar(50), @DeleteDateTime, 111), '/', '-')
                + 'T' + CONVERT(nvarchar(50), @DeleteDateTime, 108);

-- 5th argument below: 0 = don't delete recursively (default), 1 = delete files in subdirectories
EXECUTE master.dbo.xp_delete_file 0, N'S:\MSSQL\Backup\TransactionLog\', N'trn', @DeleteDate, 1;
```
Since the LSRestore job for DR runs against and grabs the log backup files from those same locations, the purge job will take care of them once the restore job completes successfully.
There are several ways to automate this, and you are heading down the right path. You can rely on your DBA to manually disable and enable the jobs (not very nice ;), you can keep track of job state in a table and enable/disable via a periodic job, or you can categorize your jobs and respond to a role change alert...this last one is what I do.
A rough outline is:
- Pick a category for all the jobs that you want to run only on the current primary replica; I use the AG name
- Set up a SQL Agent alert for error number 1480 (AG role change) that calls a stored procedure in master via a job, passing in the category name
- Build the SP to check whether the instance is the primary: if it is, enable all the jobs in the category passed as the parameter; if it is not, disable them
- Build some helper functions to determine whether the AG replica is primary or secondary
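A minimal sketch of that stored procedure, with hypothetical names (`usp_ToggleJobsForAGRole` and the category convention are my own; adjust for your environment). It assumes the job category name matches the AG name, checks the local replica's role via `sys.dm_hadr_availability_replica_states`, and flips the jobs with `msdb.dbo.sp_update_job`:

```sql
-- Hypothetical sketch: enable jobs in @CategoryName on the primary replica, disable them on a secondary.
-- Assumes the job category name matches the AG name, per the outline above.
CREATE PROCEDURE dbo.usp_ToggleJobsForAGRole
    @CategoryName sysname
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @IsPrimary bit = 0;

    -- Check the role of the local replica for the AG whose name matches the category
    SELECT @IsPrimary = CASE WHEN ars.role_desc = 'PRIMARY' THEN 1 ELSE 0 END
    FROM sys.dm_hadr_availability_replica_states ars
    JOIN sys.availability_groups ag ON ag.group_id = ars.group_id
    WHERE ars.is_local = 1
      AND ag.name = @CategoryName;

    -- Enable or disable every job in the category to match the role
    DECLARE @JobId uniqueidentifier;
    DECLARE job_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT j.job_id
        FROM msdb.dbo.sysjobs j
        JOIN msdb.dbo.syscategories c ON c.category_id = j.category_id
        WHERE c.name = @CategoryName;

    OPEN job_cursor;
    FETCH NEXT FROM job_cursor INTO @JobId;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC msdb.dbo.sp_update_job @job_id = @JobId, @enabled = @IsPrimary;
        FETCH NEXT FROM job_cursor INTO @JobId;
    END
    CLOSE job_cursor;
    DEALLOCATE job_cursor;
END
```

The job tied to the 1480 alert just calls this proc with the AG/category name, so every role change automatically puts the jobs in the right state.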
There is a very detailed series of SQL Magazine articles by Michael K. Campbell that gives code examples for how to structure either solution; I started with that series and only had to make minor modifications.
Best Answer
So, what happens when you lose a node? What would happen if that server crashed and burned or you lost the data drive? Now you've lost the database backups...
The best option would be to have an SMB 3.x share to which these backups are sent. You could also tie into 3rd-party backup software like Commvault, NetBackup, etc., by writing your backup process to hit local disk and then copy to the 3rd-party service/appliance.
The reason is that the AG could be primary on, potentially, any node. If we're backing up to local disk, the backups could be spread across many nodes. This means we might not be able to assemble an entire backup chain when it comes time to recover from a disaster.
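For illustration (the server, share, and database names here are placeholders), pointing the log backup at a central share keeps the whole chain in one place no matter which replica is primary at the time:

```sql
-- Hypothetical example: back up the log to a central SMB 3.x share instead of local disk,
-- so the full backup chain lives in one place regardless of which node is primary.
BACKUP LOG [YourAGDatabase]
TO DISK = N'\\backupserver\sqlbackups\YourAGDatabase\YourAGDatabase_log.trn'
WITH COMPRESSION, CHECKSUM;
```

Note that the SQL Server service account on every replica needs read/write permission on that share for this to work after a failover.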