If something is still set up on the secondary server, you need to run sp_delete_log_shipping_secondary_database
there.
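A minimal sketch of that call, using the same placeholder names as the script below (substitute your own database name):

```sql
-- Run this on the SECONDARY server to remove its side of the configuration.
-- <SecondaryDatabase> is a placeholder, not a real name.
USE master;
GO
EXEC master.dbo.sp_delete_log_shipping_secondary_database
    @secondary_database = N'<SecondaryDatabase>';
GO
```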
You can clear the log shipping configuration on the primary by executing sp_delete_log_shipping_primary_secondary:
USE master;
GO
EXEC master.dbo.sp_delete_log_shipping_primary_secondary
    @primary_database = N'<PrimaryDatabase>'
    ,@secondary_server = N'<SecondaryServer>'
    ,@secondary_database = N'<SecondaryDatabase>';
GO
Wouldn't you want to ensure the secondary logs are restored to the secondary server at certain intervals? With log shipping, I thought you ship the logs over to your secondary DB and then restore from those to keep the secondary up-to-date as close as possible to your primary.
Do you see any reason why you would not want to automate the log shipping restore on your secondary DB in the log shipping configuration?
Once those log files are applied to the secondary, the cleanup job will purge the logs that are no longer needed, since their contents are already reflected in the secondary database at that point.
Test it out and see how it goes.
EDIT: @nulldotzero
Okay: since you cannot run restore operations on the primary or secondary in an AlwaysOn configuration, and since you are already copying the TRN log files to the secondary as well as to the DR server for redundancy, you could just set up a SQL Agent job on the secondary instance to run the script below, purging however many hours' worth of files you feel should go.
--Builds the date/time stamp to pass to xp_delete_file: 72 hours before the time the job runs
--xp_delete_file will then delete all TRN files older than that, from the parent folder specified all the way down
DECLARE @DeleteDate nvarchar(50);
DECLARE @DeleteDateTime datetime;
SET @DeleteDateTime = DATEADD(hh, -72, GETDATE());
SET @DeleteDate = REPLACE(CONVERT(nvarchar(30), @DeleteDateTime, 111), '/', '-') + 'T' + CONVERT(nvarchar(30), @DeleteDateTime, 108);
--5th argument: 0 = don't delete recursively (default), 1 = also delete files in subdirectories
EXECUTE master.dbo.xp_delete_file 0, N'S:\MSSQL\Backup\TransactionLog\', N'trn', @DeleteDate, 1;
As for the location the DR LSRestore job reads its log backups from, that job will take care of purging those files itself once its restore step completes successfully.
Best Answer
There are three separate parts to this:
Q: Do I use SQL Server's built-in log shipping or roll my own? A: I'd roll your own. What you're doing is beyond what the built-in tools are normally used for.
Q: If I roll my own, where do I start? A: I'd use normal SQL Server full and transaction log backups on the primary. On the secondary, start with MSSQLTips' script to automatically restore backup files from a folder. That example restores all of the files, though, not just some of them - you'll need to adapt it to only restore the log backups. In a perfect world, you'd only attempt to restore log files that haven't been restored yet, but if you're in a rush, you could try to restore all of the log files every time - SQL Server will raise an error for logs that have already been applied, and your script can simply catch that and move on.
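A rough sketch of that restore loop, assuming log backups land in a single folder and that file names sort chronologically; the folder path, database name, and undo file path are all placeholders you'd replace:

```sql
-- Hedged sketch: restore every .trn file in a folder with STANDBY.
-- Paths and <SecondaryDatabase> are illustrative assumptions.
DECLARE @files TABLE (subdirectory nvarchar(260), depth int, [file] int);
INSERT INTO @files
EXEC master.sys.xp_dirtree N'S:\MSSQL\Backup\TransactionLog\', 1, 1;

DECLARE @name nvarchar(260), @sql nvarchar(max);
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT subdirectory FROM @files
    WHERE [file] = 1 AND subdirectory LIKE N'%.trn'
    ORDER BY subdirectory;  -- timestamped file names sort into restore order
OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'RESTORE LOG [<SecondaryDatabase>] FROM DISK = N'''
             + N'S:\MSSQL\Backup\TransactionLog\' + @name
             + N''' WITH STANDBY = N''S:\MSSQL\Backup\undo.dat'';';
    BEGIN TRY
        EXEC (@sql);
    END TRY
    BEGIN CATCH
        -- Already-applied or out-of-sequence logs error out; log it and keep going.
        PRINT N'Skipped ' + @name + N': ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM c INTO @name;
END
CLOSE c;
DEALLOCATE c;
```

This is the lazy "restore everything every time" variant; tracking which files have already been applied (e.g. in a table) would avoid the wasted restore attempts.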
Q: How do I alert when files aren't being restored? A: Rather than focus on the files, I'd focus on the data. Do restores with STANDBY when you restore the logs so that you can run queries against them, and after each restore job finishes, query your most transactional table looking for the newest record. If it's older than X minutes, your data may not be coming across. (Granted, this only works for databases with decent change rates, like adding records every few minutes - if you don't have that, then some DBAs add a dbo.LastUpdated table with a datestamp in it, and have a SQL Agent job that updates the only record in there every 5 minutes to set the datestamp to current.)
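That heartbeat approach can be sketched in two small pieces; the table name dbo.LastUpdated, the column name, and the 15-minute threshold are all illustrative assumptions:

```sql
-- On the PRIMARY: a SQL Agent job runs this every 5 minutes to touch the single heartbeat row.
UPDATE dbo.LastUpdated SET LastUpdatedAt = GETDATE();

-- On the SECONDARY: after each restore job finishes, check how stale the standby copy is.
DECLARE @newest datetime;
SELECT @newest = MAX(LastUpdatedAt) FROM [<SecondaryDatabase>].dbo.LastUpdated;
IF @newest < DATEADD(minute, -15, GETDATE())
    RAISERROR (N'Standby data is more than 15 minutes old - log restores may be failing.', 16, 1);
```

Wiring the RAISERROR into a SQL Agent job step means a stale standby fails the job, which your normal job-failure alerting then picks up.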