Scenario:
We have two tables, Tbl1 and Tbl2, on the Subscriber server. Tbl1 is replicated from Publisher Server A and has two triggers (insert and update) that insert and update the data in Tbl2.
Now we have to purge approximately 900 million records from Tbl2, which holds 1,000+ million records in total. Below is the data distribution, from one month down to one minute:
- One month – 14,986,826 rows
- One day – 483,446 rows
- One hour – 20,143 rows
- One minute – 335 rows
What I am looking for:
The fastest way to purge that data without causing production issues, while keeping the data consistent, and ideally with no downtime. I am thinking of following the steps below, but I am stuck 🙁
Steps:
- BCP out the required data from the existing table Tbl2 (around 100 million records; it may take approx. 30 minutes). A sketch of the bcp commands follows this list.
- Let's assume I start the activity on 1Feb2018 at 10:00 PM and it finishes at 10:30 PM. While the export runs, Tbl2 keeps receiving new records, and those become the delta.
- Create a new table in the database named Tbl3.
- BCP in the exported data into the newly created table Tbl3 (around 100 million records; it may take approx. 30 minutes).
- Stop the replication job.
- Once the BCP-in has completed, use a T-SQL script to insert the new delta data.
- The challenge: how to deal with the delta "update" statements?
- Restart replication.
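For reference, a rough sketch of the bcp pair from steps 1 and 4; the server name, database, file path, and the retention predicate are all placeholder assumptions, not values from the question:

```
rem Step 1: export the rows that must survive the purge
rem (the WHERE clause is a placeholder for the real retention rule).
bcp "SELECT * FROM MyDb.dbo.Tbl2 WHERE EventTime >= '20180101'" queryout D:\Export\Tbl2_keep.dat -S SubscriberServer -T -n

rem Step 4: bulk-load the exported rows into the staging table Tbl3.
bcp MyDb.dbo.Tbl3 in D:\Export\Tbl2_keep.dat -S SubscriberServer -T -n -b 50000
```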
Additional Question:
What is the best way to deal with this scenario?
Best Answer
Since you are deleting 90% of the rows, I'd recommend copying the rows you need to keep into a new table with the same structure, then using `ALTER TABLE ... SWITCH` to replace the existing table with the new one, and finally dropping the old table. See this Microsoft Docs page for the syntax.

A simple test bed, without replication, which shows the general principle:
First, we'll create a database for our test:
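A minimal sketch; the database name SwitchTest and the drop-and-recreate guard are assumptions for a disposable test environment:

```sql
USE master;
GO

-- Drop any previous copy of the test database, then recreate it.
IF DB_ID(N'SwitchTest') IS NOT NULL
BEGIN
    ALTER DATABASE SwitchTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE SwitchTest;
END
GO

CREATE DATABASE SwitchTest;
GO

USE SwitchTest;
GO
```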
Here, we create a couple of tables, with a trigger to move rows from table "A" to "B", approximating your setup.
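A sketch of the schema; the names Table_A and Table_B and the two columns are illustrative, not the poster's real definitions:

```sql
-- "A" receives the writes; the trigger mirrors them into "B".
CREATE TABLE dbo.Table_A
(
      i int           NOT NULL IDENTITY(1, 1)
        CONSTRAINT PK_Table_A PRIMARY KEY CLUSTERED
    , d varchar(500)  NOT NULL
);

CREATE TABLE dbo.Table_B
(
      i int           NOT NULL
        CONSTRAINT PK_Table_B PRIMARY KEY CLUSTERED
    , d varchar(500)  NOT NULL
);
GO

-- Approximates the question's insert/update triggers: any row
-- inserted or updated in "A" is (re)written into "B".
CREATE TRIGGER tr_Table_A
ON dbo.Table_A
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove stale copies of updated rows...
    DELETE b
    FROM dbo.Table_B b
    INNER JOIN deleted d
        ON b.i = d.i;

    -- ...then write the current versions.
    INSERT INTO dbo.Table_B (i, d)
    SELECT i.i, i.d
    FROM inserted i;
END
GO
```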
Here, we insert 1,000,000 rows into "A", and because of the trigger, those rows will also be inserted into "B".
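One way to generate the load; the tally-style CTE and random payload are just convenient filler:

```sql
-- Six cross joins of a 10-row set yield 10^6 candidate rows.
;WITH n10 AS
(
    SELECT v FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS t(v)
)
INSERT INTO dbo.Table_A (d)
SELECT TOP (1000000)
    CONVERT(varchar(500), CRYPT_GEN_RANDOM(200), 2)  -- 400 hex chars per row
FROM n10 a
CROSS JOIN n10 b
CROSS JOIN n10 c
CROSS JOIN n10 d
CROSS JOIN n10 e
CROSS JOIN n10 f;
```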
Clear the transaction log to avoid running out of room. DO NOT RUN this in production, since it sends the transaction log data to the "NUL" device.
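The log-clearing step looks like this; it assumes the test database is in the FULL recovery model with an initial full backup taken (under SIMPLE recovery, a plain CHECKPOINT serves the same purpose):

```sql
-- TEST ONLY: backing the log up to the NUL device discards it
-- and breaks the log backup chain.
BACKUP LOG SwitchTest TO DISK = 'NUL';
```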
This code creates a transaction to ensure none of the affected tables can be written to while we're migrating rows:
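A sketch of the swap itself. The shadow-table names, the applock resource name, and the retention predicate (here, keep rows with i > 900000, roughly 10% of the data) are all placeholders for your real rules:

```sql
SET XACT_ABORT ON;  -- any error rolls the whole migration back
BEGIN TRANSACTION;

-- Cooperative lock so two instances of this script can't interleave.
EXEC sys.sp_getapplock
      @Resource  = N'Purge_Table_B'
    , @LockMode  = N'Exclusive'
    , @LockOwner = N'Transaction';

-- Exclusive table locks, held to the end of the transaction,
-- so no writer can add rows mid-migration.
SELECT TOP (0) 1 FROM dbo.Table_A WITH (TABLOCKX);
SELECT TOP (0) 1 FROM dbo.Table_B WITH (TABLOCKX);

-- Shadow tables must match dbo.Table_B's structure exactly.
CREATE TABLE dbo.Table_B_New
(
      i int          NOT NULL CONSTRAINT PK_Table_B_New PRIMARY KEY CLUSTERED
    , d varchar(500) NOT NULL
);
CREATE TABLE dbo.Table_B_Old
(
      i int          NOT NULL CONSTRAINT PK_Table_B_Old PRIMARY KEY CLUSTERED
    , d varchar(500) NOT NULL
);

-- Copy only the ~10% of rows to keep (placeholder predicate).
INSERT INTO dbo.Table_B_New (i, d)
SELECT b.i, b.d
FROM dbo.Table_B b
WHERE b.i > 900000;

-- Metadata-only swaps: empty out Table_B, then slide the kept
-- rows into its place.
ALTER TABLE dbo.Table_B     SWITCH TO dbo.Table_B_Old;
ALTER TABLE dbo.Table_B_New SWITCH TO dbo.Table_B;

DROP TABLE dbo.Table_B_Old;  -- holds the 90% being purged
DROP TABLE dbo.Table_B_New;  -- empty after the switch

EXEC sys.sp_releaseapplock
      @Resource  = N'Purge_Table_B'
    , @LockOwner = N'Transaction';

COMMIT TRANSACTION;
```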
The `sp_getapplock` and `sp_releaseapplock` calls prevent multiple instances of this code from running at the same time. This would be helpful if you enable this code to be re-used through a GUI.

(Note that app locks are only effective if every process accessing the resource implements the same manual resource-locking logic explicitly; there is no magic that "locks" the table in the same way SQL Server automatically locks rows, pages, etc. during an insert/update operation.)
Now, we test the process of inserting rows into "A", to ensure they're inserted into "B" by the trigger.
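A quick check; the literal payload is arbitrary:

```sql
-- Insert through "A" and confirm the trigger lands the row in "B".
INSERT INTO dbo.Table_A (d)
VALUES ('post-switch trigger check');

SELECT TOP (5) b.i, b.d
FROM dbo.Table_B b
ORDER BY b.i DESC;
```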