SQL Server – Transferring a large amount of data (84 million rows) efficiently

sql server

I have about 84 million rows. All of them need to be transferred to a separate database on the same server; then I will delete about 60 million rows from the source database.

The 84 million rows are all in the same table. That table alone accounts for 90% of the whole database.

So…
Source: 84 million rows -> 24 million rows
Destination: 0 rows -> 84 million rows

The source database is using the full recovery model; the destination will be using simple.

I am wondering what would be the most efficient way to do this?

Plan A:

1) INSERT INTO destination SELECT * FROM source

2) TRUNCATE TABLE source

3) INSERT INTO source SELECT * FROM destination WHERE keep_condition = 1
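Under hypothetical names (databases SourceDb and DestDb, table dbo.BigTable, a bit column keep_condition — none of these are given in the question), Plan A might be sketched as:

```sql
-- Plan A sketch, assuming hypothetical names SourceDb/DestDb/dbo.BigTable.

-- 1) Copy everything to the destination. With the destination in the
--    simple recovery model, TABLOCK on an empty target table can allow
--    minimal logging for the insert.
INSERT INTO DestDb.dbo.BigTable WITH (TABLOCK)
SELECT * FROM SourceDb.dbo.BigTable;

-- 2) Empty the source table (a minimally logged metadata operation).
TRUNCATE TABLE SourceDb.dbo.BigTable;

-- 3) Copy back only the rows to keep. The source is in full recovery,
--    so this insert is fully logged regardless of hints.
INSERT INTO SourceDb.dbo.BigTable
SELECT * FROM DestDb.dbo.BigTable
WHERE keep_condition = 1;
```

Note that step 3 writes roughly 24 million fully logged rows into the source, so the source log needs room for that even in this plan.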

Plan B:

1) Restore a backup of source database as the destination database

2) Drop every table except the one needed in the destination database

3) TRUNCATE TABLE source

4) INSERT INTO source SELECT * FROM destination WHERE keep_condition = 1
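Plan B might look like the following, where the backup path, logical file names, and table names are all placeholders I've assumed for illustration:

```sql
-- Plan B sketch; file paths and logical names are hypothetical.

-- 1) Restore a backup of the source database under the destination name.
RESTORE DATABASE DestDb
FROM DISK = N'C:\Backups\SourceDb.bak'
WITH MOVE N'SourceDb_Data' TO N'C:\Data\DestDb.mdf',
     MOVE N'SourceDb_Log'  TO N'C:\Data\DestDb.ldf',
     RECOVERY;

-- Switch the restored copy to the simple recovery model.
ALTER DATABASE DestDb SET RECOVERY SIMPLE;

-- 2) Drop every table except the one needed (one DROP per unwanted table).
DROP TABLE DestDb.dbo.SomeOtherTable;

-- 3) Empty the source table.
TRUNCATE TABLE SourceDb.dbo.BigTable;

-- 4) Reload only the rows to keep.
INSERT INTO SourceDb.dbo.BigTable
SELECT * FROM DestDb.dbo.BigTable
WHERE keep_condition = 1;
```

One appeal of this plan is that step 1 moves the 84 million rows as a file-level restore rather than a logged row-by-row insert.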

Plan C:

1) INSERT INTO destination SELECT * FROM source

2) DELETE FROM source WHERE keep_condition = 0
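With the same hypothetical names as above, Plan C is just:

```sql
-- Plan C sketch, assuming hypothetical names SourceDb/DestDb/dbo.BigTable.
INSERT INTO DestDb.dbo.BigTable WITH (TABLOCK)
SELECT * FROM SourceDb.dbo.BigTable;

-- Fully logged in the full-recovery source: roughly 60 million row deletes.
DELETE FROM SourceDb.dbo.BigTable
WHERE keep_condition = 0;
```

The single DELETE is the concern here: as one transaction against a full-recovery database, it can generate an enormous amount of log.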

or something else?

Thanks

Best Answer

I would add that, however you decide to approach this, you'll need to batch these transactions. I've had very good luck with the linked article lately, and I appreciate the way it takes advantage of indexes as opposed to most batched solutions I see.

Even minimally logged, those are big transactions, and you could spend a lot of time dealing with the ramifications of abnormal log growth (VLF proliferation, log truncation, right-sizing, etc.).
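As a minimal sketch of the batching idea (table and column names assumed, not from the question), the big delete can be broken into small transactions like this:

```sql
-- Batched delete sketch: remove the ~60 million rows in small chunks
-- so each transaction, and the log it generates, stays manageable.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (50000)            -- tune the batch size to your system
    FROM SourceDb.dbo.BigTable
    WHERE keep_condition = 0;

    SET @rows = @@ROWCOUNT;       -- loop ends when nothing was deleted

    -- In full recovery, taking log backups between batches lets the log
    -- truncate instead of growing; a short WAITFOR DELAY between batches
    -- can also reduce blocking of concurrent work.
END;
```

An index whose leading column(s) cover keep_condition (or the predicate behind it) is what keeps each batch from re-scanning the table, which is the point the linked article makes.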
