I have a table that contains over a billion rows. To get the row count, I ran:
SELECT
    Total_Rows = SUM(st.row_count)
FROM
    sys.dm_db_partition_stats st
WHERE
    object_name(st.object_id) = 'audit_tb' AND (st.index_id < 2)
Output:
1055075195
I was asked to DELETE the records from 2014. As a test, I tried deleting 3 months of 2014, but it has been running for over an hour now. Any ideas? I suspect there are bad rows.
DELETE
FROM [audit_tb]
WHERE [datetime] >= '2014-01-01 00:00:00'
  AND [datetime] <  '2014-04-01 00:00:00' -- first 3 months of 2014
The table also has a CLUSTERED INDEX.
Best Answer
Here is a script to delete in chunks. It deletes in chunks of 4,500 records to avoid lock escalation, and it performs 100 loops to delete a total of up to 450,000 rows per execution. If your database uses the FULL recovery model, you may want to back up the transaction log after each run to keep it from growing too large.
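A minimal sketch of such a chunked delete, reusing the table name and date range from the question (the batch size and loop count follow the description above; adjust them to your environment):

```sql
-- Delete in batches of 4500 rows to stay under SQL Server's
-- lock-escalation threshold (roughly 5000 locks per statement).
-- Runs at most 100 loops, i.e. up to 450,000 rows per execution.
DECLARE @BatchSize INT = 4500;
DECLARE @LoopCount INT = 0;

WHILE @LoopCount < 100
BEGIN
    DELETE TOP (@BatchSize)
    FROM [audit_tb]
    WHERE [datetime] >= '2014-01-01 00:00:00'
      AND [datetime] <  '2014-04-01 00:00:00';

    -- Stop early once there is nothing left in the range
    IF @@ROWCOUNT = 0
        BREAK;

    SET @LoopCount = @LoopCount + 1;
END
```

Keeping each DELETE small means each transaction commits quickly, holds few locks, and generates a bounded amount of log, which is why the loop can run alongside normal workload and log backups.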
I have used this in a SQL Agent job to run repeatedly until the required number of rows were removed. I just schedule the job to run every 15 minutes so my tran log backup job runs between runs of this job.
I hope this helps.