SQL Server indexes and record deletion

Tags: delete, index, sql-server

I've inherited a large production database in which one of the tables has 72 million rows and receives thousands of inserts/updates a day. These rows have built up over 3 years and are currently growing by approx. 2 million every 4 weeks.

There are 7 indexes: 3 are heavily used and only lightly fragmented, while the other 4 are mostly unused but heavily fragmented (about 75-85%), and the columns they cover are heavily updated. There was a stored procedure scheduled to archive record stubs to another table, but we have had to turn it off because it was interfering with the backups and causing a failover and outages on another system.

Ideally we only want 1 year's worth of data in this table (approx. 25-30 million rows), and I'd like to disable the 4 unused indexes. I've read that they should be rebuilt before I disable them, but I'm not sure whether to do this before or after I've archived the previous 2 years' worth of data.

A redesign has been on the cards for a few years, but I'm now a team of 1 with limited knowledge of indexes/SQL tuning etc., trying to do my best with very limited resources.

The table is heavily used between 7am and 1am, so most work needs to be done during a site outage or in the early hours of the morning to limit downtime for users.

Any suggestions would be welcome. Thanks.

Best Answer

Here are some thoughts on your situation:

1) If the 4 indexes are not used at all and you are not planning to use them in future, consider dropping them completely instead of just disabling them.
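Before dropping anything, it is worth confirming that those 4 indexes really are unused. A minimal sanity check, assuming the table is named dbo.YourTable (note that sys.dm_db_index_usage_stats is reset on instance restart, so make sure the instance has been up long enough to be representative):

    -- Reads vs. maintenance cost for every index on the table
    -- since the last instance restart.
    SELECT  i.name          AS index_name,
            s.user_seeks,
            s.user_scans,
            s.user_lookups, -- if seeks/scans/lookups are all 0, the index is a drop candidate
            s.user_updates  -- write overhead paid just to keep the index maintained
    FROM    sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS s
           ON s.object_id   = i.object_id
          AND s.index_id    = i.index_id
          AND s.database_id = DB_ID()
    WHERE   i.object_id = OBJECT_ID('dbo.YourTable');

    -- Once confirmed unused (index name is a placeholder):
    -- DROP INDEX IX_YourTable_Unused ON dbo.YourTable;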

2) To disable an index, there is no need to rebuild it before disabling it. To enable it again, you have to rebuild it: https://blog.sqlauthority.com/2007/05/17/sql-server-disable-index-enable-index-alter-index/
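For reference, both operations are plain ALTER INDEX statements (the table and index names below are placeholders):

    -- Disable: the definition stays in metadata, but the index is no
    -- longer maintained or used by the optimizer.
    ALTER INDEX IX_YourTable_Unused ON dbo.YourTable DISABLE;

    -- Re-enable: a disabled index can only come back via a rebuild.
    ALTER INDEX IX_YourTable_Unused ON dbo.YourTable REBUILD;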

3) If you are willing to archive/delete the previous 2 years of data, it is better to drop the unused indexes before archiving: that way the archive/delete will be less resource-intensive, since those indexes will not need to be maintained during it.

4) As already mentioned in the comments, archive/delete the data in small batches (1000, 500, 100, etc.):

    DELETE TOP (n) FROM [YourTable] WHERE [Date] < 'some date';

This prevents row/page-level locks escalating to table-level locks. There should be a break of 1 to a few seconds between batches. You can write T-SQL code that does all this for you; a sketch follows below.
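A minimal sketch of such a loop, assuming a 1-year retention cutoff on a [Date] column; the batch size, delay, and table/column names are placeholders to tune (and if you archive rather than delete, the INSERT into the archive table would go just before the DELETE):

    DECLARE @BatchSize int      = 1000;
    DECLARE @Cutoff    datetime = DATEADD(YEAR, -1, GETDATE());

    WHILE 1 = 1
    BEGIN
        DELETE TOP (@BatchSize)
        FROM  dbo.YourTable
        WHERE [Date] < @Cutoff;

        -- Stop once the last batch came up short of a full batch.
        IF @@ROWCOUNT < @BatchSize
            BREAK;

        -- Pause between batches so other work can get through.
        WAITFOR DELAY '00:00:02';
    END;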

5) You can also set up an Extended Events session and monitor for lock_escalation events. By doing this you can find out what batch size does not cause locks to escalate to table level when you run your archive/delete process, and then use that batch size going forward. This will make archiving/deleting painless: there will be no need for an outage or downtime for your applications/users.
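A sketch of such a session (the session name, database filter, and target file name are assumptions to adjust):

    CREATE EVENT SESSION TrackLockEscalation ON SERVER
    ADD EVENT sqlserver.lock_escalation
    (
        ACTION (sqlserver.sql_text)
        WHERE sqlserver.database_name = N'YourDatabase'
    )
    ADD TARGET package0.event_file (SET filename = N'LockEscalation.xel');

    ALTER EVENT SESSION TrackLockEscalation ON SERVER STATE = START;

    -- Run the delete loop, then check the .xel file (e.g. via
    -- sys.fn_xe_file_target_read_file): any lock_escalation events
    -- mean the batch size is still too large.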