SQL Server – Reclaim disk space from a large database's primary filegroup after truncating tables (other than shrink)

availability-groups, disk-space, shrink, sql-server

We have an 8 TB SQL Server 2014 SP1 database, with one primary filegroup, spread across multiple files on different drives. This is part of an AlwaysOn AG with multiple secondaries (some async).

One old table in this database is 6 TB, and we truncated it. To reclaim the disk space, we can run either DBCC SHRINKDATABASE or DBCC SHRINKFILE.
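For reference, a minimal per-file shrink looks like the following; the database name, logical file name, and target size are placeholders, not names from our environment:

```sql
-- Shrink one data file at a time; substitute your own logical file
-- name and target from sys.database_files. Target size is in MB.
USE YourDatabase;
GO
DBCC SHRINKFILE (N'YourDataFile1', 512000);  -- shrink this file to ~500 GB
```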

We don't care about the usual problems with shrinking like fragmentation, or database growing again etc. because we are only focusing on regaining disk space now.

But these shrink operations are REALLY slow, and we are looking for a faster way to do this.

First, are these slow because the changes have to be synchronized to the synchronous secondaries?

Second, is there a way to make shrinks faster?

We thought about a couple of other options:

  • Creating another database and moving all tables (except the old one) to it… but we might need to set up the AG all over again.

  • Creating another filegroup and moving all tables (except the old one) to it… but the old table is on the PRIMARY filegroup, which cannot be dropped, right?
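For the second option, a sketch of how a table can be moved to a new filegroup by rebuilding its clustered index there. All names here (filegroup, file, table, index) are hypothetical. Note that PRIMARY itself can never be dropped because the system objects live there; at best its files can be shrunk down once they are empty:

```sql
-- Hypothetical names throughout. First create the new filegroup
-- and add a data file to it.
ALTER DATABASE YourDatabase ADD FILEGROUP FG_Data2;
ALTER DATABASE YourDatabase
    ADD FILE (NAME = N'YourDb_Data2',
              FILENAME = N'E:\Data\YourDb_Data2.ndf',
              SIZE = 100GB)
    TO FILEGROUP FG_Data2;

-- Rebuilding the clustered index on the new filegroup moves the
-- table's data; nonclustered indexes must be rebuilt separately.
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON FG_Data2;
```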

Which method have you guys used in the industry to 'shrink' databases after dropping or truncating large tables?

I have read How do shrink *right* when required? but no answer satisfies my questions. Is there an option other than shrink? Can I move tables off the primary filegroup and delete it?

Best Answer

Community wiki answer:

It's going to take some time for shrink operations on such large database files to run. Honestly, you should consider running the DBCC command and just letting it run as long as you can, on days and at times you can afford. Just let it do its thing.

If the shrink operation is stopped before completion, it does not roll back the work already done, so you can run it for 2 hours one day, 4 hours another day, and so forth, until a weekend or some other window when you can let it run long enough to finish.

Run scripts to check the database's free space before and after each run, too.
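A simple way to check that, as a sketch: `sys.database_files` reports allocated size in 8 KB pages, and `FILEPROPERTY(..., 'SpaceUsed')` reports used pages, so dividing by 128 converts to MB:

```sql
-- Per-file allocated vs. used space in the current database, in MB.
SELECT  name                                            AS logical_name,
        size / 128                                      AS allocated_mb,
        FILEPROPERTY(name, 'SpaceUsed') / 128           AS used_mb,
        (size - FILEPROPERTY(name, 'SpaceUsed')) / 128  AS free_mb
FROM    sys.database_files;
```

Comparing `free_mb` before and after each shrink window shows how much progress each run made.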

This way there is no exporting to a new database structure, and you can just let it do its own thing. Do some more research, but I've been involved in similar-sounding cases in the past (huge data warehouses where data loaders did things all wrong, using the actual database for temp tables rather than tempdb), and it was what it was. I tracked the issues down, retrained the data loaders, and then did the cleanup. Afterwards I worked on fragmentation and so on, since reclaiming space was a higher priority than performance.

Because of the issues mentioned, we have always shrunk files in smaller chunks. In my experience the shrinks are excruciatingly slow initially, I suspect because SQL Server has to move the pages from the end of the file before it can truncate it, but they become faster and faster as you progress.
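Shrinking in chunks can be sketched as a loop like the one below; the logical file name, target size, and step size are all placeholders to adjust for your environment:

```sql
-- Sketch: shrink a data file in 10 GB steps instead of one huge
-- operation, so each step can be stopped without losing much work.
DECLARE @target_mb  int = 500000;   -- final desired file size (MB)
DECLARE @step_mb    int = 10240;    -- shrink 10 GB at a time
DECLARE @current_mb int;

SELECT @current_mb = size / 128     -- size is in 8 KB pages
FROM   sys.database_files
WHERE  name = N'YourDataFile1';

WHILE @current_mb > @target_mb
BEGIN
    -- Step down toward the target without overshooting it.
    SET @current_mb = CASE WHEN @current_mb - @step_mb < @target_mb
                           THEN @target_mb
                           ELSE @current_mb - @step_mb
                      END;
    DBCC SHRINKFILE (N'YourDataFile1', @current_mb);
END;
```

Each iteration only has to relocate the pages in one 10 GB slice, which also keeps the impact on the availability group's log stream in smaller bursts.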