All,
I'm familiar with the good reasons not to shrink a database, but I am in the process of shrinking a 2TB one. By policy it gets backed up every day, and because of its size the backup runs for most of a day. We have been running for some time with over 60% free space, and that shouldn't change significantly. I'm reducing the database size to mitigate what the backup is doing to our performance.
The database is large and shrinking is quite slow, so I am running DBCC SHRINKFILE in a looping script that shrinks the file 250MB at a time. I'm doing this because once the backup starts, SHRINKFILE errors out saying it can't change the file. I'm assuming the backup has the file locked so the file doesn't change in the middle of the backup. I use small shrinks because I don't want to be in the middle of a large one when the backup starts, and I loop so I can see the SHRINKFILE errors and kill my script (the backup can start at different times).
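For reference, the loop described above can be sketched roughly like this. This is a minimal illustration, not the poster's actual script; the logical file name, the final target size, and the stopping condition are all placeholders you would replace with your own values.

```sql
-- Incremental shrink: take 250 MB off the file per iteration so that a kill
-- (or a backup-induced error) loses at most one small step.
DECLARE @CurrentMB int, @TargetMB int;

SELECT @CurrentMB = size / 128            -- size is in 8 KB pages; /128 = MB
FROM sys.database_files
WHERE name = N'MyDataFile';               -- hypothetical logical file name

WHILE @CurrentMB > 800000                 -- placeholder final target in MB
BEGIN
    SET @TargetMB = @CurrentMB - 250;     -- shrink 250 MB this pass
    DBCC SHRINKFILE (N'MyDataFile', @TargetMB);

    SELECT @CurrentMB = size / 128        -- re-read the actual size
    FROM sys.database_files
    WHERE name = N'MyDataFile';
END
```

Because each iteration is a separate DBCC call, the script surfaces the error as soon as the backup takes the file, which is the behavior described above.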
My question: although DBCC SHRINKFILE can't change the file after the backup starts and I get the error message, the first iteration of the script after the backup finishes goes quite quickly and then settles into a normal cadence. As folks have said, if you kill a SHRINKFILE you "don't lose your work". Are the results of the shrink sitting out in tempdb somewhere, ready to go? To get this shrink done a little quicker, does anyone think running a larger SHRINKFILE during the backup, one that I know won't complete, is a good idea? It would give me a jump on the next day.
Thanks,
Pat
Best Answer
Are you shrinking before the backup in the hope that it will reduce your backup time? If so, it will not make any difference: a native backup copies only allocated pages, so free space inside the file does not lengthen it. But if you are doing it so the database allocates less space on your target when you restore, then it will help.
I could not tell whether you are doing this every time before the backup. If so, you want to resolve this permanently rather than shrink repeatedly.
Now to answering your question.
Is the results of the shrink sitting out in tempdb somewhere ready to go?
It does not keep partial results anywhere (tempdb or otherwise). When you restart, it checks everything again, but pages it has already moved toward the front of the file stay moved, which is why you "don't lose your work" and the first iteration after a restart goes quickly.
To get this shrink done a little quicker, does anyone think running a larger shrinkfile during the backup that I know won't complete is a good idea?
No. SHRINKFILE moves 32 pages at a time and commits each batch (32 × 8 KB pages = 256 KB per internal transaction), so whatever work has been committed is kept regardless of how large a target you requested. A larger request that you know will be killed gains you nothing over the looped small ones. See below.
Reference: https://blogs.msdn.microsoft.com/psssql/2008/03/28/how-it-works-sql-server-2005-dbcc-shrink-may-take-longer-than-sql-server-2000/
On a side note, backing up 2TB should not take a full day. Native backup has options (compression, striping to multiple files, larger and more numerous I/O buffers) that can significantly reduce backup duration.
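As an illustration of those options, here is a hedged sketch of a tuned native backup. The database name and file paths are placeholders, and the BUFFERCOUNT/MAXTRANSFERSIZE values are starting points to test on your own hardware, not recommendations.

```sql
-- Example only: compression plus striping plus larger I/O buffers.
BACKUP DATABASE [MyBigDB]
TO  DISK = N'X:\Backups\MyBigDB_1.bak',   -- striping across several files,
    DISK = N'Y:\Backups\MyBigDB_2.bak',   -- ideally on separate physical
    DISK = N'Z:\Backups\MyBigDB_3.bak'    -- drives or LUNs
WITH COMPRESSION,                          -- usually the biggest single win
     BUFFERCOUNT = 50,                     -- more I/O buffers in flight
     MAXTRANSFERSIZE = 4194304,            -- 4 MB per transfer
     STATS = 5;                            -- report progress every 5%
```

Compression alone often cuts both the duration and the backup size substantially for a database this large; the striping and buffer settings help mainly when the backup target, not the data read, is the bottleneck.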